\section{fdCTMC in PRISM Language}\label{app:prism-lang} \medskip \noindent \textit{Extension of PRISM data structures { }} The \code{FDCTMCSimple} class extends the \code{CTMCSimple} class by a vector of objects of type \code{FDEvent} and a few methods to work with them (the methods are explained in the corresponding interface \code{FDCTMC}, an extension of the interface \code{CTMC}). The \code{FDEvent} class is essentially an extension of the \code{DTMCSimple} class by one \code{double} attribute that keeps the delay of the fd event and one \code{String} attribute that keeps its label. The transition kernel is kept in the attributes inherited from the \code{DTMCSimple} class. \begin{figure} \begin{center} \includegraphics[width=340pt]{rejuv3.jpg} \caption{The graphical user interface of PRISM with the source code of the rejuvenation model \cite{german-book}.} \label{fig:rejuv} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=330pt]{dpm.jpg} \caption{The source code of the fdCTMC from Example~\ref{fig:dpmsleep} in the PRISM language.} \label{fig:prism-lang} \end{center} \end{figure} \section{Discretization Bounds}\label{app:synthesis} Using the full version of \cite{BKKNR:QEST2015} we derived the exact formulas for the discretization bounds of each fd event: \begin{align*} &\overline{\delays} = \max\Big\{ \frac{\overline{\mathit{Val}}}{minP^{|S_{fd}|} \cdot minR} \; ; \; \frac{e \cdot | \ln(\alpha/2) | } {\lambda \cdot minP} \Big\}, \\ &\delta = \frac{\alpha}{D_1}, \\ &\kappa = \frac{\varepsilon \cdot \delta \cdot minR}{2 \cdot |S'| \cdot (1 + \overline{\mathit{Val}})}, \end{align*} where \begin{align*} &\alpha = \min\Big\{ \frac{\varepsilon}{Bound[\#] \cdot (1 + \overline{\mathit{Val}}) \cdot |S'|} \; ; \; \frac{1}{2 \cdot Bound[\#] \cdot |S'|} \Big\}, \\ &D_1 = \max \{2 \cdot \lambda \; ; \; (\lambda+1)\cdot maxR \}, \end{align*} \begin{itemize} \item $ Bound[\#] $ is an upper bound on the expected number of steps needed to reach the target from any state in the created MDP, i.e. $$Bound[\#] = \frac{\overline{\mathit{Val}}}{\text{minimal expected one-step reward in the created MDP}},$$ \item $ \overline{\mathit{Val}} $ is the upper bound on the expected reward, \item $ S' $ is the state space of the created MDP, \item $ \lambda $ is the uniformization rate, and \item $ |S_{fd}| $, $ minP $, $ maxR $, and $ minR $ are the number of states, the minimal branching probability, the maximal reward, and the minimal reward, respectively, in the subordinated CTMC for the given fd event. \end{itemize} \section{Introduction} \label{sec-intro} PRISM~\cite{KNP:prismCAV11} is an efficient tool for probabilistic model checking of stochastic systems such as Markov decision processes (MDPs), discrete-time Markov chains (DTMCs), and continuous-time Markov chains (CTMCs). The PRISM community frequently requests the ability to express delays with deterministic durations in a CTMC.\footnote{\url{http://www.prismmodelchecker.org/manual/FrequentlyAskedQuestions/PRISMModelling\#det\_delay}} The standard PRISM recommendation is to approximate the deterministic durations using a phase-type technique \cite{Neuts81}, thus obtaining a~CTMC. This works for some models; however, for other models such an approximation can cause either a large error or a state-space explosion (see, e.g. \cite{KKR:EPEW2014,fackrell2005fitting}).
There is, however, a formalism called fixed-delay CTMCs (fdCTMCs) \cite{guet2012delayed,KKR:EPEW2014,BKKNR:QEST2015} that provides the requested extension of CTMCs by fixed-delay (fd) events, modeling deterministic transitions or timeouts. A recent result \cite{BKKNR:QEST2015} introduced new synthesis algorithms working directly on fdCTMCs (rather than approximating them with CTMCs). Here we provide the first experimental evaluation of such synthesis algorithms and show that they are practically applicable. In the following running example we demonstrate the fdCTMC semantics as well as the parameters and objectives of the synthesis. \begin{example} The figure below depicts an fdCTMC of a slightly modified model of dynamic power management of a Fujitsu disk drive taken from the PRISM case studies\footnote{\url{http://www.prismmodelchecker.org/casestudies/power\_ctmc3.php}} \cite{QWP99}. The disk has three modes $\mathit{idle}$, $\mathit{busy}$, and $\mathit{sleep}$. In the $\mathit{idle}$ and $\mathit{sleep}$ modes the disk receives requests, in the $\mathit{busy}$ mode it also serves them. The disk is equipped with a bounded buffer, where it stores requests when they arrive. The requests arrive with an exponential inter-arrival time of rate $1.39$ and increase the current size of the buffer. The requests are served in an exponential time of rate $12.5$, which decreases the buffer size. Note that by restricting the model to the $\mathit{idle}$ and $\mathit{busy}$ modes only, we obtain a CTMC model of an M/M/1/n queue. Moreover, the disk can move from the $\mathit{idle}$ mode to the $\mathit{sleep}$ mode where it saves energy. Switching the disk to the $\mathit{sleep}$ mode is driven by a timeout. This is modeled by an fd event $f_1$ moving the state from $(\mathit{idle},0)$ to $(\mathit{sleep},0)$ when the disk has been continuously idle for a specified amount of time (e.g. 1 second). The disk is woken up by another timeout modeled by an fd event $f_2$, which is active in all $\mathit{sleep}$ states. After staying in the $\mathit{sleep}$ mode for, e.g. $2$ seconds, $f_2$ changes the state according to the dashed~arrows.
\begin{center} \begin{tikzpicture}[outer sep=0.1em, xscale=1, yscale=1] \tikzstyle{fixed}=[dashed,->]; \tikzstyle{fixed label}=[font=\small]; \tikzstyle{exp}=[->,rounded corners,>=stealth]; \tikzstyle{exp rate}=[font=\small]; \tikzstyle{loc}=[draw,circle, minimum size=3.2em,inner sep=0.1em]; \tikzstyle{accepting}+=[outer sep=0.1em]; \tikzstyle{loc cost}=[draw,rectangle,inner sep=0.07em,above=6, minimum width=0.8em,minimum height=0.8em,fill=white,font=\footnotesize]; \tikzstyle{trans cost}=[draw,rectangle,minimum width=0.8em,minimum height=0.8em,solid,inner sep=0.07em,fill=white,font=\footnotesize]; \tikzstyle{prob}=[inner sep=0.03em, auto,font=\footnotesize]; \node[loc] (b0) at (0,0) {${\mathit{idle},0}$}; \node[loc] (s0) at (0,-1.65) {${\mathit{sleep},0}$}; \node[loc] (b1) at (2.5,0) {${\mathit{busy},1}$}; \node[loc] (s1) at (2.5,-1.65) {${\mathit{sleep},1}$}; \node[loc] (b2) at (5,0) {${\mathit{busy},2}$}; \node[loc] (s2) at (5,-1.65) {${\mathit{sleep},2}$}; \node[] (b3) at (7.5,0) {$\cdots$}; \node[] (s3) at (7.5,-1.65) {$\cdots$}; \node[loc] (bn) at (10,0) {$\mathit{busy}, n$}; \node[loc] (sn) at (10,-1.65) {$\mathit{sleep}, n$}; \path[->,>=stealth] ($(b0)+(-0.9,0)$) edge (b0); \path[exp, bend left=15] (b0) edge node[prob] {1.39} (b1); \path[exp, bend left=15] (b1) edge node[prob] {12.5} (b0); \path[exp, bend left=15] (b1) edge node[prob] {1.39} (b2); \path[exp, bend left=15] (b2) edge node[prob] {12.5} (b1); \path[exp, bend left=15] (b2) edge node[prob] {1.39} (b3); \path[exp, bend left=15] (b3) edge node[prob] {12.5} (b2); \path[exp, bend left=15] (b3) edge node[prob] {1.39} (bn); \path[exp, bend left=15] (bn) edge node[prob] {12.5} (b3); \path[loop right,exp,looseness=5] (bn) edge node[prob, above, pos=0.2] {1.39} (bn); \path[exp] (s0) edge node[prob] {1.39} (s1); \path[exp] (s1) edge node[prob] {1.39} (s2); \path[exp] (s2) edge node[prob] {1.39} (s3); \path[exp] (s3) edge node[prob] {1.39} (sn); \path[loop right,exp,looseness=5] (sn) edge node[prob, above, pos=0.2] {1.39} (sn); \path[bend right=15,fixed] (b0) edge node[prob,left] {$f_1$} (s0); \path[bend right=15,fixed] (s0) edge node[prob, right] {$f_2$} (b0); \path[fixed] (s1) edge node[prob,right] {$f_2$} (b1); \path[fixed] (s2) edge node[prob,right] {$f_2$} (b2); \path[fixed] (sn) edge node[prob,right] {$f_2$} (bn); \end{tikzpicture} \end{center} \label{fig:dpmsleep} Additionally, every state is given a rate cost that specifies the amount of energy consumed per second spent there. Optionally, an impulse cost can be specified, e.g., say that the change from $(\mathit{idle},0)$ to $(\mathit{sleep},0)$ consumes 0.006 energy units instantaneously. Now, one might be interested in how much energy on average is consumed before emptying the buffer, i.e. in computing the expected energy consumed until reaching a target state that is added as a new successor of $(\mathit{busy},1)$ in place of the initial state $(\mathit{idle},0)$. But, as developers of the disk, can we set better timeouts for $f_1$ and $f_2$? Hence, we consider the timeouts as parameters and synthesize them in order to minimize the expected amount of consumed energy. \end{example} \medskip \noindent \textit{Our Contribution} is as follows. 1.~We provide an extension of the PRISM language and of the internal data structures to support the specification of fdCTMCs with impulse and rate costs (or, equivalently, rewards).
Hence, our version of PRISM is now ready for further experiments with fdCTMC algorithms, including the possibility of supporting model-checking options as for CTMCs and DTMCs. 2.~We added an evaluation of the expected reward until reaching a given set of target states. 3.~We analyzed the synthesis algorithm from \cite{BKKNR:QEST2015}, derived exact formulas for the discretization bounds, and implemented the algorithm. 4.~Additionally, we accelerated the implementation by a few structural changes that significantly improved the running time and the space requirements of the synthesis implementation. 5.~We provide a performance evaluation showing that the current implementation is practically applicable to a complex model from the PRISM case studies. \medskip \noindent \textit{Related Work { }} There are many papers that contain models with fd events suitable for synthesis, such as deterministic durations in train control systems \cite{Z:ECTS_synthesis}, the time of server rejuvenation \cite{german-book}, timeouts in power management systems \cite{QWP99}, etc. Some of the models already contain specified impulse or rate costs. In \cite{XSCT:TRIVEDI_timeout_synthesis} the authors compute the optimal value of a webserver timeout using impulse and rate costs. Their implementation can dynamically change the optimal value of the timeout based on the current inter-arrival times of requests. It works on the exact fdCTMC model and cannot be easily applied to the more general fdCTMC models that our implementation can handle. The formalism of deterministic and stochastic Petri nets (DSPNs) is equivalent to fdCTMCs. DSPNs have been extensively studied and many useful results are directly applicable to fdCTMCs. To the best of our knowledge, the synthesis of fd events has not been studied for DSPNs. The most useful tools for DSPNs are ORIS \cite{HPRV:SSC} and TimeNET \cite{timenet}. There was also an option to implement the synthesis algorithm as an extension of ORIS. However, PRISM is much more widely used in practice and contains solution methods for MDPs, which we needed for our implementation. Thus, we decided to implement the synthesis in PRISM, even though we had to extend the PRISM language and data structures. Therefore, the ORIS and TimeNET algorithms can now be reimplemented for fdCTMCs in PRISM easily, exploiting its efficient symbolic structures and algorithms for CTMCs or MDPs. In the rest of the paper we first formally define the fdCTMC and explain the extension of the PRISM language. Then we discuss the implemented algorithms and the performance results. \section{Conclusions and Future Work }\label{sec:concl} In this paper, we incorporated the fdCTMC models into PRISM and implemented the expected reward computation and the synthesis algorithm. The tool is available at \url{http://www.fi.muni.cz/~xrehak/fdPRISM/}. We have used the explicit-state PRISM engine. Based on the promising results, it is reasonable to (re)implement the synthesis and other model-checking algorithms for fdCTMCs in the more efficient PRISM engines. Moreover, further effort can be put into reducing the number of current restrictions on the fdCTMC models. For instance, the method of stochastic state classes~\cite{HPRV:SSC} implemented in ORIS may be applied for the transient analysis instead of uniformization. \subsubsection{Acknowledgments} We thank Vojt\v{e}ch Forejt and David Parker for fruitful discussions. This work is partly supported by the Czech Science Foundation, grant No.~P202/12/G061.
\vspace{-1.0\baselineskip} \bibliographystyle{splncs03} \section{Preliminaries} \label{sec-prelims} We use $\Nset_0$, $\mathbb{R}_{\ge 0}$, and $\mathbb{R}_{>0}$ to denote the set of all non-negative integers, non-negative real numbers, and positive real numbers, respectively. Furthermore, for a countable set $A$, we denote by $\mathcal{D}(A)$ the set of discrete probability distributions over $A$, i.e. functions $\mu: A \to \mathbb{R}_{\ge 0}$ such that $\sum_{a\in A} \mu(a) = 1$. \begin{definition} A \emph{fixed-delay CTMC} (fdCTMC) $C$ is a tuple $(S, Q, F, A, N, d, \sta_{in})$ where \begin{itemize} \item $S$ is a finite set of states, \item $Q: S \times S \to \mathbb{R}_{\ge 0}$ is a rate matrix, \item $F$ is a finite set of fixed-delay (fd) events, \item $A : S \to 2^{F}$ assigns to each state $s$ a set of fd events active in $s$, \item $N : S \times F \to \mathcal{D}(S)$ is the successor function, i.e. it assigns a probability distribution specifying the successor state to each state and each fd event active there, \item $d: F \to \mathbb{R}_{>0}$ is a delay vector that assigns a positive delay to each fd event, \item $\sta_{in} \in S$ is an initial state. \end{itemize} \end{definition} Note that an fdCTMC $C$ with an empty set of fd events is a CTMC. The fdCTMC formalism can be understood as a stochastic event-driven system, i.e. the amount of time spent in each state and the probability of moving to the next state are driven by the occurrence of events. In addition to the fd events of $F$, there is an \emph{exponential event} $\mathcal{E}$ that is active in all states $s$ where $\sum_{s' \in S} Q(s,s') > 0$. During an execution of an fdCTMC, each active event keeps a \emph{timer} that holds the remaining time until the event occurs. The execution starts in the state $\sta_{in}$. The timer of each fd event $f$ in $A(\sta_{in})$ is set to $d(f)$. The timer of the exponential event is set randomly according to the exponential distribution with rate $\sum_{s' \in S} Q(\sta_{in},s')$. The event $e$ with the least\footnote{For the sake of simplicity, when multiple events $X = \{e_1,\ldots,e_n\}$ occur simultaneously, the successor is determined by the minimal element of $X$ according to some fixed total order on~$F$.} timer value $t$ occurs and causes a change of state. In case $e$ is an fd event, the next state is chosen randomly according to the distribution $N(\sta_{in},e)$; otherwise $e$ is the exponential event and the probability of choosing $s$ as the next state is $Q(\sta_{in},s)/\sum_{s' \in S} Q(\sta_{in},s')$. In the next state $s$, the timers of all newly active fd events (i.e. $A(s) \setminus A(\sta_{in})$), of the occurred event $e$, and of the exponential event are set in the same way as above. Observe that the timers of the remaining active fd events have decreased by the time $t$ spent in the previous state. The execution then proceeds in the same manner. We illustrate the definition on the fdCTMC model from Example~\ref{fig:dpmsleep}. The execution starts in $(\mathit{idle},0)$. The events $f_1$ and $\mathcal{E}$ are active and their timers are set to $1$ and, e.g., $1.18$, respectively. Hence, after $1$ second $f_1$ occurs and changes the state to $(\mathit{sleep}, 0)$ with probability $1$. The timers of the newly active event $f_2$ and of $\mathcal{E}$ are set to $2$ and, e.g., $1.5$, respectively. Now, $\mathcal{E}$ occurs and changes the state to $(\mathit{sleep},1)$. Here $f_2$ is still active and thus its timer holds the original value minus the time spent in $(\mathit{sleep},0)$, i.e.
$2-1.5=0.5$. The timer of the exponential event is set again, etc. A \emph{run} of the fdCTMC is an infinite sequence $(s_0, e_0, t_0)(s_1, e_1, t_1)\ldots$ where $s_0 = \sta_{in}$ and for each $i \in \Nset_0$ it holds that $s_i \in S$ is the $i$-th visited state, $e_i \in \{ \mathcal{E} \} \cup F$ is the event that occurred in $s_i$, and $t_i \in \mathbb{R}_{\geq 0}$ is the time spent in $s_i$. For the formal definition of the semantics of fdCTMCs and the probability space on runs see \cite{krcal_phd_thesis}. \medskip \noindent \textit{Total Reward Before Reaching a Target{ }} To allow formalization of performance properties, we enrich the model in a standard way with rewards or costs (see,~e.g.~\cite{Puterman:book}). For an fdCTMC $C$ with a state space $S$ we additionally define a set of target states $T$, reward rates $\mathcal{R}$, and impulse rewards $\mathcal{I}$. Formally, the target set $T$ is a subset of $S \setminus \{\sta_{in}\}$, $\mathcal{R}: S \to \mathbb{R}_{\ge 0}$ assigns a reward rate to every state, and $\mathcal{I}: S \times (\{ \mathcal{E} \} \cup F) \times S \to \mathbb{R}_{\ge 0}$ assigns an impulse reward to every change of state. Now the reward assigned to a run $(s_0, e_0, t_0)(s_1, e_1, t_1)\ldots$ is the reward accumulated before reaching a state of $T$, i.e. $\sum_{i=0}^{n-1} \left( t_i\cdot\mathcal{R}(s_i) + \mathcal{I}(s_i,e_i,s_{i+1}) \right)$ where $n>0$ is the minimal index such that $s_n\in T$. We set the reward to infinity whenever there is no such~$n$. The reward of a run can be viewed as a random variable, say $\mathit{Cost}_{\fdC,\goalStates,\rateRew,\impRew}$. By $E_{C,T,\mathcal{R},\mathcal{I}}$ (or simply $E_{C}$) we denote the expected value of $\mathit{Cost}_{\fdC,\goalStates,\rateRew,\impRew}$. \paragraph{Synthesis } Given a delay vector $d'$, let the (parametric) fdCTMC $C(d')$ be the fdCTMC $C$ where the delay vector is changed to $d'$. Our aim is to find a delay vector $d$ such that the expected reward $E_{C(d)}$ is minimal. Formally, given an error bound $\varepsilon >0 $, the synthesis algorithm computes a delay vector $d$ such that $E_{C(d)} \leq \Value{C} + \varepsilon$, where $\Value{C}$ denotes the optimal reward $\inf_{d'} E_{C(d')}$. \section{PRISM Language and User Interface Extension} Each fdCTMC model file must begin with the keyword \code{fdctmc}. For the purpose of our synthesis and expected reward implementation, the set of target states has to be specified by the label \code{"target"},~e.g.\vspace{\zmensovatkoVertMezer} \begin{center} \code{label "target" = s=2;} \end{center} \vspace{\zmensovatkoVertMezer} The exponential event (the matrix $Q$) is specified in the same way as in CTMC models of PRISM. The fd events are local to a module and must be declared immediately after the module name. E.g., \code{fdelay f = 1.0} defines the fd event $f$ with a delay of the double value $1.0$. For an fd event $f$ we specify its set of active states (i.e. $A^{-1}(f)$) and its transition kernel (i.e. $N(\cdot, f)$) by PRISM commands with the identifier $f$ inside the arrow. E.g. \vspace{\zmensovatkoVertMezer} \begin{center} \code{[L] s=1 {-}{-}f-> 0.3:(s'=0) + 0.7:(s'=2)} \end{center} \vspace{\zmensovatkoVertMezer} specifies that the fd event $f$ is active in all states where \code{s=1} and that whenever it occurs, the next state is derived from the original one by changing variable \code{s} to \code{0} with probability $0.3$ and to \code{2} with probability $0.7$. The probabilities in each command have to sum to one.
Observe that fd event commands are similar to DTMC commands in PRISM. The synchronization labels are used only to impose impulse rewards, as for CTMCs, e.g. \vspace{\zmensovatkoVertMezer} \begin{center} \code{ rewards} ~~~ \code{[L] true : 1.0;}~~~\code{ endrewards} \end{center} \vspace{\zmensovatkoVertMezer} The rate rewards are specified in the same way as for CTMCs in PRISM. The PRISM source code for the fdCTMC of Example~\ref{fig:dpmsleep} is in Appendix~\ref{app:prism-lang}. The implementation details concerning the fdCTMC structure are provided in Appendix~\ref{app:prism-lang} as well. Users can run the implemented algorithms from both the graphical and the command-line interfaces of PRISM. The expected reward and synthesis implementations are available in the menu \code{Model -> Compute -> Exp. reachability reward} and \code{Model -> Compute -> FD synthesis}, respectively, or via the command-line options \code{-expreachreward} and \code{-fdsynthesis}, respectively. The error bound $\varepsilon$ is specified in \code{Options -> Options -> Termination epsilon} or by the command-line option \code{-epsilon}. \section{Experimental Results}\label{sec:experiments} We tested the performance of our synthesis implementation on the model from Example~\ref{fig:dpmsleep} for various sizes of the queue ($2,4,6$, and $8$) and on the rejuvenation model provided in Appendix~\ref{app:prism-lang}. The considered error bounds are $0.005$, $0.0025$, $0.0016$, $0.00125$, and $0.001$. The following table shows the expected rewards and the computation times for a given error bound. As the expected rewards are very similar for different error bounds, we show their longest common prefix instead of listing five similar long numbers. \begin{center} \input{tabular_results} \end{center} Note that the computed values of the expected reward have a much better precision than required. This indicates that there might even be space for improvements of the synthesis algorithm, e.g. by computation of tighter discretization bounds. It is worth mentioning that the longest computation (dpm8 for error $0.001$) took only 1~hour and 30 minutes of wall-clock time thanks to the native parallelism of Java (the table shows the sum over all threads). Our experiments show that the implementation retains the theoretical complexity bounds saying that the computation time is exponential in the number of states and polynomial in $1/\varepsilon$. The computations were run on an HP DL980 G7 platform with 8 64-bit Intel Xeon X7560 2.26GHz processors (together 64 cores) and 448 GiB DDR3 RAM, of which only 304 GB was provided to Java. The time was measured by the Linux command \code{time}. \section{Implementation Issues} The implementation of the expected reward computation was a straightforward application of existing PRISM methods. For the synthesis we implemented the \emph{unbounded optimization} algorithm from \cite{BKKNR:QEST2015}. The algorithm is based on discretization, i.e. we provide discretization bounds and restrict the uncountable space of delay vectors to a finite space. Instead of an exhaustive search through the finite space, we use the idea of \cite{BKKNR:QEST2015} and transform the parametric (discretized) fdCTMC into an MDP where actions correspond to the choices of fd event delays. The minimal solution of the MDP then yields the optimal delay vector.
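\medskip \noindent \textit{Illustration { } } To make the last step concrete, the following Python sketch (illustrative only; the names and the per-action data structure are ours, with the transition probabilities and expected rewards of the actions assumed to be precomputed by the transient analysis described below) selects the optimal discretized delays by value iteration on the created MDP.
\begin{verbatim}
import numpy as np

def synthesize(actions, n_states, targets, eps):
    """Value iteration on the discretized-delay MDP (sketch).

    actions[s] is a list of pairs (probs, reward), one pair per
    discretized delay available in state s: probs is the transition
    probability vector of the action and reward is the expected
    reward accumulated by playing it. Target states stay at 0.
    """
    v = np.zeros(n_states)
    while True:
        v_new = v.copy()
        for s in range(n_states):
            if s in targets or not actions[s]:
                continue
            v_new[s] = min(rew + probs @ v for probs, rew in actions[s])
        if np.max(np.abs(v_new - v)) < eps:
            return v_new  # v_new[s] approximates the minimal reward from s
        v = v_new
\end{verbatim}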
The discretization bounds consist of the discretization step $ \delta $, the upper bound $ \overline{\delays} $ on the fd event delay, and the precision $ \kappa $ for the computation of action parameters. They are computed for each fd event separately from the error bound $\varepsilon$, the number of states, the minimal transition probability, and other fdCTMC model attributes. For more details see Appendix~\ref{app:synthesis}. Note that in every fdCTMC model, the delays of all fd events have to be specified. Applying these delays, we compute the corresponding expected reward $\overline{\mathit{Val}}$ which is used as an upper bound for the optimal reward. Then $\overline{\mathit{Val}}$ is employed when computing the discretization bounds. The lower $\overline{\mathit{Val}}$ is, the faster the synthesis implementation performs. Thus it is worthwhile to choose good delays of fd events when specifying the model. Given the discretization bounds, one has to compute the transition probabilities and the expected accumulated reward for each action in the MDP corresponding to a discretized fd event delay. This can be done using transient analysis of the subordinated CTMCs~\cite{DBLP:journals/pe/Lindemann93}. \medskip \noindent \textit{Prototype Implementation { } }In the first implementation we used a straightforward approach, calling built-in methods of PRISM to compute the required quantities for each discretized fd event delay separately. This is reasonable since the built-in methods are correctly and efficiently programmed for all PRISM engines and methods of computing transient analysis. However, we observed that most of the time was spent computing the transient analysis rather than solving the created MDP, e.g. $ 520 $ seconds out of $ 540 $ seconds of total time.\footnote{Computed for the rejuv model and the error bound $ 0.001 $, see Section~\ref{sec:experiments}.} One of the reasons is that in each iteration a small portion of memory is allocated and freed by the built-in PRISM methods. Since there is a large number of actions, the amount of reallocated memory was slowing down the computation. Thus we decided to reimplement the computation of transient probabilities, applying the principles of dynamic programming. \medskip \noindent \textit{Iterative Computation of Transient Analysis { } } The transient probabilities can be very efficiently approximated up to an arbitrarily small error using the uniformization technique. The problem is that we have to compute the transient probabilities for each value of a very large set $\{i \cdot \delta \mid i \in \Nset_0 \text{ and } 0 < i \leq \overline{\delays} /\delta \}$ while allowing only a fixed error $\kappa$ for each computation. The transient probability vector $\pi(\delta)$ of a CTMC $C$ at time $\delta$ can be computed using uniformization by \begin{equation} \pi(\delta) = \sum_{j=0}^{J} \mathbf{1}_{\sta_{in}} \cdot P^j \cdot \frac{(\lambda \cdot \delta)^j}{j!} \cdot e^{-\lambda \cdot \delta}, \end{equation} where $\mathbf{1}_{\sta_{in}}$ is the initial vector of $C$, $\lambda$ is the uniformization rate of $C$, and $P$ is the transition kernel of the uniformized $C$. The choice of the number $J$ influences the error of the formula, and it is easy to compute a value of $J$ such that the error is sufficiently small.
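For illustration, the computation of $\pi(\delta)$ can be sketched in Python/NumPy as follows (the naming is ours; the rate matrix \texttt{Q} follows the convention of Section~\ref{sec-prelims}, i.e. non-negative entries and zero diagonal, and the truncation point $J$ is chosen so that the neglected Poisson tail mass stays below the allowed error).
\begin{verbatim}
import numpy as np

def transient(Q, pi0, delta, err):
    """pi(delta) via uniformization, truncated at J (sketch)."""
    rates = Q.sum(axis=1)                     # exit rates
    lam = rates.max()                         # uniformization rate
    P = Q / lam + np.diag(1.0 - rates / lam)  # uniformized kernel
    # choose J so that the truncated Poisson tail mass is below err
    weight = total = np.exp(-lam * delta)
    J = 0
    while total < 1.0 - err:
        J += 1
        weight *= lam * delta / J
        total += weight
    # accumulate sum_{j=0}^{J} pi0 P^j (lam*delta)^j/j! e^{-lam*delta}
    pi = np.zeros_like(pi0)
    term, weight = pi0.copy(), np.exp(-lam * delta)
    for j in range(J + 1):
        pi += weight * term
        term = term @ P
        weight *= lam * delta / (j + 1)
    return pi
\end{verbatim}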
However, for time $i\cdot \delta $ we can use the previously computed transient probabilities as \begin{equation}\label{eq:iterative1} \pi(i \cdot \delta) = \sum_{j=0}^{J} \pi((i-1) \cdot \delta) \cdot P^j \cdot \frac{(\lambda \cdot \delta)^j}{j!} \cdot e^{-\lambda \cdot \delta}. \end{equation} It is again easy to compute $J$ such that the overall allowed error is not exceeded. Instead of performing the na\"{i}ve computation for each value in $\{i \cdot \delta \mid i \in \Nset_0 \text{ and } 0 < i \leq \overline{\delays} /\delta \}$ with the corresponding numbers of steps $J_1, \ldots, J_{\overline{\delays}/\delta}$, keeping the error bounded by $\kappa$ in each computation, we compute the transient probabilities iteratively with a single $J$ sufficiently large to keep the error small in all computations. For example, for $\delta = 0.1$, $\overline{\delays}/\delta=1000$, rate $\lambda= 1.0$, and $\kappa = 0.01$, the na\"{i}ve method has to perform $ J_1 + \cdots + J_{\overline{\delays}/\delta} = 66,265 $ steps, whereas the iterative method needs only $J \cdot \overline{\delays} /\delta = 3,000$ steps. This is a significant difference since a vector-matrix multiplication is performed in each step. Thus we hard-coded the iterative computation of transient probabilities and accumulated rewards in CTMCs, which caused a dramatic speedup thanks to the smaller number of arithmetic operations and better memory management. \medskip \noindent \textit{Precomputation{ } }A careful reader may have noticed that \eqref{eq:iterative1} can be further simplified to \begin{equation} \label{eq:iterative2} \pi(i \cdot \delta) = \pi((i-1) \cdot \delta) \cdot e^{-\lambda \cdot \delta} \cdot \sum_{j=0}^{J} P^j \cdot \frac{(\lambda \cdot \delta)^j}{j!}. \end{equation} Hence, the matrix $ e^{-\lambda \cdot \delta} \cdot \sum_{j=0}^{J} P^j \cdot {(\lambda \cdot \delta)^j}/{j!} $ could easily be precomputed beforehand and used for the computation of each $ \pi(i \cdot \delta) $ to increase the savings even more. However, this does not pay off. The number $ J $ is small and the matrix $ P $ is sparse for most reasonable models and error bounds, but $ e^{-\lambda \cdot \delta} \cdot \sum_{j=0}^{J} P^j \cdot {(\lambda \cdot \delta)^j}/{j!} $ is not sparse for almost any error bound, $ P $, and $ \lambda $, which is known as the ``fill-in'' phenomenon. Thus using \eqref{eq:iterative1} is typically more efficient than using \eqref{eq:iterative2}. Similar observations were discussed in \cite{HMM:transien_analysis_DSPN}. Implementing the synthesis algorithm of \cite{BKKNR:QEST2015}, we inherited the following restrictions on the input fdCTMC models. There is at most one concurrently active fd event in each state, i.e. $\forall s \in S \, : \, |A(s)| \leq 1$. For each fd event there is at most one state where its timer is set. Every state has a positive rate reward, i.e. $\forall s \in S \, : \, \mathcal{R}(s) > 0$. Moreover, we require that all fd events have positive impulse rewards, i.e. $ \forall f \in F,\ \forall s,s' \in S : N(s,f)(s') >0 \implies \mathcal{I}(s,f,s') > 0$. For the expected reward implementation only the first two restrictions apply.
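To conclude this section, we sketch the iterative scheme \eqref{eq:iterative1} in Python (again with our naming, and with \texttt{P}, \texttt{lam}, and a suitable single \texttt{J} assumed to be computed as above):
\begin{verbatim}
import numpy as np

def transients(P, lam, pi0, delta, n, J):
    """pi(i*delta) for i = 1..n via the iterative scheme (sketch).

    A single J, large enough that the error accumulated over all n
    iterations stays below the allowed bound, is reused throughout.
    """
    pis, pi = [], pi0.copy()
    for _ in range(n):
        new = np.zeros_like(pi)
        term, weight = pi.copy(), np.exp(-lam * delta)
        for j in range(J + 1):
            new += weight * term
            term = term @ P
            weight *= lam * delta / (j + 1)
        pi = new
        pis.append(pi)
    return pis
\end{verbatim}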
\section{Introduction} Suppose we want to run numerical simulations of a physical reality described by a set of ordinary differential equations (ODEs) \begin{equation}\label{eq:LTI} \dot x(t) = A x(t) + \f(x(t)),\;\;x(0)=x_0\in\R^m, \end{equation} where $A\in\R^{m\times m}$, $\f:\R^m\longrightarrow \R^m$. Often, of interest is $y(t)=C x(t)$, with some given $p\times m$ matrix $C$. Such a system of ODEs can arise from the discretization of a spatial differential operator in time-dependent PDEs (e.g. the method of lines), e.g. for the purposes of prediction and/or control, or optimization with respect to a set of parameters. In a parameter-dependent case we have $A=A(\mu)$, $x=x(t;\mu)$, $x_0=x_0(\mu)$, and $\f(\cdot;\mu)$ is also parameter dependent, where the parameter $\mu$, which may carry e.g. information on material properties, is from a parameter domain $\mathcal{P}\subset \R^d$, $d\geq 1$.\footnote{To keep the notation simple, we suppress the explicit parameter dependence until the numerical experiments in \S \ref{S=Examples}.} The function $\f(\cdot;\cdot)$ is in general assumed to be nonlinear. A large dimension $m$ (say, $m>10^5$) makes the task computationally intractable {for multiple-query problems}, and one is forced to devise and use a reduced order system that emulates (\ref{eq:LTI}). In projection-based model order reduction, one constructs a suitable low-dimensional subspace $\mathcal{V}_k$ as the range of an $m\times k$ orthonormal matrix $V_k$ ($V_k^T V_k=\Id_k$) and seeks an approximation of the form $x(t) \approx \overline{x} + V_k \widehat{x}(t)$, $\widehat{x}\in\R^k$. The solution $x(t)$ is stored at a set of discrete times (the stored solutions are also known as snapshots) and $\overline{x}$ is the average over the snapshots. The matrix $V_k$ can be, e.g., the POD basis of the $k$ leading left singular vectors of the centered snapshots $x(t_i)-\overline{x}$, computed at the discrete times $t_i$ from high-resolution numerical simulations in the off-line phase, possibly over a parameter grid. It is assumed that $k\ll m$. By enforcing the orthogonality of the residual to the space $\mathcal{V}_k$, one obtains the Galerkin projection of the original problem \begin{equation}\label{eq:G1} \dot{\widehat{x}}(t) = {V_k^T A V_k} \widehat{x}(t) + V_k^T A \overline{x}+ V_k^T \f(\overline{x}+V_k \widehat{x}(t)),\;\;\widehat{x}(0)= V_k^T (x(0)-\overline{x}), \end{equation} where $A_k=V_k^T A V_k$ is $k\times k$ and $V_k^T A \overline{x}\in\R^k$, but the projected nonlinear forcing term $V_k^T \f(\overline{x}+V_k \widehat{x}(t))$ still involves the dimension $m$: in computing $\widetilde{x}(t)=\overline{x}+V_k \widehat{x}(t)$ and $f=\f(\widetilde{x}(t))$ (at a sequence of discrete values $t=t_i$), as well as in computing $V_k^T f$. For large $m$, this carries substantial computational effort and heavy memory traffic. The Discrete Empirical Interpolation Method (DEIM) \cite{DEIM} provides a way to alleviate these burdens, and to efficiently approximate $\f(\cdot)$ from a learned subspace. DEIM originates in the Empirical Interpolation Method (EIM) \cite{grepl-maday-2007}, \cite{EIM}, \cite{Maday2009383} and it uses the reduced basis provided by the POD. For a related discrete version of EIM see \cite{Haasdonk-Ohlberger-Rozza-2008}.
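{For orientation, the off-line POD construction and the reduced right-hand side of (\ref{eq:G1}) can be sketched in a few lines of Python/NumPy (illustrative only; the snapshot matrix \texttt{X} with columns $x(t_i)$ and the nonlinearity \texttt{f} are assumed given, and all names are ours).}
\begin{verbatim}
import numpy as np

def pod_basis(X, k):
    """POD basis: k leading left singular vectors of the
    centered snapshots."""
    xbar = X.mean(axis=1)
    U, _, _ = np.linalg.svd(X - xbar[:, None], full_matrices=False)
    return U[:, :k], xbar

def galerkin_rhs(A, Vk, xbar, f):
    """Right-hand side of the reduced Galerkin system; evaluating
    the nonlinear term still costs O(m) work per time step."""
    Ak = Vk.T @ A @ Vk      # k x k, precomputed off-line
    b = Vk.T @ (A @ xbar)   # k-vector, precomputed off-line
    return lambda t, xhat: Ak @ xhat + b + Vk.T @ f(xbar + Vk @ xhat)
\end{verbatim}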
{Here, for the reader's convenience, we first briefly review the main steps of the DEIM approximation and its error estimate, and then we place it in the more general context of GEIM and PBDW.} \subsection{DEIM}\label{s_deim} Suppose we have empirically determined an $r$-dimensional subspace $\mathcal{U}_r$ as the range of an orthonormal $U_r$ such that $U_r U_r^T \f(\overline{x}+V_k \widehat{x}(t)) \approx \f(\overline{x}+V_k \widehat{x}(t))$. This can be done e.g. by the POD, which will determine a suitable dimension $r$ from the decay of the singular values of the matrix of snapshots. The tacit assumption is that $\f(\cdot)$ is from a set of functions $\mathcal{F}$ with small Kolmogorov $r$-width \cite{Kolmogorov-1936}, \cite[Chapter 6]{kowalski-sikorski-stenger-1995}, \cite{Maday2009383}. Inserting the orthogonal projection $U_r U_r^T$ into (\ref{eq:G1}) gives \begin{equation}\label{eq:G2} \dot{\widehat{x}}(t) = {V_k^T A V_k} \widehat{x}(t) + V_k^T A \overline{x}+ V_k^T U_r U_r^T\,\f\left(\overline{x}+V_k \widehat{x}(t)\right) + V_k^T (\Id_m - U_r U_r^T)\,\f\left(\overline{x}+V_k \widehat{x}(t)\right), \end{equation} where $\Id_m\in\R^{m\times m}$ denotes the identity, and the last term (the POD error, as seen from $\mathcal{V}_k$) can be neglected. However, this still does not solve the problem of computational complexity because it requires all $m$ components of $\f\left( \overline{x}+V_k \widehat{x}(t)\right)$, and the matrix-vector product $U_r^T\f\left( \overline{x}+V_k \widehat{x}(t)\right)$ takes $\mathcal{O}(mr)$ operations for every time point $t=t_i$. The DEIM \cite{DEIM} trick is to select a submatrix of the $m\times m$ identity $\Id_m$, $$\WSO\equiv\begin{pmatrix} \Id_m(:,i_1)& \cdots &\Id_m(:,i_r)\end{pmatrix}\in\R^{m\times r},$$ and to replace the orthogonal projector $U_r U_r^T$ by the oblique projector $$ \D \equiv U_r (\WSO^T U_r)^{-1}\WSO^T. $$ {Note that $\D$ has an interpolating property at the $r$ selected coordinates, $\WSO^T\D f = \WSO^T f$.} The alternative to (\ref{eq:G2}) is thus \begin{equation}\label{eq:G3} \dot{\widehat{x}}(t) \approx {V_k^T A V_k} \widehat{x}(t) + V_k^T A \overline{x}+ V_k^T \D\,\f\left( \overline{x}+V_k \widehat{x}(t)\right) \end{equation} where in the matrix product $V_k^T\D$, the factor $V_k^T U_r (\WSO^T U_r)^{-1}$ can be pre-computed in the off-line phase. Obviously, only the component of the error $(\Id_m - \D)\f(\overline{x}+V_k \widehat{x}(t))$ that lies in $\mathcal{V}_k$ is important. The on-line computation of $\WSO^T \f(\overline{x}+V_k \widehat{x}(t))$ at any particular $t$ involves only the $r$ values $\f_{i_j}(\overline{x}+V_k \widehat{x}(t))$, $j=1,\ldots, r$.
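In code, the on-line DEIM evaluation is a small dense solve; a Python sketch with our naming, where \texttt{idx} holds the selected indices $i_1,\ldots,i_r$ and \texttt{f\_idx} the corresponding $r$ sampled values of $\f$:
\begin{verbatim}
import numpy as np

def deim_apply(Ur, idx, f_idx):
    """Oblique DEIM projection D f = U_r (S^T U_r)^{-1} (S^T f),
    computed from the r sampled entries f_idx = f[idx] only."""
    return Ur @ np.linalg.solve(Ur[idx, :], f_idx)

# off-line: M = Vk.T @ Ur @ np.linalg.inv(Ur[idx, :]), once;
# on-line:  Vk.T @ (D @ f) == M @ f_idx, independent of m.
\end{verbatim}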
If $\f$ is defined at a vector $x=(x_i)_{i=1}^m$ component-wise as\footnote{For a general nonlinear $\f(x)=( \varphi_1(x_{\mathcal{I}_1}), \varphi_2(x_{\mathcal{I}_2}), \ldots, \varphi_m(x_{\mathcal{I}_m}))^T$, where $x_{\mathcal{I}_j}$ ($\mathcal{I}_j\subseteq \{ 1, \ldots, m \}$) denotes a sub-array of $x$ needed to evaluate $\varphi_j(x)$, the situation is more complicated, see \cite[\S 3.5]{DEIM}.} $\f(x) = ( \phi_1(x_1), \phi_2(x_2), \ldots, \phi_m(x_m))^T$ then $$ \WSO^T \f(\overline{x}+V_k \widehat{x}(t)) = \begin{pmatrix} \phi_{i_1}(\overline{x}_{i_1}+V_k(i_1,:) \widehat{x}(t)) \cr \phi_{i_2}(\overline{x}_{i_2}+V_k(i_2,:) \widehat{x}(t)) \cr \vdots \cr \phi_{i_r}(\overline{x}_{i_r}+V_k(i_r,:) \widehat{x}(t))\end{pmatrix} \equiv \f_{\WSO}(\WSO^T \overline{x}+(\WSO^T V_k)\widehat{x}(t)) , \;\; t= t_1, t_2, \ldots $$ and the computational complexity of $$ V_k^T\D \f(\overline{x}+V_k \widehat{x}(t)) = (V_k^T U_r)(\WSO^T U_r)^{-1} \f_{\WSO}(\WSO^T \overline{x}+(\WSO^T V_k)\widehat{x}(t)) $$ becomes independent of the dimension $m$, once the time independent matrices are precomputed in the off-line phase.\footnote{In the sequel, for the sake of simplicity, we do not include centering of the snapshots.} This tremendously reduces both the flop count and the memory traffic in the (on-line) simulation. The error of the DEIM oblique projection can be bounded in the Euclidean norm by that of the orthogonal projector, \begin{eqnarray}\label{e_projcond} \| f - \D f\|_2\leq \kappa\, \|(\Id_m - U_r U_r^T)f\|_2, \quad \text{where}\quad \kappa\equiv \|(\WSO^T U_r)^{-1} \|_2. \end{eqnarray} The condition number $\kappa$ determines the quality of the approximation, and satisfies $\kappa \leq \mathcal{O}\left(m^{(r-1)/2}\right) / \|U_r(:,1)\|_{\infty}$ \cite{DEIM}. In practical situations, however, this bound is pessimistic and $\kappa$ is much lower. (Using the concept of maximal volume \cite{Knuth-volume}, \cite{Goreinov19971}, it can be shown that there exists a strategy such that $\kappa\leq \sqrt{1+r(m-r)}$.) \subsubsection{{Variations and generalizations}} { DEIM has been successfully deployed in many applications, and tuned for better performance, giving rise to the localized DEIM \cite{peherstorfer2014localized}, unassembled DEIM (UDEIM) \cite{Tiso-Rixen-2013}, \cite{TISO:2013:UDEIM}, matrix DEIM \cite{Wirtz-Sorensen-Haasdonk=2014}, \cite{Negri:2015:MDEIM}, nonnegative DEIM (NNDEIM) \cite{NNDEIM-2016}, and Q-DEIM \cite{drmac-gugercin-DEIM-2016}. The latter is an orthogonal variant of DEIM, which can be efficiently implemented with high-performance libraries such as LAPACK \cite{LAPACK} and ScaLAPACK \cite{ScaLAPACK}. Furthermore, Q-DEIM admits a better condition number bound, $\kappa \leq \sqrt{m-r+1}\mathcal{O}(2^r)$; it allows randomized sampling; and it can work with only a subset of the rows of $U_r$ for computing selection matrices $\WSO$ while keeping $\kappa$ moderate. } \subsection{GEIM and PBDW}\label{SS=GEIM-PBDW} { In many applications, the functions' values may not be available through point evaluation because, e.g., they are from a class that does not contain continuous functions, there is no analytical expression, or they may be noisy sensor data (measurements) obtained by weighted averaging.} In those cases, point-wise interpolation may not be possible, nor even desirable -- for a most illuminating discussion see \cite{GEIM}. 
{This motivated the development of a generalization of EIM, GEIM (Generalized Empirical Interpolation Method), which replaces point interpolation by more general evaluation functionals selected from a dictionary; see \cite[Chapters 4, 5]{Mula-Thesis} and \cite{maday:hal-00812913}, \cite{GEIM}, \cite{Maday-Mula-Turinici-GEIM-2016}.} {These ideas have been further extended in the Parametrized-Background Data-Weak approach to data assimilation (PBDW) \cite{NME:NME4747}. PBDW is an {elaborate data assimilation scheme} whose weak formulation naturally fits the variational framework for (parametrized) PDEs, and facilitates error estimates (both \emph{a priori} and \emph{a posteriori}) with the capability to identify optimal observation functionals. Additional insights and analysis of PBDW with respect to noise in the data, and an updating strategy for many-query scenarios, are provided in \cite{PBDW-more}; the optimality of the approximation is established in \cite{Binev-data-assim-2017}. {Furthermore, \cite{Binev-data-assim-2017} contains a multi-space extension.} } { In the context of empirical interpolation, PBDW allows more (generalized) approximation positions than the cardinality of the POD basis, thus calling for least-squares approximation. In particular, it contains GEIM as a special (one-space) case. } \subsection{Proper inner-product space structure}\label{SS=UPISPS} {In applications in engineering and applied sciences, the solution of (\ref{eq:LTI}) represents an approximation of a function from an appropriate function space, which is subject to governing equations that describe a physical reality. The quality of an approximation is then naturally measured in an appropriate (interpretable) metric in that space. For instance, in many applications the natural ambient space is (weighted) $L^2(\Omega)$, $ L^2(\Omega) = \{ f:\Omega\longrightarrow \R \; :\; \int_\Omega|f(x)|^2 \rho(x) dx < \infty\}, $ with the Hilbert space structure generated by the inner product $ (f,g)_{L^2(\Omega)} = \int_\Omega f(x) g(x) \rho(x) dx, $ and with the corresponding induced norm $\|f\|_{L^2(\Omega)}=\sqrt{(f,f)_{L^2(\Omega)}}$. Both the weight function $\rho(\cdot)$ and a quadrature formula in the course of constructing a discrete (finite $m$-dimensional) framework yield a weighted inner product in $\R^m$, $(u,v)_W = v^T W u$, where $W$ is the corresponding symmetric positive definite matrix. Then the natural framework for devising e.g. a POD approximation \cite[\S 1.2]{volkwein-2011-mor} is given by the Hilbert space structure of $(\cdot,\cdot)_W$. Further, for the equations of e.g. compressible fluid flow, a Galerkin projection in an $(\cdot,\cdot)_{L^2(\Omega)}$ inner product may not preserve the underlying physics, such as energy conservation or stability, see e.g. \cite{Rowley2004115}, \cite[\S 3.4.3]{holmes2014turbulence}. Different inner products (with corresponding norms) may yield substantially different results, see e.g. \cite{POD-Sound-Freund}, \cite{POD-Symm-TW}, \cite{Kalashnikova-Arun-2014}. In model order reduction, for instance, a Galerkin projection may be naturally defined in a Lyapunov inner product, generated by the positive definite solution $W$ of a Lyapunov matrix equation, see e.g. \cite[\S 6.1]{ROM-SANDIA-2014}, \cite{Serre20125176}, \cite{Rowley-MRF-2005}, \cite[\S 5.4.3]{holmes2014turbulence}.
For further examples and in-depth discussion see \cite{Barone20091932}, \cite{Kalashnikova2014569}, \cite{Calo2014204}, \cite{ZIMMERMANN2010165, zimmermann-sisc-2016}, \cite{Satish-et-all-Orr-Sommerfeld}, \cite{Noack-thermo-unsteady}, \cite{NME:NME4820}. {It should be clear that the use of a weighted inner product in the POD-DEIM framework does not guarantee the stability of the reduced system, unless the DEIM is additionally adapted to a particular structure. An excellent example of an energy stable DEIM approximation is the NNDEIM \cite{NNDEIM-2016}.} {The use of a proper inner product is implicitly assumed in the abstract framework of PBDW, including the special case of GEIM. The numerical realization of the proper inner product then yields the discrete $(\cdot,\cdot)_W$ inner product.} From the numerical point of view, this is not a mere change to another inner product, as the condition number of $W$ becomes an important factor both in the theoretical projection error bound and in the computation in finite precision arithmetic. Hence, it seems natural and important to revise the numerical implementation of the DEIM oblique projection, to place it in the wider context of PBDW, and to ensure its robustness independent of the possibly high condition number of the weight matrix $W$.} \subsection{Scaling of variables}\label{SSS-scaling-of-variables} We discuss difficulties due to scaling issues in the practical computation of a POD basis and the construction of a DEIM projection, and argue that, when appropriate, the DEIM projection must be weighted in a manner consistent with the POD basis. The scaling issues discussed here arise from two sources. First, the unknowns $x_i(t)$ may represent different physical quantities, such as velocity and pressure, where the numerical values of one of them, say pressure, can dominate all others by several orders of magnitude. Second, a single variable can vary over a wide range. In both scenarios, the components of $\f(x(t))$ may vary over several orders of magnitude, so that the matrix of nonlinear snapshots $$F\equiv\begin{pmatrix} \f(x(t_1)) & \cdots & \f(x(t_n))\end{pmatrix}\in\R^{m\times n}$$ has graded rows, with widely varying norms. Let us try to understand the computational ramifications. Suppose the rows of $F=\left(\begin{smallmatrix} B \cr s\end{smallmatrix}\right)$ are permuted so that $B$ contains the rows with large norm and $s$ the rows with small norm, so that in the Frobenius norm $\|B\|_F \gg \|s\|_F$. Typically $m\gg n$; let the thin SVD be $F = U \Sigma K^T$, where $U$ is $m\times n$ orthonormal, $\Sigma$ is diagonal, and $K$ is an orthogonal matrix. An economical way to compute $U_r$, often used in practice, is to first compute the eigenvalue decomposition $G\equiv F^T F=K \Sigma^2 K^T$. Then one chooses a suitable $r$, computes $U_r = F K(:,1:r)\Sigma(1:r,1:r)^{-1}$, and applies a Gram-Schmidt correction to improve the numerical orthonormality of $U_r$. Since $F^T F = B^T B + s^T s \approx B^T B$, in this procedure the contribution of $s$ to the computation of $K$ and $U_r$ is marginal and the subdominant variables are almost invisible. Further, the POD basis may inherit the graded structure of $F$. Assume, for the purpose of demonstration, that the dominant $r$ singular values of $F$ are nearly equal and much larger than the remaining, subdominant, singular values. Rearranging the thin SVD $F = U \Sigma K^T$ shows that the row norms of $U\Sigma = F K$ are the same as those of $F$.
Furthermore, the row norms of the matrix of the leading $r$ singular vectors $U_r$ are distributed like the corresponding row norms of $F$. The indices corresponding to dominant variables have dominant rows in $U_r$, which creates difficulties for the representation of subdominant variables. Moreover, the DEIM \cite{DEIM} and Q-DEIM \cite{drmac-gugercin-DEIM-2016} are based on greedy algorithms that try to identify an $r\times r$ submatrix of $U_r$ of maximal volume, thus preferring row indices corresponding to dominant variables and ignoring the others. The resulting small approximation error in the Euclidean norm is misleading, though. Without prior scaling, relevant and informative subdominant variables are unnecessarily suppressed.\footnote{Recall the discussion in \S \ref{SS=UPISPS}.} Finally, it should be pointed out that a strongly graded $F$ poses intrinsic computational difficulties for any algorithm for computing $U_r$. Even the backward error $\delta F$ that corresponds to the numerical computation, and which is small in the sense that $\|\delta F\|_F/\|F\|_F$ is small, may wipe out the information on the subdominant variables. The corresponding entries of the left singular vectors $u_{k}$, $k=1,\ldots, r$, are possibly computed with large relative error, as numerical methods in general compute the singular vectors with an error such that $\|\delta u_k\|_2$ is appropriately bounded by the machine roundoff times a condition number \cite{Wedin1972}, \cite[V.4]{ste-sun-90}. Tiny components of $u_k$ are usually computed with large relative error. \subsection{Contributions and overview of the paper} {Our contributions to the theory and practice of empirical interpolation methods in the framework of PBDW approximations (in particular, EIM and GEIM) are towards numerical linear algebra and matrix theory; the goal is to set up more general algorithmic schemes and principles for the development of numerical methods with sharp error bounds, and for their successful software implementation and application in scientific computing.} In Section~\ref{S:SRRQR}, we present a substantial improvement of the bound on the condition number $\kappa$ in (\ref{e_projcond}). The selection operator $\WSO$ is based on the local maximal volume approach \cite{Knuth-volume,Goreinov19971}, implemented via a strong rank-revealing QR decomposition \cite{GuE96}; the resulting DEIM condition number is $\kappa\leq \sqrt{1+\eta^2\, r(m-r)}$, with a tunable parameter $\eta\geq 1$. In \S \ref{S=Canonical}, we present a canonical form for the DEIM projector, which is based on the well-known structure of oblique projections. It provides a better understanding of the structure of DEIM and its approximation error. In \S \ref{S=WDEIM} we introduce and give a detailed analysis of the weighted DEIM ($W$-DEIM) which naturally applies in the situations discussed in \S \ref{SS=UPISPS}, \S \ref{SSS-scaling-of-variables}. The goal is to establish a universal framework for DEIM projections in weighted inner product spaces, where the inner products are induced by positive definite matrices $W$ of various origins and with various interpretations. {We present several algorithms for computing the $W$-DEIM approximation. The algorithms come in two flavors depending on whether generalized or pointwise interpolation is desired. When generalized interpolation is to be used, we present different algorithms depending on whether $W$ is dense or sparse.
When pointwise interpolation is used, our analysis shows that the condition number of $W$ plays a role in the error analysis. To mitigate this issue, we present an algorithm for which the spectral condition number $\kappa_2(W)=\|W\|_2 \|W^{-1}\|_2$ enters the error bound, up to a factor of $\sqrt{m}$, as $\sqrt{\min_{D=\mathrm{diag}}\kappa_2(DWD)}$.} {$W$-DEIM can be considered as a numerical realization of discretized (one-space) PBDW that includes, as a special case, a discrete version of the generalized empirical interpolation method (GEIM).} In \S \ref{S=Examples}, we corroborate the results with numerical examples. \section{Nearly optimal subset selection}\label{S:SRRQR} We review strong rank-revealing QR methods for matrices with at least as many rows as columns (\S \ref{s_tall}); we then present an extension to matrices with fewer rows than columns and apply it to matrices with orthonormal rows (\S \ref{s_wide}). This yields a new DEIM selection with a superior error bound. \subsection{Tall and skinny matrices}\label{s_tall} For $\mat{A}\in\R^{m \times n}$ with $m\geq n$, and a target rank $r<n$, a QR factorization with column pivoting computes \begin{eqnarray*}\label{eqn:rrqr} \mat{A}\mat{\Pi} \quad =\mat{Q} \begin{pmatrix} \mat{R}_{11} & \mat{R}_{12} \\ 0 & \mat{R}_{22} \end{pmatrix}, \end{eqnarray*} where $\mat{\Pi}\in\R^{n\times n}$ is a permutation; $\mat{Q} \in \R^{m\times m}$ is an orthogonal matrix; $\mat{R}_{11} \in \R^{r\times r}$ and $\mat{R}_{22} \in \R^{(m-r) \times (n-r)}$ are upper triangular; and $\mat{R}_{12} \in \R^{r \times (n-r)}$. Let $\sigma_1(A)\geq \cdots \geq \sigma_n(A)\geq 0$ be the singular values of $A$. Singular value interlacing \cite[Corollary 8.6.2]{GovL13} implies for the non-increasingly ordered singular values $\sigma_j(\mat{R}_{11})$ and $\sigma_j(\mat{R}_{22})$ of the diagonal blocks $\mat{R}_{11}$ and $\mat{R}_{22}$, respectively, \begin{eqnarray*} \sigma_j(\mat{R}_{11}) &\leq & \sigma_j(\mat{A}) ,\qquad \;\;\;1\leq j \leq r\\ \sigma_{r+j}(\mat{A}) &\leq & \sigma_{j}(\mat{R}_{22}), \qquad 1\leq j\leq n-r. \end{eqnarray*} So-called \textit{rank-revealing QR (RRQR) factorizations} \cite{ChI91a,GuE96} try to make the singular values of $\mat{R}_{11}$ as large as possible, and those of $\mat{R}_{22}$ as small as possible. In particular, the \textit{strong RRQR (sRRQR) factorization} \cite[Algorithm 4]{GuE96} with tuning parameter $\eta \geq 1$ computes a triangular matrix $R$ whose diagonal blocks have singular values within, essentially, a polynomial factor (in $n$ and $r$) of the singular values of $\mat{A}$, \begin{eqnarray*}\label{eqn:rrqr_A} \frac{\sigma_j(\mat{A})}{\sqrt{1 + \eta^2r(n-r)}} &\leq &\sigma_j(\mat{R}_{11}) ,\qquad 1\leq j \leq r\\ \sigma_{j} (\mat{R}_{22}) & \leq & \sqrt{1 + \eta^2r(n-r)} \,\sigma_{r+j}(\mat{A}), \qquad 1\leq j\leq n-r, \end{eqnarray*} and whose off-diagonal block is bounded by \begin{eqnarray*} \label{eqn:rrqr2} \left| \left(\mat{R}_{11}^{-1} \mat{R}_{12} \right)_{ij}\right| \leq \eta, \qquad 1\leq i\leq r, \, 1\leq j\leq n-r. \end{eqnarray*} For $\eta>1$ the sRRQR factorization can be computed in $\mathcal{O}\left((m+n\log_{\eta}{n})n^2\right)$ arithmetic operations \cite[Section 4.4]{GuE96}. Recommended values for $\eta$ are small fractional powers of $n$ \cite[Section 4.4]{GuE96}, such as $\eta=10\sqrt{n}$ \cite[Section 6]{GuE96}, which result in an $\mathcal{O}(mn^2)$ time complexity.
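{In the absence of an sRRQR implementation, the Businger--Golub column-pivoted QR discussed next often yields comparable selections in practice; a minimal Python sketch of the resulting index selection (our naming; applied, as in \S \ref{s_wide} below, to the transposed orthonormal basis):}
\begin{verbatim}
import numpy as np
from scipy.linalg import qr

def select_indices(Ur):
    """Select r row indices of an orthonormal Ur (m x r) via
    column-pivoted QR of Ur^T, a practical stand-in for sRRQR."""
    r = Ur.shape[1]
    _, _, piv = qr(Ur.T, pivoting=True, mode='economic')
    idx = piv[:r]
    # kappa = ||(S^T Ur)^{-1}||_2 = 1/sigma_min(Ur[idx, :])
    kappa = 1.0 / np.linalg.svd(Ur[idx, :], compute_uv=False)[-1]
    return idx, kappa
\end{verbatim}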
The traditional Businger-Golub QR with column pivoting \cite{bus-gol-65}, \cite[Algorithm 5.4.1]{GovL13} often achieves the above bounds in practice, but fails spectacularly on contrived examples such as the Kahan matrix \cite{kahan-66}, \cite[Section 6]{GuE96}. Sometimes, the failure is caused by the software implementation of an RRQR factorization; for details see \cite{drmac-bujanovic-2008}. \subsection{Short and fat matrices}\label{s_wide} The sRRQR factorization can be adapted to matrices with fewer rows than columns, $m<n$, and of full row rank, to select a well-conditioned $m\times m$ submatrix. For $\mat{A}\in\R^{m \times n}$ with $m\leq n$, and target rank $r=m$, a QR factorization with column pivoting computes \begin{eqnarray*} \mat{A}\mat{\Pi} =\mat{Q} \begin{pmatrix}\mat{R}_{11} & \mat{R}_{12} \end{pmatrix}, \end{eqnarray*} where $\mat{\Pi}\in\R^{n\times n}$ is a permutation matrix, $\mat{Q} \in \R^{m\times m}$ is an orthogonal matrix, $\mat{R}_{11} \in \R^{m\times m}$ is upper triangular, and $\mat{R}_{12} \in \R^{m \times (n-m)}$. An sRRQR factorization, in particular, is computed with a simplified version of \cite[Algorithm 4]{GuE96}. A column of $R_{11}$ is swapped with one in $R_{12}$ until $\left| \left(\mat{R}_{11}^{-1}\mat{R}_{12}\right)_{i,j}\right|\leq \eta$, $1\leq i \leq m, 1 \leq j \leq n-m$. From \cite[Lemma 3.1]{broadbent2010subset} it follows that \begin{eqnarray} \label{e_sigmaA} \frac{\sigma_j(\mat{A})}{\sqrt{1 + \eta^2m(n-m)}} &\leq &\sigma_j(\mat{R}_{11}) ,\qquad 1\leq j \leq m. \end{eqnarray} Given a matrix $V$ with $r$ orthonormal columns, this algorithm can be used to select a well-conditioned $r\times r$ submatrix. \begin{lemma}\label{l_det} Let $\mat{V} \in \R^{m\times r}$ with $\mat{V}^T\mat{V}=\mat{I}_r$. Applying \cite[Algorithm 4]{GuE96} with target rank $r$ and tuning parameter $\eta\geq 1$ to $V^T$ gives a submatrix $\WSO\in\R^{m\times r}$ of $\Id_m$ with $$\frac{1}{\sqrt{1 + \eta^2r(m-r)} } \leq \sigma_j(\WSO^T\mat{V}) \leq 1, \qquad 1\leq j\leq r ,$$ and $$ 1 \leq \|(\WSO^T\mat{V})^{-1}\|_2 \leq \sqrt{ 1+\eta^2 r(m-r)}.$$ \end{lemma} \begin{proof} Applying \cite[Algorithm 4]{GuE96} to $V^T$ gives $$\mat{V}^T \begin{pmatrix}\mat{\Pi}_1 & \mat{\Pi}_2\end{pmatrix} = \mat{Q}\begin{pmatrix}\mat{R}_{11} & \mat{R}_{12} \end{pmatrix},$$ where $Q\in\R^{r\times r}$ is an orthogonal matrix; $R_{11}\in\R^{r\times r}$ is upper triangular; and $\begin{pmatrix} \Pi_1& \Pi_2\end{pmatrix}\in\R^{m\times m}$ is a permutation matrix with $\Pi_1\in\R^{m\times r}$. Since $\mat{V}$ has $r$ orthonormal columns, $\sigma_j(V)=1$, $1\leq j\leq r$. From (\ref{e_sigmaA}) it follows that $$\frac{1}{\sqrt{1 + \eta^2r(m-r)} } \leq \sigma_j(R_{11}) \leq 1, \qquad 1\leq j\leq r.$$ Set $\WSO=\Pi_1$, so that the first block column equals $\mat{V}^T\WSO = \mat{V}^T\mat{\Pi}_1 = \mat{Q}\mat{R}_{11}$. Since $\mat{Q}$ is an orthogonal matrix, $\mat{V}^T\WSO$ has the same singular values as $\mat{R}_{11}$. \end{proof} Lemma~\ref{l_det}, applied with $V=U_r$, implies a tremendous improvement of the error bound for the oblique projector $\D$ in (\ref{e_projcond}). If $\WSO$ is computed from an sRRQR factorization of the transposed POD basis $U_r^T$, the condition number is bounded by \begin{eqnarray}\label{e_projcond1} \kappa\leq \sqrt{ 1+\eta^2 r(m-r)}. \end{eqnarray} \section{Canonical representation of $\D$}\label{S=Canonical} The DEIM operator is an oblique projection and, as such, it possesses a certain canonical structure that is revealed in an appropriately chosen basis.
In this section we derive a representation of the DEIM projection operator in a particular basis, in order to gain a better understanding of the effectiveness of DEIM. {As already mentioned in \S \ref{SS=GEIM-PBDW}, the PBDW framework \cite{NME:NME4747} allows selecting $s\geq r$ approximation points, and we will proceed with the general case of a rectangular $\WSO^T U_r$.} {Oversampling has been successfully used in the related context of missing point estimation, see \cite{Astrid-etal-MPE-2008}, \cite{Peherstorfer-Willcox-adeim}, \cite{Geom-subspace-Zimm-Per-Will-2015}, \cite{zimmerman-willcox-sisc-2016}.} We adopt the following notation. Let $\WSO \in \mathbb{R}^{m\times s}$ be a selection of $s$ columns of the identity $\Id_m$ and let $U_r \in \mathbb{R}^{m\times r}$. Define the orthogonal projectors $\Prj_{\WSO}=\WSO \WSO^T$ and $\Prj_{U_r}=U_r U_r^T$ onto $\mathcal{R}(\WSO)$ and $\mathcal{R}(U_r)$, respectively. \subsection{Generalization of oblique DEIM}\label{SS=Gen-DEIM-3.1} We first derive a representation of the DEIM projection in terms of $\Prj_{\WSO}$ and $\Prj_{U_r}$. {If $\WSO$ and $U_r$ have full column rank, then the DEIM projector $\D$ can be written as~\cite[Theorem 2.2.3]{Bjo15}} \begin{equation}\label{eq:D1} \D = U_r (\WSO^T U_r)^{-1}\WSO^T = (\Prj_{\WSO} \Prj_{U_r})^{\dagger}, \end{equation} where the superscript $\dagger$ denotes the Moore-Penrose inverse. Note that the expression $(\Prj_{\WSO} \Prj_{U_r})^{\dagger}$ does not require the existence of the inverse $(\WSO^T U_r)^{-1}$; in fact, it does not even require $\WSO$ and $U_r$ to have the same number of columns, or the same rank. We now consider the case that $\WSO^TU_r \in \mathbb{R}^{s \times r}$ is a rectangular matrix with $s\neq r$. In this case, one can check (e.g., using the SVD of $\WSO^T U_r$) that \begin{equation} (\Prj_{\WSO} \Prj_{U_r})^{\dagger} = U_r (\WSO^T U_r)^{\dagger}\WSO^T. \end{equation} This observation leads to a general definition of the DEIM projection as $\D = (\Prj_{\WSO} \Prj_{U_r})^{\dagger}$, which is valid when $\WSO$ and $U_r$ have different numbers of columns, and different ranks. We now investigate whether this generalization retains the properties of interpolation ($\WSO^T \D f=\WSO^T f$) and projection ($\D \Prj_{U_r} = \Prj_{U_r}$). With the observation $\rank(\D) = \rank(\WSO^T U_r) = \min\{ s, r\}$, suppose that $s\neq r$ and split the analysis into two cases. \smallskip \begin{enumerate} \item Case $\rank (\D) = s < r$ \begin{enumerate} \item The interpolation property still holds, i.e., $$ \WSO^T (\D f ) = \WSO^T U_r (\WSO^T U_r)^{\dagger}\WSO^T f = \WSO^T f . $$ This is because $\WSO^T U_r$ has full row rank, so $(\WSO^T U_r)^{\dagger}$ is a right multiplicative inverse. \item On the other hand, the projection property is lost, i.e., $\D \Prj_{U_r} \neq \Prj_{U_r}$. However, $\D$ is still a projector, $\D^2=\D$. To find its range, let $W_s$ contain the $s$ leading right singular vectors of $\WSO^T U_r$. Then $\D \Prj_{U_r} = \Prj_{V_s}$, where $V_s= U_r W_s$ spans an $s$--dimensional subspace of $\mathcal{R}(U_r)$. Therefore, $\D$ is a projector onto $\mathcal{R}(V_s) \subset \mathcal{R}(U_r)$. \end{enumerate} \item Case $\rank(\D) = r < s $ \begin{enumerate} \item The interpolation property does not hold, i.e., $\WSO^T (\D f ) \neq \WSO^T f$. This is because $(\WSO^T U_r)^{\dagger}$ is no longer a right multiplicative inverse.
We now investigate whether this generalization retains the properties of interpolation ($\WSO^T \D f=\WSO^T f$) and projection ($\D \Prj_{U_r} = \Prj_{U_r}$). With the observation that $\rank(\D) = \rank(\WSO^T U_r) = \min\{ s, r\}$ when $\WSO^T U_r$ has full rank, suppose that $s\neq r$ and split the analysis into two cases. \smallskip \begin{enumerate} \item Case $\rank (\D) = s < r$ \begin{enumerate} \item The interpolation property still holds, i.e., $$ \WSO^T (\D f ) = \WSO^T U_r (\WSO^T U_r)^{\dagger}\WSO^T f = \WSO^T f . $$ This is because $\WSO^T U_r$ has full row rank, so that $(\WSO^T U_r)^{\dagger}$ is a right multiplicative inverse. \item On the other hand, the projection property is lost, i.e., $\D \Prj_{U_r} \neq \Prj_{U_r}$. However, $\D$ is still a projector, $\D^2=\D$. To find its range, let $W_s$ be the matrix of the leading $s$ right singular vectors of $\WSO^T U_r$. Then $\D \Prj_{U_r} = \Prj_{V_s}$, where $V_s= U_r W_s$ spans an $s$--dimensional subspace of $\mathcal{R}(U_r)$. Therefore, $\D$ is a projector onto $\mathcal{R}(V_s) \subset \mathcal{R}(U_r)$. \end{enumerate} \item Case $\rank(\D) = r < s $ \begin{enumerate} \item The interpolation property does not hold, i.e., $\WSO^T (\D f ) \neq \WSO^T f$, because $(\WSO^T U_r)^{\dagger}$ is no longer a right multiplicative inverse. However, $\WSO^T (\D f )$ is the least squares projection of $\WSO^T f$ onto the range of $\WSO^T U_r$. To see this, note that $$ \WSO^T (\D f ) = \WSO^T U_r (\WSO^T U_r)^{\dagger}\WSO^T f = \Prj_{\mathcal{X}} (\WSO^T f) ,\;\; \mathcal{X}=\mathcal{R}(\WSO^T U_r) . $$ \item In this case $\D \Prj_{U_r} = \Prj_{U_r}$, since $(\WSO^T U_r)^{\dagger}$ is a left multiplicative inverse of $\WSO^T U_r$. \end{enumerate} \end{enumerate} As can be seen above, when the DEIM operator is generalized to the setting $s \neq r$, only the projection property or the interpolation property is retained, but not both simultaneously. For related developments, see~\cite{NME:NME4747}, \cite{zimmerman-willcox-sisc-2016}, \cite{Casenave-EIM-variants-2016}. \subsection{Canonical structure of $\D$} We present the following theorem, which sheds light on the canonical structure of the DEIM operator $\D$. \begin{theorem}\label{TM-canonical-form} {Let $U_r\in\R^{m\times r}$ and $\WSO\in\R^{m\times s}$ have orthonormal columns, let $\D = U_r (\WSO^T U_r)^{\dagger}\WSO^T$, and assume that $1\leq r,s \leq m$. Let $\ell\equiv \mathrm{dim}(\mathcal{R}(\WSO) \bigcap \mathcal{R}(U_r))$, set $p\equiv \mathrm{rank}(\D) - \ell$, and let the singular values $\sigma_i=\cos\psi_i$ of $\WSO^T U_r$ be ordered as \begin{equation} 1=\sigma_1=\ldots = \sigma_\ell > \sigma_{\ell+1}\geq \ldots \geq \sigma_{\ell+p}>\sigma_{\ell+p+1} = \ldots = \sigma_{\min(r,s)}=0 . \end{equation} (Here $0<\psi_{\ell+1}\leq\ldots \leq \psi_{\ell+p} <\pi/2$ are the acute principal angles between the ranges of $\WSO$ and $U_r$.)} \emph{(i)} There exists an orthogonal $m\times m$ matrix $Z$ such that the matrix $\D$ can be represented as \begin{equation}\label{eq:D:canonical} \D = (\Prj_{\WSO} \Prj_{U_r})^{\dagger} = Z \begin{pmatrix} \Id_\ell & & \cr & {\displaystyle \bigoplus_{i=1}^p T_i} & \cr & & \0 \end{pmatrix} Z^T,\;\; T_i = \begin{pmatrix} 1 & 0 \cr \tan\psi_{\ell+i} & 0 \end{pmatrix} . \end{equation} Here the $\0$ block is of size $m-\ell -2p$. \emph{(ii)} The DEIM projector $\D$ satisfies {$\|\D\|_2 = 1/ \cos\psi_{\ell+p}$}. {If, in addition, $\D\neq \0$ and $\D\neq \Id_m$, then $\|\D\|_2 = \|\Id_m-\D\|_2 = 1/ \cos\psi_{\ell+p} $.} \end{theorem} \begin{proof} The above representation follows immediately from the canonical representation of a pair of orthogonal projectors \cite{wed-82}. In a particularly constructed orthonormal basis given by the columns of $Z$, the two projectors have the following matrix representations: \begin{eqnarray} \Prj_{\WSO} &=& Z \left(\begin{smallmatrix} \Id_\ell & & \cr & {\displaystyle \bigoplus_{i=1}^p J_i} & \cr & & D_{s} \end{smallmatrix}\right) Z^T,\;\; \mbox{where}\;\; J_i = \begin{pmatrix} 1 \cr 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \end{pmatrix},\;\;\mbox{and} \label{eq:PS}\\ \Prj_{U_r} &=& Z \left(\begin{smallmatrix} \Id_\ell & & \cr & {\displaystyle \bigoplus_{i=1}^p \Psi_i} & \cr & & D_u \end{smallmatrix}\right) Z^T,\;\; \Psi_i = \begin{pmatrix} \cos\psi_{\ell +i} \cr \sin\psi_{\ell+i}\end{pmatrix} \begin{pmatrix} \cos\psi_{\ell+i} & \sin\psi_{\ell+i} \end{pmatrix} , \label{eq:PU} \end{eqnarray} with the $\psi_{\ell+i}$'s as stated in the theorem, and where $D_s$, $D_{u}$ are diagonal matrices with diagonal entries $0$ or $1$, such that $D_s D_u=\0$. Note that each $(D_s)_{ii}=1$ ($(D_u)_{ii}=1$) corresponds to a direction in the range of $\WSO$ ($U_r$) orthogonal to the entire range of $U_r$ ($\WSO$). In the special case when $\WSO^TU_r$ is invertible, $D_s=D_u=\0$.
The expression for $\D$ is obtained by multiplying the representations in (\ref{eq:PS}) and (\ref{eq:PU}), and taking the pseudoinverse. It follows that \[ (\mathcal{P}_\WSO\mathcal{P}_{U_r})^\dagger = Z \left(\begin{smallmatrix} \Id_\ell & & \cr & {\displaystyle \bigoplus_{i=1}^p (J_i\Psi_i)^\dagger} & \cr & & \0 \end{smallmatrix}\right) Z^T. \] A direct evaluation shows that \[ (J_i \Psi_i)^\dagger = \left[ \begin{pmatrix} 1 \cr 0 \end{pmatrix} \cos\psi_{\ell+i} \begin{pmatrix} \cos\psi_{\ell+i} & \sin\psi_{\ell+i} \end{pmatrix}\right]^\dagger = \begin{pmatrix} 1 & 0 \cr \tan\psi_{\ell+i} & 0 \end{pmatrix} = T_i.\] From the canonical representation~\eqref{eq:D:canonical}, each block $T_i$ has the norm $$\|T_i \|_2 = {\sqrt{1+\tan^2\psi_{\ell+i}}}={1/\cos\psi_{\ell+i}},$$ and therefore it also follows that $\| \D \|_2 = 1/\cos\psi_{\ell+p}$. From (\ref{eq:D:canonical}) we can also derive the canonical form of $\Id_m-\D$: \[ \Id_m - \D = Z \left(\begin{smallmatrix} \0 & & \cr & {\displaystyle \bigoplus_{i=1}^p (\Id_2 - T_i)} & \cr & & \Id \end{smallmatrix}\right) Z^T.\] The $\0$ block has dimension $\ell$, whereas the identity block has dimension ${m-\ell -2p}.$ When $\D \neq \0, \Id_m$, it follows from~\cite[Corollary 5.2]{IpsM94} and~\cite{Szyld2006} that $\|\D\|_2 = \|\Id_m-\D\|_2$. \end{proof} The novelty and importance of Theorem \ref{TM-canonical-form} lie in the interpretation in the DEIM setting, allowing for a deeper understanding of the structure of the DEIM projection and its error. For a related usage of the canonical angles between subspaces \cite{bjo-gol-73}, see the construction of the favorable bases in \cite{Binev-data-assim-2017}. \begin{remark} {\em Let $\WSO^TU_r$ be invertible and $\D = U_r (\WSO^TU_r)^{-1} \WSO^T$. If $f \in \mathcal{R}(U_r)$, then both the DEIM error and the orthogonal projection error are zero, as $\D f = \Prj_{U_r} f =f$. In the case $f\neq \Prj_{U_r} f$, write $ f - \D f = (\Id_m - \Prj_{U_r})f + (\Prj_{U_r}-\D)f$; the two summands are orthogonal, so Pythagoras' theorem gives \[ \| f - \D f \|_2^2 = \|(\Id_m - \Prj_{U_r})f\|_2^2 + \|\D f - \Prj_{U_r} f\|_2^2. \] Since $f \notin \mathcal{R}(U_r)$, we can factor out $\|(\Id_m - \Prj_{U_r})f\|_2^2$ to get \begin{equation}\label{eq:kappa} \| f - \D f \|_2 = \kappa' \| f - \Prj_{U_r} f\|_2, \qquad \mbox{where}\;\; \kappa' \equiv \sqrt{1+ \frac{\| \D f - \Prj_{U_r} f\|_2^2}{\|f-\Prj_{U_r} f\|_2^2}} . \end{equation} (This is illustrated graphically in Figure~\ref{f_deim}.) Next, introduce the partition of $f$, represented in the basis $Z$, as follows: $$ Z^T f=\left( \begin{smallmatrix} f_{[0]} \cr f_{[1]}\cr\vdots\cr f_{[p]} \cr f_{[p+1]} \end{smallmatrix}\right),\;\;f_{[0]}\in\mathbb{R}^\ell,\;\;f_{[1]},\ldots, f_{[p]}\in\mathbb{R}^2,\;\;f_{[p+1]}\in\mathbb{R}^{m-(\ell+2p)} . $$ Now, a straightforward computation for each $i=1,\ldots, p$ reveals that \begin{eqnarray*} \| ( \Id_2 - \Psi_i) f_{[i]}\|_2 &=& \cos\psi_{\ell+i} \left\| \left( \begin{smallmatrix} \frac{\sin^2\psi_{\ell+i}}{\cos\psi_{\ell+i}}& -\sin\psi_{\ell+i} \cr -\sin\psi_{\ell+i} & \cos\psi_{\ell+i} \end{smallmatrix}\right) f_{[i]}\right\|_2 \\ \| ( T_i - \Psi_i) f_{[i]}\|_2 &=& \sin\psi_{\ell+i} \left\| \left( \begin{smallmatrix} \sin\psi_{\ell+i} & -\cos\psi_{\ell+i} \cr \frac{\sin^2\psi_{\ell+i}}{\cos\psi_{\ell+i}} & -\sin\psi_{\ell+i}\end{smallmatrix}\right) f_{[i]}\right\|_2 = \tan\psi_{\ell+i} \| ( \Id_2 - \Psi_i) f_{[i]}\|_2.
\end{eqnarray*} Together, this gives \begin{align*} \| \D f - \Prj_{U_r} f\|_2^2 = \| Z^T\D Z Z^T f - Z^T \Prj_{U_r} Z Z^T f \|_2^2 = & \> \sum_{i=1}^p\tan^2 \psi_{\ell+i} \| ( \Id_2 - \Psi_i) f_{[i]}\|_2^2 . \end{align*} Since $\WSO^TU_r$ is invertible, from the proof of Theorem~\ref{TM-canonical-form} we have $D_u = \0$, and therefore \[ \|f-\Prj_{U_r} f\|_2^2 =\| Z^T f - Z^T \Prj_{U_r} Z Z^T f \|_2^2 = \> \sum_{i=1}^p \| ( \Id_2 - \Psi_i) f_{[i]}\|_2^2 + \| f_{[p+1]}\|_2^2. \] Since $f\notin \mathcal{R}(U_r)$, we can divide throughout by $\|f-\Prj_{U_r} f\|_2^2$ to obtain the inequality \[\sum_{i=1}^p \frac{\| ( \Id_2 - \Psi_i) f_{[i]}\|_2^2}{\|f-\Prj_{U_r} f\|_2^2} \leq 1. \] Substituting this inequality and the relation for $\| \D f - \Prj_{U_r} f\|_2^2$ into~\eqref{eq:kappa} gives \begin{eqnarray*} \frac{\|f-\D f\|_2^2}{\|f-\Prj_{U_r} f\|_2^2} = & \> 1 + \sum_{i=1}^p \tan^2 \psi_{\ell+i} \frac{\| ( \Id_2 - \Psi_i) f_{[i]}\|_2^2}{\|f-\Prj_{U_r} f\|_2^2} \\ \leq & \> 1 + \tan^2\psi_{\ell + p} = \frac{1}{\cos^2\psi_{\ell+p}} . \end{eqnarray*} Therefore, $\kappa' \leq \| \D\|_2 = 1/\cos\psi_{\ell+p}$. This result, of course, reproduces the bound~\eqref{e_projcond}. However, the analysis shows that a tighter condition number $\kappa'$ can be obtained by considering how the contributions to the error are weighted in the principal directions identified in Theorem \ref{TM-canonical-form}. } \end{remark} \begin{figure} \begin{center} $\quad$ \input fig2.tex \end{center} \caption{(Cf. \cite[Figure 1]{GEIM}) DEIM interpolatory projection and its comparison with the corresponding orthogonal projection. Even in the general $m$-dimensional case, the nontrivial action of the DEIM projection consists of a $\mathrm{dim}(\mathcal{R}(\WSO) \bigcap \mathcal{R}(U_r))$--dimensional identity and of $\mathrm{rank}(\D)-\mathrm{dim}(\mathcal{R}(\WSO) \bigcap \mathcal{R}(U_r))$ $2$--dimensional oblique (interpolatory) projections, as shown in the figure.} \label{f_deim} \end{figure} \subsection{Connection to the CS decomposition} The structure of $\D$ can also be analyzed using the Cosine--Sine (CS) decomposition~\cite{ste-82}. Assume for simplicity that the rows of $U_r$ are ordered so that $\WSO=\Id_m(:,1:r)$; if this is not the case, we work with $\Pi^T \D \Pi$, where $\Pi$ is a permutation matrix. Assume that $\WSO^TU_r$ is invertible, so that the DEIM operator is $\D = U_r (\WSO^T U_r)^{-1}\WSO^T$. Further, let $\WSO_\perp=\Id_m(:,r+1:m)$. With these assumptions, $U_r$ has the CS decomposition \[ U_r = \begin{pmatrix} \WSO^TU_r \\ \WSO_\perp^T U_r\end{pmatrix} = \begin{pmatrix} \Omega_1 & \\ & \Omega_2 \end{pmatrix} \begin{pmatrix} \mathrm{Cos}\Psi \\ \mathrm{Sin}\Psi\end{pmatrix}\Gamma^T. \] Here $\Omega_1, \Gamma\in \mathbb{R}^{r\times r}$ and $\Omega_2 \in \mathbb{R}^{(m-r)\times (m-r)}$ are orthogonal matrices and \[ \mathrm{Cos}\Psi =\mathrm{diag}(\cos\psi_i)_{i=1}^r \in \mathbb{R}^{r\times r}, \qquad \mathrm{Sin}\Psi =\mathrm{diag}(\sin\psi_i)_{i=1}^r \in \mathbb{R}^{(m-r)\times r}.\] We can therefore represent $\D$ as \begin{displaymath} \D = \begin{pmatrix} \Omega_1\, \mathrm{Cos}\Psi\, \Gamma^T \cr \Omega_2 \,\mathrm{Sin}\Psi\, \Gamma^T \end{pmatrix} \Gamma\, (\mathrm{Cos} \Psi)^{-1}\, \Omega_1^T \begin{pmatrix} \Id _r & \0 \end{pmatrix} = \begin{pmatrix} \Id_r & \0 \cr \mathrm{Tan}\Psi & \0 \end{pmatrix}, \end{displaymath} where $\mathrm{Tan}\Psi = \Omega_2 \mathrm{Sin}\Psi(\mathrm{Cos} \Psi)^{-1} \Omega_1^T = \Omega_2 \mathrm{diag}(\tan\psi_i)_{i=1}^r \Omega_1^T$. Similarly, we have $$ \Id_m - \D = \begin{pmatrix} \0 & \0 \cr - \mathrm{Tan}\Psi & \Id_{m-r} \end{pmatrix}, $$ and we (again) see that $\|\D\|_2=\|\Id_m -\D\|_2=\sqrt{1+\|\mathrm{Tan}\Psi\|_2^2}.$ For further insights on the tangents between subspaces, see e.g., \cite{Angles-KnyazevZhu}.
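A quick numerical illustration of part \emph{(ii)} of Theorem~\ref{TM-canonical-form} (with invertible $\WSO^TU_r$, so that generically $\ell=0$ and $p=r$): the norms $\|\D\|_2$ and $\|\Id_m-\D\|_2$ coincide with the reciprocal cosine of the largest principal angle between $\mathcal{R}(\WSO)$ and $\mathcal{R}(U_r)$. The dimensions are placeholders.
\begin{verbatim}
# ||D||_2 = ||I - D||_2 = 1/cos(psi_max), psi_max the largest principal angle.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(2)
m, r = 80, 6
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
idx = rng.choice(m, size=r, replace=False)
S = np.eye(m)[:, idx]

D = U @ np.linalg.solve(U[idx, :], S.T)          # D = U_r (S^T U_r)^{-1} S^T
psi_max = subspace_angles(S, U).max()            # principal angles, in radians
print(np.linalg.norm(D, 2), 1.0 / np.cos(psi_max))   # equal
print(np.linalg.norm(np.eye(m) - D, 2))              # the same value
\end{verbatim}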
\section{Weighted DEIM}\label{S=WDEIM} {As discussed in \S\ref{SS=UPISPS}, the discrete analogue of a (generalized) interpolatory projection based approximation must be constructed within an appropriate weighted inner product, and the selection of the interpolation indices must ensure sharp error bounds. In particular, care must be taken to control how the condition number of the positive definite weight matrix $W$ influences the projection error, expressed in the $W$-weighted norm $\|u\|_W=\sqrt{u^TWu}$. In this section, we address this issue and propose two new algorithms for $W$-weighted variants of DEIM.} {To set the scene and to introduce notation, in \S \ref{SS=Scene-WPOD} we recall the weighted POD.} In \S\ref{s_wdeim}, we propose a $W$-DEIM oblique projection that relies on a more general form of the selection operator and, in its numerical realization, uses $W$ implicitly through its Cholesky factor. In this case, although the pointwise interpolation is lost, the more general interpolation condition in the sense of GEIM holds true. In \S\ref{ss_pointwise} and \S\ref{ss_pointwise_scaling} we propose alternative methods for point selection in the weighted setting that allow for pointwise interpolation; however, the resulting approximation error bounds depend on the condition number of $W$ or on the condition number of the optimally scaled $W$. \subsection{Setting the scene}\label{SS=Scene-WPOD} Let $W\in\R^{m\times m}$ be symmetric positive definite, and define the weighted inner product for $u,v\in\R^m$ by $(u,v)_W \equiv v^T W u.$ Let $W = LL^T$ be a factorization in which the nonsingular matrix $L$ is a Cholesky factor or the positive definite square root $L=W^{1/2}$. The original problem might give rise to a nonsingular matrix $L$, so that the weight matrix $W=LL^T$ is then given implicitly by its factor $L$. {Recall that any two square {``Cholesky''} factors of $W$ are related by an orthogonal matrix $Q$, so that $W^{1/2}=LQ$ \cite[page 67, {Exercise} (x)]{IIbook}.} \begin{remark}\label{RE:||W} {\em In the weighted norm $\|u\|_W\equiv \sqrt{(u,u)_W}=\sqrt{u^TWu}=\|L^Tu\|_2,$ the induced operator norm of a matrix $M\in\R^{m\times m}$ equals $$ \|M\|_W = \max_{x\neq 0}\frac{\|Mx\|_W}{\|x\|_W}= \max_{y\neq 0}\frac{\|L^T M L^{-T}y\|_2}{\|y\|_2}=\|L^{T}ML^{-T}\|_2.$$ Further, in the $W$-inner product space, the adjoint of $M$ is $M^{[T]}\equiv W^{-1} M^T W,$ where $M^T$ is the transpose of $M$. } \end{remark} The POD basis with respect to $(\cdot,\cdot)_W$ is determined by the 3-step procedure in Algorithm \ref{zd:ALG:POD}. For more details see \cite{volkwein-2011-mor}. For the sake of simplicity, we do not include centering of the snapshot matrix $Y$. \begin{algorithm}[hbt] \caption{$\WU=\mathrm{POD}(Y,W\equiv LL^T)$} \label{zd:ALG:POD} \begin{algorithmic}[1] \REQUIRE Symmetric positive definite $W\in\R^{m\times m}$, or $L\in\R^{m\times m}$ such that $W=LL^T$ is positive definite. Matrix $Y\in\R^{m\times n_s}$ of $n_s$ snapshots. \STATE Compute the thin SVD $L^T Y = U \Sigma V^T$. \STATE Determine an appropriate index $1\leq r\leq \rank(L^TY)$ and select ${U}_r\equiv U(:,1:r)$. \ENSURE $\WU \equiv L^{-T}{U}_r$. \end{algorithmic} \end{algorithm}
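For concreteness, here is a minimal Python sketch of Algorithm~\ref{zd:ALG:POD}, assuming $W$ is available explicitly; the snapshot matrix and the truncation index $r$ are placeholders. The returned basis is $W$-orthonormal.
\begin{verbatim}
# Weighted POD (Algorithm zd:ALG:POD): thin SVD of L^T Y, then L^{-T} U_r.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def weighted_pod(Y, W, r):
    L = cholesky(W, lower=True)                       # W = L L^T
    U, _, _ = np.linalg.svd(L.T @ Y, full_matrices=False)
    return solve_triangular(L.T, U[:, :r], lower=False)   # L^{-T} U_r

rng = np.random.default_rng(3)
m, ns, r = 40, 15, 5
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)   # SPD weight
Uhat = weighted_pod(rng.standard_normal((m, ns)), W, r)
print(np.linalg.norm(Uhat.T @ W @ Uhat - np.eye(r)))           # ~ 1e-14
\end{verbatim}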
\noindent Algorithm~\ref{zd:ALG:POD} computes a matrix $\WU$ whose columns are $W$-orthonormal, {i.e.,} $\WU^T W \WU=\Id_r,$ and the POD projection in the weighted inner product space is represented by \begin{equation}\label{eq:WPU} \WP_{\WU} \equiv \WU \WU^T W = L^{-T}{U}_r U_r^T L^T. \end{equation} Note that $\WP_{\WU}^2=\WP_{\WU}$ and that $\WP_{\WU}^{[T]}=\WP_{\WU}.$ In fact, $Y = \WU \Sigma V^T$ is a GSVD \cite{van1976generalizing} of $Y$. \begin{remark}\label{R:<T>} {\em For $\R^{m\times r}\ni \WU: ( \R^r,(\cdot,\cdot)_2)\longrightarrow (\R^{m},(\cdot,\cdot)_W)$, the adjoint matrix in the two inner products is, by definition, given as $\WU^{<T>}=\WU^T W$. Hence, $\WU^{<T>}\WU=\Id_r$ and we can write the $W$-orthogonal projector (\ref{eq:WPU}) conveniently in the usual form as $\WP_{\WU} = \WU\WU^{<T>}$. Recalling the discussion from \S \ref{SS=UPISPS}, the projected problem (\ref{eq:G1}) is then computed in the sense of $(\cdot,\cdot)_W$. } \end{remark} \subsection{$W$-DEIM}\label{s_wdeim} Once a discrete inner product $(\cdot,\cdot)_W$ has been chosen to capture the geometric framework (e.g., for Petrov-Galerkin projection, POD), one needs to define an appropriate DEIM projection operator in this weighted setting. Furthermore, the resulting quantities are now measured in the weighted norm $\|x\|_W$. To that end, using the notation introduced in Remark \ref{R:<T>}, we define a $W$-DEIM projector as follows. \begin{definition}\label{zd:eq:DEF:WDEIM} Let $\WU\in\R^{m\times r}$ be $W$-orthogonal. With a full column rank generalized selection operator $\SO\in\R^{m\times s}$ (where $s \geq r$), define a weighted $W$-DEIM projector \begin{equation} \D \equiv \WU (\SO^{<T>}\WU)^{\dagger}\SO^{<T>} = \WU (\SO^T W \WU)^{\dagger}\SO^T W. \end{equation} \end{definition} In the above definition, in addition to {the use of a} more general inner product, we also allow {for} tall rectangular $\SO^T W\WU {\in \R^{s\times r}}$. The only constraint is that $\SO^T W \WU$ has full column rank. {However, in practice, we will use the square nonsingular case.} For the moment, we leave the (generalized) selection operator $\SO$ unspecified, and we remark that the columns of $\SO$ need not be the columns of the identity matrix.\footnote{In fact, one can also allow full row rank to obtain a further variation of the DEIM projection as discussed in \S \ref{SS=Gen-DEIM-3.1}, but we omit this for the sake of brevity.} As in the case of DEIM, the matrix $\D$ is an oblique projector, {i.e., it satisfies} $\D^2=\D.$ {The following proposition is a recast of \cite[Proposition 2.1]{zimmerman-willcox-sisc-2016} to the $\|\cdot\|_W$ norm.} \begin{proposition} Let $\D$ be {as} in Definition~\ref{zd:eq:DEF:WDEIM} and let $\SO^T W\WU$ have full column rank. Then \begin{equation}\label{e_wdeim_inter} \| f - \D f \|_W \leq \|\D\|_W \| f - \WP_{\WU} f\|_W . \end{equation} \end{proposition} \begin{proof} Since $\SO^T W \WU$ has full column rank, $(\SO^T W \WU)^{\dagger}$ is a left inverse, so that $\D \WP_{\WU} = \WP_{\WU},$ and hence $(\Id_m - \D)\WP_{\WU} = 0$. Consequently, for any vector $f\in\R^m$ \begin{equation} (\Id_m - \D) f = (\Id_m-\D)(\Id_m-\WP_{\WU})f. \end{equation} Since $\D$ is a non-trivial projector ($\D\neq\0$, $\D\neq\Id_m$), it holds that $\|\D\|_W=\|\Id_m-\D\|_W$, and~\eqref{e_wdeim_inter} follows. \end{proof} The condition number that amplifies the POD projection error $\| f - \WP_{\WU} f\|_W$ is the weighted norm $\|\D\|_W$.
A naive application of the result in Remark~\ref{RE:||W} suggests the bound $\| \D\|_W \leq \sqrt{\kappa_2(W)} \| \D\|_2$; that is, the condition number of the inner product matrix $W$ could potentially amplify the $W$-DEIM projection error. However, by a clever choice of $\SO$ we can eliminate the factor $\sqrt{\kappa_2(W)}$. \begin{definition}\label{d_wso} {Let the weighted selection operator $\SO$ and the corresponding $W$-DEIM projector $\D$, respectively, be defined as \begin{equation}\label{zd:eq:SL} \SO^T = \WSO^T L^{-1},\;\;\D \equiv \WU (\WSO^T U_r)^{\dagger}\WSO^T L^T = L^{-T} U_r (\WSO^T U_r)^{\dagger}\WSO^T L^T, \end{equation} where $\WSO$ is an $m\times s$ index selection operator ($s$ selected columns of the identity $\Id_m$, $s\geq r$). } \end{definition} Note that while $\SO$ is possibly dense, $\WSO$ is a sparse matrix. We now present a result that quantifies the condition number $\|\D\|_W$ for this specific choice of the selection operator $\SO$. \begin{proposition}\label{zd:PROP:SL} Let $\SO$ and $\D$ be defined as in~\eqref{zd:eq:SL}. Then $\SO^T W \SO=\Id_s$ and $\|\D\|_W = \|(\WSO^T U_r)^{\dagger}\|_2$. \end{proposition} \begin{proof} Recall that $L^T\WU = U_r$ and, by (\ref{zd:eq:SL}), $\SO^T L=\WSO^T$. Following Remark \ref{RE:||W}, a straightforward computation yields \begin{align} \nonumber \| \D \|_W = & \> \| L^T (L^{-T}U_r)(\SO^T LL^T (L^{-T}U_r))^{\dagger}\SO^T LL^T L^{-T}\|_2 \\ = & \> \| U_r (\SO^T L U_r)^{\dagger} \SO^T L\|_2 , \end{align} where, by (\ref{zd:eq:SL}), $\SO^T L=\WSO^T$, and thus $\| \D \|_W =\| U_r (\WSO^T U_r)^{\dagger} \WSO^T\|_2 = \| (\WSO^T U_r)^{\dagger}\|_2$. \end{proof} Therefore, with this choice of $\SO$, the condition number of $W$ does not explicitly appear in the bounds. However, the dependence on $W$ is implicitly contained in the matrix $U_r$ of the left singular vectors, and in the definition of $\WU$. {In \S\ref{ss_pointwise} we present alternative choices for the selection operator $\SO$ which can ensure pointwise interpolation.} \begin{remark} {\em To obtain the canonical structure of $W$-DEIM, one follows the derivation from \S \ref{S=Canonical}, properly adapted to the structure induced by $(\cdot,\cdot)_W$. } \end{remark}
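The following self-contained sketch assembles the projector of Definition~\ref{d_wso} (with $s=r$) and verifies the statement of Proposition~\ref{zd:PROP:SL} numerically; pivoted QR on $U_r^T$ stands in for the sRRQR selection discussed next, and all data are placeholders.
\begin{verbatim}
# W-DEIM with S^T = \WSO^T L^{-1}: D = L^{-T} U_r (\WSO^T U_r)^{-1} \WSO^T L^T,
# and ||D||_W = ||L^T D L^{-T}||_2 = ||(\WSO^T U_r)^{-1}||_2.
import numpy as np
from scipy.linalg import cholesky, qr, solve_triangular

rng = np.random.default_rng(4)
m, ns, r = 40, 15, 5
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
L = cholesky(W, lower=True)
U, _, _ = np.linalg.svd(L.T @ rng.standard_normal((m, ns)),
                        full_matrices=False)
Ur = U[:, :r]
Uhat = solve_triangular(L.T, Ur, lower=False)

_, _, piv = qr(Ur.T, pivoting=True)              # index selection on U_r
idx = piv[:r]

D = Uhat @ np.linalg.solve(Ur[idx, :], L.T[idx, :])   # \WSO^T L^T = L^T[idx,:]
normW = np.linalg.norm(L.T @ D @ np.linalg.inv(L.T), 2)
print(normW, np.linalg.norm(np.linalg.inv(Ur[idx, :]), 2))    # equal
\end{verbatim}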
\subsection{How to choose $\WSO$} Recall that $\WSO$ contains carefully chosen columns of $\Id_m$. The index selection that determines the columns of $\WSO$ can be computed using the original DEIM approach \cite{DEIM}. Another approach, Q-DEIM proposed in \cite{drmac-gugercin-DEIM-2016}, uses a rank revealing QR factorization~\cite{bus-gol-65}, {implemented} in high performance software libraries such as LAPACK \cite{LAPACK} and ScaLAPACK \cite{ScaLAPACK}. However, in this paper, we adopt the {strong Rank Revealing QR} (sRRQR) factorization \cite[Algorithm 4]{GuE96}. We now present a result that characterizes the error of $W$-DEIM. \begin{theorem}\label{t_dgeim_rrqr} Applying sRRQR~\cite[Algorithm 4]{GuE96} to $U_r$ produces an index selection operator $\WSO$ whose {\rm $W$-DEIM} projection error satisfies \begin{equation} \| f - \D f\|_W \leq \sqrt{1+\eta^2 r (m-r)} \| f - \WP_{\WU} f\|_W . \end{equation} \end{theorem} \begin{proof} Combining~\eqref{e_wdeim_inter} and Proposition~\ref{zd:PROP:SL} gives \[ \| f - \D f\|_W \leq \| (\WSO^TU_r)^{\dagger}\|_2 \|f - \WP_{\WU} f\|_W .\] Since $U_r$ has orthonormal columns, sRRQR~\cite[Algorithm 4]{GuE96} gives a selection operator $\WSO \in \mathbb{R}^{m\times r}$ such that $\WSO^TU_r$ is invertible. Applying Lemma~\ref{l_det} to bound $\| (\WSO^TU_r)^{-1}\|_2$ gives the desired result. \end{proof} The importance of this result is that the point selection can also be applied in the weighted inner product case, and the resulting error bound is similar to the {DEIM} bound in \S\ref{S:SRRQR}. \subsection{On the interpolating property and its generalization} Recall that the original DEIM formulation allows pointwise interpolation $\WSO^T\D f = \WSO^Tf$, i.e., the projection $\D f$ and $f$ match exactly at a set of indices $i_1,\dots,i_r$ determined by the columns of $\WSO$. In the case of $W$-DEIM, the following interpolation properties hold. \begin{proposition}\label{p_w_interp} Let $\SO^TW\WU$ be invertible and let $\D$ be as in Definition~\ref{zd:eq:DEF:WDEIM}. Then $\SO^T W \D f = \SO^TWf$. \end{proposition} This can be readily verified; since $\SO^T W\WU$ is invertible, \[ \SO^T W \D f = (\SO^TW\WU)(\SO^TW\WU)^{-1}\SO^TWf = \SO^TWf.\] With the choice $\SO = L^{-T}\WSO$, Proposition~\ref{p_w_interp} simplifies to \begin{equation}\label{eq:W-interpolation} \WSO^T (L^T \D f)=\WSO^T (L^T f) . \end{equation} Hence, $W$-DEIM cannot in general interpolate $f\in\R^m$ at the selected indices, $f_{i_j}=\phi_{i_j}(x_{i_j})$, $j=1,\ldots, r$. An exception is the case of a diagonal $W$; see \S \ref{SSS::W=diag} for details. However, in many applications the discretized function values may not be available through point evaluation, either because there is no analytical expression or because they are sensor data corrupted by noise. In those cases, pointwise interpolation may not be possible, nor desirable -- for a most illuminating discussion see \cite{GEIM}. \subsubsection{DGEIM} The DEIM is a realization of the discrete version of the Empirical Interpolation Method (EIM) \cite{EIM}, in which interpolation was handled by using only pointwise function evaluations. In the same way, we can interpret the interpolation condition~\eqref{eq:W-interpolation} as a discrete version of GEIM (DGEIM), obtained as a particular case of $W$-DEIM. {To this end, consider a more general concept of interpolation using a} family of linear functionals, see \cite[Chapter 11]{Deutsch-BestApprInnPS-book}. Introduce in \eqref{eq:W-interpolation} a column partition of $L = \begin{pmatrix} \ell_1 & \dots & \ell_m \end{pmatrix}$ and rewrite it as \begin{equation}\label{zd:eq:gen_interpol} \WSO^T \left( \begin{smallmatrix} \ell_1^T\D f \cr \vdots \cr \ell_m^T\D f\end{smallmatrix}\right) = \WSO^T \left( \begin{smallmatrix} \ell_1^T f \cr \vdots \cr \ell_m^T f\end{smallmatrix}\right),\;\;\mbox{i.e.,}\;\; \ell_{i_j}^T\D f = \ell_{i_j}^T f,\;\;j=1,\ldots, r. \end{equation} {If} we interpret $\ell_i\in\R^m$ as the discretized Riesz representation of a given linear functional, then (\ref{eq:W-interpolation}) {interpolates the desired function $f$} at the selected functionals. (The point interpolation corresponds to using the point evaluation functional, $(\ell_i)_j=W_{ji}=W_{ij}=(\ell_j)_i=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.)
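A small self-contained check of the generalized interpolation condition (\ref{zd:eq:gen_interpol}): the $W$-DEIM projection matches $f$ against the functionals $\ell_{i_j}$ (the selected columns of $L$), while the point values themselves are not reproduced. The setup repeats the previous sketch and is purely illustrative.
\begin{verbatim}
# DGEIM: ell_{i_j}^T (D f) = ell_{i_j}^T f, but (D f)_{i_j} != f_{i_j}.
import numpy as np
from scipy.linalg import cholesky, qr, solve_triangular

rng = np.random.default_rng(5)
m, ns, r = 40, 15, 5
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
L = cholesky(W, lower=True)
U, _, _ = np.linalg.svd(L.T @ rng.standard_normal((m, ns)),
                        full_matrices=False)
Ur = U[:, :r]
Uhat = solve_triangular(L.T, Ur, lower=False)
_, _, piv = qr(Ur.T, pivoting=True); idx = piv[:r]
D = Uhat @ np.linalg.solve(Ur[idx, :], L.T[idx, :])

f = rng.standard_normal(m)
print(np.abs(L[:, idx].T @ (D @ f - f)).max())  # ~ 1e-14: functionals matched
print(np.abs((D @ f - f)[idx]).max())           # O(1): no pointwise match
\end{verbatim}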
\subsection{How to ensure sparse selection} {The original DEIM approximation was computationally efficient because it only required evaluating a small number of components of the vector $f$. However, in the computation of $\D f$, the factor $\WSO^T L^T f$ may, in the worst case, require many, or possibly all, components of $f$. This might make $W$-DEIM computationally inefficient. It is clear that the selection is sparse when the matrix $L$ is sparse. The analysis is subdivided into three cases. When the weighting matrix is diagonal, the Cholesky factor $L$ is diagonal as well. When $W$ is sparse, reordering the matrix may lead to sparse factors $L$. Finally, if $W$ is dense, we must resort to an inexact sparse factorization. These cases are discussed below.} \subsubsection{Diagonal weighting matrix $W$}\label{SSS::W=diag} If $W=\mathrm{diag}(w_i)_{i=1}^{m}$, then $$L={W}^{1/2}=\mathrm{diag}(\sqrt{w_i})_{i=1}^{m},$$ {and the computation of} $\D f=W^{-1/2}U_r (\WSO^T U_r)^{-1}\WSO^T {W}^{1/2}f$ requires only the entries of $f$ at the indices $i_1,\ldots, i_r$ selected by $\WSO$. Furthermore, in this case {the interpolation condition} (\ref{eq:W-interpolation}) {simplifies to} $$ {\sqrt{w_{i_j}}} (\D f)_{i_j} = {\sqrt{w_{i_j}}} f_{i_j},\;\;j=1,\ldots, r, $$ i.e.{,} $\D$ is an interpolating projection. \subsubsection{Sparse weighting matrix $W$} In some cases, the matrix $W$ that defines a discrete inner product is large and sparse, {and possibly contains} additional block structure, see e.g.{,} \cite[\S 5.4]{ROM-SANDIA-2014}. {Examples of sparse weighting matrices are discussed in the section on numerical experiments (Section~\ref{S=Examples}).} {When $W$ is sparse}, one can {take advantage of} sparse factorization techniques {to} compute a pivoted factorization $\Pi^T W \Pi = {L}_s {L}_s^T$, where the permutation matrix $\Pi$ is determined so as to produce a sparse Cholesky factor ${L}_s$. (In fact, {the permutation matrix} $\Pi$ {has the additional benefit of making} ${L}_s$ well conditioned for inversion by trying to improve diagonal dominance.) Then we factor $W=LL^T$ with $L=\Pi{L}_s$, and we have $$\D f = \WU (\WSO^T U_r)^{-1}\WSO^T L_s^T \Pi^T f. $$ Since $\WSO^T (L_s^T \Pi^T)$ selects only a small portion of the rows of the sparse matrix $L_s^T \Pi^T$, the product $\WSO^T L_s^T \Pi^T f$ is expected to require only a relatively small number of the entries of $f$. An efficient implementation of this procedure would deploy the data structures and algorithms of sparse matrix technology. We now see an advantage of a purely algebraic selection of the interpolation indices, as featured in the Q-DEIM version of the method \cite{drmac-gugercin-DEIM-2016}. In Q-DEIM, the index selection is computed by a rank revealing (column) pivoting in the QR factorization of the $r\times m$ matrix $U_r^T$, where $r\ll m$. The role of the pivoting is to select an $r\times r$ submatrix of $U_r$ with a small inverse. Hence, as argued in \cite{drmac-gugercin-DEIM-2016}, it might be possible to find such a submatrix without having to touch all rows of $U_r$. One possible way to improve sparsity is to lock certain columns of $U_r^T$ (whose indices correspond to non-sparse rows of $L_s^T$) and exclude them from the pivot selection. Since $m\gg r$, it is very likely that even with some columns of $U_r^T$ excluded, the selection will perform well. In fact, the pivoting in the QR factorization can be modified to prefer indices that correspond to the sparsest rows of $L_s^T$.
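The following sketch illustrates the sparsity argument in the simplest setting: for a banded $W$ the Cholesky factor inherits the band (no reordering is needed), so $\WSO^T L^T f$ touches only a few entries of $f$. The tridiagonal $W$ used below is an assumed toy example.
\begin{verbatim}
# Banded SPD W  =>  banded L  =>  \WSO^T L^T f needs only ~2r entries of f.
import numpy as np

m, r = 200, 10
W = np.diag(np.full(m, 3.0)) + np.diag(np.full(m - 1, -1.0), 1) \
    + np.diag(np.full(m - 1, -1.0), -1)          # SPD, tridiagonal
L = np.linalg.cholesky(W)                        # lower bidiagonal

idx = np.sort(np.random.default_rng(6).choice(m, size=r, replace=False))
rows = L.T[idx, :]                               # the only rows of L^T used
needed = np.unique(np.nonzero(np.abs(rows) > 1e-14)[1])
print(len(needed), "of", m, "entries of f are touched")   # about 2r
\end{verbatim}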
\subsubsection{General dense positive definite $W$} In the most difficult case, the natural inner product is defined by a large, dense positive definite $W$ that is also difficult to compute. For instance, as mentioned in \S \ref{SS=UPISPS}, $W$ can be the Gramian obtained by solving a large scale Lyapunov equation, or it may be replaced by an empirical approximation based on the method of snapshots. If the computational complexity requires enforcing sparsity of the selection operator, then we can resort to an inexact sparse factorization of the form $\Pi^T W \Pi + \delta W = \widetilde{L_s} \widetilde{L_s}^T$, i.e.{,} we compute $W\approx (\Pi\widetilde{L_s}) (\Pi\widetilde{L_s})^T$. {The resulting approximation has} the backward error $\Delta W = \Pi \delta W\Pi^T$, {which results from the thresholding} strategy used to produce the sparse factor $\widetilde{L_s}$. {We mention two possibilities here.} The incomplete Cholesky factorization is one candidate; see e.g. \cite{Lin-ICHOL}. The matrix $W$ can also be sparsified by zeroing entries $W_{ij}$ if, e.g.{,} $|W_{ij}|/\sqrt{W_{ii}W_{jj}}$ is below some threshold. Let us identify, for simplicity, $W\equiv \Pi^T W\Pi = LL^T$, so that $W+\delta W = \widetilde{L_s} \widetilde{L_s}^T$. Set $\widetilde{\D}=\WU (\WSO^T U_r)^{-1}\WSO^T \widetilde{L_s}$. Then \begin{align*} \| \D -\widetilde{\D}\|_W \leq & \> \|(\WSO^T U_r)^{-1}\|_2 \|\WSO^T (\Id_m - \widetilde{L_s}^T L^{-T})\|_2 \\ = & \> \|(\WSO^T U_r)^{-1}\|_2 \|L^{-1}(L - \widetilde{L_s})\WSO\|_2 . \end{align*} Now, from $\widetilde{\D}f = \D f + (\widetilde{\D} - \D)f$ we have \begin{align*} \frac{\|f-\widetilde{\D}f\|_W}{\|f\|_W} \leq & \> \frac{\|f-{\D}f\|_W}{\|f\|_W} + \| \D -\widetilde{\D}\|_W \\ \leq & \> \frac{\|f-{\D}f\|_W}{\|f\|_W} + \|(\WSO^T U_r)^{-1}\|_2 \|L^{-1}(L - \widetilde{L_s})\WSO\|_2. \end{align*} One can also justify using the sparsified weighting matrix in a backward sense, i.e. using $W+\delta W$ as the generator of the inner product. This line of reasoning via the incomplete factorization requires further analysis, which we defer to future work. {Of course, in the case of a dense $W$, saving work in the evaluation of $f$ by the generalized interpolation (\ref{zd:eq:gen_interpol}) is nearly impossible, as it may require too many entries to be practical. In that case, one can resort to the pointwise interpolation that we discuss next.} \subsection{Pointwise-interpolating $W$-DEIM}\label{ss_pointwise} Note that in the formula for the $W$-DEIM projection in Definition \ref{zd:eq:DEF:WDEIM} there is a certain freedom in choosing $\SO$; the key in our formulation is indeed that we have left it as an adaptable device. In the case of the original DEIM with $W=\Id_m$, $\SO\equiv \WSO$ is a submatrix of $\Id_m$, resulting in a more efficient computation of the projection \cite{DEIM}. If a generalized interpolation of the type (\ref{eq:W-interpolation}) and (\ref{zd:eq:gen_interpol}) is desired, then $\SO^T = \WSO^T L^{-1}$ as in (\ref{zd:eq:SL}) in Proposition \ref{zd:PROP:SL} will accomplish the task. On the other hand, if we want pointwise interpolation \begin{equation}\label{e_pointwise} {\WSO^T\D f = \WSO^T f \qquad \Longleftrightarrow \qquad } (\D f)_{i_j}=f_{i_j} ,\;\;j=1,\ldots , r \end{equation} also in the weighted case with a general positive definite $W$, then this can be achieved with the following definition. \begin{definition}\label{d_pointwise} Let the weighted selection operator $\SO$ and the corresponding $W$-DEIM projector $\D$, respectively, be defined as \begin{equation}\label{zd:eq:SL_pointwise} \SO^T \equiv \WSO^T W^{-1}\qquad \D \equiv \WU (\WSO^T \WU)^{\dagger}\WSO^T . \end{equation} Here $\WU$ is $W$-orthogonal and $\WSO$ has columns from the identity matrix $\Id_m$. \end{definition} Note that the relations~\eqref{e_pointwise}, $\D \WP_{\WU} = \WP_{\WU},$ and the error estimate (\ref{e_wdeim_inter}) still apply.
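A minimal sketch of Definition~\ref{d_pointwise}: pointwise interpolation holds for any index set making $\WSO^T\WU$ invertible (here a random one); the quality of the indices only enters through the condition number, which is addressed by the algorithms below. Setup values are placeholders.
\begin{verbatim}
# Pointwise-interpolating W-DEIM: D = U_hat (\WSO^T U_hat)^{-1} \WSO^T.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(7)
m, ns, r = 40, 15, 5
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
L = cholesky(W, lower=True)
U, _, _ = np.linalg.svd(L.T @ rng.standard_normal((m, ns)),
                        full_matrices=False)
Uhat = solve_triangular(L.T, U[:, :r], lower=False)

idx = rng.choice(m, size=r, replace=False)   # any invertible selection works
D = Uhat @ np.linalg.solve(Uhat[idx, :], np.eye(m)[idx, :])

f = rng.standard_normal(m)
print(np.abs((D @ f - f)[idx]).max())        # ~ 1e-14: (D f)_{i_j} = f_{i_j}
\end{verbatim}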
The condition number $\| \D\|_W$, however, now depends on the specific choice of $\WSO$. We next show how to pick the indices that determine the columns of $\WSO$. The algorithm proceeds as follows. First, as in Algorithm \ref{zd:ALG:POD}, a thin generalized SVD \cite{van1976generalizing} of the $m\times n_s$ snapshot matrix $Y$ is computed and truncated to obtain a low-rank approximation $Y \approx \WU\widehat{\Sigma} \widehat{V}^T$, where $\widehat{V}^T \widehat{V}=\Id_r$ and $\WU\in\R^{m\times r}$ is $W$-orthonormal, i.e., $\WU^TW\WU = \Id_r$. Then, the thin QR factorization $\WU = Q_{\WU}R_{\WU}$ is computed, and strong RRQR is applied to $Q_{\WU}^T$ to obtain the selection operator $\WSO$ (whose columns come from the $m\times m$ identity matrix). Finally, we set $\SO \equiv W^{-1} \WSO$. This procedure is summarized in Algorithm~\ref{zd:ALG:POD_W}, where the first two steps are implemented as in Algorithm \ref{zd:ALG:POD}. The corresponding error bound is given in Theorem \ref{t_dgeim_rrqr_2}. \begin{algorithm}[hbt] \caption{$[\WU,\WSO,Q_{\WU}] =\mbox{$W$-POD-DEIM}(Y,W, \eta)$} \label{zd:ALG:W-POD-DEIM-1} \begin{algorithmic}[1] \REQUIRE Snapshots $Y\in\R^{m\times n_s}$, $n_s<m$. Symmetric positive definite $W\in\R^{m\times m}$. {Tuning parameter $\eta$.} \STATE Compute the thin generalized SVD of $Y$ as $Y = {U_Y} \Sigma V^T$ with ${U_Y^TWU_Y} = \Id_{n_s}$. \STATE Determine an appropriate index $r$ and define $\WU={U_Y}(:,1:r)$. \STATE Compute the thin QR factorization of $\WU = Q_{\WU}R_{\WU}$ . \STATE Apply strong RRQR{~\cite[Algorithm 4]{GuE96} (with parameter $f=\eta$)} to $Q^T_{\WU}$ to give \[ Q^T_{\WU} \begin{pmatrix} \mat{\Pi}_1 & \mat{\Pi}_2\end{pmatrix} = \mat{Q} \begin{pmatrix} \mat{R}_{11} & \mat{R}_{12} \end{pmatrix},\;\;\Pi= \begin{pmatrix} \mat{\Pi}_1 & \mat{\Pi}_2\end{pmatrix}.\] \STATE $\WSO = \mat{\Pi_1}$. \ENSURE $W$-orthogonal basis $\WU$ (optional), interpolation selection matrix $\WSO$, and orthogonal basis $Q_{\WU}$ (optional), defining $$\D = \WU (\WSO^T \WU)^{-1}\WSO^T \equiv Q_{\WU} (\WSO^T Q_{\WU})^{-1}\WSO^T .$$ \end{algorithmic} \label{zd:ALG:POD_W} \end{algorithm} \begin{theorem}\label{t_dgeim_rrqr_2} Assume that the DEIM projection operator $\D$ is defined as in Algorithm \ref{zd:ALG:POD_W}. Then \begin{equation}\label{zd:eq:W-dgeim-bound} \| f - \D f\|_W \leq \sqrt{1 + \eta^2 r(m-r) } \sqrt{\kappa_2(W)} \| f - \WP_{\WU} f\|_W . \end{equation} \end{theorem} \begin{proof} Note that $\| \D\|_W = \| L^T \D L^{-T}\|_2 \leq \sqrt{\kappa_2(W)} \| \D\|_2.$ We now bound $\| \D\|_2$. Consider the thin QR factorization $\WU = Q_{\WU}R_{\WU}$, where $R_{\WU}$ must be nonsingular. Then $$ \D = Q_{\WU}R_{\WU} (\WSO^TQ_{\WU}R_{\WU})^{-1}\WSO^T = Q_{\WU} (\WSO^TQ_{\WU})^{-1}\WSO^T. $$ Since $Q_{\WU}$ and $\WSO$ have orthonormal columns, $\|\D\|_2 = \|(\WSO^TQ_{\WU})^{-1}\|_2$. The rest of the proof is similar to that of Theorem~\ref{t_dgeim_rrqr}. \end{proof}
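A sketch of Algorithm~\ref{zd:ALG:POD_W}, with pivoted QR standing in for sRRQR (so the factor $\|(\WSO^TQ_{\WU})^{-1}\|_2$ is not guaranteed by the theorem, although it is typically modest); all data are placeholders.
\begin{verbatim}
# Algorithm zd:ALG:POD_W: select indices from the orthonormal factor of U_hat.
import numpy as np
from scipy.linalg import cholesky, qr, solve_triangular

def w_pod_deim(Y, W, r):
    L = cholesky(W, lower=True)
    U, _, _ = np.linalg.svd(L.T @ Y, full_matrices=False)
    Uhat = solve_triangular(L.T, U[:, :r], lower=False)   # W-orthonormal
    Q, _ = np.linalg.qr(Uhat)                             # thin QR of U_hat
    _, _, piv = qr(Q.T, pivoting=True)                    # stand-in for sRRQR
    return Uhat, Q, piv[:r]

rng = np.random.default_rng(8)
m, ns, r = 40, 15, 5
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
Uhat, Q, idx = w_pod_deim(rng.standard_normal((m, ns)), W, r)
print(np.linalg.norm(np.linalg.inv(Q[idx, :]), 2))  # ||(\WSO^T Q)^{-1}||_2
\end{verbatim}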
\subsubsection{Scaling invariant error bound}\label{ss_pointwise_scaling} Note that, compared to Theorem~\ref{t_dgeim_rrqr}, the error bound (\ref{zd:eq:W-dgeim-bound}) carries an additional factor of $\sqrt{\kappa_2(W)}$. For highly ill-conditioned matrices $W$, this considerably inflates the error bound, and possibly the actual error as well. It is instructive to see how a simple trick can improve this undesirable situation. Let $\Delta=\mathrm{diag}(\sqrt{W_{ii}})_{i=1}^m$ and $W_s= \Delta^{-1} W \Delta^{-1}$; {note that this scaling ensures} $(W_s)_{ii}=1$ for all $i=1,\dots,m$. It is well known (see \cite{slu-69}) that this diagonal equilibration nearly minimizes the spectral condition number over all diagonal scalings, \begin{equation}\label{zd:eq:Sluis} \kappa_2(W_s)\leq m \min_{D{\in\mathcal{D}^m}}\kappa_2(DWD), \end{equation} {where $\mathcal{D}^m$ is the space of diagonal $m\times m$ matrices. } The task is to eliminate the scaling factor $\Delta$ from the bound on $\| \D\|_W$ (by {the use of} a different subset selection) and to replace $\sqrt{\kappa_2(W)}$ with $\sqrt{\kappa_2(W_s)}$ -- {which can be} a substantial improvement {for certain applications of interest}. To that end, we must examine how $W$ influences the structure of $\widehat{U}$, and interweave the assembling of $\widehat{U}$ with the construction of the DEIM selection operator. The selection operator is $\SO^T = \WSO^T W^{-1}$, as in Algorithm \ref{zd:ALG:W-POD-DEIM-1}. We use the expression for the weighted POD basis $\widehat{U}$ from Algorithm \ref{zd:ALG:POD}, i.e. $\widehat{U}=L^{-T}U_r$, where $W=LL^T$ and $U_r^T U_r=\Id_r$. If we define $L_s = \Delta^{-1}L$, then $W_s = L_s L_s^T$; $L_s$ has rows of unit Euclidean length, and, since $\D = \widehat{U}(\WSO^T\widehat{U})^{-1}\WSO^T$, $$ L^T \D L^{-T} = U_r (\WSO^T \Delta^{-1}L_s^{-T}U_r)^{-1}\WSO^T \Delta^{-1}L_s^{-T} . $$ The key observation is that $\WSO^T\Delta^{-1} = \widehat{\Delta}^{-1}\WSO^T$, where $\widehat{\Delta}$ is the diagonal matrix with the vector $\WSO^T \mathrm{diag}(\Delta)$ on its diagonal. This cancels out $\Delta$: $$ L^T \D L^{-T} = U_r (\widehat{\Delta}^{-1}\WSO^T L_s^{-T}U_r)^{-1}\widehat{\Delta}^{-1}\WSO^T L_s^{-T} = U_r (\WSO^T L_s^{-T}U_r)^{-1}\WSO^T L_s^{-T} . $$ Now let $L_s^{-T}U_r = Q_{\WU} R_s$ be the thin QR factorization. (Note that $L_s^{-T}U_r = \Delta \widehat{U}$.) Then $$ \D = \Delta^{-1} Q_{\WU} (\WSO^T Q_{\WU})^{-1}\widehat{\Delta}\WSO^T,\;\; L^T \D L^{-T} = L_s^T Q_{\WU} (\WSO^T Q_{\WU})^{-1}\WSO^T L_s^{-T} , $$ and we conclude that the DEIM selection using $Q_{\WU}$ yields the desired bound $$ \| \D\|_W \leq \|L_s^T\|_2\|L_s^{-T}\|_2 \| (\WSO^T Q_{\WU})^{-1}\|_2 = \sqrt{\kappa_2(W_s)} \| (\WSO^T Q_{\WU})^{-1}\|_2 . $$ These considerations are summarized in Algorithm \ref{zd:ALG:POD_W-s} and Theorem \ref{zd:TM:W-s-DEIM}. \begin{algorithm}[hbt] \caption{$[\WU,\WSO,Q_{\WU},\Delta, \widehat{\Delta}] =\mbox{$W$-$\Delta$-POD-DEIM}(Y,W\equiv LL^T,\eta)$} \label{zd:ALG:W-POD-DEIM-2} \begin{algorithmic}[1] \REQUIRE Snapshots $Y\in\R^{m\times n_s}$, $n_s<m$. Symmetric positive definite $W\in\R^{m\times m}$. {Tuning parameter $\eta$.} \STATE Compute the thin SVD of $L^T Y$ as $L^T Y = {U} \Sigma V^T$. \COMMENT{$Y=(L^{-T}U)\Sigma V^T$ is a GSVD of $Y$, with $W$-orthogonal $L^{-T}U$ and orthogonal $V$.} \STATE Determine an appropriate index $r$ and define $U_r={U}(:,1:r)$. \STATE $\Delta=\mathrm{diag}(\sqrt{W_{ii}})_{i=1}^m$ ; $L_s = \Delta^{-1} L$. \STATE Compute the thin QR factorization of $L_s^{-T}U_r$ as $L_s^{-T}U_r = Q_{\WU}R_{s}$ . \STATE Apply strong RRQR {~\cite[Algorithm 4]{GuE96} (with parameter $f=\eta$)} to $Q^T_{\WU}$ to give \[ Q^T_{\WU} \begin{pmatrix} \mat{\Pi}_1 & \mat{\Pi}_2\end{pmatrix} = \mat{Q} \begin{pmatrix} \mat{R}_{11} & \mat{R}_{12} \end{pmatrix},\;\;\Pi= \begin{pmatrix} \mat{\Pi}_1 & \mat{\Pi}_2\end{pmatrix}.\] \STATE $\WSO = \mat{\Pi_1}$; $\widehat{\Delta}= \mathrm{diag}(\WSO^T \mathrm{diag}(\Delta))$. \ENSURE $W$-orthogonal basis $\WU=L^{-T}U_r$ (optional), interpolation selection matrix $\WSO$, diagonal matrices $\Delta$, $\widehat{\Delta}$ (optional), and orthogonal basis $Q_{\WU}$ (optional), defining $$\D = \WU (\WSO^T \WU)^{-1}\WSO^T \equiv \Delta^{-1} Q_{\WU} (\WSO^T Q_{\WU})^{-1} \widehat{\Delta}\WSO^T \equiv \Delta^{-1} Q_{\WU} (\WSO^T Q_{\WU})^{-1} \WSO^T \Delta.$$ \end{algorithmic} \label{zd:ALG:POD_W-s} \end{algorithm}
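The scaled variant can be sketched analogously; the check below confirms the factored representation of $\D$ stated in the output of Algorithm~\ref{zd:ALG:POD_W-s} (pivoted QR again replaces sRRQR, and the data are placeholders).
\begin{verbatim}
# Algorithm zd:ALG:POD_W-s: scaling L_s = Delta^{-1} L, selection from Q.
import numpy as np
from scipy.linalg import cholesky, qr, solve_triangular

rng = np.random.default_rng(9)
m, ns, r = 40, 15, 5
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
L = cholesky(W, lower=True)
U, _, _ = np.linalg.svd(L.T @ rng.standard_normal((m, ns)),
                        full_matrices=False)
Ur = U[:, :r]

d = np.sqrt(np.diag(W))                       # Delta = diag(sqrt(W_ii))
Ls = L / d[:, None]                           # L_s = Delta^{-1} L
X = solve_triangular(Ls.T, Ur, lower=False)   # L_s^{-T} U_r = Delta * U_hat
Q, _ = np.linalg.qr(X)
_, _, piv = qr(Q.T, pivoting=True); idx = piv[:r]

Uhat = X / d[:, None]
D  = Uhat @ np.linalg.solve(Uhat[idx, :], np.eye(m)[idx, :])
D2 = (Q @ np.linalg.solve(Q[idx, :],
                          np.eye(m)[idx, :] * d[None, :])) / d[:, None]
print(np.linalg.norm(D - D2))                 # ~ 1e-13: the two forms agree
\end{verbatim}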
\begin{theorem}\label{zd:TM:W-s-DEIM} Assume that the DEIM projection operator $\D$ is defined as in Algorithm \ref{zd:ALG:POD_W-s}. Then \begin{equation}\label{zd:eq:W-s-dgeim-bound} \| f - \D f\|_W \leq \sqrt{1 + \eta^2 r(m-r) } \sqrt{\kappa_2(W_s)} \| f - \WP_U f\|_W . \end{equation} \end{theorem} \begin{remark} {\em It follows from (\ref{zd:eq:Sluis}) that the DEIM projection error bound (\ref{zd:eq:W-s-dgeim-bound}) that applies to Algorithm \ref{zd:ALG:POD_W-s} is never much worse ($\sqrt{\kappa_2(W_s)}\leq \sqrt{m}\sqrt{\kappa_2(W)}$) and is potentially substantially better\footnote{Take e.g. a diagonal and highly ill-conditioned $W$.} ($\sqrt{\kappa_2(W_s)} \ll \sqrt{\kappa_2(W)}$) than the estimate (\ref{zd:eq:W-dgeim-bound}) that holds for Algorithm \ref{zd:ALG:POD_W}. Although the two algorithms determine $\WSO$ from different orthonormal matrices, the factor $\sqrt{1 + \eta^2 r(m-r)}$ is the same, because of the properties of the sRRQR. } \end{remark} \begin{remark} {\em In both Algorithm \ref{zd:ALG:POD_W-s} and Algorithm \ref{zd:ALG:POD_W}, the sRRQR and the computation of $\WSO$ can be replaced with the Q-DEIM selection \cite{drmac-gugercin-DEIM-2016}, which is more efficient and essentially as robust, but comes with a weaker theoretical bound. However, the weaker upper bound on $\kappa$ is unlikely to make a substantial difference in practical computations, and both algorithms can be implemented using Q-DEIM. } \end{remark} \begin{remark} {\em For better numerical properties, the Cholesky factorization can be computed with pivoting, $\Pi^T W \Pi = LL^T$, i.e. $W = (\Pi L)(\Pi L)^T$, and we can easily modify Algorithm \ref{zd:ALG:POD_W-s} to work implicitly with $\Pi L$ instead of $L$. } \end{remark} \begin{remark} {\em Note that the computation in Line 1 of Algorithm \ref{zd:ALG:W-POD-DEIM-2} can be rephrased as the GSVD of $Y$, $Y = U_Y \Sigma V^T$, where $U_Y = L^{-T}U$ is $W$-orthogonal, $U_Y^T W U_Y=\Id_{n_s}$; see Algorithm \ref{zd:ALG:W-POD-DEIM-1}. Then the matrix $\widehat{U}$ optionally returned by Algorithm \ref{zd:ALG:W-POD-DEIM-2} is $\widehat{U}=U_Y(:,1:r)=L^{-T}U_r$. Since $L_s^{-T}=\Delta L^{-T}$, the matrix $L_s^{-T}U_r$ in Line 4 can be expressed as $L_s^{-T}U_r = \Delta L^{-T}U_r=\Delta\widehat{U}$. } \end{remark} \section{Numerical Examples}\label{S=Examples} In this section, we present numerical examples that highlight the benefits of the proposed algorithms. \subsection{Example 1} This example is based on~\cite[Example 3.1]{drmac-gugercin-DEIM-2016}. In this example we study the performance of sRRQR~\cite[Algorithm 4]{GuE96} for subset selection, compared to the DEIM approach~\cite{DEIM} and Q--DEIM~\cite{drmac-gugercin-DEIM-2016}; accordingly, we let the weighting matrix be $W=\Id_m$. Let \begin{equation}\label{e_func_ex1} {f}(t;\mu) = 10\exp(-\mu t) \left(\cos(4\mu t) + \sin(4\mu t)\right), \qquad 1 \leq t \leq 6, \;\;\; 0 \leq \mu \leq \pi. \end{equation} The snapshot set is generated by taking $40$ evenly spaced values of $\mu$ and $n=10,000$ evenly spaced points in time. The snapshots are collected in a matrix of size $10000\times 40$, the thin SVD of this matrix is computed, and the left singular vectors corresponding to the first $34$ modes are used to define $U_r$.
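The setup of this example is easy to reproduce; the following sketch generates the snapshots and computes the DEIM relative errors with a pivoted-QR selection (a stand-in for the sRRQR used in the reported experiments).
\begin{verbatim}
# Example 1 (W = I): snapshots of f(t; mu), rank-34 POD basis, DEIM error.
import numpy as np
from scipy.linalg import qr

t = np.linspace(1.0, 6.0, 10000)
f = lambda mu: 10.0 * np.exp(-mu * t) * (np.cos(4 * mu * t)
                                         + np.sin(4 * mu * t))
Y = np.column_stack([f(mu) for mu in np.linspace(0.0, np.pi, 40)])

U, _, _ = np.linalg.svd(Y, full_matrices=False)
Ur = U[:, :34]
_, _, piv = qr(Ur.T, pivoting=True)
idx = piv[:34]

errs = []
for mu in np.linspace(0.0, np.pi, 200):
    fm = f(mu)
    Dfm = Ur @ np.linalg.solve(Ur[idx, :], fm[idx])   # DEIM approximation
    errs.append(np.linalg.norm(fm - Dfm) / np.linalg.norm(fm))
print(max(errs))
\end{verbatim}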
To test the interpolation accuracy, we evaluate the DEIM approximation at $200$ evenly spaced points in the $\mu$-domain. Three different subset selection procedures were used: DEIM, pivoted QR (labeled Q--DEIM), and sRRQR. In each case, we report the relative error defined as \[ \text{Rel Err}(\mu_j) \> \equiv \> \frac{\|f_{\mu_j} - \D f_{\mu_j} \|_2}{\| f_{\mu_j}\|_2} \qquad j = 1,\dots,200.\] The results of the comparison are provided in Figure~\ref{f_example1}. \begin{figure}[!ht]\centering $\quad$ \includegraphics[scale=0.3]{figs/example1_error} \includegraphics[scale=0.3]{figs/example1_ratio} \caption{Comparison of the errors in the approximation of~\eqref{e_func_ex1}. (left) The relative errors for the different subset selection schemes. (right) Ratio of the relative errors of (1) Q--DEIM and sRRQR, and (2) DEIM and sRRQR. } \label{f_example1} \end{figure} We observe that while all three methods are very accurate, Q--DEIM and sRRQR are much more accurate than DEIM for this example. Furthermore, from the right plot in Figure~\ref{f_example1}, we see that sRRQR is more accurate than both Q--DEIM and DEIM. In practice, the performance of sRRQR is very similar to that of Q--DEIM, except for some adversarial cases in which Q--DEIM can fail spectacularly. In the subsequent examples, we use sRRQR for subset selection. \subsection{Example 2} Our next example is inspired by the nonlinear RC-ladder circuit, which is a standard benchmark problem for model reduction (see, for example,~\cite[Section 6]{condon2004empirical}). The underlying model is given by a dynamical system of the form \[ D\frac{dx(t)}{dt} = \begin{pmatrix}-g(x_1(t)) - g(x_1(t)-x_2(t)) \\ g(x_1(t)-x_2(t)) - g(x_2(t)-x_3(t)) \\ \vdots \\ g(x_{N-1}(t)-x_N(t)) \end{pmatrix} + \begin{pmatrix} u(t) \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \] where $g(x) = \exp(40 x) + x - 1$, $u(t) = \exp(-t)$, and $N=1000$. The diagonal matrix $D$ is chosen to have entries \[ D_{ii} = \left\{ \begin{array}{ll} 1 & 251\leq i \leq 750 \\ \frac{1}{2} & \text{otherwise}\end{array}\right.\] The diagonal matrix $D$ induces the norm $\| \cdot\|_D$, and the relative error between the full and the reduced order models is measured in this norm. \begin{figure}[!ht]\centering $\quad$ \includegraphics[scale=0.3]{figs/example2_error} \includegraphics[scale=0.3]{figs/example2_first} \caption{The plots refer to Example 2. (left) Relative error between the full and reduced order systems at different times, as a function of the number of basis vectors. (right) The $W$-DEIM based reconstruction of the first component $x_1(t)$ as a function of time, with $k=40$. } \label{f_example2} \end{figure} The dynamical system is simulated over $t=[0,7]$ seconds, and $2000$ snapshots of the dynamical system and of the nonlinear function are collected at equidistant time steps. Based on the decay rate of the snapshots, we vary the number of basis vectors from $5$ to $40$. The relative error is defined to be \[ \text{Rel. Err.}(t) \equiv \frac{\|x(t)-\hat{x}(t) \|_D}{\|x(t)\|_D},\] where $x(t)$ is the solution of the dynamical system at time $t$, whereas $\hat{x}(t)$ is the reduced order approximation at the same time. The relative error as a function of the number of retained basis vectors is plotted in the left panel of Figure~\ref{f_example2}.
On the right, the reconstruction of the first component of the dynamical system, $x_1(t)$, is shown; here $k=40$ basis vectors were retained. As can be seen, the reconstruction error is low and $W$-DEIM indeed approximates the large-scale dynamical system accurately. \subsection{Example 3} This example is inspired by~\cite[Section 2.3]{peherstorfer2014localized}. The spatial domain is taken to be $\Omega = [0,1]^2$ and the parameter domain is $\mathcal{D} = [0,1]^2$. We define a function $g : \Omega \times \mathcal{D} \rightarrow \mathbb{R}$ by \[ g(x_1,x_2;\mu_1,\mu_2) \equiv \frac{1}{\sqrt{h(x_1;\mu_1) + h(x_2;\mu_2) + 0.1^2 }},\] where $h(z;\mu) = ((1-z)-(0.99\cdot\mu-1))^2 $. The function to be interpolated is \begin{eqnarray} f({x};{\mu}) = & g(x_1,x_2;\mu_1,\mu_2) + g(1-x_1,1-x_2; 1-\mu_1, 1-\mu_2) \\ & + g(1-x_1,x_2; 1-\mu_1,\mu_2) + g(x_1,1-x_2; \mu_1, 1-\mu_2). \end{eqnarray} Depending on the parameter $\mu$, it has a sharp peak in one of the four corners of $\Omega$. The function is discretized on a $100\times 100$ grid in $\Omega$, and parameter samples are drawn from a $25\times 25$ equispaced grid in $\mathcal{D}$. These $625$ snapshots are used to construct the DEIM approximation. We choose three different weighting matrices: $W_1$ is the identity matrix, $W_2$ is the weighting matrix corresponding to the $L^2(\Omega)$ inner product, and $W_3$ is the weighting matrix corresponding to the $H^1(\Omega)$ inner product. \begin{figure}[!ht]\centering $\quad$ \includegraphics[scale=0.3]{figs/interperr} \includegraphics[scale=0.3]{figs/errorc} \caption{(left) {Maximum} relative error {over the test parameters} as a function of the number of basis vectors used in the DEIM approximation. (right) Error constants $\| \D\|_W = \| (\WSO^T U_r)^{-1}\|_2$. Three different weighting matrices were used: $W_1$ is the identity matrix, $W_2$ is the weighting matrix corresponding to the $L^2(\Omega)$ inner product, and $W_3$ is the weighting matrix corresponding to the $H^1(\Omega)$ inner product.} \label{f_example3} \end{figure} We then compute the average relative error over a test sample corresponding to an $11\times 11$ equispaced grid in $\mathcal{D}$. The relative error is defined to be \[ \text{Rel. Err.}_j = \frac{\| f - \D f\|_{W_j}}{\| f\|_{W_j}} \qquad j=1,2,3. \] The POD basis is computed using Algorithm~\ref{zd:ALG:POD}, whereas the subset selection is done using sRRQR~\cite[Algorithm 4]{GuE96}. The interpolation errors, as a function of the number of DEIM interpolation points retained, are displayed in the left panel of Figure~\ref{f_example3}. In the right panel of the same figure, we display the error constants $\| \D\|_W = \| (\WSO^T U_r)^{-1}\|_2$. As can be seen, although the error constants increase with an increasing number of basis vectors, the overall interpolation error decreases, resulting in an effective approximation. \subsection{Example 4}\label{ss_ex_4} This is a continuation of Example 3. We use the same setup as before; however, we now compare the different algorithms for $W$-DEIM. In `Method 1' we use Algorithm~\ref{zd:ALG:POD} to generate the POD basis, while the subset selection is done using sRRQR~\cite[Algorithm 4]{GuE96}. The error constant for this method is $\eta_1 \equiv \| (\WSO^T U_r)^{-1}\|_2$.
In `Method 2' we use Algorithm~\ref{zd:ALG:POD_W}, with error constant $\eta_2 \equiv \sqrt{\kappa_2(W)} \| (\WSO^TQ_{\WU})^{-1}\|_2$, and in `Method 3' we use Algorithm~\ref{zd:ALG:POD_W-s}, with error constant $\eta_3 \equiv \sqrt{\kappa_2(W_s)} \| (\WSO^TQ_{\WU})^{-1}\|_2$. In Algorithm~\ref{zd:ALG:POD_W}, the GSVD of the snapshot matrix w.r.t. the weighting matrix $W$ was computed as follows. First, the weighted QR factorization was computed using~\cite[Algorithm 2]{lowery2014stability} to obtain $Y = Q_Y R_Y$; note that $Q_Y^TWQ_Y = \Id_{n_s}$. Then the SVD of $R_Y$ is computed as $R_Y = U_R\Sigma V^T$. We obtain the GSVD of $Y = {U}_Y \Sigma V^T$, where now ${U}_Y = Q_Y U_R$. \begin{figure}[!ht]\centering $\quad$ \includegraphics[scale=0.3]{figs/interperr_ex4_m} \includegraphics[scale=0.3]{figs/errorc_ex4_m} \caption{(left) Maximum relative error as a function of the number of basis vectors used in the DEIM approximation. (right) Error constants for the three methods, as defined in Section~\ref{ss_ex_4}. } \label{f_example4a} \end{figure} \begin{figure}[!ht]\centering $\quad$ \includegraphics[scale=0.3]{figs/interperr_ex4_k} \includegraphics[scale=0.3]{figs/errorc_ex4_k} \caption{(left) Maximum relative error as a function of the number of basis vectors used in the DEIM approximation. (right) Error constants for the three methods, as defined in Section~\ref{ss_ex_4}. } \label{f_example4b} \end{figure} For a given weighting matrix, we define the relative error as \[ \text{Rel. Err.}_j = \frac{\| f - \D_j f\|_{W}}{\| f\|_{W}} \qquad j=1,2,3. \] The DEIM operators $\D_j$ correspond to the different methods described above. In Figure~\ref{f_example4a} we plot the relative errors of the DEIM approximations and the error constants; here the weighting matrix $W= W_2$ corresponds to the $L^2(\Omega)$ inner product. As can be seen, the overall interpolation errors of all three methods are comparable. However, the error constants for Method 2 are the highest, as expected, since they involve $\sqrt{\kappa_2(W)}$. In Figure~\ref{f_example4b} we repeat the same experiment, now with the weighting matrix $W= W_3$ corresponding to the $H^1(\Omega)$ inner product. Our conclusions are similar to those for the previous weighting matrix. Note here that $W_3$ is more ill-conditioned than $W_2$ and, furthermore, for $W= W_3$ we have $\kappa_2(W) \approx \kappa_2 (W_s)$; therefore, the difference between the error constants for Methods 2 and 3 is very small. In conclusion, for the application at hand, all three $W$-DEIM methods produce comparable results. Methods 2 and 3 may be desirable if a factorization of $W$ is computationally expensive, or even infeasible. \subsection{Example 5} In this example, we consider a parameterized PDE based on~\cite[Section 8.5]{quarteroni2015reduced}. Consider the following parameterized PDE defined on the domain $\Omega = [0,1]^2$ with boundary $\partial \Omega$: \begin{eqnarray} - \Delta u + \boldsymbol{b}(\mu_1) \cdot \nabla u = & s(\mathbf{x};\boldsymbol{\mu}) & \mathbf{x} \in \Omega\\ \mathbf{n}\cdot \nabla u = & 0 & \mathbf{x} \in \partial \Omega. \end{eqnarray} Here $\boldsymbol{\mu} = [\mu_1,\mu_2,\mu_3]$, and $\mathbf{n}$ is the normal vector. The wind velocity $\boldsymbol{b}(\mu_1)$ is taken to be $\boldsymbol{b} = [\cos\mu_1,\sin\mu_1]$, which is constant in space but depends nonlinearly on the parameter $\mu_1$.
The source term $s(\mathbf{x};\boldsymbol{\mu})$ has the form of a Gaussian function centered at $(\mu_2,\mu_3)$ with spread $0.25$: \[ s(\mathbf{x};\boldsymbol{\mu}) = \exp\left( -\frac{(x_1-\mu_2)^2 + (x_2-\mu_3)^2}{0.25^2}\right).\] The goal of this problem is to construct a reduced order model for the solution $u(\mathbf{x};\boldsymbol{\mu})$ in the domain $\Omega$ over the range of parameters $\mu_1 \in [0,2\pi]$, $\mu_2 \in [0.2,0.8]$ and $\mu_3 \in [0.15,0.35]$. A POD based approach is used to reduce the model of the parameterized PDE, with a DEIM/WDEIM approximation of the source term. \begin{figure}[!ht]\centering \includegraphics[scale=0.3]{example5_deim} \includegraphics[scale=0.3]{example5_pod} \caption{(left) Error in the DEIM and WDEIM approximations of the source term $s(\mathbf{x};\boldsymbol{\mu})$. (right) The error in the solution $u(\mathbf{x};\boldsymbol{\mu})$. In both cases, the error is averaged over $10$ test samples.} \label{f_ex5} \end{figure} As the weighting matrix $W$, we choose the matrix arising from the discrete representation of the $H^1(\Omega)$ inner product. To construct the WPOD and WDEIM bases, we first generated a training set of $1000$ parameter points $\boldsymbol\mu$ by Latin Hypercube sampling; the source term and the solution of the PDE are then computed at each training point $\boldsymbol\mu$. The maximum dimensions of the WPOD and WDEIM bases were chosen to be $20$ and $24$, respectively, based on the decay of the singular values. From the same snapshot set we also compute bases for POD and DEIM, with dimensions $20$ and $24$, respectively. For both approaches, we use pivoted QR for computing the point selections. We report the errors of both approaches in Figure~\ref{f_ex5}. The errors were averaged over $10$ different randomly generated samples in the parameter range. In the left panel, we compare the errors of the DEIM and WDEIM approximations of the source term $s(\mathbf{x};\boldsymbol{\mu})$, whereas in the right panel, we consider the error in the solution $u(\mathbf{x};\boldsymbol{\mu})$ over the same test samples. For the right panel, the dimension of the DEIM/WDEIM basis was chosen to be $24$ and the dimension of the POD/WPOD basis was chosen to be $28$. All the errors were computed in the weighted norm $\|\cdot\|_W$. We see that the error of our approach (WPOD-WDEIM) is comparable with that of the POD-DEIM approach, with the error of the WDEIM approach slightly smaller than that of the DEIM approach. \section{Conclusions} The main contributions of this work are: \emph{(i)} it defines a new index selection operator, based on the strong rank revealing QR factorization, that nearly attains the optimal DEIM projection error bound; \emph{(ii)} it facilitates the understanding of the canonical structure of the DEIM projection; \emph{(iii)} it establishes a core numerical linear algebra framework for the DEIM projection in weighted inner product spaces; \emph{(iv)} it defines a discrete version of the Generalized Empirical Interpolation Method (GEIM). We believe that these will be useful for further development of the DEIM idea and its applications in scientific computing. \section*{Acknowledgements} We are indebted to Ilse Ipsen and in particular to the two anonymous referees for constructive criticism and many suggestions that have improved the presentation of the paper.
\section{Introduction} A one-parameter family $(g(t))_{t\in [-T,0)}$ of Riemannian metrics on a compact manifold $M^n$ is a Ricci flow if it satisfies the evolution equation \begin{eqnarray} \frac{\partial}{\partial t} g(t)=-2\ric(g(t)). \label{rf_eqn} \end{eqnarray} If $(g(t))_{t\in [-T,0)}$ cannot be extended smoothly past time $t=0$, then the flow is singular and $\sup_{M\times [-T,0)} |\Rm(g(t))|_{g(t)} = \infty$. We call a singular Ricci flow \textit{Type I} if there is a $C>0$ such that \begin{eqnarray} \sup_M |\Rm(g(t))|_{g(t)}\leq \frac{C}{|t|}, \end{eqnarray} for all $t\in [-T,0)$. For Type I flows, Naber \cite{Naber} shows that the Cheeger-Gromov limit $(N, h(t),q)_{t\in (-\infty,0)}$ of any blow-up sequence of the form $(M,\tau_i^{-1}g(\tau_it), p)$, where $\tau_i\rightarrow 0$, is a gradient shrinking Ricci soliton. Namely, there exists an $f\in C^\infty (N)$ such that \begin{eqnarray} \ric(h(-1))+\hess_{h(-1)} f=\frac{h(-1)}{2}. \end{eqnarray} We will call such a limit \textit{a tangent flow of $g(t)$ at $p$}. Enders, M\"uller and Topping \cite{EMT} then show that tangent flows at singular points of the Ricci flow are necessarily non-flat. Here the set of singular points is defined as follows. \begin{definition}\label{singular_set} A point $p\in M$ is in the singular set $\Sigma$ of $(M,g(t))_{t\in [-T,0)}$ if there is no neighbourhood $U$ of $p$ such that \begin{eqnarray} \sup_{U\times [-T,0)} |\Rm(g(t))|_{g(t)} < \infty. \end{eqnarray} \end{definition} Very little is known regarding the structure of the singular set $\Sigma$ or its behaviour as the flow approaches the singular time. In the setting of the K\"ahler Ricci flow on a compact K\"ahler manifold $X$, Collins and Tosatti \cite{Collins} prove a conjecture of Feldman, Ilmanen and Knopf \cite{FIK}: if $\Sigma$ is a proper subset of $X$, then it is the union of irreducible analytic subvarieties of $X$ whose volume in the respective dimension decays to zero. Moreover, $\Sigma$ is a subset of real codimension at least two. For a general Type I Ricci flow, such a precise understanding of the singular set is not yet available. It is shown, however, in \cite{EMT} that the $n$-dimensional volume of the singular set decays to zero as the flow approaches the singular time. Namely, if $\vol_{g(-T)} \Sigma <\infty$, then $\lim_{t\rightarrow 0}\vol_{g(t)} \Sigma=0$. This decay motivates our work in two different ways. First of all, it raises the question of the rate of this decay. Note that, in principle, estimates on the volume of high curvature or singular regions could lead to $L^p$ curvature estimates along a Type I Ricci flow. From a different point of view, the decay of the volume of the singular set may be seen as an estimate for the ``dimension'' of $\Sigma$ at the singular time. Dimension estimates for singular sets of geometric PDEs have a long history. For instance, we have dimension estimates for the singular set of mass minimizing integral currents (see Federer \cite{Federer} and Almgren \cite{Almgren}), as well as for energy minimizing maps (see Schoen-Uhlenbeck \cite{SU} and Simon \cite{SimonL}). Moreover, White in \cite{White} proves very general stratification theorems for upper-semicontinuous functions on domains in Euclidean space, which put the previous results in a general framework and make the theory applicable in a variety of contexts, including the mean curvature flow.
Last but not least, one has the dimension estimates for the singular set of non-collapsed limits of Riemannian manifolds with Ricci curvature bounded below, arising from the theory of Cheeger and Colding in \cite{CheegerColding}. In general such stratification theorems involve decomposing the singular set $\Sigma$ into an ascending sequence $\Sigma_0 \subseteq\Sigma_1 \subseteq \ldots \subseteq \Sigma_N=\Sigma$ (for some $N\geq 0$) and then proving Hausdorff dimension estimates of the form $\dim \Sigma_j\leq j$. In this article, inspired particularly by \cite{White}, we prove a stratification theorem for the singular set of a Type I Ricci flow. Let $\Sigma$ be the singular set of a Type I Ricci flow $(M,g(t))_{t\in [-T,0)}$, as in Definition \ref{singular_set}, and for every $j=0,\ldots,n-2$ set \begin{eqnarray} \Sigma_j&=&\{ x\in \Sigma, \textrm{ no tangent flow at } x \textrm{ splits as}\; (N^{n-j-1},h(t))\times (\mathbb R^{j+1},g_{Eucl}) \}. \nonumber \end{eqnarray} It is clear that $\Sigma_0\subseteq \Sigma_1\subseteq \cdots\subseteq \Sigma_{n-2}\subseteq\Sigma$. The main result provides analogues for Type I Ricci flows of the Hausdorff dimension estimates mentioned above. However, several subtleties arise when we attempt to make sense of such estimates in the case of the Ricci flow, since the aid of an ambient space is no longer available. Namely, although Ricci flow shrinks a round $n$-sphere to a point and we would like to regard such a singularity as $0$-dimensional, the singular set $\Sigma$ is the whole manifold. One way to remedy this issue would be to study instead the limiting space of the flow as it approaches the singular time. However, it is not known whether a singular Ricci flow $(M,g(t))_{t\in[-T,0)}$ (even a Type I flow) converges to a metric space $(X,d_X)$ as $t$ approaches the singular time in general. Alternatively, one could make sense of the concept of the singular set and its dimension by embedding the flow in a larger ambient space, see \cite{KL1} and \cite{HN}. However, this is beyond the scope of the present paper. In the following, we interpret the dimensionality of a singular stratum $\Sigma_j$ via volume decay estimates, observing that along a cylindrical Ricci flow $g(t)=-2(n-j-1)t g_{S^{n-j}} + g_{Eucl}$ on $S^{n-j}\times \mathbb R^j$ the volume form is given by $d\mu_{g(-\tau)}=\tau^{\frac{n-j}{2}} d\mu_{g(-1)}$. \begin{theorem}\label{main_theorem} Fix $j=0,\ldots, n-2$ and let $\varepsilon >0$. Then, there exist closed $A_i\subset \bar{\Sigma}_j$ ($i=1,2,\ldots$), depending on $j$ and $\varepsilon$, such that $\Sigma_j \subset \bigcup_{i=1}^\infty A_i$ and \begin{eqnarray} \frac{\vol_{g(-\tau)} (A_i) }{\tau^{\frac{n-j-\varepsilon}{2}}} &\leq& C(j,\varepsilon,i)\tau^{\beta},\label{size_est_1} \end{eqnarray} for some $\beta=\beta(\varepsilon)\in (0,1)$. Also, for every $\delta>0$ there is $i_0$ such that \begin{eqnarray} \vol_{g(-\tau)}(\bar\Sigma_j\setminus A_i) <\delta, \label{small_vol} \end{eqnarray} for every $i\geq i_0$ and $\tau\in (0,T]$. Moreover, for each $x \in \Sigma_0$ there exist $R_0,\bar\tau >0$ such that \begin{eqnarray} B_g(x,-\bar\tau, R_0\sqrt{\bar\tau}) \cap \{y\in M,\;\Theta_g(y)\leq \Theta_g(x)\} \subseteq B_g(x,-\tau, R_0\sqrt{\tau}), \label{size_est_3} \end{eqnarray} for every $\tau \in (0,\bar\tau]$.
\end{theorem} Here, $B_g(x,t,r)$ denotes the $g(t)$-metric ball of radius $r$ centered at $x\in M$, $\vol_{g(t)}$ is the $n$-dimensional Hausdorff measure with respect to $g(t)$ and $\Theta_g(\;\cdot\; )$ is a lower semicontinuous function on $M$, analogous to the Gaussian density for the mean curvature flow, which is defined in Section \ref{def_of_density}. Observe that estimate (\ref{size_est_3}) may be interpreted as the isolatedness of points in the $0$-dimensional stratum $\Sigma_0$ with a fixed density value. Of course, such a statement taken literally cannot be true. For instance, for the shrinking round sphere $S^n$, $\Sigma_0=S^n$ and all points have the same density due to symmetry. On the other hand, the diameter goes to zero and all points may be thought of as representing a single singular point. Ideally, in Theorem \ref{main_theorem} we would prefer an estimate on the volume of $\Sigma_j$ instead of the sets $A_i$. These sets arise from the decomposition of $\Sigma_j$ according to the scale below which the flow is sufficiently close to a shrinking soliton. It is below that scale that our argument allows us to iteratively refine a given covering, making it more efficient as the flow approaches the singular time. On the other hand, an estimate on the volume of the whole $\Sigma_j$ would be more in the spirit of Minkowski content estimates. Such estimates for singular sets have recently been obtained using quantitative differentiation arguments in different contexts (see for instance \cite{ChNab1}, \cite{ChNab2}, \cite{ChHasNab1}, \cite{ChHasNab2}, \cite{ChNabVAl}). We intend to explore this direction further in a future paper. Finally, the following corollary partially improves the volume decay statement in \cite{EMT}, exploiting the fact that a shrinking Ricci soliton splitting more than $n-2$ Euclidean factors is necessarily flat. Moreover, when the Weyl tensor remains bounded along the flow we obtain an improved volume decay estimate, as a consequence of the fact that Weyl-flat gradient shrinking Ricci solitons can split at most one Euclidean factor, which follows from \cite{Weylflat}. \begin{corollary}\label{corol} If $\Sigma=\Sigma_j$ it follows that for every $\varepsilon>0$ there exist closed $A_i\subseteq \Sigma$, $i=1,2,\ldots$, such that $\Sigma=\bigcup_{i=1}^\infty A_i$ and \begin{eqnarray} \vol_{g(-\tau)}(A_i) \leq C(i,\varepsilon) \tau^{\frac{n-j}{2}-\varepsilon}, \label{cor_est} \end{eqnarray} for every $\tau\in (0,T]$. In particular we distinguish the following cases. \begin{enumerate} \item In general $\Sigma= \Sigma_{n-2}$ and $\vol_{g(-\tau)}(A_i) \leq C(i,\varepsilon) \tau^{1-\varepsilon}$. \item Suppose that the Weyl curvature satisfies \begin{eqnarray} \sup_{M\times [-T,0)} |W_g|_g < \infty. \nonumber \end{eqnarray} Then $\Sigma=\Sigma_1$ and $\vol_{g(-\tau)}(A_i) \leq C(i,\varepsilon) \tau^{\frac{n-1}{2}-\varepsilon}$. \end{enumerate} In both cases, for every $\delta>0$ there is $i_0$ such that $\vol_{g(-\tau)}(\Sigma\setminus A_i) <\delta$ for every $i\geq i_0$ and $\tau\in (0,T]$. \end{corollary} The outline of the paper is as follows. In Section \ref{preliminaries} we collect a few preliminary facts. Then, in Section \ref{def_of_density} we introduce a monotone quantity which plays the role of Perelman's reduced volume based at the singular time, and its associated density function. They are both lower-semicontinuous under the Cheeger-Gromov convergence of Ricci flows, which is essential to our arguments.
In Section \ref{spl}, given any non-flat gradient shrinking Ricci soliton, we distinguish the set of points where the density function above achieves its minimum, called the spine. We then prove a splitting theorem (Theorem \ref{splitting}), which asserts that the soliton splits enough Euclidean factors $\mathbb R^j$, $0\leq j\leq n-2$, so that its spine is of the form $V\times \mathbb R^j$ and the diameter of $V$ decays to zero as the flow induced by the soliton approaches the singular time. Finally, in Section \ref{sizeof} we prove Theorem \ref{main_theorem} via a covering argument similar to \cite{SimonL}. \begin{acknowledgements} The author would like to thank Alix Deruelle, Felix Schulze and Peter Topping for many interesting discussions and their valuable support. This research has been supported by the grant of the German Science Foundation entitled ``Regularity and stability of curvature flows and their applications to geometric variational problems''. The author would also like to acknowledge support by the EPSRC, on a programme grant entitled ``Singularities of Geometric Partial Differential Equations'' reference number EP/K00865X/1, during the first stages of the project. \end{acknowledgements} \section{Preliminaries}\label{preliminaries} In this section we collect some preliminary results on which we will rely in the rest of the paper. \subsection{Gradient shrinking Ricci solitons} A triplet $(M^n,g,f)$ is called a gradient shrinking Ricci soliton if it satisfies the equation \begin{eqnarray} \ric_g+\hess_g f = \frac{g}{2}.\label{soliton} \end{eqnarray} Clearly, if $(M,g)$ is complete with bounded curvature, the vector field $\nabla^g f$ is complete. It then follows from (\ref{soliton}) that the shrinking Ricci soliton $(M,g,f)$ induces a Ricci flow $h(t)=-t \phi_t^* g$ on $M$, where $\phi_t$ are the diffeomorphisms generated by $\nabla^g f$ via \begin{eqnarray} \frac{d}{dt}\phi_t&=&-\frac{1}{t} \nabla^g f \circ \phi_t , \nonumber\\ \phi_{-1}&=&id_M.\nonumber \end{eqnarray} It is well known that the following identity holds for some constant $c$: \begin{eqnarray} R+|\nabla f|^2-f&=& c. \label{soliton_identity} \end{eqnarray} The soliton function $f$ is well-defined up to a linear function. When $c=0$ in (\ref{soliton_identity}), we call $f$ a \textit{normalized soliton function}. Normalized soliton functions will be important to us mainly because of the following result from \cite{Naber}. \begin{lemma}[Lemma 2.1 in \cite{Naber}]\label{f_vol} Let $(M,g,f), (M',g',f')$ be normalized shrinking solitons and suppose that \begin{enumerate} \item $\int_M e^{-f} d\mu_g, \int_{M'} e^{-f'} d\mu_{g'} < +\infty$, \item $(M,g)$ and $(M',g')$ are isometric. \end{enumerate} Then, $\int_M e^{-f} d\mu_g= \int_{M'} e^{-f'} d\mu_{g'}$. \end{lemma} \subsection{The reduced distance and volume under Type I curvature bounds} \begin{definition}[Type I Ricci flow] For every positive integer $n$ and $C>0$ we define the following classes of pointed complete Ricci flows \begin{eqnarray} \rf&=&\{ (M^n,g(t),p)_{t\in (-T,0)},\; g(t) \textrm{\;solves\;} (\ref{rf_eqn})\;\textrm{and}\;|\Rm_g|_g\leq \frac{C}{|t|}\; \textrm{on}\; M\times (-T,0) \}. \nonumber\\ \rfr&=&\{(M^n,g(t),p)\in \rf,\; \sup_{ M\times (-T,0)}|\Rm_g|_g<+\infty \}. \nonumber \end{eqnarray} \end{definition} We equip $\rf$ with the topology of smooth ($C^\infty$) pointed Cheeger-Gromov-Hamilton convergence for Ricci flows (uniformly on compact sets).
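As a simple illustration of these classes (a standard example, recalled here only for concreteness), consider the round shrinking sphere. The flow $g(t)=-2(n-1)t\, g_{S^n}$, $t\in(-\infty,0)$, where $g_{S^n}$ denotes the round metric of constant sectional curvature $1$, solves (\ref{rf_eqn}) since $\ric(g_{S^n})=(n-1)g_{S^n}$, and its sectional curvatures at time $t$ are equal to $\frac{1}{2(n-1)|t|}$. Hence \begin{eqnarray} |\Rm(g(t))|_{g(t)}\leq \frac{C(n)}{|t|}, \nonumber \end{eqnarray} so that, for any $p\in S^n$ and $T>0$, the pointed flow $(S^n,g(t),p)_{t\in (-T,0)}$ belongs to $\rf$ once $C\geq C(n)$, while its curvature is not uniformly bounded on $S^n\times (-T,0)$, so it does not belong to $\rfr$.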
Since for any $\mathfrak g=(M,g(t),p)_{t\in (-T,0)}$ and $T_i>0$ such that $T_i\searrow 0$, the sequence $\mathfrak g_i=(M,g(t-T_i),p)_{t\in(-T+T_i,0)}$ converges to $\mathfrak g$, it follows that $\overline{\rfr}=\mathcal{RF}^C$. \begin{definition}\label{reduced_distance} For $\mathfrak g=(M,g(t),p)_{t\in [-T,0]}\in \rfr$, the reduced distance function, originally defined in \cite{Perelman1}, is given by \begin{eqnarray} l_{\mathfrak g}(x,\bar\tau)=\inf_\gamma\left\{ \frac{1}{2\sqrt{\bar\tau}} \int_0^{\bar\tau}\sqrt{\tau} \left( R_{g(-\tau)} (\gamma(\tau) )+\left|\frac{d\gamma}{d\tau}\right|_{g(-\tau)}^2 \right) d\tau \right\}, \label{r_dist} \end{eqnarray} where the infimum is taken over curves $\gamma: [0,\bar\tau]\rightarrow M$ with $\gamma(0)=p$, $\gamma(\bar\tau)=x$. \end{definition} The following estimates from \cite{Naber} will be crucial to our work. \begin{proposition}\label{red_dist_est} Let $\mathfrak g=(M,g(t),p)\in \rfr$. There exists $A=A(n,C)>0$ such that \begin{enumerate} \item $\frac{1}{A}(1+\frac{d_{g(-\tau)}(p,x)}{\sqrt{\tau}})^2 - A \leq l_{\mathfrak g}(x,\tau) \leq A (1+\frac{d_{g(-\tau)}(p,x)}{\sqrt\tau})^2$, \item $|\nabla l_{\mathfrak g}|(x,\tau)\leq \frac{A}{\sqrt{\tau}} (1+\frac{d_{g(-\tau)}(p,x)}{\sqrt{\tau}})$, \item $|\frac{\partial l_{\mathfrak g}}{\partial \tau}|(x,\tau)\leq \frac{A}{\tau} (1+\frac{d_{g(-\tau)}(p,x)}{\sqrt{\tau}})^2$. \end{enumerate} \end{proposition} Now, whenever $\mathfrak g_i\rightarrow \mathfrak g$, with $\mathfrak g_i,\mathfrak g\in \rf$, it is possible to pass to the limit of the reduced distance functions of $\mathfrak g_i$ using the estimates in Proposition \ref{red_dist_est}, as is done in \cite{Naber}. This motivates the following definition. \begin{definition} Let $\mathfrak g=(M, g(t),p)_{t\in (-T,0)}\in \rf$. A function $l: M\times (0, T)\rightarrow \mathbb R$ is called a singular reduced distance if the following holds. There exists a sequence $\mathfrak g_i\in \rfr$ converging to $\mathfrak g$ in the topology of $\rf$ such that $l_{\mathfrak g_i}\rightarrow l$ in $C^{0,\alpha}_{loc}$. \end{definition} \begin{remark}\label{rmk_s_r_d} Since $\overline{\rfr}=\mathcal{RF}^C$ it follows that the set of singular reduced distance functions of $\mathfrak g\in\rf$ is non-empty. Moreover, since the estimates of Proposition \ref{red_dist_est} pass to the limit, it follows that the space of singular reduced distance functions of $\mathfrak g\in\rf$ is compact in the $C^{0,\alpha}_{loc}$ topology. \end{remark} Given a singular reduced distance $l$ on $\mathfrak g$ and $\tau\in (0,T)$, following \cite{Perelman1} we define the reduced volume associated to $l$ as \begin{eqnarray} \rv_{\mathfrak g,l}(\tau):=\int_M (4\pi \tau)^{-\frac{n}{2}} e^{-l(\cdot,\tau)} d\mu_{g(-\tau)}. \end{eqnarray} \begin{remark} Note that if $\mathfrak g\in \rfr$ then any singular reduced distance function $l$ of $\mathfrak g$ is given by (\ref{r_dist}). To see this, consider $\mathfrak g_i\rightarrow \mathfrak g$, with $\mathfrak g_i,\mathfrak g\in \rfr$. By Perelman's pseudolocality theorem (see \cite{Perelman1}) it follows that the $\mathfrak g_i$ have uniformly bounded curvature in time intervals $[-a,0]$, $a<T$, hence they converge to $\mathfrak g$ uniformly locally in $M\times (-T,0]$. Hence, the reduced distance functions $l_{\mathfrak g_i}$ converge pointwise to $l_{\mathfrak g}$. \end{remark} In the following lemma we collect a few useful facts about $\rv_{\mathfrak g,l}(\; \cdot\;)$.
\begin{lemma}\label{red_vol_props} The reduced volume $\rv_{\mathfrak g,l}(\tau)$ with respect to a singular reduced distance $l$ of $\mathfrak g\in \rf$ is non-increasing in $\tau$. Moreover, if there exist $0<\tau_1<\tau_2$ such that \begin{eqnarray} \rv_{\mathfrak g, l}(\tau_1)=\rv_{\mathfrak g, l}(\tau_2), \end{eqnarray} then $g(t)$ is a gradient shrinking Ricci soliton, namely \begin{eqnarray} \ric(g(-\tau))+\hess_{g(-\tau)} l(\cdot,\tau)=\frac{1}{2\tau} g(-\tau),\label{soliton_eqn} \end{eqnarray} and $l(\cdot,1)$ is a normalized soliton function. \end{lemma} \begin{proof} The monotonicity statement is essentially Lemma 2.8 in \cite{Naber}. The statement (\ref{soliton_eqn}) and the fact that $l(\cdot,1)$ is a normalized soliton function are proven together with Theorem 2.1 in \cite{Naber}. \end{proof} \subsection{The non-inflating property of the Ricci flow.} Now we recall the non-inflating property of smooth compact Ricci flows, as it appears in Zhang \cite{Zhang}. A similar result was also obtained by Chen and Wang in \cite{WangChen} under additional assumptions. The result in \cite{Zhang} however is more suitable for the setting of Type I Ricci flows. \begin{theorem}[Theorem 1.1 in \cite{Zhang}]\label{noninflatingthm} Let $(M^n,g(t))_{t\in [0,t_0]}$ be a smooth and compact Ricci flow. Then for every $\alpha>0$ there exists a $\kappa>0$, depending on $n,\alpha, g(0)$, with the following property. If for some $x_0\in M$ and $r\in (0,\sqrt{t_0})$ the estimate \begin{eqnarray} R(g(t))\leq \frac{\alpha}{t_0-t}, \end{eqnarray} holds in $B_g(x_0,t_0,r)\times [t_0-r^2,t_0]$, then \begin{eqnarray} \vol_{g(t_0)}(B_g(x_0,t_0,r))\leq \kappa r^n. \end{eqnarray} \end{theorem} The non-inflating property has the following consequence for Type I flows, obtained directly from Theorem \ref{noninflatingthm} and the Type I curvature bound. \begin{corollary} Let $(M,g(t),x_0)_{t\in [-T,0)}\in \rf$. Then, there exists a $\kappa_0>0$ depending on $n,g(-T),C$, such that for every $r\in (0,\sqrt{\frac{T}{2}})$ and $t_0\in (-\frac{T}{2},0)$ \begin{eqnarray} \vol_{g(t_0)}(B_g(x_0,t_0,r))\leq \kappa_0 r^n.\label{noninflate} \end{eqnarray} \end{corollary} \begin{proof} Since $(M,g(t),x_0)_{t\in [-T,0)}\in \rf$, there exists $c(n,C)>0$ such that $R(g(t))\leq \frac{c(n,C)}{|t|} \leq \frac{c(n,C)}{t_0-t}$ on $M\times [t_0-r^2,t_0]$. Estimate (\ref{noninflate}) then follows from Theorem \ref{noninflatingthm}. \end{proof} \section{The reduced volume based at the singular time}\label{def_of_density} In this section we define a monotone quantity which plays the role of a reduced volume based at a singular time and consider the associated density function. In particular, the compactness property of Remark \ref{rmk_s_r_d} allows the following definition. \begin{definition}\label{singular_red_vol} We define the singular reduced volume at scale $\tau>0$ of $\mathfrak g\in \rf$ as \begin{eqnarray} \rv_{\mathfrak g}(\tau)=\rv_{\mathfrak g, \bar l}(\tau),\nonumber \end{eqnarray} where $\bar l$ is a minimizer of $\rv_{\mathfrak g,l}(\tau)$ among all singular reduced distances $l$ of $\mathfrak g$. \end{definition} A direct implication of this definition is the following. \begin{proposition}\label{rv_props} Let $ \mathfrak g=(M,g(t),p)_{t\in (-T,0)} \in \rf$. Then \begin{enumerate} \item If $0<\tau_1<\tau_2$ then $\rv_{\mathfrak g}(\tau_1) \geq \rv_{\mathfrak g}(\tau_2)$.
\item If $\rv_{\mathfrak g}(\tau_1) = \rv_{\mathfrak g}(\tau_2)$ for some $0<\tau_1<\tau_2$, then there exists a singular reduced distance $l$ of $\mathfrak g$ such that $\rv_{\mathfrak g}(\tau)=\rv_{\mathfrak g,l}(\tau)$, for every $\tau\in (0,T)$. Moreover, $l(\cdot,1)$ is a normalized soliton function. \item Let $\mathfrak g_i=(M_i,g_i(t),p_i)\in \rf$ such that $\mathfrak g_i\rightarrow \mathfrak g$. Then for every $\tau\in(0,T)$ \begin{eqnarray} \liminf_i \rv_{\mathfrak g_i}(\tau)\geq \rv_{\mathfrak g} (\tau). \nonumber \end{eqnarray} \end{enumerate} \end{proposition} In particular, the monotonicity property in Proposition \ref{rv_props} justifies the following definition. \begin{definition} The density of $\mathfrak g\in \rf$ is defined as \begin{eqnarray} \Theta_{\mathfrak g}:=\lim_{\tau \searrow 0} \rv_{\mathfrak g} (\tau). \end{eqnarray} \end{definition} \begin{remark} By Proposition \ref{rv_props}, if $\mathfrak g_i,\mathfrak g\in \rf$ and $\mathfrak g_i\rightarrow \mathfrak g$ it follows that \begin{eqnarray} \liminf_{i\rightarrow \infty} \Theta_{\mathfrak g_i}\geq \Theta_{\mathfrak g}. \end{eqnarray} \end{remark} \begin{remark}\label{density_values} For every $\mathfrak g\in \rfr$ the reduced volume satisfies $\rv_{\mathfrak g,l_{\mathfrak g}}(\tau) \in (0,1]$ for every $\tau>0$, since $\lim_{\tau\rightarrow 0^+} \rv_{\mathfrak g,l_{\mathfrak g}}(\tau) =1$ (see for instance \cite{ricciflow}). Thus, $\Theta_{\mathfrak g}\in [0,1]$ for every $\mathfrak g\in \rf$. In fact, $\Theta_{\mathfrak g} >0$, due to Perelman's no-local collapsing theorem. \end{remark} \begin{definition} Let $(M,g(t))_{t\in (-T,0)}$ be a Type I Ricci flow and $x\in M$. We will always denote $\mathfrak g_x:=(M,g(t),x)_{t\in (-T,0)}$. Suppose that for some $C>0$, $\mathfrak g_x \in \rf$. Then, the density of $g$ at $x$ is defined as \begin{eqnarray} \Theta_g(x)=\Theta_{\mathfrak g_x}. \end{eqnarray} \end{definition} \begin{remark} In \cite{EMT} the authors introduce a different notion of reduced volume based at the singular time and density function for a Type I Ricci flow. In their approach, they consider limits of reduced distance functions arising from different sequences of times converging to the singular time. Then, they define a singular reduced distance by considering the infimum over all these limits, and use this to build a monotone quantity and its associated density. It is not clear to the author if the density function in \cite{EMT} behaves well under Cheeger-Gromov convergence. Instead, the lower semicontinuity is essentially built into the definition of our density. \end{remark} The reduced volume based at the singular time involves minimizing over all approximating sequences of Ricci flows, and in principle is hard to compute. However, for shrinking Ricci solitons, the following lemma provides some information. \begin{lemma}\label{density_soliton} Let $\mathfrak g:=(M,g(t),p)_{t\in (-\infty,0)}\in \rf$ be the Ricci flow associated to a normalized gradient shrinking Ricci soliton $(M,g(-1),f)$. Then, \begin{enumerate} \item Let $l$ be an arbitrary singular reduced distance function for $\mathfrak g$. Then $$\lim_{\tau\rightarrow\infty} \rv_{\mathfrak g}(\tau)=\lim_{\tau\rightarrow\infty} \rv_{\mathfrak g,l}(\tau)=\int_M (4\pi)^{-\frac{n}{2}} e^{-f}d\mu_{g(-1)}.$$ \item If $p\in M$ is a critical point of $f$, then \begin{eqnarray} \Theta_g(p)=\int_M (4\pi)^{-\frac{n}{2}} e^{-f}d\mu_{g(-1)} \leq \Theta_g(x), \nonumber \end{eqnarray} for every $x\in M$.
In particular the density function $\Theta_g$ on a shrinking Ricci soliton attains a minimum. \item If a singular reduced distance function $l$ for $\mathfrak g$ is also a soliton function, then \begin{eqnarray} \rv_{\mathfrak g}(\tau)=\rv_{\mathfrak g,l}(\tau), \nonumber \end{eqnarray} for every $\tau>0$. \end{enumerate} \end{lemma} \begin{proof} The first statement is essentially Theorem 3.2 in \cite{Naber}. We describe its proof again here for completeness. Fix a $\bar\tau>0$, and let $l$ be a singular reduced distance for $\mathfrak g$. Take a sequence $\tau_i\rightarrow +\infty$ and define the blow-down sequence $\mathfrak g_i:=(M,\tau_i^{-1}g(\tau_i t), p)_{t\in(-\infty,0)}$ and set $l_i(\cdot,\tau)=l(\cdot,\tau_i \tau)$. Note that $l_i$ is a singular reduced distance for $\mathfrak g_i$. By monotonicity and the scaling behaviour of $l$ it follows that \begin{eqnarray} \lim_{\tau\rightarrow \infty}\rv_{\mathfrak g,l}(\tau)=\lim_{i\rightarrow \infty} \rv_{\mathfrak g,l}(\tau_i \bar\tau)=\lim_{i\rightarrow \infty} \rv_{\mathfrak g_i, l_i}(\bar\tau). \label{limit} \end{eqnarray} On the other hand, there exists $q\in M$ such that $\mathfrak g_i \rightarrow \mathfrak g_q=(M,g(t),q)_{t\in(-\infty,0)}$. To prove this, observe that since $g(t)=-t\phi_t^* g(-1)$ \begin{eqnarray} \tau_i^{-1}g(\tau_i t)&=& (\phi_t^{-1}\circ\phi_{\tau_i t})^* g(t),\label{scaling1} \\ &=&\phi_{-\tau_i}^* g(t), \label{scaling2} \end{eqnarray} since $\phi_{\tau_i t}= \phi_t\circ\phi_{-\tau_i}$. Hence, the sequence $(M,\tau_i^{-1}g(\tau_i t), p)$ is isometric to $(M,g(t), \phi_{-\tau_i}(p))$. Now, since $\phi_{-\tau_i}(p)\rightarrow q$ as $i\rightarrow\infty$, where $q$ is a critical point of $f$, it follows that $\mathfrak g_i\rightarrow \mathfrak g_q:=(M,g(t),q)$. Moreover, there is a singular reduced distance $\bar l$ for $\mathfrak g_q$ such that $l_i \rightarrow \bar l$. Hence, using the estimates in Proposition \ref{red_dist_est} we conclude that for every $\bar\tau>0$ \begin{eqnarray} \lim_{i\rightarrow \infty} \rv_{\mathfrak g_i,l_i}(\bar\tau) =\rv_{\mathfrak g_q,\bar l}(\bar\tau), \label{bar_l_normalized} \end{eqnarray} which together with (\ref{limit}) shows that $\rv_{\mathfrak g_q,\bar l}(\;\cdot\;)$ is constant, hence $\bar l$ is a normalized soliton function by Lemma \ref{red_vol_props}. Therefore, using Lemma \ref{f_vol} we obtain \begin{eqnarray} \rv_{\mathfrak g_q,\bar l}(\bar\tau)= \int_M (4\pi)^{-\frac{n}{2}} e^{-f}d\mu_{g(-1)}.\label{equal_f_vols} \end{eqnarray} Finally, combining (\ref{limit}), (\ref{bar_l_normalized}) and (\ref{equal_f_vols}) we obtain that \begin{eqnarray} \lim_{\tau\rightarrow \infty} \rv_{\mathfrak g,l}(\tau) = \int_M (4\pi)^{-\frac{n}{2}} e^{-f}d\mu_{g(-1)},\label{arv} \end{eqnarray} for every singular reduced distance function $l$ for $\mathfrak g$. This suffices to prove the first statement of the lemma. To prove the second assertion, let $p,x\in M$, $\nabla f(p)=0$ and denote, as usual, $\mathfrak g_p=(M,g(t),p)_{t\in(-\infty,0)}$, $\mathfrak g_x=(M,g(t),x)_{t\in(-\infty,0)}$. We will first prove that \begin{eqnarray} \Theta_g(p)=\lim_{\tau\rightarrow\infty} \rv_{\mathfrak g_p}(\tau)=\int_M (4\pi)^{-\frac{n}{2}} e^{-f}d\mu_{g(-1)}. \label{density_ctl_point} \end{eqnarray} Consider $\tau_i\rightarrow 0$ and observe using (\ref{scaling1})-(\ref{scaling2}) that $\mathfrak g_i=(M,\tau_i^{-1}g(\tau_i t), p)$ is isometric to $(M,g(t), \phi_{-\tau_i}(p))$. Since $p$ is a critical point of $f$, it follows that $\phi_{-\tau_i}(p)=p$, hence $\mathfrak g_i\rightarrow \mathfrak g_p$.
As in the proof of the first assertion of the lemma, let $l$ be an arbitrary singular reduced distance for $\mathfrak g$ and set $l_i(\cdot,\tau)=l(\cdot,\tau_i \tau)$. By the monotonicity of $\rv_{\mathfrak g,l}(\cdot)$ and the scaling behaviour of $l$, it follows that \begin{eqnarray} \lim_{\tau\rightarrow 0} \rv_{\mathfrak g,l}(\tau)=\lim_{i\rightarrow \infty} \rv_{\mathfrak g,l}(\tau_i \tau) =\lim_{i\rightarrow \infty} \rv_{\mathfrak g_i,l_i}(\tau). \end{eqnarray} Moreover, $\rv_{\mathfrak g_i,l_i}(\tau)\rightarrow \rv_{\mathfrak g,\bar l}(\tau)$, for some singular reduced distance $\bar l$ for $\mathfrak g$, since $\mathfrak g_i\rightarrow \mathfrak g$. Therefore, $\rv_{\mathfrak g,\bar l}(\bar\tau)=\lim_{\tau\rightarrow 0} \rv_{\mathfrak g,l}(\tau)$ for every $\bar\tau>0$, which implies that $\bar l$ is a normalized soliton function. By Lemma \ref{f_vol} it follows that $\lim_{\tau\rightarrow 0}\rv_{\mathfrak g,l}(\tau)= \int_M (4\pi)^{-\frac{n}{2}}e^{-f}d\mu_{g(-1)}$. Since $l$ was arbitrary, this implies (\ref{density_ctl_point}). The assertion of the lemma then follows from $$\Theta_g(x)\geq \lim_{\tau\rightarrow \infty} \rv_{\mathfrak g_x}(\tau)=\lim_{\tau\rightarrow \infty} \rv_{\mathfrak g_p}(\tau)= \Theta_g (p),$$ where again we used (\ref{arv}). Note that by estimates on the growth of the soliton function of a gradient shrinking Ricci soliton (see for instance \cite{Naber} or \cite{HM}), $f$ always has a critical point, hence $\Theta_g$ always attains a minimum. For the last assertion, note that if $l$ is a singular reduced distance for $\mathfrak g$ which is also a soliton function, then by monotonicity we obtain, for every $\tau>0$, \begin{eqnarray} \rv_{\mathfrak g, l}(\tau)\geq \rv_{\mathfrak g}(\tau)\geq \lim_{\sigma\nearrow \infty} \rv_{\mathfrak g}(\sigma)=\lim_{\sigma\nearrow \infty} \rv_{\mathfrak g, l}(\sigma)= \rv_{\mathfrak g, l}(\tau), \end{eqnarray} where the last equality holds since $\rv_{\mathfrak g,l}(\;\cdot\;)$ is constant when $l$ is a soliton function. Hence $\rv_{\mathfrak g}(\tau)=\rv_{\mathfrak g,l}(\tau)=\lim_{\sigma\nearrow \infty} \rv_{\mathfrak g}(\sigma)$. \end{proof} Given a shrinking Ricci soliton $(M,g(t))_{t\in (-\infty,0)}$, Lemma \ref{density_soliton} shows that the limit \begin{eqnarray} \lim_{\tau \rightarrow\infty} \rv_{\mathfrak g_x}(\tau) \label{rvlimit} \end{eqnarray} does not depend on $x\in M$. This naturally leads to the following definition. \begin{definition} The asymptotic reduced volume from the singular time of a shrinking Ricci soliton $(M,g(t))_{t\in (-\infty,0)}$ is defined as \begin{eqnarray} \mathcal{ARV}(M,g):=\lim_{\tau \rightarrow\infty} \rv_{\mathfrak g}(\tau). \end{eqnarray} \end{definition} \begin{remark} The asymptotic reduced volume in the setting of ancient smooth (super)solutions to the Ricci flow was first studied by Yokota in \cite{Yokota}. However, the arguments in \cite{Yokota} do not seem to carry over to the setting of singular Type I flows, to show the independence of the limit (\ref{rvlimit}) from the basepoint. For our work though, it suffices to establish this for shrinking Ricci solitons. \end{remark} \section{Splitting Ricci shrinkers.}\label{spl} \begin{definition} Let $(M, g(-1),f)$ be a gradient shrinking Ricci soliton, $g(t)$ the associated Ricci flow, and $p\in M$ a minimizer of $\Theta_g$. The subset \begin{equation} S(M,g)=\{x\in M,\;\; \Theta_g(x) = \Theta_g(p)\}, \end{equation} will be called the spine of the gradient shrinking Ricci soliton. \end{definition} \begin{remark}\label{closed} The lower semi-continuity of the density implies that $S(M,g)$ is closed.
\end{remark} \begin{lemma}[Splitting principle]\label{cone_splitting} Let $g(t)=g_M(t)+g_{Eucl}$, $t\in (-\infty,0)$, be a gradient shrinking Ricci soliton on $M^k\times \mathbb R^{n-k}$, $0< k\leq n$, satisfying $|\Rm(g(-1))|_{g(-1)}\leq C$. Moreover, let $V\subseteq M$ such that $S(M\times \mathbb R^{n-k},g)=V\times \mathbb R^{n-k}$. Suppose there exists $\tau>0$ such that \begin{eqnarray} \frac{\diam_{g_M(-\tau)} (V)}{\sqrt \tau} > A\sqrt 2 -1, \label{assumption1} \end{eqnarray} where $A>0$ is given by Proposition \ref{red_dist_est}. Then, there exists a gradient shrinking Ricci soliton $(N^{k-1},h(t))_{t\in (-\infty,0)}$ and $V'\subseteq N$ such that $(M,g_M(t))$ splits isometrically as $(N,h(t))\times (\mathbb R, g_{Eucl})$ and $S(M\times \mathbb R^{n-k},g)=V'\times \mathbb R^{n-k+1}$. \end{lemma} \begin{proof} Since $S(M\times \mathbb R^{n-k},g)$ is closed, assumption (\ref{assumption1}) implies that there exist $x,y\in V$ satisfying \begin{eqnarray} \frac{d_{g_M(-\tau)}(x,y)}{\sqrt \tau} > A \sqrt 2 - 1. \label{assumption2} \end{eqnarray} Let $p=(x,0), q=(y,0) \in S(M\times \mathbb R^{n-k},g)$. By Lemma \ref{density_soliton}, $\Theta_g (p) = \Theta_g(q) = \mathcal{ARV}(M\times \mathbb R^{n-k},g)$. This implies that there exist singular reduced distance functions $l_p,l_q$ of $\mathfrak g_p$ and $\mathfrak g_q$ respectively, which are both soliton functions. It follows from the soliton equation that the difference $L(\cdot,\tau)=l_p(\cdot,\tau)-l_q(\cdot,\tau)$ satisfies $ \hess_{g(-\tau)} L(\cdot,\tau)=0$. Moreover, since the metric on $M\times \mathbb R^{n-k}$ splits, its restriction $\bar L=L|_{M\times \{0\}}$ satisfies $\hess_{g_M(-\tau)} \bar L(\cdot,\tau) =0$. We will show that $\nabla^{g_M(-\tau)} \bar L(\cdot,\tau) \neq 0$, which will imply the splitting $M= N\times \mathbb R$ and $g_M(-\tau)=\bar h + dr^2$. For this, observe that Proposition \ref{red_dist_est} gives \begin{eqnarray} \bar L(x,\tau)=L(p,\tau) &\leq& 2A-\frac{1}{A} \left(1+\frac{d_{g(-\tau)}(p,q)}{\sqrt{\tau}}\right)^2, \nonumber\\ \bar L(y,\tau)=L(q,\tau) &\geq& \frac{1}{A} \left(1+\frac{d_{g(-\tau)}(p,q)}{\sqrt{\tau}}\right)^2-2A.\nonumber \end{eqnarray} From (\ref{assumption2}) it follows that $\bar L(x, \tau)<0<\bar L(y,\tau)$, hence $\bar L(\cdot,\tau)$ is not constant. By scaling, we may assume that $\tau=1$ and $g_M(-1)=\bar h +dr^2$. Moreover, the restriction $f(\cdot)$ of $l_p(\cdot,1)$ on $N\times \{0\}$ satisfies $$\ric_{\bar h}+\hess_{\bar h} f=\frac{1}{2} \bar h,$$ hence it is a soliton function for $\bar h$. It is easy to see there are $a,b\in \mathbb R$ such that $l_p((z,v),1)=f(z)+\left(\frac{|v|}{2}+a\right)^2+b$, for every $(z,v)\in N\times\mathbb R^{n-k+1}$. Hence $\nabla^{g(-1)}l_p((z,v),1)=\nabla^{\bar h}f(z) +\left(\frac{|v|}{2} + a\right)\frac{\partial}{\partial r}(v)$, where $\frac{\partial}{\partial r}$ denotes the radial vector field in $\mathbb R^{n-k+1}$.
It follows that the diffeomorphisms $\phi_t$ which generate $g(t)$ are of the form $\phi_t(z,v)=(\phi_{1,t}(z),\phi_{2,t}(v))$, where \begin{eqnarray} \frac{d}{dt} \phi_{1,t}(z)&=&-\frac{1}{t}(\nabla^{\bar h}f)(\phi_{1,t}(z)),\qquad \phi_{1,-1}(z)=z,\nonumber\\ \frac{d}{dt} \phi_{2,t}(v)&=&-\frac{1}{t} \left(\frac{|\phi_{2,t}(v)|}{2}+a\right)\frac{\partial}{\partial r}, \qquad \phi_{2,-1}(v)=v.\nonumber \end{eqnarray} Therefore, putting $h(t)=-t(\phi_{1,t})^* \bar h$ for the Ricci flow generated by $(N,\bar h, f)$ we obtain \begin{eqnarray} g(t)&=&-t\phi^*_t g(-1),\nonumber\\ &=&-t\phi^*_{1,t} h(-1) -t\phi^*_{2,t} dr^2,\nonumber\\ &=& h(t)+dr^2.\nonumber \end{eqnarray} Hence, the flow $(M,g(t))_{t\in(-\infty,0)}$ splits for all time. \end{proof} \begin{theorem}\label{splitting} Let $(M^n,g(t))_{t\in (-\infty,0)}\in \rf$ be a non-flat gradient shrinking Ricci soliton. Then, there exists an integer $2\leq k\leq n$, a gradient shrinking Ricci soliton $(N^k, h(t))_{t\in (-\infty,0)}$ and $D=A\sqrt 2-1>0$ ($A$ is as in Proposition \ref{red_dist_est}) such that \begin{enumerate} \item $(M,g(t))$ splits isometrically as $(N^k,h(t))\times (\mathbb R^{n-k}, g_{Eucl})$. \item There is a $V\subseteq N$ such that $S(M,g)= V\times \mathbb R^{n-k}$ and $\diam_{h(-\tau)} (V) \leq D \sqrt \tau$. \end{enumerate} \end{theorem} \begin{proof} Let $0\leq k\leq n$ be the minimal $k$ with the property that $(M^n,g(t))=(N^k,h(t))\times (\mathbb R^{n-k},g_{Eucl})$, for some non-flat gradient shrinking Ricci soliton $(N,h(t))$. Since $(M,g(t))$ is not flat, $k\geq 2$. Moreover, by the translational symmetry there exists a $V\subseteq N$ such that $S(M,g)=V\times \mathbb R^{n-k}$. All we need to show is that $\diam_{h(-\tau)}(V)\leq D\sqrt \tau$. If this is violated for some $\tau>0$, Lemma \ref{cone_splitting} implies that $(N^k,h(t))$ splits a line, thus contradicting the minimality of $k$. \end{proof} \section{The size of the singular strata.}\label{sizeof} \subsection{Density uniqueness of tangent flows.} Let $\mathfrak g=(M,g(t),p)_{t\in [-T,0)}\in \rf$. Given an arbitrary sequence $\tau_i\searrow 0$ consider the blow-up sequence $\mathfrak g_i=(M,\tau_i^{-1}g(\tau_i t),p)$ converging to a tangent flow $\mathfrak h=(N,h(t), q) \in \rf$. As was described in the introduction, the combined work of \cite{Naber}, \cite{EMT} and \cite{Mante_Muller} shows that $\mathfrak h$ is in fact the Ricci flow induced by a gradient shrinking Ricci soliton $(N,h(-1), f)$ and is non-flat, provided that $p$ belongs to the singular set $\Sigma$ of $(M,g(t))_{t\in[-T,0)}$ (see Definition \ref{singular_set}). A tangent flow of $\mathfrak g$ may depend on the choice of the sequence $\tau_i$, and is not unique in general. However, the following theorem asserts that all tangent flows of $\mathfrak g$ should have the same asymptotic reduced volume. \begin{theorem}\label{ARV_uniqueness} Let $\mathfrak g=(M,g(t),p)_{t\in (-T,0)}\in \rf$ and $\mathfrak h=(N,h(t),q)_{t\in (-\infty,0)}$ be a tangent flow of $\mathfrak g$. Then \begin{eqnarray} \Theta_g(p)=\Theta_h(q)=\mathcal{ARV}(N,h). \end{eqnarray} \end{theorem} \begin{proof} The proof is similar to that of Lemma \ref{density_soliton}. Fix some singular reduced distance $l$ for $\mathfrak g$ and consider a sequence $\tau_i \searrow 0$ such that the blow-up sequence $\mathfrak g_i=(M,\tau_i^{-1}g(\tau_i t),p)$ converges to $\mathfrak h=(N,h(t),q)_{t\in (-\infty,0)}$. Set $l_i(\cdot,\tau)=l(\cdot,\tau_i \tau)$ for the corresponding singular reduced distance.
By Proposition \ref{red_dist_est} we obtain that along a subsequence the $l_i$ converge to some singular reduced distance $\bar l$ for $\mathfrak h$. Moreover, \begin{eqnarray} \rv_{\mathfrak h,\bar l}(\bar\tau)=\lim_{\tau \searrow 0} \rv_{\mathfrak g,l}(\tau), \end{eqnarray} for every $\bar\tau>0$. By Part 3 of Lemma \ref{density_soliton} we also obtain that $\Theta_h(q)=\rv_{\mathfrak h,\bar l}(\bar\tau)$. Hence, since $\Theta_g(p)=\liminf_i \Theta_{g_i}(p) \geq \Theta_h(q)$ we obtain \begin{eqnarray} \Theta_g(p)\geq \Theta_h(q)=\rv_{\mathfrak h,\bar l}(\bar\tau)=\lim_{\tau\searrow 0}\rv_{\mathfrak g,l}(\tau)\geq \lim_{\tau \searrow 0 }\rv_{\mathfrak g}(\tau)= \Theta_g(p), \end{eqnarray} for every $\bar\tau>0$, which proves the theorem. \end{proof} \begin{remark} Compare Theorem \ref{ARV_uniqueness} with the entropy uniqueness of tangent flows observed by Mantegazza and M\"{u}ller in \cite{Mante_Muller}. The $\mathcal W$-entropy of a gradient shrinking Ricci soliton $(N,h,f)$ is defined as \begin{eqnarray} \mathcal W=\int_N (R_h-|\nabla f|^2+f-n)\frac{e^{-f}}{(4\pi)^{\frac{n}{2}}} d\mu_{h}, \end{eqnarray} where $R_h$ denotes the scalar curvature of $h$ and $f$ is normalized so that $\int_N \frac{e^{-f}}{(4\pi)^{\frac{n}{2}}}d\mu_{h}=1$. \end{remark} \subsection{Estimating the size of the strata.} Now we are ready to prove Theorem \ref{main_theorem} and Corollary \ref{corol}. Fix a compact singular Type I Ricci flow $(M,g(t))_{t\in [-T,0)}$, and let $C>0$ be such that \begin{eqnarray} \sup_M|\Rm(g(t))|_{g(t)}\leq \frac{C}{|t|}, \end{eqnarray} for $t\in[-T,0)$. Given any $r>0$, set $g_r(t)=r^{-2}g(r^2 t)$. \begin{definition} For every $x\in \Sigma$, $r>0$ and $\delta>0$ define \begin{eqnarray} S^{x,r,\delta}=\{y\in B_{g_r}(x,-1,4D),\;\; \Theta_{g_r}(y)< \Theta_g(x)+\delta \}. \nonumber \end{eqnarray} \end{definition} The sets $S^{x,r,\delta}$ are important because of the following lemma. \begin{lemma}[Line-up lemma]\label{approx} For every $\epsilon,\alpha>0$ and $x\in \Sigma_j$, $j=0,\ldots,n-2$, there is a $\delta=\delta(x,j,\epsilon, \alpha)>0$ such that for every $r\in (0,\delta)$ there exists a non-flat shrinking Ricci soliton $(X, z(t), m)_{t\in (-\infty,0)}\in \rf$ with the following properties. \begin{enumerate} \item $(X,z(t))$ splits isometrically as $(N^{n-k}, h(t))\times (\mathbb R^k, g_{Eucl})$, for some $k\leq j$. \item $m\in S(X,z)=V\times \mathbb R^k$, where $V\subseteq N$ satisfies $\diam_{h(-\tau)}V\leq D \sqrt \tau$, for all $\tau>0$. Here $D=D(n,C)$ is given by Theorem \ref{splitting}. \item There is a diffeomorphism onto its image $F:B_{z}(m,-1, 5D)\rightarrow M$, with $F(m)=x$, such that \begin{eqnarray} F^{-1}(S^{x,r,\delta})\subseteq \mathcal N^{z(t)}_\epsilon(V\times \mathbb R^k),&& \textrm{and} \nonumber\\ (1.001)^{-2}z(t)\leq F^* g_r(t)\leq 1.001^2 z(t),&& \nonumber \end{eqnarray} for every $t\in[-1,-\alpha]$. Here, $\mathcal N^{z(t)}_\epsilon(\:\cdot\:)$ denotes the $\epsilon$-neighbourhood with respect to $z(t)$. \end{enumerate} \end{lemma} \begin{proof} Fix $x\in \Sigma_j$ and $\epsilon,\alpha>0$. Arguing by contradiction and passing to a subsequence if necessary, we obtain sequences $0< r_i<\delta_i$ with $\delta_i,r_i\searrow 0$ such that: \begin{enumerate} \item[(i)] There is a non-flat shrinking Ricci soliton $(X, z(t), m)_{t\in (-\infty,0)}\in \rf$ satisfying (1), (2) and $(M,g_{r_i}(t),x)\rightarrow (X, z(t), m)$.
Moreover, there are diffeomorphisms $F_i:B_{z}(m,-1, 5D)\rightarrow M$, with $F_i(m)=x$ and \begin{eqnarray} (1.001)^{-2}z(t)\leq F_i^* g_{r_i}(t)\leq 1.001^2z(t),\nonumber \end{eqnarray} for every $t\in [-1,-\alpha]$. \item[(ii)] There are sequences $t_i\in [-1,-\alpha]$ and $y_i\in B_{g_{r_i}}(x,-1,4D)$ such that $t_i\rightarrow \bar t$, $F_i^{-1}(y_i)\rightarrow y_\infty$ and $\Theta_{g_{r_i}}(y_i)<\Theta_g(x)+\delta_i$, but $F_i^{-1}(y_i) \notin \mathcal N^{z(t_i)}_\epsilon (V\times \mathbb R^k )$. \end{enumerate} It follows that $y_\infty \notin \mathcal N_\epsilon^{z(\bar t)}(S(X,z))$. However, the lower semicontinuity of the density under Cheeger-Gromov convergence and Theorem \ref{ARV_uniqueness} imply that \begin{eqnarray} \Theta_z(y_\infty)\leq \liminf_i \Theta_{g_{r_i}} (y_i)\leq\Theta_g(x)=\Theta_z(m). \end{eqnarray} Hence $y_\infty \in S(X,z)$, which is a contradiction. \end{proof} \begin{lemma}[Covering lemma]\label{covering_lemma} Let $(X,z(t),m)_{t\in (-\infty,0)}\in \rf$ be a non-flat, shrinking Ricci soliton satisfying properties (1) and (2) of Lemma \ref{approx} and $s=j+\varepsilon$, $j=0,\ldots,n-2$, $\varepsilon>0$. Then, there is $\sigma_s\in (0,\frac{1}{2})$ such that for every $\sigma\in (0,\sigma_s]$, $\rho \in (0,4D)$, $\alpha\in (0,\left(\frac{\sigma\rho}{2D}\right)^2 )$ and $x\in \mathbb R^k$ we obtain the covering $$\mathcal N^{z(-\alpha)}_{\frac{\sigma\rho}{4}}(V\times B_\rho(x))\subseteq\bigcup_{l=1}^{P(\sigma)} B_{z}((q,x_l),-\alpha,\frac{\sigma \rho}{1.001^2}),$$ for some $x_l\in B_\rho(x)\subseteq \mathbb R^k$, $l=1,\ldots, P(\sigma)$, and $q\in V$. Moreover, $P(\sigma)$ satisfies \begin{eqnarray} P(\sigma) \sigma^j &\leq& C(n) ,\label{cov1}\\ P(\sigma)\sigma^s &\leq& \frac{1}{2}. \label{cov2} \end{eqnarray} \end{lemma} \begin{proof} There is a $C(n)>0$ such that, for every $\sigma>0$ and $k\leq j$, we can cover the unit ball $\overline{B_1(0)}\subseteq \mathbb R^k$ with $P(\sigma)$ balls of radius $\frac{\sigma}{4}$ such that \begin{eqnarray} P(\sigma) \sigma^j &\leq& C(n). \end{eqnarray} Moreover, we can choose $\sigma >0$ small enough in order to satisfy \begin{eqnarray} P(\sigma)\sigma^s &\leq& \frac{1}{2}. \end{eqnarray} Hence, we can cover any $B_\rho (x)\subseteq \mathbb R^k$ by $P(\sigma)$ balls of radius $\frac{\sigma\rho}{4}$, \begin{eqnarray} B_\rho(x) \subseteq \bigcup_{l=1}^{P(\sigma)} B_{\frac{\sigma\rho}{4}}(x_l), \label{covering} \end{eqnarray} for some $x_l\in B_\rho(x)$, $l=1,\ldots,P(\sigma)$, so that (\ref{cov1}) and (\ref{cov2}) hold. Now, suppose that $m=(q,0)\in V\times \mathbb R^k$, let $y=(p,\bar x) \in \mathcal N^{z(-\alpha)}_{\frac{\sigma\rho}{4}}(V\times B_\rho (x))$ and choose $y'=(q',x')\in V\times B_\rho (x)$ such that $d_{z(-\alpha)}(y,y')<\frac{\sigma \rho}{4}$. Then, from (\ref{covering}) there is $x_{l_0}$ such that $x'\in B_{\frac{\sigma\rho}{4}}(x_{l_0})$. We then compute, using Theorem \ref{splitting}, \begin{eqnarray} d_{z(t)}(y,(q,x_{l_0}))&\leq& d_{z(t)}(y,y') + \sqrt{ (\diam_{h(t)}V)^2 + \left(\frac{\sigma\rho}{4}\right)^2 },\\ &\leq& d_{z(t)}(y,y') + \sqrt{ -t D^2 + \left(\frac{\sigma\rho}{4}\right)^2 }. \end{eqnarray} Putting $-t=\alpha\leq \left(\frac{\sigma\rho}{2D}\right)^2$ we obtain \begin{eqnarray} d_{z(-\alpha)}(y,(q,x_{l_0}))\leq \frac{\sigma\rho}{4}(1+\sqrt 5)<\frac{\sigma\rho}{1.001^2}, \end{eqnarray} which suffices to prove the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{main_theorem}] Fix $j$ and $s=j+\varepsilon$, as in the statement of the theorem.
Let $\sigma:=\sigma_s\in (0,\frac{1}{2})$ and $D:=D(n,C)>0$ be the constants defined in Lemma \ref{covering_lemma} and Theorem \ref{splitting} respectively, and set $\alpha:=\sigma^2$, $\epsilon_s=\frac{1.001\, D\sigma}{2}$. Define, for each $i,m\geq 1$, \begin{eqnarray} D_{i}=\left\{ x\in \Sigma_j, \delta(x,j,\epsilon_s,\alpha)\geq \frac{1}{i}\right\} \end{eqnarray} and \begin{eqnarray} S^{i,m}=\left\{x \in D_i, \;\; \Theta_g(x)\in \left[\frac{m-1}{i}, \frac{m}{i}\right)\right\}. \end{eqnarray} Since $\Theta_g(\;\cdot\;)\in [0,1]$, by Remark \ref{density_values}, it follows that $D_i = \bigcup_{m=1}^i S^{i,m}$. In the following, we fix $i\geq 1$. We will show that there exists $L_i>0$ such that for every $q\geq 0$ and $\bar\tau \in [ \alpha i^{-2},i^{-2}]$ \begin{eqnarray} D_i \subseteq \bigcup_{l=1}^{Q_q} \overline{B_g (x_{q,l}, -\alpha^q \bar\tau, 2D\sqrt{\alpha^q \bar\tau})}, \label{kalyma} \end{eqnarray} and $Q_q (2D \sqrt{\alpha^q \bar\tau})^s \leq 2^{-q} L_i$. Assuming for now that this is true, for each $\tau \in (0,1]$ define the closed sets \begin{eqnarray} C_i (\tau)&=&\left\{ \begin{array}{ll} \bigcup_{l=1}^{Q_q} \overline{B_g (x_{q,l}, -\alpha^q \bar\tau, 2D\sqrt{\alpha^q \bar\tau})}, & \tau=\alpha^q \bar \tau , \;\bar\tau \in [\alpha i^{-2} , i^{-2}],\; q\geq 0,\\ & \\ C_i(i^{-2}), & \tau \geq i^{-2}, \end{array} \right.\nonumber\\ C_i &=&\bigcap_{\tau>0} C_i(\tau). \nonumber \end{eqnarray} Using the property of the cover (\ref{kalyma}) and the non-inflating property of the Ricci flow we compute \begin{eqnarray} \vol_{g(-\tau)} (C_i) &\leq& \vol_{g(-\tau)} (C_i (\tau)) \\ &=& \vol_{g(-\alpha^q \bar\tau)} (C_i (\alpha^q \bar \tau) )\\ &\leq& Q_q \kappa_0 (2D \sqrt{\alpha^q \bar \tau})^{n-s} (2D \sqrt{\alpha^q \bar \tau})^s \label{vol_cover}\\ &\leq& L_i\kappa_0 2^{-q} (2D\sqrt{\alpha^q\bar\tau})^{n-s}. \end{eqnarray} Since we can uniquely write $\tau=\alpha^q \bar\tau$, for some $\bar\tau\in [\alpha i^{-2},i^{-2}]$, it follows that \begin{eqnarray} \frac{\vol_{g(-\tau)} (C_i) }{\tau^{\frac{n-s}{2}}} \leq L_i \kappa_0 \left( \frac{\sigma}{i}\right)^{2\beta} \tau^{\beta}, \end{eqnarray} where $\beta=-\frac{1}{2\log_2\sigma}$. The theorem is then proven setting $A_i=C_i \cap \bar \Sigma_j$. Observe also that $\Sigma_j\subset \cup_{i=1}^\infty A_i$, since $\Sigma_j = \cup_{i=1}^\infty D_i$. Moreover, since along the Ricci flow the estimate $R \geq -\frac{n}{2(t+T)}$ is valid for all $t\in [-T,0)$, it follows that there is a $C=C(n,T)>0$ such that $\vol_{g(t)}(\bar\Sigma_j \setminus A_i)\leq C \vol_{g(-T)}(\bar\Sigma_j\setminus A_i)$ for every $i$. Choosing $i$ large enough so that $C \vol_{g(-T)}(\bar\Sigma_j\setminus A_i)<\delta$ suffices to prove (\ref{small_vol}). Now we will prove (\ref{kalyma}) by induction on $q$. First, we claim that there exist $L^m_i,Q^m_0>0$ and $p_{m,1},\ldots,p_{m,Q^m_0}\in M$ such that for every $\bar\tau\in [\frac{\alpha}{i^2},\frac{1}{i^2}]$ \begin{eqnarray} S^{i,m}\subseteq\bigcup_{l=1}^{Q^m_0} B_g(p_{m,l},-\bar\tau,2D\sqrt{\bar\tau}), \label{tricky_cover} \end{eqnarray} and $Q^m_0(2D\sqrt{\bar\tau})^s \leq L^m_i$.
To see this, observe that there exists $R_i>0$ (depending also on the Type I curvature bound $C$) such that for every $p\in M$ and $\bar\tau\in [\frac{\alpha}{i^2},\frac{1}{i^2}]$ \begin{eqnarray} B_g(p,-\frac{1}{i^2},R_i)\subseteq B_g(p, -\bar\tau,2 D\sqrt{\bar\tau}).\label{cover} \end{eqnarray} Hence, choosing a cover of $S^{i,m}$ by $Q^m_0$ balls $B_g(x_l,-\frac{1}{i^2},R_i)$, $l=1,\ldots, Q^m_0$, we immediately obtain (\ref{tricky_cover}), for $L^m_i=Q^m_0(\frac{2D}{i})^s $. Now, let $q\geq 0$, $\tau\in [\frac{\alpha^{q+1}}{i^2},\frac{\alpha^q}{i^2}]$, and suppose there exist $p_l\in M$, $l=1,\ldots,Q_q^m$ such that \begin{eqnarray} S^{i,m}\subseteq \bigcup_{l=1}^{Q_q^m} B_g(p_l,-\tau, 2D\sqrt{\tau}), \end{eqnarray} $B_g(p_l,-\tau, 2D\sqrt{\tau})\cap S^{i,m}\neq \emptyset$ for every $l$ and $Q_q^m(2D\sqrt \tau)^s \leq 2^{-q}L^m_i$. Choose any such ball $B_g(p_{l_0}, -\tau, 2D\sqrt{\tau})$ and $x\in S^{i,m}\cap B_g(p_{l_0},-\tau,2D\sqrt{\tau}) $. Then, from the definitions of $S^{i,m}$ and $S^{x,\sqrt \tau, i^{-1}}$ it follows that \begin{eqnarray} S^{i,m}\cap B_{g_{\sqrt{\tau}}}(x,-1,4D)\subseteq S^{x,\sqrt{\tau},i^{-1}}. \end{eqnarray} Hence, there exist $k\leq j$, $X=N^{n-k}\times \mathbb R^k$, $z(t)=h(t)+g_{Eucl}$, $V\subset N$ and $F$ as in Lemma \ref{approx} such that $$F^{-1}(S^{i,m}\cap B_{g_{\sqrt{\tau}}}(p_{l_0},-1,2D))\subseteq F^{-1}(S^{i,m}\cap B_{g_{\sqrt{\tau}}}(x,-1,4D))\subseteq \mathcal N^{z(-\alpha)}_{\epsilon_s}(V\times \mathbb R^k).$$ Moreover, note that there is a ball $B_{ 2D(1.001)}\subseteq \mathbb R^k$ such that \begin{eqnarray} F^{-1}(S^{i,m}\cap B_{g_{\sqrt{\tau}}}(p_{l_0},-1,2D))\subseteq \mathcal N^{z(-\alpha)}_{\epsilon_s}(V\times B_{2D(1.001)}). \end{eqnarray} Hence, putting $\rho=2D(1.001)$ in Lemma \ref{covering_lemma} we obtain \begin{eqnarray} F^{-1}(S^{i,m}\cap B_{g_{\sqrt{\tau}}}(p_{l_0},-1,2D))\subseteq \bigcup_{a=1}^{P(\sigma)} B_z((q,y_a), -\alpha, \frac{2\sigma D}{1.001}), \end{eqnarray} for $y_a\in \mathbb R^k$ and $P(\sigma)\sigma^s\leq \frac{1}{2}$, since $\alpha=\sigma^2 < (\frac{\sigma \rho}{2D})^2=1.001^2\sigma^2 $. Thus, there exist $o_l\in M$ such that \begin{eqnarray} S^{i,m}\cap B_{g_{\sqrt{\tau}}}(p_{l_0},-1,2D)\subseteq \bigcup_{l=1}^{P(\sigma)} B_{g_{\sqrt{\tau}}}(o_l, -\sigma^2, 2\sigma D), \end{eqnarray} which implies that there exist $p'_l \in M$, $l=1,\ldots,Q_{q+1}^m$, $Q_{q+1}^m\leq Q_q^mP(\sigma)$ such that \begin{eqnarray} S^{i,m}\subseteq \bigcup_{l=1}^{Q_{q+1}^m} B_g(p'_l,- \alpha \tau, 2D\sqrt{\alpha\tau}), \end{eqnarray} $B_g(p'_l,-\alpha\tau, 2D\sqrt{\alpha\tau})\cap S^{i,m}\neq \emptyset$ for every $l$ and $Q_{q+1}^m (2D\sqrt{\alpha\tau})^s \leq 2^{-(q+1)}L^m_i$. This proves (\ref{kalyma}) for $L_i=\sum_m L_i^m$ and $Q_q =\sum_m Q_q^m$. In order to prove (\ref{size_est_3}), take any $\epsilon,\alpha>0$, $x \in \Sigma_0$ and set $\bar\delta=\delta(x,0,\epsilon,\alpha)$. From Theorem \ref{splitting} and Lemma \ref{approx}, for every $\tau\in(0,\bar\delta^2]$ there is a non-flat shrinking Ricci soliton $(X,z(t), m)_{t\in (-\infty,0)}$ such that $\diam_{z(t)}S(X,z) \leq D\sqrt{-t}$, and a diffeomorphism $F:B_z(m,-1,5D)\rightarrow M$ with $F(m)=x$ satisfying \begin{eqnarray} F^{-1}(S^{x,\sqrt \tau, \bar\delta}) \subset \mathcal N_{\epsilon}^{z(t)}( S(X,z) ),\label{ena}\\ (1.001)^{-2}z(t) \leq F^* g_{\sqrt \tau}(t) \leq 1.001^2 z(t),\label{dyo} \end{eqnarray} for every $t\in [-1,-\alpha]$.
Now, take any $\tau\in (0,\bar\delta^2]$ and let $y'\in B_g(x,-\tau,4D \sqrt \tau)\cap \{ y\in M, \; \Theta_g(y)\leq \Theta_g(x)\}$. Since $ \{ y\in B_g(x,-\tau,4D \sqrt \tau), \; \Theta_g(y)\leq \Theta_g(x)\} \subset S^{x,\sqrt \tau,\bar\delta}$ it follows from (\ref{ena}) that $F^{-1}(y')\in B_z(m,-\lambda, \sqrt{\lambda}D+\epsilon)$, for every $\lambda\in [\alpha,1]$. Then, by (\ref{dyo}), $y'\in B_g (x, -\lambda \tau, 1.001(\sqrt{\lambda}D+\epsilon) \sqrt{\tau})$. Moreover, there is a $\bar\lambda \in [\alpha , 1)$ (independent of $\tau$) such that for every $\lambda \in[\bar \lambda, 1]$, \begin{eqnarray} B_g (x, - \lambda\tau, 1.001(\sqrt{\lambda}D+\epsilon) \sqrt{\tau})\subset B_g (x, -\lambda \tau, 4D \sqrt{\lambda\tau}). \end{eqnarray} We conclude that for every $\tau\in (0,\bar\delta^2]$ and $\lambda\in [\bar\lambda,1]$ \begin{eqnarray} B_g(x,-\tau,4D \sqrt \tau)\cap \{ y\in M, \; \Theta_g(y)\leq \Theta_g(x)\} \subset B_g (x, - \lambda\tau, 4D \sqrt{\lambda\tau}), \end{eqnarray} which suffices to prove (\ref{size_est_3}) for $\bar\tau=\bar\delta^2$ and $R_0=4D$. \end{proof} Below, we finish by proving Corollary \ref{corol}. \begin{proof}[Proof of Corollary \ref{corol}] First of all, notice that $\Sigma$ is closed, hence $\overline{\Sigma_j}=\Sigma$. Now, estimate (\ref{cor_est}) follows from Theorem \ref{main_theorem} by setting $s=j+2\varepsilon$. In general $\Sigma=\Sigma_{n-2}$, since every shrinking Ricci soliton which splits more than $n-2$ Euclidean factors is flat. Therefore, the volume estimate of Case 1 follows by setting $j=n-2$. In Case 2, every tangent flow should have vanishing Weyl curvature. Thus, it should be isometric either to flat $\mathbb R^n$ (the Gaussian soliton), or to quotients of $S^{n-1}\times \mathbb R$ or $S^n$, by \cite{Weylflat}. Hence $\Sigma=\Sigma_1$ and the volume estimate follows by setting $s=1+2\varepsilon$. \end{proof}
\section{Introduction and Previous Work} \label{intro} Evolutionary game theory (EGT) is an attempt to study the conflicting objectives among agents playing non-cooperative games by using Darwinian concepts related to frequency-dependent selection of strategies in a population~\cite{maynard82,weibull95,hofb-sigm-book-98}, instead of positing mathematically convenient but practically unrealistic conditions of agent rationality and common knowledge as is customary in classical game theory~\cite{Myerson}. Two concepts play a prominent role in EGT: the first is the idea of an \textit{evolutionarily stable strategy} (ESS) and the second is the set of equations representing the dynamical system called \textit{replicator dynamics} (RD)~\cite{taylor-jonker}. Both concepts are related to an ideal situation in which there are random independent encounters between pairs of anonymous memoryless players using a given strategy in an infinite population. In such a situation, a strategy is said to be an ESS if a population using that strategy cannot be invaded by a small number of mutant players using another strategy (this idea can be expressed in rigorous mathematical terms, see~\cite{weibull95}). However, the ESS concept has a static character, i.e.~it can be applied only once the population has reached a robust rest point following certain dynamics. In other words, an ESS is restricted to the analysis of a population in which all the members play the same strategy and the stability of the strategy is gauged against the invasion of a small number of individuals playing another strategy. The replicator dynamics, on the other hand, given an initial population in which each strategy is present with some frequency, will end up in attractor states, as a result of the preferential selection and replication of certain strategies with respect to others. Simply stated, strategies that do better than the average will increase their share in the population, while those that do worse than the average will decline. The link with standard game theory is the following: the ESSs of a game, if any exist, form a subset of the game-theoretic equilibria called Nash equilibria (NE). The attractor states of the dynamics may be fixed points, cyclical attractors, or even chaotic attractors in some situations. However, a result of replicator dynamics guarantees that, among the rest points of the RD, one will find the NE and thus, a fortiori, the game's ESSs~\cite{weibull95}. These results pertain to infinite populations under standard replicator dynamics; they are not necessarily true when the assumptions are not the same, e.g.~finite populations with local interactions and discrete time evolution, which is the case considered here. Several problems arise in EGT when going from very large to finite, or even small populations which are, after all, the normal state of affairs in real situations. For example, in small populations a theoretical ESS might not be reached, as first observed by Fogel et al.~\cite{fogeletal97,fogeletal98} and Ficici et al.~\cite{ficici-pollack-07}; see also~\cite{nowak-et-al-finite-04}. The method affecting the selection step can also be a source of difference with respect to standard EGT, even for infinite mixing populations.
Recently, Ficici et al.~\cite{ficici-pollack-05} have shown that using selection methods different from payoff proportionate selection, such as truncation, tournament, or ranking, leads to results that do not converge to the game theory equilibria postulated in standard replicator dynamics. Instead, they find different non-Nash attractors, and even cyclic and chaotic attractors. While the population structure assumed in EGT is panmictic, i.e.~any player can be chosen to interact with any other player, it is clear that ``natural'' populations in the biological, ecological, and socio-economical realms often do have a structure. This can be the case, for instance, for territorial animals, and it is even more common in human interactions, where a given person is more likely to interact with a ``neighbor'', in the physical or relational sense, rather than with somebody else that is more distant, physically or relationally. Accordingly, EGT concepts have been extended to such structured populations, starting with the pioneering works of Axelrod~\cite{axe84} and Nowak and May~\cite{nowakmay92}, who used two-dimensional grids, which are regular lattices. However, today it is becoming clear that regular lattices are only approximations of the actual networks of interactions one finds in biology and society. Indeed, it has become apparent that many real networks are neither regular nor random graphs; instead, they have short diameters, like random graphs, but much higher clustering coefficients than the latter, i.e.~agents are locally more densely connected. These networks are collectively called \textit{small-world} networks (see~\cite{watts99,newman-03}). Many technological, social, and biological networks are now known to be of this kind. Thus, research attention in EGT has recently shifted from mixing populations, random graphs, and regular lattices towards better models of social interaction structures~\cite{social-pd-kup-01,santos-pach-05,tom-luth-giac-06,luthi-pest-tom-physa08}. Fogel et al.~\cite{fogeletal97,fogeletal98} and Ficici et al.~\cite{ficici-pollack-05,ficici-pollack-07} studied the deviations that occur in EGT when some of the standard RD assumptions are not fully met. In this paper we would like to address another problem which arises when using RD in network-structured populations. In the standard setting, populations are panmictic, i.e.~any agent may interact with any other agent in the population. However, in complex networks, players may have widely different numbers of neighbors, depending on the graph structure of the network interactions. On the other hand, panmictic populations may be modeled as complete graphs, where each vertex (agent) has the same number of neighbors (degree). The same is true for any regular graph, and thus for lattices, and also, at least in a statistical sense, for Erd\"os--R\'enyi random graphs~\cite{bollobas-random}, which have a Poissonian degree distribution. In the cases where the number of neighbors is the same for all players, after each agent has played the game with all of its neighbors, one can either accumulate or average the payoff earned by a player in order to apply the replicator dynamics. Either way, the result is the same except for a constant multiplicative factor. However, when the degrees of agents differ widely, these two ways of calculating an agent's payoff give very different results, as we show in this paper and as the toy computation below illustrates.
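To make the difference concrete, here is a toy computation (the payoff notation and the degree values are introduced only for this illustration). Write $V_i$ for the set of neighbors of player $i$ and $k_i$ for its degree, as made precise in Section~\ref{nets}, and suppose every player uses strategy $C$ in a game with reward $R$. Then the accumulated and the average payoff of player $i$ are, respectively, \[ \pi_i^{acc}=\sum_{j\in V_i}\pi(C,C)=k_i R, \qquad \pi_i^{av}=\frac{1}{k_i}\sum_{j\in V_i}\pi(C,C)=R, \] where $\pi(s_i,s_j)$ denotes the payoff earned by strategy $s_i$ against $s_j$. On a degree-homogeneous graph the two differ only by the common degree, a constant factor, but on a degree-heterogeneous graph a hub with, say, $k_i=100$ accumulates $100R$ while a leaf with $k_i=2$ accumulates only $2R$, although both earn $R$ per interaction.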
Furthermore, we show that when using accumulated payoff, the RD is not invariant with respect to a positive affine transformation of the payoff matrix, as is prescribed by standard RD theory~\cite{weibull95}. In other words, the game depends on the particular payoff values and is non-generic~\cite{samuel97}. Finally, we propose another way of calculating an agent's payoff that both takes into account the degree inhomogeneity of the network and leaves the RD invariant with respect to affine transformations of the payoff matrix. We illustrate the mathematical ideas with numerical simulations of three well-known games: the \textit{Prisoner's Dilemma}, the \textit{Hawk-Dove}, and the \textit{Stag-Hunt}, which are universal metaphors for conflicting social interactions. In the following, we first briefly present the games used for the simulations. Next, we give a short account of the main population graph types used in this work, mainly for the sake of making the paper self-contained. Then we describe the particular replicator dynamics that is used on networks, followed by an analysis of the influence of the network degree inhomogeneity on an individual's payoff calculation. The ensuing discussion of the results of many numerical experiments should help illuminate the theoretical points and the proposed solutions. Finally, we give our conclusions. \section{Three Symmetric Games} \label{games} The three representative games studied here are the Prisoner's Dilemma (PD), the Hawk-Dove (HD), which is also called the Snowdrift or Chicken game, and the Stag-Hunt (SH). For the sake of completeness, we briefly summarize the main features of these games here; more detailed accounts can be found in many places, for instance~\cite{axe84,poundstone92,skyrms04}. These games are all two-person, two-strategy, symmetric games with the payoff bi-matrix of Table~\ref{pbm}. \begin{table}[hbt] \begin{center} {\normalsize $ \begin{array}{c|cc} & C & D\\ \hline C & (R,R) & (S,T)\\ D & (T,S) & (P,P) \end{array} $} \end{center} \caption{Generic payoff bi-matrix for the two-person, symmetric games discussed in the text.\label{pbm}} \end{table} \noindent In this matrix, $R$ stands for the \textit{reward} the two players receive if they both cooperate ($C$), $P$ is the \textit{punishment} for bilateral defection ($D$), and $T$ is the \textit{temptation}, i.e.~the payoff that a player receives if it defects while the other cooperates. In this case, the cooperator gets the \textit{sucker's payoff} $S$. In the three games, the condition $2R > T + S$ is imposed so that mutual cooperation is preferred over an equal probability of unilateral cooperation and defection. For the PD, the payoff values are ordered numerically in the following way: $T > R > P > S$. Defection is always the best rational individual choice; $(D,D)$ is the unique NE and also an ESS~\cite{weibull95}. Mutual cooperation would be preferable but it is a strongly dominated strategy. In the Hawk-Dove game, the order of $P$ and $S$ is reversed, yielding $T > R > S > P$. Thus, in the HD, when both players defect they each get the lowest payoff. $(C,D)$ and $(D,C)$ are NE of the game in pure strategies, and there is a third equilibrium in mixed strategies where strategy $D$ is played with probability $p$, and strategy $C$ with probability $1-p$, where $p$ depends on the actual payoff values (see below). The only ESS of the game is the mixed strategy, while the two pure NE are not ESSs~\cite{weibull95}.
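For completeness, the equilibrium probability $p$ can be made explicit by the usual indifference argument (a standard calculation, included here for the reader's convenience). If the opponent plays $D$ with probability $p$, a player is indifferent between $C$ and $D$ exactly when \[ (1-p)R + pS = (1-p)T + pP \quad\Longrightarrow\quad p=\frac{T-R}{(T-R)+(S-P)}, \] which lies in $(0,1)$ precisely because $T>R$ and $S>P$ in the HD ordering.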
The dilemma in this game is caused by ``greed'', i.e.~players have a strong incentive to ``bully'' their opponent by playing $D$, which is harmful for both parties if the outcome produced happens to be $(D,D)$. In the Stag-Hunt, the ordering is $R > T > P > S$, which means that mutual cooperation $(C,C)$ is the best outcome, Pareto-superior, and a NE. However, there is a second NE where both players defect, $(D,D)$, which is inferior from the Pareto domination point of view but less risky, since it is safer to play $D$ when there is doubt about which equilibrium should be selected. From a NE standpoint, however, the two equilibria are equivalent. Here the dilemma is represented by the fact that the socially preferable coordinated equilibrium $(C,C)$ might be missed for ``fear'' that the other player will play $D$ instead. There is a third mixed-strategy NE in the game, but it is commonly dismissed because of its inefficiency and also because it is not an ESS~\cite{weibull95}.
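As a concrete summary of these conventions, the following minimal Python sketch encodes the three payoff structures; the numerical values are illustrative choices of ours (the simulations below explore the parameter ranges given in Sect.~\ref{ns}), and only the orderings matter here.
\begin{verbatim}
import numpy as np

# Row-player payoff matrices M (strategy order: C, D) as in the payoff
# bi-matrix above. Orderings: PD: T>R>P>S; HD: T>R>S>P; SH: R>T>P>S;
# all three choices also satisfy 2R > T+S.
GAMES = {
    "PD": dict(T=1.5, R=1.0, P=0.5,  S=0.0),
    "HD": dict(T=1.4, R=1.0, S=0.5,  P=0.0),
    "SH": dict(R=1.0, T=0.5, P=0.25, S=0.0),
}

def payoff_matrix(g):
    """Row player's matrix; the column player's matrix is its transpose."""
    return np.array([[g["R"], g["S"]],
                     [g["T"], g["P"]]])

for name, g in GAMES.items():
    assert 2 * g["R"] > g["T"] + g["S"], name  # mutual cooperation preferred
    M = payoff_matrix(g)                       # column player uses M.T
\end{verbatim}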
\section{Network Types} \label{nets} For our purposes here, a network will be represented as an undirected graph $G(V,E)$, where the set of vertices $V$ represents the agents, while the set of edges $E$ represents their symmetric interactions. The population size $N$ is the cardinality of $V$. A neighbor of an agent $i$ is any other agent $j$ such that there is an edge $\{i,j\} \in E$. The cardinality of the set of neighbors $V_i$ of player $i$ is the degree $k_i$ of vertex $i \in V$. The average degree of the network will be called $\bar k$. An important quantity that will be used in the following is the \textit{degree distribution function} (DDF) of a graph, $P(k)$, which gives the probability that a given node has exactly $k$ neighbors. To expose the technical problems and their solution, we shall investigate three main graph population structures: regular lattices, random graphs, and scale-free graphs. These graph types represent the typical extreme situations studied in the literature. Regular lattices are examples of degree-homogeneous networks, i.e.~all the nodes have the same number of neighbors; they have been studied from the EGT point of view in~\cite{nowakmay92,nowaketal94,nowak-sig-00,hauer-doeb-2004}, among others. In random graphs the degree fluctuates around the mean $\bar k$ but the fluctuations are small, of the order of the standard deviation of the associated Poisson distribution. The situation can thus be described in mean-field terms and is similar to the standard setting of EGT, where the large mixing population can be seen as a completely connected graph. On the other hand, scale-free graphs are typical examples of degree-heterogeneous graphs, as the degree distribution is broad (see below). For the sake of illustration, examples of these three population network types are shown in Fig.~\ref{net_types}. For random and scale-free graphs only one among the many possible realizations is shown, of course. \begin{figure} [!ht] \begin{center} \begin{tabular}{cc} \mbox{\includegraphics[width=4cm]{lattice3}} \protect & \mbox{\includegraphics[width=6cm]{random-graph}}\protect\\ \vspace*{0.5cm}(a) & (b) \\ \multicolumn{2}{c}{\mbox{\includegraphics[width=6cm]{sf2-graph}} \protect}\\ \multicolumn{2}{c}{(c)} \\ \end{tabular} \caption{A regular lattice (a), a random graph (b), and a scale-free graph (c). In (c) the nodes are shown with a size proportional to their number of neighbors. \label{net_types}} \end{center} \end{figure} Recent work~\cite{newman-03} has shown that scale-free and other small-world graphs are structurally and statistically much closer to actual social and biological networks and are thus an interesting case to study. Evolutionary games on scale-free and other small-world networks have been investigated, among others, in~\cite{social-pd-kup-01,santos-pach-05,tom-luth-giac-06,santos-pach-06}. Another interesting result for evolutionary games on networks has been recently obtained by Ohtsuki et al.~\cite{ohtsuki-et-al}. In this study the authors present a simple rule for the evolution of cooperation on graphs based on cost/benefit ratios and the number of neighbors of a given individual. This result is closely related to the subject matter of the present work, but its application in the present context will be the subject of further study. Our main goal is to consider the global influence of network structure on the dynamics using a particular strategy update rule. A further step toward real social structures has been taken in~\cite{luthi-pest-tom-physa08}, where some evolutionary games are studied using model social networks and an actual coauthorship network. The DDF of a regular graph is a normalized delta function centered at the constant degree $k$ of the graph. Random graphs, which behave similarly to panmictic populations, are constructed according to the standard Erd\"os--R\'enyi~\cite{bollobas-random} model: every possible edge among the $N$ vertices is present with probability $p$ or is absent with probability $1-p$. The DDF of such a random graph is Poissonian for $N \rightarrow \infty$. Thus most vertices have degrees close to the mean value $\bar k$. In contrast, DDFs for complex networks in general have a longer tail to the right, which means that nodes with many neighbors may appear with non-negligible probability. An extreme example is the class of scale-free networks, in which the DDF is a power law $P(k) \propto k^{-\gamma}$. Scale-free networks have been empirically found in many fields of technology, society, and science~\cite{newman-03}. To build scale-free networks, we use the model proposed by Barab\'asi and Albert~\cite{alb-baraba-02}. In this model, networks are grown incrementally starting with a small clique of $m_0$ nodes. At each successive time step a new node is added such that its $m \le m_0$ edges link it to $m$ nodes already present in the graph. It is assumed that the probability $p$ that a new node will be connected to node $i$ depends on the current degree $k_i$ of the latter. This is called the \textit{preferential attachment} rule. The probability $p(k_i)$ of node $i$ being chosen is given by $p(k_i) = {k_i}/ \sum_{j} k_j,$ where the sum is over all nodes already in the graph. The model evolves into a stationary network with a power-law probability distribution for the vertex degree, $P(k) \sim k^{-\gamma}$, with $\gamma\sim 3$.
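A minimal Python sketch of this growth process follows (ours, not code from the cited works); maintaining a list in which each node appears once per incident edge makes uniform sampling equivalent to degree-proportional attachment.
\begin{verbatim}
import random

def barabasi_albert(N, m=2, m0=2):
    """Grow a graph by preferential attachment: start from a clique of
    m0 nodes, then link each new node to m distinct existing nodes chosen
    with probability proportional to their current degree."""
    edges = {(i, j) for i in range(m0) for j in range(i + 1, m0)}
    # Each node appears in `targets` once per incident edge, so a uniform
    # draw realizes p(k_i) = k_i / sum_j k_j.
    targets = [v for e in edges for v in e]
    for new in range(m0, N):
        chosen = set()
        while len(chosen) < m:          # m distinct attachment targets
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.add((t, new))
            targets += [t, new]
    return edges
\end{verbatim}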
\section{Replicator Dynamics in Networks} \label{rd} The local dynamics of a player $i$ depends only on its own strategy and on the strategies of the $k_i$ players in its neighborhood $V_i$. Let us call $\pi_{ij}$ the payoff player $i$ receives when interacting with neighbor $j$. Let $M$ be the payoff matrix corresponding to the row player. Since the games used here are symmetric, the corresponding payoff matrix of the column player is simply $M^T$, the transpose of $M$. For example, from Table~\ref{pbm} of Section~\ref{games} one has: \begin{center} \begin{tabular}{cc} \mbox{$ M = \begin{pmatrix} R & S \\ T & P \end{pmatrix}$}, &~ \mbox{$ M^T = \begin{pmatrix} R & T \\ S & P \end{pmatrix}$}, \end{tabular} \end{center} where suitable numerical values must be substituted for $R,S,T,P$. The payoff $\pi_{ij}$ of the row player is then defined as $$ \pi_{ij}(t) = s_i(t)\; M\; s_{j}^T(t), $$ \noindent where $s_i(t)$ and $s_j^T(t)$ are, respectively, row and column vectors representing the players' mixed strategies, i.e.~the probability distributions over the rows or columns played by $i$ and $j$ at time $t$. A pure strategy is the particular case in which only one row or column is chosen. The quantity $$ \widehat{\Pi}_i(t) = \sum _{j \in V_i}\pi_{ij}(t) $$ \noindent is the \textit{accumulated payoff} collected by player $i$ at time step $t$, whereas the quantity $\overline{\Pi}_i(t) = \frac{1}{k_i} \widehat{\Pi}_i(t)$ is its \textit{average payoff}. Accumulated payoff seems more logical in degree-heterogeneous networks such as scale-free graphs, since it reflects the very fact that players may have different numbers of neighbors in the network. Average payoff, on the other hand, smooths out these differences, although it might be justified in terms of the number of interactions that a player may sustain in a given time. For instance, an individual with many connections is likely to interact less often with each of its neighbors than another that has a lower number of connections. Also, if there is a cost to maintaining a relationship, average payoff will roughly capture this fact, while it will be hidden if one uses accumulated payoff. On the other hand, if in a network some individuals happen to have many more connections than the majority, this also means that they have somehow been able to establish and maintain them; whether this is a result of better social skills, more opportunities, or other reasons, it is something that is commonly observed in actual social networks. Because of this, most recent papers dealing with evolutionary games on networks have used accumulated payoff~\cite{social-pd-kup-01,santos-pach-05,santos-pach-06,santos-biol-06,luthi-pest-tom-physa08}, and this is the main reason why we have focused on the technical problems that this may cause in degree-heterogeneous networks. The rule according to which agents update their strategies is the conventional RD. The RD rule in networks aims at maximal consistency with the original evolutionary game theory equations and is the same as that proposed in~\cite{hauer-doeb-2004}. It is assumed that the probability of switching strategy is a monotonically increasing function $\phi$ of the payoff difference~\cite{weibull95,hofb-sigm-book-98}. To update the strategy of player $i$, another player $j$ is first drawn uniformly at random from $i$'s neighborhood $V_i$. Then, strategy $s_i$ is replaced by $s_j$ with probability \begin{equation} p_i = \phi(\Pi_j - \Pi_i), \label{repl_dyn_eq0} \end{equation} where $\Pi$ may stand either for the above-defined accumulated $\widehat{\Pi}$ or average $\overline{\Pi}$ payoffs, or for the modified accumulated payoff $\widetilde{\Pi}$ to be defined below.
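In code, the two payoff schemes and the imitation step introduced so far read as follows (a schematic sketch of ours: strategies are NumPy probability vectors over $\{C,D\}$, \texttt{nbrs} maps each node to its set of neighbors, and \texttt{phi} is a switching function such as the one of Eq.~\ref{repl_dyn_eq1} below).
\begin{verbatim}
import random
import numpy as np

def pi(M, s_i, s_j):
    """One-shot payoff of the row player: s_i M s_j^T."""
    return s_i @ M @ s_j

def accumulated(M, s, nbrs, i):                  # hat{Pi}_i
    return sum(pi(M, s[i], s[j]) for j in nbrs[i])

def average(M, s, nbrs, i):                      # bar{Pi}_i
    return accumulated(M, s, nbrs, i) / len(nbrs[i])

def update(M, s, nbrs, i, Pi, phi):
    """One RD step: draw a random neighbor j of i and adopt its strategy
    with probability phi(Pi_j - Pi_i, i, j)."""
    j = random.choice(sorted(nbrs[i]))
    if random.random() < phi(Pi(M, s, nbrs, j) - Pi(M, s, nbrs, i), i, j):
        s[i] = s[j].copy()
\end{verbatim}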
The major difference with standard replicator dynamics is that two-person encounters between players are only possible among neighbors, instead of being drawn from the whole population. Other commonly used strategy update rules include imitating the best in the neighborhood, or replicating in proportion to the payoff, meaning that each individual $i$ reproduces with probability $p_i = \pi_i / \sum_j \pi_j$, where $\pi_i$ is $i$'s payoff and the sum is over all of $i$'s neighbors~\cite{hauer-doeb-2004}. However, in the present work we do not examine these alternative rules. Finally, contrary to~\cite{santos-pach-05}, we use asynchronous dynamics in the simulations presented here. More precisely, we use the discrete update dynamics that makes the fewest assumptions about the update sequence: the next node to be updated is chosen at random with uniform probability and with replacement. This asynchronous update is analogous to the one used by Hauert et al.~\cite{hauer-doeb-2004}. It corresponds to a binomial distribution of the updating probability and is a good approximation of a continuous-time Poisson process. We believe that asynchronous update dynamics are more likely in a system of independently interacting agents that may act at different and possibly uncorrelated times. Furthermore, it has been shown that asynchronous updating may give rise to steadier quasi-equilibrium states by eliminating artificial effects caused by perfect synchronicity~\cite{tom-luth-pest-07}. Nevertheless, in this work, we have checked that synchronous update of the agents' strategies does not qualitatively change the conclusions. \subsection{Payoff Invariance} \label{payoff-inv} In standard evolutionary game theory one finds that replicator dynamics is invariant under positive affine transformations of payoffs, with merely a possible change of time scale~\cite{weibull95}. Unfortunately, on degree-heterogeneous networks, this property is lost when combining replicator dynamics with accumulated payoff. This can be seen as follows. Let $p_i$ in Eq.~\ref{repl_dyn_eq0} be given by the following expression, as defined by Santos and Pacheco~\cite{santos-pach-05}, \begin{eqnarray} p_i = \phi(\Pi_j -\Pi_i) = \begin{cases} \dfrac{\Pi_j - \Pi_i}{d_{M}k_>} & \textrm{if $\Pi_j - \Pi_i > 0$}\\\\ 0 & \textrm{otherwise,} \end{cases} \label{repl_dyn_eq1} \end{eqnarray} with $d_{M} = \max\{T, R, P, S\} - \min\{T, R, P, S\}$, $k_> = \max\{k_i, k_j\}$, and $\Pi_i$ (respectively $\Pi_j$) the aggregated payoff of player $i$ (respectively $j$). If we set $\Pi_x = \widehat{\Pi}_x$ for all $x \in V$ and now apply a positive affine transformation of the payoff matrix, this leads to the new aggregated payoff $$ \widehat{\Pi}^{'}_i = \sum_{j \in V_i}{\pi_{ij}^{'}} = \sum_{j \in V_i}{(\alpha\pi_{ij} + \beta)} = \alpha\sum_{j \in V_i}{\pi_{ij}} + \sum_{j \in V_i}{\beta} = \alpha \widehat{\Pi}_i + \beta k_i $$ with $\alpha > 0, \beta \in \mathbb{R}$, and hence \begin{eqnarray*} \phi(\widehat{\Pi}_{j}' - \widehat{\Pi}_{i}') & = & (\alpha\widehat{\Pi}_j + \beta k_j - \alpha\widehat{\Pi}_i - \beta k_i)/(\alpha d_{M}k_>)\\ & = & \phi(\widehat{\Pi}_{j} - \widehat{\Pi}_{i}) + \beta (k_j - k_i)/(\alpha d_{M}k_>). \end{eqnarray*} One can clearly see that using accumulated payoff does not lead to invariance of the replicator dynamics under shifts of the payoff matrix.
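A toy computation (with payoff values of our own choosing) makes the degree dependence just derived explicit: with accumulated payoff, shifting every payoff by $\beta$ changes the switching probability whenever $k_j \neq k_i$.
\begin{verbatim}
# A degree-2 player i and a degree-6 neighbor j, each earning the same
# per-game payoff R = 1 from every interaction; d_M = 1.5 (illustrative).
k_i, k_j, R, d_M = 2, 6, 1.0, 1.5

def phi(diff, k_gt, dM):                        # Eq. (1)
    return max(diff, 0.0) / (dM * k_gt)

before = phi(k_j * R - k_i * R, max(k_i, k_j), d_M)
alpha, beta = 1.0, 0.5                          # pi -> alpha*pi + beta
after = phi((alpha * k_j * R + beta * k_j)
            - (alpha * k_i * R + beta * k_i), max(k_i, k_j), alpha * d_M)
print(before, after)    # 0.444... vs 0.666...: the dynamics has changed
\end{verbatim}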
\\ As for the average payoff, although it respects the invariance of the replicator dynamics under positive affine transformations, it prevents nodes with many edges from potentially earning a higher payoff than those with only a few links. Furthermore, with average payoff, nodes are extremely vulnerable to defecting neighbors that have just one link.\\ Thus, we propose here a third definition of a player's payoff that retains the advantages of the accumulated and average payoff definitions without their drawbacks. Let $\pi_\gamma$ denote the guaranteed minimum payoff a player can obtain in a one-shot two-person game. This is what a player would at least receive were it to attempt to maximize its minimum payoff. For example, in the PD a player could choose to play $C$, with the risk of obtaining the lowest payoff $S$ were its opponent to play $D$. However, by opting for strategy $D$ a player would maximize its minimum payoff, thus guaranteeing itself at least $\pi_\gamma = P > S$ no matter what its opponent's strategy might be. In the HD game we have $\pi_\gamma = S$, for this time the payoff ordering is $T > R > S > P$ and a player need only play $C$ to receive at least payoff $S$. Finally, in the SH game, $\pi_\gamma = P$. We can now define a player $i$'s aggregated payoff as $\widetilde{\Pi}_i = \sum_{j \in V_i}{(\pi_{ij} - \pi_\gamma)}.$ Intuitively, it can be viewed as the difference between the payoff an individual collects and the minimum payoff it would get by ``playing it safe''. Our modified payoff $\widetilde{\Pi}$ has the advantage of leaving the RD invariant with respect to a positive affine transformation of the payoff matrix, both on degree-homogeneous and on heterogeneous graphs, while still allowing the degree distribution of the network to have a strong impact on the dynamics of the game. Indeed, a player placed on a highly connected node of a graph can benefit from its numerous interactions, which enable it to potentially collect a high payoff. However, these same players run the risk of totaling a much lower score than a player with only a few links. One can notice that on degree-homogeneous graphs such as lattices or complete graphs, using the accumulated, average, or new aggregated payoff definition yields the same results. The proof of the RD invariance under positive affine transformations of the payoff matrix when using this new payoff definition is straightforward: \begin{eqnarray*} \phi(\widetilde{\Pi}_{j}' - \widetilde{\Pi}_{i}') & = & \frac{1}{\alpha d_{M}k_>}\left(\sum_{k \in V_j}{\bigl((\alpha\pi_{jk} + \beta) - (\alpha\pi_\gamma + \beta)\bigr)}\right.\\ & & \left.- \sum_{k \in V_i}{\bigl((\alpha\pi_{ik} + \beta) - (\alpha\pi_\gamma + \beta)\bigr)}\right)\\ & = & \frac{1}{\alpha d_{M}k_>}\left(\alpha\sum_{k \in V_j}{(\pi_{jk} - \pi_\gamma)}\right.\\ & & \left.- \alpha\sum_{k \in V_i}{(\pi_{ik} - \pi_\gamma)}\right)\\ & = & (\widetilde{\Pi}_j - \widetilde{\Pi}_i)/(d_{M}k_>)\\ & = & \phi(\widetilde{\Pi}_{j} - \widetilde{\Pi}_{i}). \end{eqnarray*}
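The modified payoff is straightforward to compute generically: $\pi_\gamma$ is the maximin value of the payoff matrix, which indeed gives $P$ for the PD and SH and $S$ for the HD. A brief sketch (ours), with a numerical check of the affine-transformation property just proved:
\begin{verbatim}
import numpy as np

def pi_gamma(M):
    """Guaranteed minimum payoff: maximize over one's own pure strategies
    the worst case over the opponent's (P for PD and SH, S for HD)."""
    return M.min(axis=1).max()

def modified(M, s, nbrs, i):                    # tilde{Pi}_i
    g = pi_gamma(M)
    return sum(s[i] @ M @ s[j] - g for j in nbrs[i])

# pi_gamma transforms affinely with the payoffs, so each term
# pi_ij - pi_gamma (and hence tilde{Pi}) merely scales by alpha; with
# d'_M = alpha*d_M the switching probability is unchanged.
M = np.array([[1.0, 0.0], [1.5, 0.5]])          # an illustrative PD matrix
alpha, beta = 2.0, 3.0
assert np.isclose(pi_gamma(alpha * M + beta), alpha * pi_gamma(M) + beta)
\end{verbatim}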
\subsection{Modified Replicator Dynamics} \label{mrd} Let us turn our attention once again to the replicator dynamics rule (Eq.~\ref{repl_dyn_eq1}). Dividing the payoff difference between players $j$ and $i$ by $d_{M}k_>$ might seem reasonable at first, since it does ensure that $\phi$ is a probability, i.e.\ has a value between 0 and 1. Nevertheless, we do not find this normalization adequate, for subtle reasons. To illustrate our point, let us focus on the following particular case and use the accumulated payoff to simplify the explanation. \begin{figure*} [!ht] \begin{center} \begin{tabular}{ccc} \vspace*{0.2cm}\mbox{\includegraphics[width=3.5cm, height=3.5cm]{particular_case1}} \protect & \hspace*{1cm} & \mbox{\includegraphics[width=3.5cm, height=3.5cm]{particular_case2}} \protect\\ (a) & \hspace*{1cm} & (b) \end{tabular} \caption{Two example configurations: (a) a cooperator $C1$ with three defecting neighbors, each defector having three cooperating neighbors; (b) a cooperator $C2$ with six defecting neighbors of the same kind.\label{example}} \end{center} \end{figure*} On the one hand, Fig.~\ref{example}~(a) shows a cooperator $C1$ surrounded by three defectors, each having three cooperating neighbors. Using the replicator dynamics as defined in Eq.~\ref{repl_dyn_eq1}, the probability that cooperator $C1$ turns into a defector, given that it is selected to be updated, is equal to \begin{eqnarray*} \phi(\widehat{\Pi}_j - \widehat{\Pi}_{C1}) & = & (\widehat{\Pi}_j - \widehat{\Pi}_{C1})/(d_{M}k_>)\\ & = & (3T - 3S)/(3d_{M})\\ & = & (T - S)/d_{M}, \end{eqnarray*} and this no matter which defecting neighbor $j$ is chosen, since they all have the same payoff. On the other hand, the central cooperator $C2$ in Fig.~\ref{example}~(b) would adopt strategy $D$ with probability \begin{eqnarray*} \phi(\widehat{\Pi}_j - \widehat{\Pi}_{C2}) & = & (\widehat{\Pi}_j - \widehat{\Pi}_{C2})/(d_{M}k_>)\\ & = & (3T - 6S)/(6d_{M})\\ & = & (T - 2S)/(2d_{M}), \end{eqnarray*} a value that is once again independent of the selected neighbor $j$. Now, if $T > 0$ and $\phi(\widehat{\Pi}_j - \widehat{\Pi}_{C1}),\phi(\widehat{\Pi}_j - \widehat{\Pi}_{C2}) > 0$, then $C2$ has a bigger chance of keeping its strategy unaltered than $C1$ does. This seems awkward since, in our opinion, being surrounded by twice as many defectors as $C1$ (with all the $D$-neighbors being equally strong) should have a negative impact on cooperator $C2$, making it more difficult for it to maintain its strategy. To make the situation even more evident, let us also suppose $S = 0$. In this case, a cooperator surrounded by an infinite number of $D$-neighbors, which in turn all have a finite number of neighbors, would have a zero probability of changing strategy, which is counter-intuitive. Therefore, with all the previous arguments in mind, we adjust Eq.~\ref{repl_dyn_eq1} to define another replicator dynamics function, namely \begin{eqnarray} \phi(\Pi_j - \Pi_i) = \begin{cases} \dfrac{\Pi_j - \Pi_i}{\Pi_{j,\textrm{max}} - \Pi_{i,\textrm{min}}} & \textrm{if $\Pi_j - \Pi_i > 0$}\\\\ 0 & \textrm{otherwise,} \end{cases} \label{repl_dyn_eq2} \end{eqnarray} where $\Pi_{x,\textrm{max}}$ (resp.\ $\Pi_{x,\textrm{min}}$) is the maximum (resp.\ minimum) payoff a player $x$ can get. If $\pi_{x,\textrm{max}}$ and $\pi_{x,\textrm{min}}$ denote player $x$'s maximum and minimum payoffs in a two-player one-shot game ($\pi_{x,\textrm{max}} = \max\{T,R,P,S\}$ and $\pi_{x,\textrm{min}} = \min\{T,R,P,S\}$ for the dilemmas studied here), we have: \begin{itemize} \item $\Pi_{x, \textrm{max}} = \pi_{x,\textrm{max}}$ and $\Pi_{x, \textrm{min}} = \pi_{x,{\textrm{min}}}$ for average payoff; \item $\Pi_{x, \textrm{max}} = k_x\pi_{x,\textrm{max}}$ and $\Pi_{x, \textrm{min}} = k_x\pi_{x,\textrm{min}}$ for accumulated payoff; \item $\Pi_{x, \textrm{max}} = k_x(\pi_{x,\textrm{max}} - \pi_\gamma)$ and $\Pi_{x, \textrm{min}} = k_x(\pi_{x,\textrm{min}} - \pi_\gamma)$ for the new payoff scheme.
\end{itemize} Finally, one can easily verify that using $\Pi_i = \widetilde{\Pi}_i$ as the aggregated payoff of a player $i$ leaves Eq.~\ref{repl_dyn_eq2} invariant with respect to a positive affine transformation of the payoff matrix. \begin{figure} [!ht] \begin{center} \includegraphics[width=14.5cm,bb=0 0 1404.365 1299.3849]{HD_acc_shift} \caption{Amount of cooperation in the HD game using accumulated payoff on three different network types in three different game spaces (see text). Lighter areas mean more cooperation than darker ones (see scale on the right side). Left column: scale-free graph; Middle column: random graph; Right column: grid. Upper row: $2 \le T \le 3$, $R=2$, $1 \le S \le 2$, $P = 1$; Middle row: $1 \le T \le 2$, $R=1$, $0 \le S \le 1$, $P = 0$; Bottom row: $0 \le T \le 1$, $R=0$, $-1 \le S \le 0$, $P = -1$.\label{shifts-acc}} \end{center} \end{figure} \section{Numerical Simulations} \label{ns} \begin{figure} [!ht] \begin{center} \begin{tabular}{ccc} \vspace*{-0.2cm} \mbox{\includegraphics[width=5.5cm] {plot_cuba}} \protect && \mbox{\includegraphics[width=5.5cm] {plot_cuba1}} \protect\vspace*{0.2cm}\\ (a) & \hspace*{.1cm} & (b) \end{tabular} \caption{Standard deviation for the HD using accumulated payoff on scale-free networks for two different game spaces. (a) $1 \le T \le 2$, $R=1$, $S=0.1$, $P = 0$, (b) $2 \le T \le 3$, $R=2$, $S=1.1$, $P = 1$. Note that (a) is a cut at $S=0.1$ of the middle image in the leftmost column of Fig.~\ref{shifts-acc}, while (b) represents a cut of the topmost image in the leftmost column of Fig.~\ref{shifts-acc} at $S=1.1$.\label{accumulated_deviation}} \end{center} \end{figure} We have simulated the PD, HD, and SH described in Sect.~\ref{games} on regular lattices, Erd\"os--R\'enyi random graphs, and Barab\'asi--Albert scale-free graphs, all three of which were presented in Sect.~\ref{nets}. Furthermore, in each case, we test the three payoff schemes discussed in Sect.~\ref{rd}. \begin{figure} [!ht] \begin{center} \includegraphics[width=14.5cm,bb=0 0 1404.365 449.3849]{HD_inv_shift} \caption{Levels of cooperation in the HD game using the new aggregated payoff $\widetilde{\Pi}$ on scale-free graphs in three different game spaces (see text). Left: $2 \le T \le 3$, $R=2$, $1 \le S \le 2$, $P = 1$; Middle: $1 \le T \le 2$, $R=1$, $0 \le S \le 1$, $P = 0$; Right: $0 \le T \le 1$, $R=0$, $-1 \le S \le 0$, $P = -1$.\label{shifts_inv}} \end{center} \end{figure} The networks used are all of size $N=4900$ with an average degree $\overline{k} = 4$. The regular lattices are two-dimensional with periodic boundary conditions, and the neighborhood of an individual comprises the four closest individuals in the north, east, south, and west directions. The Erd\"os--R\'enyi random graphs were generated using connection probability $p=8.16\times10^{-4}$. Finally, the Barab\'asi--Albert graphs were constructed starting with a clique of $m_0=2$ nodes, with the new incoming node at each time step receiving $m=2$ links.
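Putting the pieces together, one run of the asynchronous dynamics can be sketched as follows (our schematic, restricted to pure strategies and using the normalization of Eq.~\ref{repl_dyn_eq2}; the payoff aggregator and the bounds $\Pi_{x,\textrm{max}}$, $\Pi_{x,\textrm{min}}$ are passed in so that any of the three schemes can be plugged in).
\begin{verbatim}
import random

def phi2(dPi, Pi_j_max, Pi_i_min):
    """Eq. (2): a positive payoff difference normalized by the largest
    difference the pair (i, j) could possibly exhibit."""
    return dPi / (Pi_j_max - Pi_i_min) if dPi > 0 else 0.0

def run(nbrs, payoff, pi_max, pi_min, steps):
    """Asynchronous replicator dynamics: nodes are updated one at a
    time, drawn uniformly at random with replacement; 50/50 random
    initialization of cooperators (True) and defectors (False)."""
    N = len(nbrs)
    s = [random.random() < 0.5 for _ in range(N)]
    for _ in range(steps):
        i = random.randrange(N)
        j = random.choice(sorted(nbrs[i]))
        p = phi2(payoff(s, j) - payoff(s, i), pi_max(j), pi_min(i))
        if random.random() < p:
            s[i] = s[j]
    return s
\end{verbatim}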
For each game, we limit our study to the variation of only two parameters per game. In the case of the PD, we set $R=1$ and $S=0$, and vary $1 \leq T \leq 2$ and $0 \leq P \leq 1$. For the HD, we set $R=1$ and $P=0$, and the two parameters are $1 \leq T \leq 2$ and $0 \leq S \leq 1$. Finally, in the SH, we fix $R = 1$ and $S = 0$ and vary $0 \leq T \leq 1$ and $0 \leq P \leq T$.\\ We deliberately choose not to vary the same two parameters in all three games. The reason we choose to fix $R$ and $S$ in both the PD and the SH is simply to provide natural bounds on the values of the remaining two parameters to explore. In the PD case, $P$ is limited between $R=1$ and $S=0$ in order to respect the ordering of the payoffs ($T>R>P>S$), and $T$'s upper bound is equal to 2 due to the $2R > T+S$ constraint. In the HD, setting $R=1$ and $P=0$ determines the range of $S$ (since this time $T>R>S>P$) and gives an upper bound of 2 for $T$, again due to the $2R > T+S$ constraint. Note, however, that the only valid value pairs $(T,S)$ are those that satisfy the latter constraint. Finally, in the SH, both $T$ and $P$ range from $S$ to $R$. Note that in this case, the only valid value pairs $(T,P)$ are those that satisfy $T>P$.\\ It is important to realize that, when using our new aggregated payoff or the average payoff, even though we reduce our study to the variation of only two parameters per game, we are actually exploring the entire game space. This is true owing to the invariance of Nash equilibria and replicator dynamics under positive affine transformations of the payoff matrix~\cite{weibull95}. As we have shown earlier, and as we will confirm numerically in the next section, this does not hold for the accumulated payoff.\\ Each network is randomly initialized with exactly 50\% cooperators and 50\% defectors. In all cases, the parameters are varied between their two bounds in steps of $0.1$. For each set of values, we carry out 50 runs of 15000 time steps each, using a fresh graph realization in each run. The cooperation level is averaged over the last 1000 time steps, well after the transient equilibration period; each point in the figures that follow is thus the result of averaging over 50 runs. In the next two sections, in order to avoid overloading this document with figures, we shall focus each time on one of the three games, commenting on the other two along the way. \subsection{Payoff Shift} \label{pay-shift} We have demonstrated that, in theory, the use of accumulated payoff does not leave the RD invariant under positive affine transformations of the payoff matrix. However, one can wonder whether in practice such shifts of the payoff matrix translate into significant differences in cooperation levels, or whether the changes are only minor. \begin{figure} [!ht] \begin{center} \includegraphics[width=14.5cm,bb=0 0 1404.365 900]{PD} \caption{Levels of cooperation in the PD game space using three different payoff schemes and two different network types. Left column: Accumulated Payoff; Middle column: New Aggregated Payoff; Right column: Average Payoff. Upper row: Scale-free graph; Bottom row: Random graph. Game space: $1\le T \le 2$, $R=1$, $0 \le P \le 1$, $S = 0$.\label{PD}} \end{center} \end{figure} Figure~\ref{shifts-acc} depicts the implications of a slight positive and negative shift of the HD payoff matrix. As one can clearly see, the cooperation levels encountered are notably different before and after the shift. As a matter of fact, when comparing network types, scale-free graphs seem to do less well in terms of cooperation than regular grids with a shift of $-1$, and not really better than random graphs with a shift of $+1$.
Thus, one must be extremely cautious when working with a rescaled form of the payoff matrix on the assumption that such a re-scaling can be done without loss of generality, for this is far from true when dealing with accumulated payoff.\\ The noisy appearance of the top two panels of the leftmost column of Fig.~\ref{shifts-acc} also deserves attention. It is essentially due to the very high standard deviation values we find in the given settings (see Fig.~\ref{accumulated_deviation}). This observation is even more pronounced with a shift of $+1$. This shows that replicator dynamics becomes relatively unstable when using straight accumulated payoff.\\ We have run simulations using our payoff $\widetilde{\Pi}$ on all three network types in order to numerically validate the invariance of the RD with this payoff scheme. However, to save space, we only show here the results obtained on scale-free graphs, which are the networks that generated the biggest differences in the accumulated payoff case (see Fig.~\ref{shifts-acc}, leftmost column). As one can see in Fig.~\ref{shifts_inv}, using $\widetilde{\Pi}$ does indeed leave the RD invariant with respect to a shift of the payoff matrix. There are minor differences between the figures, but these are simply due to statistical sampling and roundoff errors. Finally, a shift of the payoff matrix has, as expected, no influence at all on the general outcome when using the average payoff. We point out that the same observations can also be made for the PD and SH cases (not shown here). \subsection{Payoff and Network Influence on Cooperation} \label{res} In this section we report results on global average cooperation levels using the three payoff schemes for two games on scale-free and random graphs.\\ Figure~\ref{PD} illustrates the cooperation levels reached for the PD game, in the $1\le T \le 2$, $R=1$, $0 \le P \le 1$, $S = 0$ game space, on Barab\'asi--Albert scale-free and random graphs, when using each of the three different payoff schemes mentioned earlier, namely $\overline{\Pi}$, $\widetilde{\Pi}$, and $\widehat{\Pi}$.\\ We immediately notice that there is a significant parameter zone for which accumulated payoff (leftmost column) seems to drastically promote cooperation compared to average payoff (rightmost column). This observation has already been highlighted in some previous work~\cite{tom-luth-pest-07,santos-biol-06}, although it was done for a reduced game space. We nevertheless include it here to situate the results obtained using our adjusted payoff in this particular game space in comparison to those obtained using the two other extreme payoff schemes. On both network types, $\widetilde{\Pi}$ (central column of Fig.~\ref{PD}) yields cooperation levels somewhat like those obtained with accumulated payoff but to a lesser degree. This is especially striking on scale-free graphs (upper row of Fig.~\ref{PD}). However, we again point out that the situation shown in the image of the upper left corner of Fig.~\ref{PD} would change dramatically under a payoff shift, as discussed in Sect.~\ref{pay-shift} for the HD game. The same can be observed for the HD and SH games (see Fig.~\ref{SH} for the SH case). On regular lattices, there are, as expected, no differences whatsoever between using $\widetilde{\Pi}$, $\widehat{\Pi}$, or $\overline{\Pi}$, due to the degree homogeneity of this type of network (not shown).
\begin{figure} [!ht] \begin{center} \includegraphics[width=14.5cm,bb=0 0 1404.365 900]{SH} \caption{Cooperation levels for the SH game space using three different payoff schemes and two different network types. Left column: Accumulated Payoff; Middle column: New Aggregated Payoff; Right column: Average Payoff. Upper row: Scale-free graph; Bottom row: Random graph. Game space: $R=1$, $0\le T \le 1$, $0 \le P \le 1$, $S = 0$. Note that the meaningful game space is the upper left triangle, i.e.~when $T \geq P$. \label{SH}} \end{center} \end{figure} The primary goals of this work are to highlight the non-invariance of the RD under affine transformations of the payoff matrix when using accumulated payoff, and to propose an alternative payoff scheme without this drawback. How does the network structure influence overall cooperation levels when this latter payoff is chosen? Looking at the middle column of Figs.~\ref{PD} and~\ref{SH}, we observe that degree non-homogeneity enhances cooperation. The relatively clear separation in the game space between strongly cooperative regimes and entirely defective ones in the middle column of Fig.~\ref{SH}, which refers to the SH game, can be explained by the existence of the two ESSs in pure strategies in this case. Similarly, the large transition phase from full cooperation to full defection states in the HD (middle image of Fig.~\ref{shifts_inv}) is due to the fact that the only ESS for this game is a mixed strategy.\\ Cooperation may become established and remain stable in networks thanks to the formation of clusters of cooperators, which are tightly bound groups of players. In the scale-free case this is easier: as soon as a highly connected node becomes a cooperator, and a certain number of its neighbors are cooperators as well, chances are that all its neighbors will imitate the central cooperator, which earns a high payoff thanks to the number of acquaintances it has. An example of such a cluster is shown in Fig.~\ref{cluster} for the PD. A similar phenomenon has been found to underlie cooperation in real social networks~\cite{luthi-pest-tom-physa08}. \begin{figure} [!ht] \begin{center} \includegraphics[width=8cm]{cluster} \caption{A cluster with a majority of cooperators (triangles) with many links to a central cooperator. Symbol size is proportional to degree. Links to other nodes of the network have been suppressed for clarity.\label{cluster}} \end{center} \end{figure} In order to explore the dependence of the evolutionary processes on the network size, we have performed simulations with two other graph sizes ($N=2450$ and $N=9800$) for the HD game. To save space, we do not show the figures, but the cooperation results are qualitatively very similar to those shown here for $N=4900$. We have also simulated populations with two different initial percentages of randomly distributed cooperators: $30\%$ and $70\%$; again, there are no qualitative differences with the $50$-$50$ case shown here. \section{Conclusions} \label{concl} Standard RD assumes infinite mixing populations of playing agents. Actual and simulated populations are necessarily of finite size and show a network of ties among agents that is not random, as postulated by the theory. In this work we have taken the population finiteness for granted and have focused on the graph inhomogeneity aspects of the problem. It is a well-known fact that agent clustering may provide the conditions for increased cooperation levels in games such as those studied here.
However, up to now, only regular structures such as grids had been studied in detail, with the exception of a few investigations that have dealt with small-world population structures of various kinds~\cite{social-pd-kup-01,santos-pach-05,tom-luth-giac-06,ohtsuki-et-al,luthi-pest-tom-physa08}. Most of these, however, have used an accumulated payoff scheme, which makes no difference in regular graphs but, in the other cases, does not leave the RD invariant with respect to affine transformations of the payoff matrix, as required by evolutionary game theory. This gives rise to results that are not generalizable to the whole game space. The alternative of using average payoff respects invariance but is much less realistic in degree-inhomogeneous networks, which are the rule in society. Here we have proposed a new payoff scheme that correctly accounts for the degree inhomogeneity of the underlying population graph and, at the same time, is invariant with respect to these affine transformations. Using this scheme, we have shown that, on complex networks, cooperation may reach levels far above what would be predicted by the standard theory for extended regions of the game's parameter space. The emergence of cooperation is essentially due to the progressive colonization by cooperators of highly connected clusters, in which linked cooperators that earn a high payoff mutually protect themselves from exploiting defectors. This phenomenon had already been observed to a lesser extent in populations structured as regular grids, but it is clearly stronger for scale-free graphs, where there exists a sizable number of highly connected individuals; it is the same effect that underlies cooperation in actual social networks. This observation alone may account for observed increased levels of cooperation in society without having to take into account other factors such as reputation, belonging to a recognizable group, or repeated interactions giving rise to complex reciprocating strategies, although these factors also play a role in the emergence of cooperation. \section*{Acknowledgments} E. Pestelacci and M. Tomassini gratefully acknowledge financial support by the Swiss National Science Foundation under contract 200021-111816/1. \bibliographystyle{elsart-num}
\begin{document} \title{Sandpile modelling of dual location fuelling in fusion plasmas} \author{C. A. Bowie} \email{craig.bowie@anu.edu.au} \affiliation{Australian National University, Canberra, ACT 0200, Australia} \author{M. J. Hole} \affiliation{Australian National University, Canberra, ACT 0200, Australia} \begin{abstract} We modify the Chapman sandpile model (Chapman \textit{et al.}, \textit{Physical Review Letters} 86, 2814 (2001)) to enable comparisons with pellet pacing, which is used to reduce or eliminate ELMs in a fusion plasma. We employ a variation of that model in which a pedestal with feedback is introduced (Bowie and Hole, \textit{Phys. Plasmas} 25, 012511 (2018)), which we further modify to provide for dual fuelling: sand is added both at the centre of the sandpile and near the edge. We observe that when the additional sand is added at the top of the pedestal, mass loss events (MLEs) are largely suppressed. While this suppression comes at a cost by way of a reduction in total energy confinement, that reduction is smaller than the reduction in MLE size. The trade-off between MLE suppression and reduction in energy confinement depends not only on the amount of extra sand, but also on its precise location relative to the top of the pedestal. We suggest that the approach of constant dual fuelling may be equally applicable to plasmas, suggesting a strategy for ELM suppression in fusion plasmas.
We observe that when the proposed amount of extra sand is added in `pellets', using frequencies and amounts based on those proposed for ELM suppression in ITER, MLEs are similarly suppressed, although they are not significantly suppressed when the pellet rate does not substantially exceed the MLE frequency. This suggests that pellet injection at the top of the pedestal at small pellet size and high frequency may represent a reasonable physical proxy for our proposed scheme. However, our results suggest that it is not the synchronisation of pellets to ELM frequencies which is the key factor for ELM suppression in this regime, but rather the introduction of additional fuelling at the top of the pedestal. \end{abstract} \maketitle \section{Introduction \label{sec:Introduction}} Nuclear fusion, if it can be effectively controlled, may be critical to our future energy needs. The primary method of seeking to achieve fusion power is via a plasma which is magnetically confined in a torus known as a tokamak. The goal of fusion research is to increase the fusion triple product of temperature, plasma density, and energy confinement time. A step towards this goal, known as H-mode, occurs when the plasma enters a higher confinement mode, via a mechanism which is not yet fully understood, but which results in the production of a `pedestal' at the edge of the plasma, in which energy confinement rises sharply over a distance of approximately 3\% of the toroidal minor radius\cite{Beurskens2009}. However, with H-mode comes a plasma instability known as an edge localised mode, or ELM, which triggers a loss of confinement~\cite{ASDEX1989}. A large ELM may result in a loss of confinement of 5\%~\cite{ASDEX1989}, or of 10--40\% of the pedestal energy~\cite{Beurskens2009}, and can cause damage to the first wall of the tokamak\cite{Igitkhanov2013}. For ITER, an upper tolerable limit for ELMs of $\sim$1\% of the pedestal energy has been suggested\cite{Beurskens2009,Zhitlukhin2007}. Controlling ELMs in H-mode is therefore a key objective of fusion plasma research. Injection of fuel `pellets' has been extensively used as a candidate for ELM control and reduction in fusion plasmas, using pellets to trigger ELMs to increase ELM frequency ($f_{ELM}$), and consequently decrease their maximum size ($W_{ELM}$), on the basis that $f_{ELM}\times W_{ELM}=\textrm{constant}$~\cite{Hong1999,Baylor2005,Baylor2007,Baylor2013,Baylor2015,Lang2004,Lang2014,Lang2015,Pegourie2007,Rhee2012}. Pellet size, frequency, and location have all been tested experimentally on ASDEX Upgrade~\cite{Lang2004,Lang2015,Lang2018}, DIII-D~\cite{Baylor2005,Baylor2013}, JET~\cite{Baylor2015,Lang2011,Lang2015}, and EAST~\cite{Li2014,Yao2017}, and ELM control using pellets is being considered for use in ITER~\cite{Doyle2007,Baylor2015}. Injection of pellets to the top of the pedestal has been suggested to produce ELM pacing with reduced energy loss in modelling by Hayashi~\cite{Hayashi2013}, using the code TOPICS-IB. That modelling suggested that pellets with $\sim$1\% of the pedestal particle content, with speed sufficient to reach the pedestal top, will reduce energy loss significantly. The penetration depth of the pellet depends on both its size and its speed, as smaller pellets do not penetrate as far into the plasma before ablation. Experiments at JET, in which the pellet frequency exceeded the natural ELM frequency, determined a minimum threshold pellet size necessary to reach the top of the pedestal in order to trigger ELMs~\cite{Lang2011}.
For example, Lang~\cite{Lang2015} discusses the use of pellets of $1.5-3.7\times10^{20}$\,D, introduced into a plasma with a particle inventory of $6\times10^{20}$\,D, i.e. $25-60\%$ of the total plasma inventory. It has also been reported that, in a 2016 series of discharges in JET, the highest fusion performance was obtained using a particle fuelling scheme consisting of pellet injection combined with low gas puffing~\cite{Kim2018}. Lang~\cite{Lang2015} also discussed pellets added at lower frequencies (higher $\Delta t_P$), with pellet timing aligned to ELM onset; these pellets triggered ELMs. The same work observes that pellets increase the plasma density, which in turn increases the L-H threshold. At DIII-D, pellet injection has been observed to trigger synchronous ELMs with a frequency of $12$ times the natural $f_{ELM}$\cite{Huijsmans2015,Baylor2013}. It is proposed that a dual pellet injection system will be used in ITER, with large pellets to provide fuelling and smaller pellets to trigger ELMs\cite{Baylor2015}, and it has been suggested that a pellet frequency of $\sim45$ times the natural $f_{ELM}$ will be required to provide the necessary reduction in heat flux. One way of understanding the impact of pellet injection on both confinement and ELM behaviour is to seek to identify a physical system whose relaxation processes have characteristics similar to those of the ELMing process under consideration. Of particular interest is the sandpile~\cite{Bak1987}, whose relevance to fusion plasmas is well known~\cite{Chapman1999,Dendy1997}. Sandpile models generate avalanches, which may be internal or may result in loss of particles from the system. These avalanches are the response to steady fuelling of a system which relaxes through coupled near-neighbour transport events that occur whenever a critical gradient is locally exceeded. The possibility that, in some circumstances, ELMing may resemble avalanching was raised~\cite{Chapman2001A} in studies of the specific sandpile model of Chapman~\cite{Chapman2000}. This simple one-dimensional $N$-cell sandpile model~\cite{Chapman2000,Chapman2001A} incorporates other established models~\cite{Bak1987,Dendy1998A} as limiting cases. It is centrally fuelled at cell $n = 1$ (of $N = 500$), and its distinctive feature is the rule for local redistribution of sand near a cell (say at $n = k$) at which the critical gradient $Z_{c}$ is exceeded. The sandpile is conservatively flattened around the unstable cell over a fast redistribution lengthscale $L_{f}$, which spans the cells $n = k - (L_{f} - 1), k - (L_{f} - 2), \ldots, k+1$, so that the total amount of sand in the fluidization region before and after the flattening is unchanged. Because the value at cell $n = k+1$ prior to the redistribution is lower than the value of the cells behind it (at $n<k+1$), the redistribution relocates sand from the fluidization region to the cell at $n = k + 1$. If redistributions are sequentially triggered outwards across neighbouring cells, leading to sand ultimately being output at the edge of the sandpile, an avalanche is said to have occurred. The sandpile is fuelled again only after it has iterated to stability, so that sand ceases to escape from the system. The lengthscale $L_{f}$, normalized to the system scale $N$, is typically~\cite{Chapman1999,Chapman2001A,Chapman2001B,Chapman2003,Chapman2004} treated as the model's primary control parameter $L_{f}/N$, which governs different regimes of avalanche statistics and system dynamics.
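The following Python sketch gives a schematic reading of this redistribution rule (ours; the scan order, the treatment of the open boundary, and the parameter handling are our assumptions, not the published implementation).
\begin{verbatim}
import numpy as np

def drive_and_relax(h, dx, Zc, Lf):
    """One fuelling event of the 1-D centrally fuelled sandpile. h[0] is
    the core; h[-1] is a ghost cell past the open edge, kept empty, so
    sand relocated into it is counted as lost. Returns the sand lost
    (the MLE size if an avalanche reaches the edge)."""
    h[0] += dx                          # central fuelling at n = 1
    lost = 0.0
    while True:
        grads = h[:-1] - h[1:]          # local gradients
        k = int(np.argmax(grads))
        if grads[k] <= Zc:              # stable: fuelling may resume
            return lost
        lo = max(0, k - (Lf - 1))
        # conservative flattening of cells k-(Lf-1) .. k+1 to their
        # mean, which relocates sand forward into cell k+1
        h[lo:k + 2] = h[lo:k + 2].mean()
        lost += h[-1]                   # open boundary: edge sand leaves
        h[-1] = 0.0
\end{verbatim}
Dual fuelling, as used below, simply adds a second deposition, \texttt{h[n\_fe] += dx\_fe}, at every iteration before relaxing; the feedback modification of $L_f$ near the edge~\cite{Bowie2018} is omitted from this sketch.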
Here, we employ a modification of the classic model in which the lengthscale is variable over a distance from the edge, a distance which itself depends upon the energy of the system~\cite{Bowie2018}. As $L_f$ decreases near the edge, the gradient at the edge increases, resulting in a pedestal which is subject to feedback due to the dependence of that distance on the energy. The resulting pedestal was introduced as a proxy for the pedestal of a H-mode fusion plasma~\cite{Bowie2018}. The feedback loops were seen to be analogous to the feedback effects intrinsic to the H-mode pedestal in a fusion plasma~\cite{Bowie2018}. It was suggested that reduction of feedback in the pedestal could result in ELM suppression within a H-mode plasma~\cite{Bowie2018}. Typically, the model is centrally fuelled only. Here, we introduce a new feature, dual fuelling, in which the sandpile is constantly fuelled concurrently at two locations, in order to observe the effect on energy confinement and mass loss event (MLE) size. We observe that by adding $\sim$2.5\% of the sand at a location near the top of the pedestal (near the edge of the plasma), the maximum amount of sand lost in an MLE ($\Delta S_{max}$) is significantly reduced. \section{Dual-fuelled sandpile} We begin with a feedback model in which sand is added only at the core (as is typical for other implementations of the model). We add sand at a constant rate ($dx_{fc}=1.2$) until the sandpile builds up and enters a `steady state' in which the time-averaged amount of sand lost in MLEs equals the amount of sand added. The median waiting time, $\Delta t_n$, between MLEs is $\sim$$135000$, and $\Delta S_{max}$ is $\sim$$630000$. The energy of the system ($E_p$), measured by the sum of the squares of the values of the cells, is $\sim$$2.7\times10^9$. The parameters chosen are based on Bowie and Hole~\cite{Bowie2018}. For the sandpile chosen, the width of the pedestal, $P_w$, is $\sim$$15$ of the $500$ cells, meaning that the top of the pedestal is located at $n=485$. Due to the feedback effects built into the model, the pedestal edge moves with time, approximately synchronously with $E_p$. The resulting shape of the sandpile is shown in Figure \ref{fig:Sandpile, Ep, and Max MLE}(a), with the values of $E_p$ and $P_w$ over 2 million iterations shown in Figure \ref{fig:Sandpile, Ep, and Max MLE}(b) and (c).
\begin{figure} \centering \begin{minipage}{1.2in} \includegraphics[clip,trim=0cm 0 0cm 0,width=\linewidth]{./"Sandpile_-_Pellet_-_Model_3_-Cell_480-492__RLstart300_LFstart_100_Time_1_NO_BONUS_SAND"} \end{minipage} \begin{minipage}{1.2in} \includegraphics[clip,trim=0cm 0 0cm 0,width=\linewidth]{./"Potential_Energy_-_Pellet_-_Model_3_-Cell_490__RLstart300_LFstart_100_Time_NO_BONUS_SAND"} \end{minipage} \begin{minipage}{1.2in} \includegraphics[clip,trim=0cm 0 0cm 0,width=\linewidth]{./"Larmor_Radius_-Pellet_-_Model_3_-Cell_480-492__RLstart300_LFstart_100_Time_1_NO_ADDED_SAND"} \end{minipage} \begin{minipage}{1.2in} \includegraphics[width=\linewidth]{./"Sandpile_-_Pellet_-_Model_3_-Cell_487__RLstart300_LFstart_100_Time_1_Pellet_0_03"} \end{minipage} \begin{minipage}{1.2in} \includegraphics[width=\linewidth]{./"Potential_Energy_-_Pellet_-_Model_3_-Cell_487__RLstart300_LFstart_100_Time_1_Pellet_0_03"} \end{minipage} \begin{minipage}{1.2in} \includegraphics[width=\linewidth]{./"Larmor_radius_-_Pellet_-_Model_3_-Cell_487__RLstart300_LFstart_100_Time_1_Pellet_0_03"} \end{minipage} \caption{(L to R) Sandpile, $E_p$, and $P_w$ plots for the base case ($dx_{fe} = 0$) (top) and for $dx_{fe} = 0.03$, added at $n=487$ (bottom).\label{fig:Sandpile, Ep, and Max MLE}} \end{figure} We then modify the model by adding most of the sand ($dx_{fc}=1.2$) at the core ($n=1$) and some of the sand ($dx_{fe}$) at another location within the sandpile, $n_{fe}$. Sand is added continuously at both the core and $n_{fe}$, representing dual fuelling rather than time-separated pellets. We record the average value of $E_p$ and the value of $\Delta S_{max}$. We then repeat the process for a number of values in the range from $n_{fe}=2$ to $n_{fe}=500$. The sandpile, and the values of $E_p$ and $P_w$, using this dual fuelling model are shown in Figure \ref{fig:Sandpile, Ep, and Max MLE}(d-f). We observe that, consistent with the reduction in $\Delta S_{max}$, the $E_p$ and $P_w$ traces are much smoother where dual fuelling is employed. Figure \ref{fig:Sandpile, Ep, and Max MLE}(f) shows that for $dx_{fe}=0.03$, $P_w$ is about $13$ when $n_{fe}=487$, i.e. the sand is added at about the top of the pedestal. \begin{figure} \centering \includegraphics[clip,trim={0cm 0cm 0cm 0cm},width=\linewidth]{./"dual_fuelling_MLE02"} \caption[Pellet size 0.03]{$E_p$ and $\Delta S_{max}$ for $dx_{fe}= 0.03$, added at $n_{fe}=480$ to $500$ (bottom) and $n_{fe}=1$ to $500$ (top). Figure \ref{fig:pellet-size-0_03}(a) shows the full range from $n=1-500$, while Figure \ref{fig:pellet-size-0_03}(b) shows detail within the pedestal. The average amount of sand in the sandpile is $\sim$$10^6$ units, meaning that $\Delta S_{max}$ is up to $50\%$ for the base case, and is reduced to $\sim$$0.5\%$ when $n_{fe}=487, dx_{fe}=0.03$.} \label{fig:pellet-size-0_03} \end{figure} \begin{figure} \centering \includegraphics[clip,trim={0cm 0cm 0cm 0cm},width=\linewidth]{"dual_fuelling_490_1"} \caption[Variable pellet sizes]{$E_p$ and $\Delta S_{max}$ for $dx_{fe}= 0.01$ to $0.2$. In each case $n_{fe}=490$. $\Delta S_{max}$ is significantly reduced where $dx_{fe}>0.03$, while $E_p$ declines more slowly, suggesting that $dx_{fe}=0.03$ is optimal for suppressing $\Delta S_{max}$ while maintaining $E_p$.} \label{fig:variable-pellet-sizes---max-mle-and-pe} \end{figure} Figure \ref{fig:pellet-size-0_03} shows how $E_p$ and $\Delta S_{max}$ vary as we change $n_{fe}$, for $dx_{fe}=0.03$. Both $E_p$ and $\Delta S_{max}$ are minimised when $n_{fe}$ is located within the pedestal.
MLEs are maximally suppressed when $n_{fe}$ is in the range from $487$ to $497$, with the maximum $E_p$ in that range at $n_{fe}=487$ (i.e. the top of the pedestal). When $n_{fe}$ is located at the top of the pedestal, $E_p$ declines by about 30\%, with a concurrent $\sim$93\% reduction in $\Delta S_{max}$. If $n_{fe}$ is located just outside the pedestal, a reduction in $\Delta S_{max}$ of $\sim$50\% can be achieved with little effect on $E_p$. By contrast, dual fuelling significantly outside the pedestal has little effect on either $E_p$ or $\Delta S_{max}$, as shown in Figure \ref{fig:pellet-size-0_03}(a). Essentially, what is observed is that $n_{fe}$, when located at the top of or within the pedestal, sets a maximum value for $P_w$ by suppressing its further growth. This in turn prevents the sandpile from becoming sufficiently large that it collapses. The trade-off between the reductions in $\Delta S_{max}$ and $E_p$ can also be seen if $dx_{fe}$ is varied. In Figure \ref{fig:variable-pellet-sizes---max-mle-and-pe}, we show $\Delta S_{max}$ and $E_p$ for a range of pellet sizes, added at $n_{fe}=490$, which is near the top of the pedestal. We see that as we increase $dx_{fe}$, $\Delta S_{max}$ and $E_p$ both decline. $\Delta S_{max}$ has been reduced by an order of magnitude at $dx_{fe}=0.03$ and remains relatively steady after that, while $E_p$ continues to decrease as we increase $dx_{fe}$. In addition, generally speaking, for values of $dx_{fe}$ below $0.03$, the `dip' in $E_p$ and $\Delta S_{max}$ is smaller, and occurs over a smaller range of values of $n_{fe}$. For higher values, the dip is larger over a $\sim$17 cell range, representing an approximate radial width of $17/500=0.034$ of the plasma. The `sweet spot' appears where the dip extends over a wide enough range of $n_{fe}$ that extreme precision in the placement of $dx_{fe}$ is not required, without resulting in a large decrease in $E_p$. Taking these factors into account, we suggest that the optimal value for $dx_{fe}$ is about $0.03$, or $2.5\%$ of $dx_{fc}$. As noted above, for $dx_{fe}=0.03$, maximal suppression of MLEs, coupled with minimal reduction in $E_p$, occurs at about $n_{fe}=487$, being the top of the pedestal. \section{Discussion} To date, pellet fuelling in fusion plasmas has been aimed at the triggering of an ELM immediately following the introduction of a pellet, so as to increase $f_{ELM}$ and consequently decrease $W_{ELM}$, on the basis that $f_{ELM}\times W_{ELM}=\textrm{constant}$~\cite{Hong1999,Baylor2005,Baylor2007,Baylor2013,Baylor2015,Lang2004,Lang2014,Lang2015,Pegourie2007,Rhee2012}. Here we suggest a potentially different path to ELM reduction, as the dual fuelling proposed here is constant, rather than pelletized, and therefore does not produce MLEs synchronised with the introduction of additional fuelling. Instead, the constant injection of fuel at or about the top of the pedestal in a feedback-modified sandpile, when coupled with the feedback mechanism, triggers MLEs more regularly, but still with a waiting time of at least several thousand time steps. We observe that MLE suppression does not occur when $n_{fe}$ is significantly outside the pedestal in which feedback occurs. MLE suppression also does not occur for dual fuelling in the classic sandpile model, in which no feedback occurs. This suggests that MLE suppression by dual fuelling is directly related to modification of feedback in the pedestal.
The feedback model, including a pedestal, has been suggested to be analogous to a fusion plasma with a H-mode pedestal in which feedback effects occur\cite{Bowie2018}, perhaps because a common underlying dynamical behaviour occurs in both the model and the fusion plasma. As a result, we suggest that dual fuelling in a fusion plasma may similarly lead to ELM suppression. Specifically, it may be advantageous to operate a fusion plasma in a mode in which most of the fuelling occurs at the core, while 2.5\% of the fuelling occurs at the top of the pedestal. If our conjecture is correct, and the fuelling properties/insights of the MLE model are portable to a tokamak, such an operating mode will result in the suppression of ELMs at a low energy density and temperature cost. Although existing pellet fuelling schemes have been aimed at triggering an ELM immediately following the introduction of a pellet, there may nonetheless be a relationship between those schemes and the proposal made here. Minimum pellet sizes have been suggested for the production of ELMs in experiments, as a consequence of the practical requirement that pellets be large enough to reach the top of the pedestal. The minimum size is also a function of pellet velocity, as the pellet size necessary to reach the top of the pedestal decreases as pellet velocity increases. These minimum sizes are coupled with the maximum practically achievable injection frequency in each experiment. If our analogy is correct, the minimum size necessary to reach the top of the pedestal will couple with the injection frequency to produce an optimal injection frequency, which may be less than the maximum achievable injection frequency. In order to make a comparison with the proposed ITER scheme, we have `pelletized' $dx_{fe}$ by adding sand every $4,000$ time steps (approximately the natural waiting time in the model divided by $45$, based on the assumption that the pellet frequency in ITER will be $45$\,Hz\cite{Baylor2015}, with $f_{ELM}=1$\,Hz\cite{Baylor2015}). The amount of sand added in total is equal to the amount added continuously, i.e. $4000\times0.03=120$. On the assumption that pellets take effect over their ablation time, rather than instantaneously, we have delivered the pellet over $400$ time steps, adopting an observed ablation time for a MAST pellet of $13\times200 \micro \second = 2.6 \milli \second$ \cite{Garzotti2010}, which equates to $\sim 400$ time steps in our model. The result is that at each time step during pellet injection, $dx_{fc}=1.2$ and $dx_{fe}=0.3$, while for all other time steps $dx_{fc}=1.2$ and $dx_{fe}=0$. We also observe that the amount of sand in the pedestal of the model is about $11,000$ units, so that a pellet size of $120$ units is $\sim 1\%$ of the particles in the pedestal, which is consistent with modelling by Hayashi\cite{Hayashi2013} suggesting that the pellet size should be 1\% of the pedestal particle content. With these parameter settings, $E_p\sim1.9\times10^9$ (a reduction of $\sim30\%$ from the base case), and $\Delta S_{max}\sim13000$ (a reduction of $\sim98\%$). By contrast, if pellets are injected at a rate equal to the natural MLE frequency, consistent with pellet pacing experiments at JET, then while $E_p\sim1.9\times10^9$ (the same $\sim$30\% reduction from the base case), $\Delta S_{max}\sim99000$ (a reduction of only $\sim75\%$).
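In code, the pelletized edge-fuelling schedule just described amounts to the following (a sketch of ours, with the numbers taken directly from the text).
\begin{verbatim}
def dx_fe_pellet(t, period=4000, ablation=400, pellet=120.0):
    """Edge deposition rate at time step t: a pellet of 120 units
    arrives every 4000 steps and is deposited uniformly over its
    400-step ablation, giving dx_fe = 0.3 while ablating and 0
    otherwise; core fuelling stays at dx_fc = 1.2 throughout."""
    return pellet / ablation if (t % period) < ablation else 0.0
\end{verbatim}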
The continuing occurrence of significant MLEs is consistent with the result observed at JET in which ELMs still occurred during pellet pacing, rather than being fully suppressed. This suggests that a series of pellets, such as those to be used in ITER, represents a good approximation to the continuous edge fuelling proposed here, particularly with regard to the practical limitations of implementing such a scheme. Our model also suggests that the relevant consideration for pellet pacing is whether the total amount of particles delivered reaches the ELMing threshold, whether delivered continuously, or over several pellets or gas puffs. This result contrasts with pellet pacing schemes in which pellet timing is aligned to ELM onset \cite{Lang2015}; our result suggests that it is not synchronisation of the pellets which is relevant in this regime, but instead the total amount of fuelling delivered (at least quasi-continuously) at the top of the pedestal. The scheme may alternatively be implemented by gas puffing, to the extent that gas puffs can be controllably injected at the top of the pedestal as part of a dual fuelling scheme in the proportions suggested here. \section{Conclusion} We have implemented a feedback modified sandpile model, to which we have added dual fuelling. The sandpile model incorporates feedback effects within an edge pedestal. We have observed that when additional fuelling is added at the top of the pedestal, MLEs are almost entirely suppressed while $E_p$ is reduced to a lesser extent. We observe that optimal MLE suppression, with minimal $E_p$ reduction, occurs when edge fuelling represents approximately 2.5\% of core fuelling, and the edge fuelling is added at the top of the pedestal. We conjecture that this MLE suppression results from suppression of feedback in the pedestal of the model. We suggest that a similar scheme employed in a fusion plasma may result in the suppression of ELMs at a low particle density and temperature cost. We have shown that this scheme is related to a scheme of pellet injection at frequencies up to 45 times the natural $f_{ELM}$ proposed for use in ITER\cite{Baylor2015}, and tested in DIII-D\cite{Baylor2013}, and to a scheme modelled by Hayashi\cite{Hayashi2013}, who suggests that small pellets of the order of 1\% of the pedestal particle content, which are fully ablated at the top of the pedestal, may be sufficient to trigger ELMs, and thereby reduce their size. However, significant ELM suppression may not occur unless the pellet rate significantly exceeds $f_{ELM}$. Our result suggests that it is not the synchronisation of pellets to ELMs which is relevant for ELM suppression in this regime, but rather the total amount of fuel delivered (at least quasi-continuously) at the top of the pedestal. Gas puffing which provides relatively constant edge fuelling may also suppress ELMs at the same ratio of core to edge fuelling. We suggest that others may wish to implement the scheme proposed here in a fusion plasma, to determine whether edge fuelling can suppress ELMs at a particle density and temperature cost which is considered acceptable for the experiment in question, consistent with the results of our model. \section*{Acknowledgments} This work was jointly funded by the Australian Research Council through grant FT0991899 and the Australian National University. One of the authors, C. A.
Bowie, is supported through an ANU PhD scholarship, an Australian Government Research Training Program (RTP) Scholarship, and an Australian Institute of Nuclear Science and Engineering Postgraduate Research Award. \input{Dual_fuelling_article_Arxiv.bbl} \renewcommand\refname{} \end{document}
\section{Introduction} The application of statistical mechanics ideas and tools to random optimization problems, initiated in the mid-eighties \cite{Mezard87}, has benefited from renewed interest since the discovery of phase transitions in Constraint Satisfaction Problems (CSP) fifteen years ago. Briefly speaking, one wants to decide whether a set of randomly drawn constraints over a set of variables admits (at least) one solution. When the number of variables goes to infinity at fixed ratio $\alpha$ of constraints per variable, the answer abruptly changes from (almost surely) Yes to No when the ratio crosses some critical value $\alpha_s$. Statistical physics studies have pointed out the existence of another phase transition in the Yes region \cite{Biroli00, Krzakala07}. The set of solutions goes from being connected to a collection of disconnected clusters at some ratio $\alpha_d < \alpha_s$, a translation in optimization terms of the replica symmetry breaking transition identified by Parisi in mean-field spin glass theory. It is expected that this clustering transition may have dynamical consequences. As replica symmetry breaking signals a loss of ergodicity, sampling algorithms (e.g. Monte Carlo procedures) run into problems at that transition. A quantitative study of the slowing down of the MC scheme was done in \cite{Montanari06} for the case of the $k$-XORSAT model, where constraints are simply linear equations (modulo 2) over $k$ Boolean variables (for an introduction, see \cite{Monasson07} and references therein). Yet, finding a solution should in principle be easier than sampling, and the exact nature of the relationship between the performance of resolution algorithms and the static phase transitions characterizing the solution space is far from obvious \cite{Krzakala07a}. The present paper is a modest step in elucidating this question for the $k$-XORSAT problem, and some related NP-complete problems sharing the same random structure. Hereafter we consider simple stochastic search heuristic algorithms working in polynomial (linear) time for solving $k$-XORSAT instances \cite{Chao90,Monasson07}. By successively assigning variables according to some heuristic rules, those algorithms either produce a solution or end up with a contradiction. The probability that a solution is found is a decreasing function of the ratio $\alpha$, and vanishes above some heuristic-dependent ratio $\alpha_a$ in the infinite size limit. We show that $\alpha_a < \alpha_d$ for any assignment heuristic in the class of rules preserving the Poissonian structure of the instance. In addition, we determine the most efficient heuristic, that is, the one maximizing $\alpha_a$ in this class, and show that for large $k$ the two critical ratios match, $\alpha _a(k) \simeq \alpha_d(k) \simeq \log k/k$. The plan of the paper is as follows. In section \ref{sec:def} we define the random $k$-XORSAT decision problem and its extension, as well as the search algorithms studied. Section \ref{sec:leaf} presents a method to characterize the phase diagrams of those random decision problems, depending on the content (numbers of constraints over $j$ variables, with $j$ ranging from 1 to $k$) of their instances. We show that all important information is encoded in a unique `thermodynamical' potential for the fraction of frozen variables (backbone). The analysis of the dynamical evolution of the instance content is presented in section \ref{sec:dyna}.
These dynamical results are combined with the static phase diagram in section \ref{sec:cinq} to show that the success-to-failure critical ratio of a search heuristic, $\alpha_a$, is smaller than the ratio corresponding to the onset of clustering and large backbones, $\alpha_d$. We then show that the so-called Generalized Unit Clause heuristic rule is optimal (in the class of Poissonian heuristics) and that its critical ratio $\alpha_a$ is asymptotically equal to $\alpha_d$ in the large $k$ limit. Our results are discussed in section \ref{sec:conc}. \section{Definitions} \label{sec:def} \subsection{Decision problems} The decision problems we consider in this paper are $(k,d)$-Uniquely Extendible (UE) Constraint Satisfaction Problems (CSP) defined as follows~\cite{Connamacher04}. One considers $N$ variables $x_i \in \{0,1,\cdots,d-1\}$. A UE constraint, or {\it clause}, is a constraint on $k$ variables such that, if one fixes any subset of $k-1$ variables, the value of the $k$-th variable is uniquely determined. A $(k,d)$-UE-CSP formula is a collection of $M = \a N$ clauses, each involving $k$ variables (out of the $N$ available ones). A {\it solution} is an assignment of the $N$ variables such that all the clauses are satisfied. $k$-XORSAT corresponds to $d=2$ and is solvable in polynomial time with standard linear algebra techniques. For $d=3$ the problem is still in P, while for $d\ge 4$ it has been shown that $(3,d)$-UE-CSP is NP-complete \cite{Connamacher04}. A random formula is obtained by choosing, for each clause, the $k$ variables, and the actual UE constraint, uniformly at random. It is known that, in the infinite size limit $N\rightarrow\infty$ and at fixed clause-to-variable ratio $\alpha$ \cite{Connamacher04,Dubois02,Mezard03,Cocco03}: \begin{itemize} \item there is a critical ratio $\alpha _s (k)$ such that a random $(k,d)$-UE-CSP formula is almost surely satisfiable (respectively, unsatisfiable) if $\alpha < \alpha_s(k)$ (respectively, $\alpha > \alpha _s (k)$). \item in the satisfiable phase there is another phase transition at some ratio $\alpha _d (k)$ such that: \subitem{-} for $\alpha < \alpha_d(k)$ the space of solutions is `connected': with high probability there is a path in the set of solutions joining any two solutions such that a step along the path requires changing $O(1)$ variables. \subitem{-} for $\alpha > \alpha _d(k)$ the space of solutions is disconnected into an exponentially large number of clusters, each one enjoying the above connectedness property, and far away from the others (going from one solution in one cluster to another solution in another cluster requires changing $O(N)$ variables). In addition, in each cluster, a finite fraction of variables are frozen, {\em i.e.} they take the same value in all solutions (backbone). \end{itemize} \subsection{Search algorithms} We will consider simple algorithms acting on the formula in an attempt to find solutions. Those algorithms were introduced and analyzed by Chao and Franco \cite{Chao90} (see \cite{Achlioptas01} for a review). Briefly speaking, starting from a randomly drawn formula, the algorithm assigns one variable at each time step according to the following principles: \begin{itemize} \item If there is (at least) one clause of length one (called a unit clause), then satisfy it by adequately assigning its variable. This rule is called {\em unit propagation}. \item If all clauses have length two or more, then choose a variable according to some heuristic rules.
Two simple rules are: \subitem{-} Unit Clause (UC): pick up uniformly at random any variable and set it to a random uniform value in $\{0,\cdots,d-1\}$; \subitem{-} Generalized Unit Clause (GUC): pick up uniformly at random one of the shortest clauses, then a variable in this clause, and finally its value. \end{itemize} In this analysis, we will discuss a general heuristic in which the variable to be set is chosen among those that appear in the clauses of length $j$ with some probability $p_j(C_1,\cdots,C_k)$, depending in general on the numbers $C_j$ of clauses of length $j$ present in the formula. Unit propagation implies that if $C_1 \neq 0$, then $p_j = \d_{j,1}$. We also consider the possibility that the variable is chosen irrespective of the clause length, so that $\sum_{j=1}^k p_j \leq 1$. Both UC and GUC are special cases of this general class: in UC variables are chosen at random, irrespective of the clauses they appear in (if any), so that $p_j = 0$ unless there are unit clauses; GUC corresponds to $p_j = \d_{j,j^*}$ where $j^*$ is the length of the shortest clause in the system. Notice that since the variables are selected independently of their number of occurrences, the latter remains Poissonian under the action of the algorithm (even though the value of the parameter in the distribution of occurrences may vary). More involved heuristics do exist but will not be analyzed here. Under the action of the algorithm clauses get reduced (decrease in length) until they disappear once satisfied. The algorithm stops either when all clauses have been satisfied or when two incompatible unit clauses have been generated, e.g. $x=0$ and $x=1$. In the latter case the algorithm outputs `I do not know whether there is a solution', while in the former case the output reads `Satisfiable' and the algorithm returns a solution to the formula. The probability of success, that is, the probability (over the choices of the algorithms and the formula) of getting the `Satisfiable' output vanishes above some heuristic-dependent ratio $\alpha_a (< \alpha_s)$ in the infinite $N$ limit. This success-to-failure transition coincides with the polynomial-to-exponential transition of backtracking algorithms \cite{Monasson07,Achlioptas04}. \section{`Thermodynamical' Characterization of the Space of Solutions} \label{sec:leaf} Under the action of the algorithm the length of the clauses changes; therefore the initial $(k,d)$-UE-CSP formula where all clauses have length $k$ evolves into a formula with some distribution of clauses of different lengths. We wish then to characterize the space of solutions of a generic $d$-UE-CSP formula made of $N$ variables and $\{C_j^0\}_{j=2,\cdots,k}$ clauses of length $j$, assuming that there are no unit clauses. This characterization will be useful to analyze the performance of search algorithms in the following. \subsection{Leaf removal procedure and its analysis} Our starting observation is that, due to the UE property, when a variable has a unique occurrence in the formula, the clause it appears in can always be satisfied. Hence the subformula obtained by removing this clause is equivalent (in terms of satisfiability) to the original system~\cite{Dubois02}. The interest of this remark is that it can be iterated, and more and more clauses eliminated. Monitoring the evolution of the formula under this procedure, called leaf removal, provides us with useful information on the nature of the solution space \cite{Mezard03, Cocco03, Weigt02}.
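To make the procedure concrete, the following minimal Python sketch (our illustration, not code from the original papers) implements leaf removal. Since the UE property guarantees that a clause containing a single-occurrence variable can always be satisfied, only the sets of variables appearing in each clause matter; the function returns the left-over subformula (the core), which is empty exactly when leaf removal eliminates all clauses. \begin{verbatim}
from collections import Counter

def leaf_removal(clauses):
    """Iteratively remove clauses containing a single-occurrence variable.

    `clauses` is a list of tuples of variable indices; the actual UE
    constraints are irrelevant, as any removed clause can be satisfied.
    Returns the remaining core (an empty list means no backbone).
    """
    clauses = [set(c) for c in clauses]
    occ = Counter(v for c in clauses for v in c)
    leaves = [v for v, n in occ.items() if n == 1]
    while leaves:
        v = leaves.pop()
        if occ[v] != 1:
            continue  # its clause was removed at an earlier step
        idx = next(i for i, c in enumerate(clauses) if v in c)
        for u in clauses[idx]:
            occ[u] -= 1
            if occ[u] == 1:
                leaves.append(u)  # u has just become a leaf
        del clauses[idx]
    return clauses

# e.g. a k=3 formula on variables 0..3, fully reducible:
print(leaf_removal([(0, 1, 2), (1, 2, 3)]))  # -> []
\end{verbatim} The linear scan for the clause containing $v$ makes this sketch quadratic in the worst case; an index from variables to clauses would make it linear, which is immaterial for an illustration.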
One clause is removed at each time step. After $T$ steps we denote by $C_j(T)$ the number of clauses of length $j$. Those numbers obey the evolution equations (in expectation), \begin{equation} \label{eq_gen_disc} C_j(T+1)-C_j(T) = -\frac{j\;C_j(T)}{\sum_{j'=2}^k j' C_{j'}(T)} \end{equation} where the denominator is the total number of occurrences of all variables appearing in the formula. The r.h.s. of (\ref{eq_gen_disc}) is simply (minus) the probability that the unique-occurrence variable is drawn from a clause of length $j$. In addition let us define the number $N_\ell(T)$ of variables appearing in $\ell$ equations exactly. The evolution equations for those numbers are (in expectation) \begin{equation}\label{eq_gen_disc2} N_\ell(T+1)-N_\ell(T) = \sum_{j=2}^k \frac{j(j-1)\;C_j (T)}{\sum_{j'=2}^k j' C_{j'}(T)} \times \left[ \frac{(\ell+1)\; N_{\ell+1}(T)-\ell\;N_\ell(T)}{\sum_{\ell'=0}^\infty \ell' N_{\ell'}(T)} \right] \ - \d_{\ell,1} + \d_{\ell,0} \ . \end{equation} The above is easy to interpret. The second term in the square bracket on the r.h.s. is the average number of removed variables (other than the single-occurrence variable), that is, the average length of the removed clause minus one. The first term expresses that, if one of those variables appeared $\ell+1$ times before its removal, the number of its occurrences has decreased down to $\ell$ after the removal. Finally, the two $\d$ correspond to the elimination from the system of the single-occurrence variable. In the large $N$ limit we may turn those finite difference equations over extensive quantities $C_j,N_\ell$ into differential equations for their intensive counterparts $c_j=C_j/N, n_\ell =N_\ell/N$ as functions of the reduced number of steps, $\tau =T/N$. The outcome is \begin{eqnarray}\label{LRc} \frac{d c_j}{d \t} &=& - \frac{jc_j}{\NN} \ , \ \ \ (j=2,\dots,k) \ , \\ \label{LRn} \frac{d n_\ell}{d \t} &=& \sum_{j=2}^k \frac{j (j-1) c_j}{\NN} \left[ \frac{(\ell+1)n_{\ell+1}-\ell n_\ell}{\NN} \right] - \delta_{\ell,1} + \delta_{\ell,0} \ , \end{eqnarray} where $\NN(\t) = \sum_{j=2}^k j c_j(\t) = \sum_{\ell\ge 1} \ell\; n_\ell(\t)$. The initial conditions are \begin{equation}\label{Poissonian} c_j (0) = \frac{C_j^0}N\ ; \ \ \ \ n_\ell (0) = e^{-\lambda_0}\frac{(\lambda_0)^\ell}{\ell!} \ , \end{equation} where $\l_0$ is determined by $\sum_\ell \ell \ n_\ell(0) = \l_0 = \sum_j j c_j(0)$. It is easy to check that equations (\ref{LRc}) are solved by $c_j(\t) = c_j(0)\; b(\t)^j$ provided $\frac\NN{b} \frac{d b}{d \t}= - 1$. It is convenient to introduce the {\it generating function} \begin{equation} \label{defG} G(b) = \sum_{j=2}^k c_j(0)\; b^j \ . \end{equation} Derivative(s) of $G$ with respect to its argument will be denoted by prime(s). We have that $\NN(\t) = b(\t) G'(b(\t))$. In addition, we define $\g(\t) = \sum_j c_j(\t)=G(b(\t))$. We deduce the equation for $b(\t)$: \begin{equation} \label{tdib} \frac{d\g}{d\t} = \frac\NN{b} \frac{d b}{d \t}= - 1 \ \ \Rightarrow \ \ \t = \gamma(0) - \gamma(\t) = \sum_{j=2}^k c_j(0) (1-b(\t)^j) \ . \end{equation} The interpretation of the equation above is just that at each step of the leaf removal one equation is eliminated. The solution to (\ref{LRn}) remains Poissonian at all times for all $\ell \geq 2$. 
Substituting $n_\ell(\t) = e^{-\lambda(\t)} \frac{\lambda(\t)^\ell}{\ell!}$ we obtain an equation for $\l(\t)$: \begin{equation} \frac{d\lambda}{d\t} = - \frac{\sum_{j\geq2} j(j-1)c_j(\t)}{(\sum_{j\geq 2} jc_j(\t))^2} \lambda(\t) = -\left[\frac{G''(b)}{G'(b)^2}\right]_{b=b(\t)} \l(\t) \ , \end{equation} with the initial condition imposed by $\l(0) = \l_0= \sum_j j c_j(0) = G'(1)$. From (\ref{tdib}) we get $\frac{d\t}{db} = -G'(b)$ so that \begin{equation} \frac{d\lambda}{db} = \frac{d\lambda}{d\t}\frac{d\t}{db} = \frac{G''(b)}{G'(b)} \l \ , \end{equation} which is solved by \begin{equation} \label{lam_b} \lambda(b) = G'(b) \ , \end{equation} where the normalization is fixed by the initial condition for $\lambda$. Equations (\ref{tdib}) and (\ref{lam_b}) determine $b(\t)$ and $\l(\t)$, which describe the evolution of the formula under the action of the leaf removal algorithm. \subsection{Static Phase Transitions} \label{sec:32} The structure of the subformula remaining at the end of the leaf removal (if any) is indicative of the nature of the phase corresponding to typical formulas, uniformly drawn at fixed $\{C_j^0\}$. Three phases are possible: the {\it unclustered} phase where formulas are satisfiable and the solutions form a unique cluster; the {\it clustered} phase where solutions are divided into many clusters; and the {\it unsat} phase where the typical formula is not satisfiable. \begin{enumerate} \item {\it Clustering transition}: The leaf removal algorithm starts from $b=1$, then $b$ decreases according to (\ref{tdib}) and the algorithm stops at the largest value of $b$ such that $n_1 = 0$, i.e.\ there are no more variables with unique occurrence. We have \begin{eqnarray*} n_1 &=& \sum_{j=2}^k j c_j - \sum_{\ell>1} \ell n_\ell = b G'(b) - \sum_{\ell>1} \ell e^{-\lambda(b)} \frac{\lambda(b)^\ell}{\ell!} \\ &=& b \lambda(b) - e^{-\lambda(b)} \lambda(b) \left[ e^{\lambda(b)} -1 \right] = \lambda(b) \left[ b - 1 + e^{-\lambda(b)} \right] \ , \end{eqnarray*} therefore \begin{equation}\label{conddinamica} n_1 = 0 \ \ \ \Leftrightarrow \ \ \ 1-b = e^{-\lambda(b)} = e^{-G'(b)} \ . \end{equation} This equation always has the solution $b=0$, which gives $c_j=0$ for all $j$ when the algorithm stops. This corresponds to a backbone-free formula whose solution space is connected. On the other hand, if this equation admits non-trivial solutions $b> 0$, the algorithm stops when $b$ is equal to the largest of them, i.e.\ it is unable to eliminate all clauses in the formula. Then the space is clustered and the largest solution represents the fraction of variables in the backbone of each cluster~\cite{Mezard03, Cocco03}. In the pure $(k,d)$-UE-CSP case, i.e.\ when $c^0_j = \a \d_{j,k}$, the critical ratio at which clustering appears decreases with $k$, from $\alpha_d(3)\simeq 0.818$ to $\alpha_d(k)\simeq \log k/k$ at large $k$. \item {\it Sat/unsat transition}: The formula is satisfiable when the subformula left by the removal algorithm has a solution. This happens with high probability if and only if the number of equations, given by $G(b)$, is smaller than the number of variables, $\sum_{\ell\geq 2} n_\ell$~\cite{Mezard03, Cocco03}. Using the condition $n_1=0$, the satisfiability condition is \begin{equation}\label{condsat} G(b) \leq b+(1-b)\log(1-b) \ .
\end{equation} For $(k,d)$-UE-CSP, the critical ratio at which formulas go from typically satisfiable to typically unsatisfiable increases with $k$, from $\alpha_s(3)\simeq 0.918$ to $\alpha_s(k) \rightarrow 1$ at large $k$. \end{enumerate} \subsection{The potential for the backbone} \label{sec:surfaces} The outcome of the previous section can be summarized as follows. We considered a formula specified by a set $\{c^0_j\}_{j=2,\cdots,k}$, or equivalently by the generating function (\ref{defG}). In the following we will drop the superscript $0$ to simplify the notation. We define the {\it potential} \begin{equation}\label{potedef} V(b) = -G(b) +b +(1-b) \log(1-b) \ . \end{equation} The condition $n_1=0$ (\ref{conddinamica}) is equivalent to $V'(b)=0$. Thus, if $V(b)$ has a single minimum at $b=0$, the solution space is not clustered, while if there is another minimum at $b\neq 0$, there are clusters. Moreover, the condition for satisfiability (\ref{condsat}) is that at the secondary minimum $V(b) \geq 0$. Examples are given in figure~\ref{fig_pot}. The sat/unsat surface $\Si_s$, which separates the sat and unsat phases, is defined by the condition: \begin{equation}\label{asdef} \Si_s \equiv \{ c_j : V(b)=0 \ \mbox{and} \ V'(b)=0 \ \mbox{admit a solution} \ b>0 \} \ . \end{equation} The clustering surface $\Si_d$, which separates the clustered and unclustered regions, is defined similarly by \begin{equation}\label{dyna} \Si_d \equiv \{ c_j : V'(b)=0 \ \mbox{and} \ V''(b)=0 \ \mbox{admit a solution} \ b>0 \} \ . \end{equation} The equations above have to be interpreted as coupled equations for $(b,c_j)$; therefore $\Si_{s},\Si_d$ have dimension $k-2$ and are surfaces in the space $\{c_j\}_{j=2,\cdots,k}$ of dimension $k-1$. Note that in (\ref{asdef}) and (\ref{dyna}), one must always choose the largest solution for $b$, to which we will refer as $b_s$ and $b_d$, respectively. In addition to the previous sets, in the following a special role will be played by the condition $2 c_2 = 1$, or equivalently $V''(0)=0$, that defines the {\it contradiction surface} $\Si_q$: \begin{equation}\label{contra} \Si_q \equiv \{ c_j : V''(0)=0 \} \ . \end{equation} The surface $\Si_q$ is simply a hyperplane of dimension $k-2$. \subsection{The phase diagram} \begin{figure} \includegraphics[width=9cm]{dia_k4.eps} \includegraphics[width=7cm]{phase_diagram.eps} \caption{{\bf (Left)} Schematic phase diagram of $k$$=$4-UE-CSP. The full (black) curve is the surface $\Si_d$, the dot-dashed (red) surface is $\Si_s$. The two surfaces meet along a portion of the line $\Si_{crit}$, defined by $c_2=1/2$ and $c_3=1/6$ and represented as a dashed (blue) line. {\bf (Right, top and bottom)} The sections of $\Si_d$ (full, black) and of $\Si_s$ (dot-dashed, red), at fixed $c_2$ (=~0,~0.1, 0.2, 0.3, 0.4, 0.5 from top to bottom) as a function of $c_3$ on the top panel, and at fixed $c_4$ (=~0,~0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 from top to bottom) as a function of $c_2$ in the bottom one. The lines corresponding to $c_4=0$ also represent the phase diagram of 3-UE-CSP.} \label{dia_fase_vero} \end{figure} We draw a phase diagram in the space of the $c_j$ by representing the surfaces $\Si_{s},\Si_{d},\Si_{q}$. We focus on the region $c_j \in [0,1]$ for $j=3,\ldots,k$ and $c_2 \in [0,1/2]$. Indeed, if one of the $c_j >1$, the system is surely in the unsat phase \cite{Connamacher04}, while if $c_2 > 1/2$ the algorithm discussed above finds a contradiction with very high probability.
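As a sanity check on the thresholds quoted above (our own numerical illustration, not part of the original analysis), for pure $k$-XORSAT one has $G(b)=\a b^k$, and the two conditions defining $\Si_d$ reduce, after eliminating $\a$ through $V''(b)=0$, to a single scalar equation for $b_d$: \begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def alpha_d(k):
    """Clustering threshold of pure k-XORSAT (k >= 3).

    With G(b) = alpha*b**k one has
      V'(b)  = -k*alpha*b**(k-1) - log(1-b),
      V''(b) = -k*(k-1)*alpha*b**(k-2) + 1/(1-b);
    eliminating alpha via V''=0 leaves one equation for b_d.
    """
    g = lambda b: -b / ((k - 1) * (1 - b)) - np.log(1 - b)
    b_d = brentq(g, 1e-6, 1 - 1e-9)  # the non-trivial root
    return 1.0 / (k * (k - 1) * b_d ** (k - 2) * (1 - b_d))

print(round(alpha_d(3), 3))  # 0.818, as quoted in section 3.2
\end{verbatim}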
Examples of the phase diagram are in figure~\ref{dia_fase_vero} for $k=3$ and $k=4$. There are some special ``lines'' (i.e.\ intersections of surfaces) on which we will concentrate. \begin{enumerate} \item Recall that $\Si_q$ is defined by $V''(0)=0$ and note that $V'(0)=0$ for all $c_j$. Thus, on $\Si_q$, the point $b=0$ is a solution of both equations (\ref{asdef}) and (\ref{dyna}). The surfaces $\Si_{s},\Si_{d}$ are defined by the existence of solutions with $b > 0$, but they might intersect $\Si_q$ if for some values of $\{c_j\}$ the solution with $b>0$ merges with the solution $b=0$. This happens when $V'''(0)=0$, as this is the limiting case in which a saddle at $b=b_d>0$ and a secondary minimum at $b=b_s>0$ can merge for $b_d,b_s\rightarrow 0$. The condition $V'''(0)=0$ is equivalent to $c_3 = 1/6$, and this defines the $(k-3)$-dimensional surface \begin{equation} \Si_{crit} \equiv \{ c_j : c_2= 1/2,c_3=1/6 \} \ , \end{equation} to which we will refer as the {\it critical surface}. It is easy to see that the three surfaces $\Si_{s},\Si_d,\Si_q$ are tangent to each other on the region of the critical surface where they intersect. To show this, one must consider a displacement $c_3 = 1/6+\e$ and show that (\ref{dyna}), (\ref{asdef}) admit a solution with $b_s,b_d \sim \e$ if $c_2 -1/2 \sim \e^2$. We say that in this case the phase transitions are of {\it second order} because the order parameter $b$ vanishes continuously at the transition. \item There is no {\it a priori} reason why the three surfaces must cross at $\Si_{crit}$. In fact, the solutions at $b>0$ might also disappear discontinuously, as in figure~\ref{fig_pot}, and the surfaces $\Si_s$ and $\Si_d$ can intersect the surface $\Si_q$ in regions different from $\Si_{crit}$. This does not happen for $k=3$ but happens for $k=4$ for large $c_4$, see figure~\ref{dia_fase_vero}. In this case the transition is called {\it first order} because the order parameter jumps at the transition. \end{enumerate} The generic phase diagram for all $k$ has the shape of the one for $k=4$, which we report in figure~\ref{dia_fase_vero}, left panel. \section{Search Trajectories in the Space of Formulas} \label{sec:dyna} The heuristics we defined in section~\ref{sec:def} enjoy the property that, after any number of steps of the algorithm, the reduced formula is uniformly distributed over the set of remaining $N-T$ variables conditioned on the numbers $C_j(T)$ of clauses of length $j$ ($=2,...,k$) \cite{Chao90,Achlioptas01}. This statistical property, combined with the concentration phenomenon taking place in the large $N$ limit, allows us to study the evolution of the average clause densities $c_j(t)=C_j(T)/N$ on the time scale $t=T/N$ (fraction of assigned variables), which defines a {\it trajectory} in the space of the $c_j$'s. Note that these $c_j(t)$ are defined with respect to $N$, therefore the actual clause densities for the reduced system of $N-T$ variables are $\widetilde c_j(t) = c_j(t)/(1-t)$. {\it The trajectory of the $\widetilde c_j(t)$ moves in the $c_j$ space of the previous section}\footnote{The reader should keep in mind this change of notation to avoid confusion in the following arguments}. Initially we have $c_j(0)=\a\; \d_{jk}$, i.e.\ the evolution starts on the $c_k$ axis at $c_k = \a$. The evolution equations for the densities take the form of first-order differential equations, \begin{equation}\label{eq_gen_cont} \dot c_j = \frac{(j+1) c_{j+1} - jc_j}{1-t} - \r_j(t) \ .
\end{equation} The interpretation of the equations above is the following. Let us consider an interval $[t,t+dt]$ of continuous time that corresponds to $\D T\sim N dt$ time steps of the algorithm. The first term on the r.h.s. arises from the decrease by one of the length of the clauses that contained the variable just assigned by the algorithm during this interval. The second term corresponds to an additional loss of clauses which is present when the variable is selected from a clause of length $j$: as the heuristic explicitly chooses an equation (and a variable therein) of length $j$ with probability $p_j$ (see section~\ref{sec:def}), this equation will be reduced irrespective of the number of other occurrences of the variable. Hence $\rho_j(t)$ is given, for $j \geq 1$, by \begin{equation}\label{rhoj} \rho_j(t) = \lim _{\Delta T\rightarrow\infty} \lim _{N\rightarrow\infty} \frac 1{\Delta T} \sum _{T=tN }^{tN+\Delta T-1} \left( p_{j} - p_{j+1} \right) \equiv \left\langle p_{j} - p_{j+1} \right\rangle \ , \end{equation} where both $p_j,p_{j+1}$ depend on their arguments (numbers of clauses) and $\left\langle \bullet\right\rangle$ represents the average over $\D T$ defined in (\ref{rhoj}). Here $p_{k+1}\equiv 0$. Note that the case $j=1$ is special as all clauses of length one that are produced are immediately eliminated. On average \begin{equation} \r_1 \equiv \frac{2 c_2}{1-t} \end{equation} clauses of length 2 become of length 1 and are then eliminated by unit propagation. The total rate at which clauses are eliminated is \begin{equation}\label{cond_rho_1} \dot\g(t) \equiv -\sum_{j=2}^k \dot c_j(t) = \frac{2 c_2(t)}{1-t} + \sum_{j=2}^k \r_j(t) = \sum_{j=1}^k \r_j(t) \leq 1 \ , \end{equation} where the last inequality follows from (\ref{rhoj}). As only clauses of length one are eliminated, the violation of (\ref{cond_rho_1}) can only happen if too many such clauses are generated. This corresponds to $\rho_1 \rightarrow 1^-$; in this case a contradiction occurs with high probability and the algorithm stops with the `Don't know' output. When $\r_1 \rightarrow 1^-$, the algorithm makes only unit propagations and $\r_j \rightarrow 0^+$ for all $j \geq 2$. For this reason we called the plane $\r_1 = 1$, i.e.\ $\widetilde c_2 = 1/2$, the {\it contradiction surface}. \subsection{Unit Clause (UC)} In the UC heuristic variables are chosen at random when there is no unit clause. Hence $\rho _j =0$ for $j=2,\cdots,k$. The solution to (\ref{eq_gen_cont}) is $c_j(t)=\a {k \choose j} (1-t)^j t^{k-j}$. The algorithm will generate a contradiction with high probability (w.h.p.) if the average number of unit clauses starts to build up, i.e. if $2 c_2(t)/(1-t) \geq 1$. This gives an equation for the value of $\a$ at which the probability that the algorithm finds a solution vanishes: for $k=3$, $\a_a^\mathrm{(UC)} = 2/3$. \subsection{Generalized Unit Clause (GUC)}\label{sec-guc} In the GUC heuristic the algorithm always fixes a variable appearing in the shortest clauses. In the continuum limit $c_j = 0$ for $j$ smaller than a given value; therefore we define \begin{equation} j^*(t) = \min \{ j : c_j(t) >0 \} \ , \end{equation} the minimal length of clauses with positive densities. We also define \begin{equation} t^*(j) = \min \{ t : c_{j-1}(t) >0 \} \ , \end{equation} the time at which $j^*$ jumps down from $j$ to $j-1$. Essentially, the algorithm picks one clause of length $j^*$ and assigns successively all the variables in this clause until the clause disappears.
But in doing so, other clauses of length $j < j^*$ are generated and have to be eliminated to recover the situation in which $C_j=0$ for all $j < j^*$; for this reason $\r_{j^*}$ is not given exactly by $1/j^*$. When the number of generated clauses is so high that the algorithm is unable to remove them, $c_{j^*-1}$ becomes different from 0 and $j^*$ jumps down by 1. The resulting equations of motion for the clause densities are, for $j \ge j^*(t)$: \begin{equation} \label{eqmot} \dot c_j (t) = \frac{ (j+1) c_{j+1} (t) - j c_j(t)}{1-t} - \delta _{j,j^*(t)} \left( \frac 1j - \frac{(j-1) c_j(t)}{1-t}\right) \ . \end{equation} The transition times $t^*$ are given by \begin{equation} \label{cond} \frac {c_j( t^* (j))}{1-t^*(j)} = \frac 1{j(j-1)} \ , \end{equation} where the algorithm is no longer able to remove the clauses of length $j^*$ because too many clauses of length $j^*-1$ are being generated by propagations. Comparing with (\ref{eq_gen_cont}) above, we observe that in the interval $t \in [t^*(j+1),t^*(j)]$, where $j^*=j$, only two $\r_j$ are different from 0: \begin{equation} \label{rho_j_GUC} \r_{j^*} = \frac 1{j^*} - \frac{(j^*-1) c_{j^*}(t)}{1-t} \ , \ \ \ \ \r_{j^*-1} = \frac{j^* c_{j^*}(t)}{1-t} \ , \end{equation} the first representing clauses of length $j^*$ which are directly eliminated, the second representing the clauses of length $j^*-1$ that are produced and subsequently eliminated in the process. In this interval of time, the ratio $c_{j^*}(t)/(1-t)$ increases from $0$ to $\frac{1}{j^*(j^*-1)}$, according to condition (\ref{cond}). Then \begin{equation} \label{bounddotg} \frac 1{j^*(t)} \le \dot\g(t)= ( \r_{j^*} + \r_{j^*-1}) \le \frac 1{j^*(t)-1} \ , \end{equation} which is consistent with (but stronger than) (\ref{cond_rho_1}) above. \section{Analysis of the ``dynamic'' phase diagram} \label{sec:cinq} \begin{figure} \centering \includegraphics[width=10cm]{fig1nuova.eps} \caption{An example of the potential $V(b;t,\a)$ plotted (from top to bottom) at times $t=\{0,t_d=0.02957,0.07327,t_s=0.11697,0.20642\}$ during the evolution of a $(3,d)$-UE-CSP formula with $\a = 0.8$ under the UC heuristic. In the unclustered region it is a convex function of $b$ with a global minimum in $b=0$. On the clustering line $t_d$ it first develops a secondary minimum. On the sat/unsat line the value of $V$ at the secondary minimum becomes equal to 0. } \label{fig_pot} \end{figure} Consider now a given heuristic, and a generic $(k,d)$-UE-CSP formula specified by its clause-to-variable ratio $\a$. The formula, in the $c_j$ space, starts on the axis $c_k$ at $c_k = \a$. The evolution of the formula under the action of the algorithm is represented by a {\it trajectory} $\{c_j(t,\a)\}_{j=2,\cdots,k}$ or equivalently by $G(b;t,\a) = \sum_{j=2}^k b^j c_j(t,\a)$, that depends on $\a$ through the initial condition $G(b;0,\a) = \a b^k$. We define a potential $V(b;t,\a)$ by replacing in (\ref{potedef}) $G(b) \rightarrow G(b;t,\a)/(1-t)$; the normalization $(1-t)$ is due to the fact that the $c_j = C_j/N$ are divided by $N$ instead of $N-T$. We follow the evolution of the formula by looking at the times at which the trajectory starting at $c_k = \a$ at time $0$ crosses the surfaces $\Si_{s},\Si_d,\Si_q$ defined in section~\ref{sec:surfaces}, which we call $t_s(\a),t_d(\a),t_q(\a)$ respectively. As an example, in figure~\ref{fig_pot} we report the potential at different times during the evolution of a formula according to the UC heuristic for $\a > \a_a^{(UC)}$.
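As an aside, the UC threshold quoted above is easy to check numerically (again our own illustration, not the authors' code): the UC trajectory is known in closed form, and $\a_a^{\mathrm{(UC)}}$ is the value of $\a$ at which the maximum over $t$ of $2c_2(t)/(1-t)$ reaches $1$. \begin{verbatim}
from math import comb, exp

def uc_alpha_a(k, grid=10**5):
    """alpha_a for UC: largest alpha such that
    2*c_2(t)/(1-t) = 2*alpha*C(k,2)*(1-t)*t**(k-2) never exceeds 1,
    with the closed-form trajectory c_j(t) = alpha*C(k,j)*(1-t)**j*t**(k-j)."""
    peak = max(2 * comb(k, 2) * (1 - i / grid) * (i / grid) ** (k - 2)
               for i in range(grid + 1))
    return 1.0 / peak

print(uc_alpha_a(3))                 # 0.666..., i.e. 2/3
print(uc_alpha_a(100), exp(1)/100)   # approaches the large-k estimate e/k
\end{verbatim}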
We draw a ``dynamic phase diagram'' by representing in the $(t,\a)$ plane the lines separating the unclustered, clustered, unsat and contradiction phases, which we call $\a_d(t),\a_s(t),\a_q(t)$; these are just the inverses of the times defined above. Examples in the case of the UC and GUC heuristics are given in figure~\ref{dia_fase}. From the general properties of the function $V(b;t,\a)$ we can deduce a number of properties of the lines $\a_d(t),\a_s(t),\a_q(t)$. We will show that the three lines intersect at a ``critical point'' $(t_a,\a_a)$, located at $\a_a\leq\a_d$, under quite general conditions. This implies that the algorithm stops working at the value $\a_a\leq \a_d$, which is our central result: {\it Poissonian search algorithms cannot find a solution in polynomial time in the clustered region}. \subsection{Equations for the transition lines} \begin{figure} \includegraphics[width=8cm]{dia_fase.eps} \includegraphics[width=7.5cm]{linee.eps} \caption{{\bf (Left)} Phase boundary lines in the $(t,\a)$ plane for the UC and GUC heuristics for $k=3$. The three lines meet at the critical point $(t_a,\a_a)$ at which the algorithm is no longer able to find a solution (black dot). {\bf (Right)} The generic shape of the clustering and of the sat/unsat lines. The possibility of a maximum cannot be excluded, but in any case $t$ must be a single-valued function of $\a$, meaning that if the algorithm enters the clustered (or unsat) phase it cannot escape at later times. } \label{dia_fase} \end{figure} The generating function $G(b;t,\a)$ satisfies an evolution equation which is easily derived from (\ref{eq_gen_cont}): \begin{eqnarray}\label{eq_gen_G} \dot G(b;t,\a) &=& \frac{1-b}{1-t} G'(b;t,\a) - F(b;t,\a) \ , \\ F(b;t,\a) &\equiv& \frac{2 c_2(t)}{1-t} b + \sum_{j=2}^k \r_j(t) b^j = \sum_{j=1}^k \r_j(t) b^j \ . \end{eqnarray} Taking the total derivative with respect to $t$ of the first condition ($V'=0$) in (\ref{dyna}) for $(\a_d,b_d)$, and using the second condition, $V''=0$, we have \begin{equation} \frac{\partial V'}{\partial \a} \frac{d \a_d}{dt} + \dot V' = 0 \ \ \ \ \Rightarrow \ \ \ \ \frac{d \a_d}{dt} = - \frac{\dot V'(b_d;t,\a_d)}{\frac{\partial V'}{\partial \a}(b_d;t,\a_d)} \ . \end{equation} Using the definition (\ref{potedef}) we have \begin{eqnarray} \dot V'(b;t,\a) &=& -\frac{1}{1-t} \left[ \dot G'(b;t,\a) + \frac{G'(b;t,\a)}{1-t} \right] \ , \\ \frac{\partial V'}{\partial \a} &=& -\frac1{1-t} \frac{\partial G'}{\partial \a} =-\frac1{1-t} \sum_{j\geq 2} j b^{j-1} \frac{\partial c_j(t,\a)}{\partial \a} \ . \end{eqnarray} Then \begin{equation} \frac{d \a_d}{dt} =- \left. \frac{ \dot G'(b;t,\a) + \frac{G'(b;t,\a)}{1-t} } { \partial_\a G'(b;t,\a) } \right|_{\a=\a_d(t),b=b_d(t)} \ . \end{equation} Using (\ref{eq_gen_G}) and differentiating with respect to $b$ we have \begin{equation} \dot G'(b;t,\a) + \frac{G'(b;t,\a)}{1-t} = \frac{1-b}{1-t} G''(b;t,\a) - F'(b;t,\a) \ . \end{equation} Now using $V''(b;t,\a) = - \frac{ G''(b;t,\a)}{1-t} + \frac1{1-b}$ and $V''(b_d,t)=0$ we have $\frac{ 1-b}{1-t} G''(b;t,\a) = 1$ for $b=b_d$ and finally we get \begin{equation}\label{dneg} \frac{d \a_d}{dt} = -\left. \frac{ 1 - F'(b;t,\a)} { \partial_\a G'(b;t,\a) }\right|_{\a=\a_d(t),b=b_d(t)} \ . \end{equation} A very similar reasoning leads to the following equation for the sat/unsat line: \begin{equation}\label{sneg} \frac{d \a_s}{dt} = - \left. \frac{b - F(b;t,\a)}{\partial_\a G(b;t,\a)} \right|_{\a=\a_s(t),b=b_s(t)} \ .
\end{equation} The equation for the contradiction line is easily derived from its definition $\widetilde c_2(t,\a) = \frac{c_2(t,\a)}{1-t} = \frac12$, which immediately gives \begin{equation}\label{qneg} \frac{d\a_q}{dt} = - \left. \frac{1 + 2 \dot c_2(t,\a)}{2 \partial_\a c_2(t,\a)} \right|_{\a=\a_q(t)} \ . \end{equation} \subsection{General properties of the transition lines} We wish to show that the transition lines $t_d(\a)$, $t_s(\a)$ and $t_q(\a)$ in the $(\a,t)$ plane are single-valued functions of $\a$, and that they meet at a point $(\a_a,t_a)$ where they have infinite slope and are therefore tangent to each other; the value $\a_a$ corresponds to a trajectory which is tangent to the critical surface $\Si_{crit}$. Our argument goes as follows: \begin{enumerate} \item We defined $\a_a$ as the value of $\a$ for which the probability of finding a solution for the chosen heuristic vanishes. Then the trajectory\footnote{Recall that we are here talking about average trajectories.} corresponding to any $\a > \a_a$ must cross the contradiction surface, while the trajectory corresponding to any $\a < \a_a$ must not cross it, so that the trajectory corresponding to $\a_a$ must be \emph{tangent} to the contradiction surface $\Si_q$. The latter trajectory is tangent to $\Sigma_q$ when $\widetilde c_2(t) = 1/2$, $\frac{d}{dt} \widetilde c_2(t) = 0$; the solution to these conditions gives $t_a$ and $\a_a$. \\ Moreover, $\widetilde c_2(t) = 1/2$ implies that $\r_1 = 1$, which then implies $\r_j = 0$ for all $j\geq 2$, as already discussed. Then we have \begin{equation}\label{contra1} \frac{d}{dt} \widetilde c_2(t) = \frac d {dt} \frac {2 c_2(t)}{1-t} = \frac{2 \dot c_2(t)}{1-t} + \frac{2 c_2(t)}{(1-t)^2} = 0 \ \ \ \Rightarrow \ \ \ \dot c_2(t) = -\frac{ c_2(t)}{1-t} = -\frac12 \ , \end{equation} which, together with the equations of motion (\ref{eq_gen_cont}) and $\r_2 = 0$ gives \begin{equation}\label{contra2} -\frac{ c_2(t)}{1-t} = \frac{d c_2(t)}{dt} = \frac{3 c_3(t) - 2 c_2(t)}{1-t} \ \ \ \Rightarrow \ \ \ \widetilde c_3(t) = \frac{c_3(t)}{1-t} = \frac13 \frac{ c_2(t)}{1-t} = \frac16 \ . \end{equation} Therefore the point where the trajectory for $\a=\a_a$ is tangent to the contradiction surface belongs to the critical surface $\Si_{crit}$. From equation~(\ref{qneg}) it is clear that since $\dot c_2 = -1/2$, the function $t_q(\a)$ has infinite slope at $(t_a,\a_a)$, as in figure~\ref{dia_fase}. \item Next we show that the numerators of the fractions appearing in $\dot \a_d(t)$ and $\dot \a_s(t)$ are strictly positive if $t < t_q(\a)$, i.e.\ before a contradiction is found. Using the definition (\ref{rhoj}) we can write: \begin{eqnarray}\label{Fprimapp} F(b;t,\alpha) &=& \sum_{j=1}^k \rho_j(t) b^j = b \left[ \left\langle p_1 \right\rangle + \sum_{j=2}^k b^{j-2} (b-1) \left\langle p_j\right\rangle \right] \ , \\ F'(b;t,\alpha) &=& \sum_{j=1}^k j \rho_j(t) b^{j-1} = \left\langle p_1 \right\rangle + \sum_{j=2}^k b^{j-2} \left[ 1 - j(1-b) \right] \left\langle p_j \right\rangle \, . \nonumber \end{eqnarray} The coefficients in front of $\left\langle p_j \right\rangle \geq 0$ in the sums above are always smaller than 1, independently of $j$, so that \begin{eqnarray}\label{bound} F(b;t,\alpha) &\leq& b \left[ \left\langle p_1 \right\rangle + \sum_{j=2}^k \left\langle p_j \right\rangle\right] \leq b \ , \\ F'(b;t,\alpha) &\leq& \left\langle p_1 \right\rangle + \sum_{j=2}^k \left\langle p_j \right\rangle \leq 1 \,.
\\ \nonumber \end{eqnarray} The functions $F(b;t,\a)$ and $F'(b;t,\a)$ are to be computed at $b=b_s(t,\a)$ or $b=b_d(t,\a)$ in equations~(\ref{dneg}) and (\ref{sneg}). Both $b_s$ and $b_d$ are strictly smaller than $1$ for all $(t,\a)$, as one can directly show from their definitions because $V'(b\rightarrow 1) \rightarrow \io$. Then the coefficients in the sums in (\ref{Fprimapp}) are strictly smaller than $1$, and the only solution to $F=b$ or $F' = 1$ is $\left\langle p_j \right\rangle = \d_{1j}$, which happens only on the contradiction line. \item The denominators in equations~(\ref{dneg}), (\ref{sneg}) are surely positive at $t=0$, as $G(b;0,\a) = \a b^k$ independently of the heuristic. If they remain positive at all times, then $\dot \a_d(t),\dot \a_s(t) \leq 0$ at all times, or equivalently $\frac{dt_d}{d\a},\frac{dt_s}{d\a} \leq 0$ at all $\a$, so that $t_d,t_s$ always increase on decreasing $\a$. \\ The other possibility is that the denominator in (\ref{dneg}) crosses zero and becomes negative, leading to a maximum in $t_d(\a)$, which will then decrease on decreasing $\a$. Possibly the denominators can vanish again, giving rise to a sequence of maxima and minima, see right panel of figure~\ref{dia_fase}. \\ What is important is that the numerator is always strictly positive, and as a consequence $t_d(\a)$ and $t_s(\a)$ are single-valued functions of $\a$. In fact, for $t_d(\a)$ or $t_s(\a)$ to be multiple-valued functions of $\a$, at some point their slope would have to become infinite, which is excluded by the analysis above. \item The statement above, that $t_d(\a)$ and $t_s(\a)$ are single-valued functions of $\a$, implies that {\it if a trajectory enters the clustered or unsat phase, it cannot exit from it}. This is enough to show that $\a_a \leq \a_d$; in fact, the trajectory for $\a=\a_a$ cannot start inside the clustered phase, as it would not be able to escape and reach the origin, which is required to find a solution. \item In general the function $\widetilde c_2(t)$ increases until it reaches a maximum and then decreases to 0. For $\a=\a_a$ the value at the maximum is $\widetilde c_2 = 1/2$. For $\a > \a_a$, the value at the maximum is $\widetilde c_2 > 1/2$, therefore the contradiction $\widetilde c_2 = 1/2$ is reached before the maximum, when $\widetilde c_2$ is still increasing. Then $\frac{d}{dt} \widetilde c_2 > 0$ at the contradiction point. Performing a simple computation similar to equations~(\ref{contra1}), (\ref{contra2}), one can show that the trajectories for $\a > \a_a$ meet the contradiction surface at $\widetilde c_3 > 1/6$. Notice then that, as is evident in figure~\ref{dia_fase_vero}, the trajectories corresponding to $\a > \a_a$ must enter first the clustered and then the unsat phases in order to reach the contradiction surface, therefore for $\a > \a_a$ one has $t_d(\a)<t_s(\a)<t_q(\a)$. On the contrary, the trajectories corresponding to $\a < \a_a$ must stay away from the clustering and sat/unsat surfaces, otherwise they could not exit and would eventually meet the contradiction surface: therefore for any $\a < \a_a$, $t_d(\a)$ and $t_s(\a)$ do not exist. For $\a \rightarrow \a_a^+$, as the surfaces $\Si_d,\Si_s,\Si_q$ are tangent at $\Si_{crit}$, one has $t_d(\a_a)=t_s(\a_a)=t_q(\a_a)=t_a$ and the three curves have infinite slope as all the numerators in equations~(\ref{dneg}), (\ref{sneg}), (\ref{qneg}) vanish on the contradiction surface.
This is indeed what is observed in figure~\ref{dia_fase} for the UC and GUC heuristics, and this argument confirms that this is the generic behavior for all the heuristics in the class considered here. \end{enumerate} This structure is particularly evident for UC, where \begin{equation}\label{G_UC} G^\mathrm{(UC)}(b;t,\a)= \a [1-(1-b)(1-t)]^k - \a t^{k-1} [ kb (1-t) + t ] \ . \end{equation} From (\ref{G_UC}) it is straightforward to check that $\partial_\a G(b;t,\a) > 0$, $\partial_\a G'(b;t,\a) > 0$, if $b > 0$. Then, as $F(b;t,\a) = \frac{2 b c_2(t)}{1-t}$ for UC, both $\dot \a_d(t)$ and $\dot \a_s(t)$ are proportional to $ \frac{2 c_2(t)}{1-t} -1$. This means that $\a_s$, $\a_d$ are decreasing functions of $t$ below the contradiction line. The conclusion is that for a generic Poissonian heuristic, the three lines cross at a critical point $(t_a,\a_a)$ which depends on the heuristic. Above $\a_a$ the heuristic will cross all the lines and find a contradiction. From the properties of the dynamical line, we have that generically $\a_a \leq \a_d$, that is, {\it no Poissonian search heuristic can find a solution in polynomial time above $\a_d$}, as stated at the beginning of this section. The natural question is then whether there exists a heuristic that saturates the bound, i.e.\ such that $\a_a = \a_d$. From the discussion above it is clear that this is possible only if $\dot\a_d(t) \equiv 0$, i.e.\ the dynamical line in the $(t,\a)$ plane is a straight vertical line, which is possible only if the numerator in (\ref{dneg}) is identically vanishing. \subsection{Optimality of GUC} It is quite easy to see that GUC is the heuristic that {\it locally} optimizes the numerator in (\ref{dneg}). Indeed, from the definition $F'(b;t,\a)=\sum_{j=1}^k j b^{j-1} \r_j$ and the bound $F'(1,t) \leq 1$, it is clear that $F'(b;t,\a)$ is maximized by maximizing $\r_j$ for the smallest possible $j$, i.e.\ by picking the shortest possible clauses, that is GUC. Unfortunately a general proof of the optimality of GUC for finite $k$ seems difficult, because one should prove that GUC globally optimizes the clustering line, and also control the denominator in (\ref{dneg}). In this section we will show that for $k\rightarrow\io$, GUC is optimal in the sense that $\dot\a_d \equiv 0$ and $\a_d=\a_a$ at leading order in $k$. From the definition $\gamma (t) =- \sum _{j=2} ^k c_j(t)$ and integrating over time the bound (\ref{bounddotg}), we have for GUC: \begin{equation} \alpha - \int _0 ^t \frac {dt'}{j^*(t')-1} \le -\gamma (t) \le \alpha - \int _0 ^t \frac {dt'}{j^*(t')} \ , \end{equation} or, equivalently, \begin{equation} \alpha - \sum _j\frac {t^*(j)-t^*(j+1)}{j-1} \le -\gamma (t) \le \alpha - \sum _j \frac {t^*(j)-t^*(j+1)}{j} \ , \end{equation} where the sums are limited to the values of $j$ that are reached during the search. In the large $k$ limit, provided the hypothesis \begin{equation} \label{hypo} t^*(j)-t^*(j+1) = \frac 1k + o (1/k) \end{equation} holds for most $j$, we obtain \begin{equation} \label{re} -\gamma (t) \simeq \alpha - \frac 1k \sum _{j^*(t)}^k \frac 1{j} \ . \end{equation} The hypothesis (\ref{hypo}) is well supported by numerical data, as shown in figure~\ref{fig_tstar}. As the sum of the inverses of the first $k$ integers is equivalent to $\log k$ (the harmonic number), we see that the minimal value of $j^*$ over $t$ is much larger than 2 if $\alpha$ is much smaller than $\log k/k$.
Therefore \begin{equation} \label{alpha_GUC} \alpha_a \ge \frac {\log k}k \ . \end{equation} The r.h.s. of the above inequality coincides with the asymptotic scaling of the clustering critical ratio (section~\ref{sec:32}). Since the results of the previous section require that $\a_a \leq \a_d$, we obtain that $\a_a^{\mathrm (GUC)} =\a_d \simeq \log k/k$ at leading order for $k\rightarrow\io$. As a comparison, it is easy to see that for UC the threshold for large $k$ is $\a_a^{\mathrm (UC)} \simeq e/k$, which is therefore much lower than the threshold for GUC. These arguments are supported by numerical simulations that we performed up to $k=2^{16}$, in which the equations of motion (\ref{eqmot}) are integrated as finite-difference equations for all values of $j$ (see figure~\ref{fig_tstar}). The numerical investigation confirms that $k \a_a^{\mathrm (GUC)}$ is very well fitted by $\log k + 2.15$ for $k$ in the range $2^8$ to $2^{16}$. Moreover, a finite size scaling analysis (with respect to $k$) of the data shown in figure~\ref{fig_tstar} shows that \begin{equation} k [t^*(j) - t^*(j+1)] = 1 + k^{-\nu} f(j/k) \end{equation} where $f(x)$ is a function independent of $k$ which behaves as $x^{-\mu}$ for $x$ close to 0. From the numerical data, it appears that $\nu = \mu = 0.5$, which confirms that the first correction to the leading term $\log k/k$ is of order $1/k$. \begin{figure} \includegraphics[width=0.5\textwidth]{finite_size_scaling.eps} \includegraphics[width=0.5\textwidth]{alpha_guc.eps} \\ \includegraphics[width=0.5\textwidth]{FSS_collapse.eps} \includegraphics[width=0.5\textwidth]{scaling_mu.eps} \caption{ Finite size scaling results for GUC at large $k$. \emph{Top~Left}~Each curve shows the values of $k[t^*(j)-t^*(j+1)]$ as a function of $j/k$ for $k=2^8,2^9,\dots,2^{16}$ (from the farthest to the closest curve to 1), and was obtained by integrating the equations of motion (\ref{eqmot}) by finite differences. For each $k$, the value of $\alpha$ used is $\alpha_a^\mathrm{GUC}(k)$, determined as the value of $\alpha$ for which the maximum reached by $2 c_2(t) / (1-t)$ is 1. \emph{Top~Right}~Data points of $\alpha_a^\mathrm{GUC}(k)$ versus $\log k / k + 2.15 / k$ (full red line). \emph{Bottom~left}~The same data as above, plotted as $\{k\times[t^*(j)-t^*(j+1)]\}\times k^{1/2}$. The curves ``collapse'', showing $f(x)$ and confirming the value of $\nu = 1/2$. \emph{Bottom~right}~By plotting the same curves on logarithmic scale it is easily seen that for $x$ close to 0 $f(x) \simeq x^{-\mu}$ with $\mu = 1/2$, corresponding to the slope of the full red line. } \label{fig_tstar} \end{figure} \section{Conclusions} \label{sec:conc} One of the main results of this paper, that is, that linear-time search heuristics are not able to solve instances in the clustered phase of UE-CSP problems, should be interpreted with care. In XORSAT-like models the clustering transition coincides with the emergence of strong correlations between variables in solutions, while the two phenomena generally define two distinct critical ratios for other random decision problems \cite{Se07,KZ07}. From an intuitive point of view it is expected that the performance of search heuristics is affected by correlations between variables rather than by the clustering of solutions.
Indeed, as the search algorithms investigated here do not allow for backtracking or corrections of wrongly assigned variables, very strong correlations between $O(N)$ variables (recall that the backbone includes $O(N)$ variables in the clustered phase) are likely to result in $e^{-O(N)}$ probabilities of success for the algorithm. Extending the present work to the random Satisfiability ($k$-SAT) problem would be interesting from this point of view, because even if the clustering and freezing transitions coincide at leading order for $k\rightarrow\io$~\cite{Krzakala07}, their finite-$k$ values are different in this case. Moreover, in some similar problems ($k$-COL~\cite{Achlioptas03} and 1-in-$k$-SAT~\cite{Raymond07}) it has been proven that search algorithms similar to the ones investigated here are efficient beyond the point where the replica-symmetry-breaking solution is stable. Therefore these algorithms might beat the clustering threshold in these problems. Note however that in these cases the transition is continuous, so that the structure of the clusters is expected to be very different from that of XORSAT. In addition, while the Generalized Unit Clause heuristic is here shown to be optimal for the $k$-XORSAT problem and to saturate the clustering ratio when $k\rightarrow\infty$, this is certainly not the case for the $k$-SAT problem. Determining a provably optimal search heuristic for this problem remains an open question. \vskip1cm
\section{Introduction} \label{sec:intro} The rise of Neural Radiance Fields (NeRF) techniques has heavily impacted the field of 3D scene modeling and reconstruction in recent years~\cite{mildenhall2020nerf, yu2021pixelnerf, huang2022stylizednerf, kaya2022neural, guo2022fast}. Efficient photo-realistic novel view generation from a fixed set of training images has been a popular area of research in computer vision with broad applications. The ability to distill the essence of a 3D object from 2D representations of it, together with the compactness of the model, is the main reason NeRF has become a high-impact approach in the literature. The original NeRF~\cite{mildenhall2020nerf} consists of a multi-layer perceptron, which implicitly learns the manifold representing the 3D object. Because of its great generalization for synthesizing novel viewpoints and the high compactness of the model itself, which typically occupies only a few MB, NeRF has become a prevalent approach for 3D reconstruction. However, the NeRF's MLP has to be queried millions of times to render a scene, leading to slow training and rendering times. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/teaser-min.png} \caption{NeRF models with explicit voxel grid representations can be effectively compressed with Re:NeRF.} \label{fig:teaser} \end{figure} In an effort to speed up the vanilla NeRF, follow-up works introduced modifications to the original NeRF architecture~\cite{fridovich2022plenoxels, chen2022tensorf, sun2022direct}. One of the popular approaches, for example, encodes features of a scene in an explicit 3D voxel grid, combined with a tiny MLP. This group of methods, which utilizes an ``explicit voxel grid'' (EVG), is gaining more and more popularity due to the high training and rendering speed, while maintaining or improving the performance of the original NeRF. Unlike traditional NeRF, EVG-NeRF models require much more memory, limiting their deployment in real-life applications, where models need to be shared through communication channels, or many of these models must be stored on memory-constrained devices. In this work, we propose Re:NeRF, a method that reduces the memory storage required by trained EVG-NeRF models. Its goal is to accurately separate the object from its background, discarding features unnecessary for rendering the scene, guided by the loss functions designed for training the specific EVG-NeRFs. Re:NeRF~enables the generation of highly compressed models with little or no performance loss: it is specifically designed for EVG-NeRFs as it exploits a spatial locality principle for adding voxels back to the grid, and in this sense its workflow resembles that of a sculptor (Fig.~\ref{fig:teaser}). We observe that Re:NeRF~enables high compression of pre-trained EVG-NeRF models, and that traditional general-purpose approaches, such as blind pruning, perform worse than Re:NeRF. We test Re:NeRF~on four datasets with three recent EVG-NeRFs, validating the effectiveness of the proposed approach. \section{Related works} \label{sec:sota} \begin{figure}[t!]
\centering \begin{subfigure}{1.0\columnwidth} \includegraphics[width=\textwidth]{figures/nerf_trad-min.png} \caption{~} \label{fig:tradNerf} \end{subfigure} \begin{subfigure}{1.0\columnwidth} \includegraphics[width=\textwidth]{figures/nerf_evg-min.png} \caption{~} \label{fig:evgNerf} \end{subfigure} \caption{Visualisation of the traditional NeRF approach, consisting of a multi-layer perceptron (a), and of NeRF-based approaches with explicit voxel grid representation (b). The latter \emph{can} also have a small MLP.} \label{fig:sota} \end{figure} Rendering photo-realistic novel views of a 3D scene from a set of calibrated 2D images of the given scene has been a popular area of research in computer vision and computer graphics. Inspired by Mildenhall~\emph{et~al.}'s work in 2020, which proposed to capture the radiance and density field of a 3D scene entirely using a multi-layer perceptron (MLP)~\cite{mildenhall2020nerf}, a large number of follow-up studies have adopted the implicit representation of a scene. Here follows an overview of 3D representation models, neural radiance fields, and follow-up works.\\ \textbf{3D representation for novel view synthesis.} Inferring novel views of a scene given a set of images is a long-standing challenge in the field of computer graphics. Various scene representation techniques for 3D reconstruction have been studied in past decades. Light field rendering~\cite{davis2012unstructured, levin2010linear, shi2014light} directly synthesizes unobserved viewpoints by interpolating between sampled rays, but it is slow to render and requires substantial computational resources. Meshes are another common technique that is easy to implement and allows rendering in real time~\cite{debevec1996modeling, thies2019deferred, waechter2014let}. However, this technique struggles to capture fine geometry and topological information, and its rendering quality is limited by the mesh resolution. Differentiable methods have been recently proposed to perform scene reconstruction~\cite{flynn2019deepview, li2020crowdsampling, srinivasan2019pushing}. They use a differentiable ray-marching operation to encode and decode a latent representation of a scene and achieve excellent rendering quality.\\ \textbf{Neural Radiance Fields.} Unlike traditional explicit volumetric representation techniques, NeRF~\cite{mildenhall2020nerf} has stood out in recent years as the most prevalent method for novel view rendering, inferring photo-realistic views given a moderate number of input images. It encodes the entire content of the scene, including view-dependent color emission and density, into a single multi-layer perceptron (Fig.~\ref{fig:tradNerf}) and achieves state-of-the-art quality. Besides, Neural Radiance Field-based approaches are proving in practice to generalize well under several transformations, like changing environmental light~\cite{boss2021nerd, srinivasan2021nerv} and image deformation~\cite{gafni2021dynamic, noguchi2021neural, tretschk2021non}, and are even usable in more challenging setups, including meta-learning~\cite{tancik2021learned}, dynamically-changing scenes~\cite{gao2021dynamic, li2021neural, martin2021nerf, xian2021space} and generative contexts~\cite{chan2021pi, kosiorek2021nerf, schwarz2020graf}.
Compared to explicit representations, NeRF requires very little storage space, but it suffers from lengthy training and very slow rendering, as the MLP is queried an extremely high number of times to render a single image.\\ \textbf{NeRF with explicit voxel grids.} To reduce inference and training time, an explicit prior on the 3D object representation can be imposed. The most intuitive yet effective approach relies on splitting the 3D volume into small blocks, each of which is learned by a tiny NeRF model. In KiloNeRF~\cite{reiser2021kilonerf}, the advantage of doing so is twofold: first, each single NeRF model is much smaller than the original one, reducing latency; second, the rendering process itself becomes parallelizable, as multiple pixels can be rendered simultaneously. The downside of this approach is that the granularity of the KiloNeRFs needs to be properly tuned, and the distillation of the single tinier NeRFs can be quite an expensive process. An interesting approach that leverages radiance fields with no explicit neural component is Plenoxels~\cite{fridovich2022plenoxels}. In this case, a sparse feature grid is encoded with 3D spherical harmonics (it belongs to the EVG approaches without the MLP component in Fig.~\ref{fig:evgNerf}). Hence, both training and inference times are drastically improved, however at the cost of a significant increase in memory storage for the learned model, despite its sparse representation. Showing similar convergence time but maintaining an MLP component for complex view-dependent appearances, DVGO~\cite{sun2022direct} proposes post-activation interpolation. Recently, in order to further improve the execution speed, TensoRF~\cite{chen2022tensorf} has been proposed, which decomposes the 4D scene tensor into low-rank components. In its lower-quality rendering setup, TensoRF delivers a model whose size is comparable to the original NeRF; with higher-quality rendering, however, the memory discrepancy with the vanilla NeRF model is still quite wide.\\ \textbf{Compressing EVG-NeRF.} Whilst dense voxel-based representations increase rendering speed drastically, they require an order of magnitude more memory than implicit volumetric representations to achieve comparable rendering quality. Hierarchical structure representations using octrees allow the 3D scene to be encoded in a sparse manner, but the memory occupancy still remains high. Recent work addressed the problem of training a model with neural sparse voxel fields~\cite{liu2020neural}, progressively reducing the granularity of voxels and skipping the rendering of empty voxels. This approach, however, is designed for resource reallocation: while it improves the rendering speed, it still suffers from a long training time. To the best of our knowledge, Re:NeRF~is the first approach focusing on compression specifically for EVG-NeRFs. While other works leverage the knowledge of the sparsity of the 3D scene~\cite{liu2020neural, fridovich2022plenoxels, sun2022direct}, they are focused on performance enhancement (fighting the artifacts which might appear in empty space) and are not specific to compression. Our method is agnostic to the particular EVG-NeRF architecture, and its goal is to preserve performance while reducing the model's size.
\section{Re:NeRF} \label{sec:method} In this section, we present Re:NeRF, our approach towards storage memory reduction for EVG-NeRFs. To reduce the model size, we iteratively remove the parameters ranked least important. Following each round of pruning, we design a strategy that adds back neighbor voxels to avoid a drop in performance. \subsection{Which parameters are important?} \label{sec:importance} One of the key characteristics making EVG-NeRF an effective approach is the possibility of end-to-end training: given some target loss function $\mathcal{L}$ evaluated on the rendered image, it is possible to train all the parameters $\boldsymbol{w}$ of the model using back-propagation. This learning approach is shared with any standard deep neural network, which allows us to build on top of existing techniques with the same set of optimizers (such as SGD and Adam). Methods based on mini-batches of samples have gained popularity, as they allow better generalization than stochastic learning while being memory and time efficient. They also benefit from libraries that exploit parallel computation on GPUs. In such a framework, a network parameter $w_i$ is updated along the direction which minimizes the averaged loss for the mini-batch. Evidently, if the gradient's magnitude is zero, the parameter is not updated, meaning that the local loss landscape for it is \emph{flat}. A typical approach to reduce the number of parameters in a deep neural network is to \emph{threshold} the parameters according to some hyper-parameter that determines the amount to be removed~\cite{tartaglione2022loss, Frankle2019TheLT}: \begin{equation} \label{eq:magprune} w_i=\left\{ \begin{array}{ll} w_i & \text{if } |w_i| > \mathcal{Q}_{|\boldsymbol{w}|}(\gamma)\\ 0 & \text{otherwise,} \end{array} \right . \end{equation} where $\mathcal{Q}_{|\boldsymbol{w}|}(\cdot)$ is the quantile function for the magnitudes of the parameters and $\gamma \in [0; 1]$ is the fraction of parameters to be removed. Despite its simplicity and broad application, this approach has a potential issue: parameters having very low magnitude can still be important for the model. For example, a parameter can have a very low magnitude but a high gradient: hard-setting it to zero according to \eqref{eq:magprune} can significantly affect the loss value, and hence the performance. Because of this, other works have suggested evaluating the importance of a parameter using its gradient as a criterion~\cite{lecun1989optimal, tartaglione2021serene}. However, a parameter $w_i$ can also have a low gradient locally, while removing it may impose a drastic change in both the loss value and its gradient. It is necessary, hence, to find a compromise between these two conditions. We can estimate the variation of the loss value using a Taylor series expansion truncated to the first order: \begin{equation} \label{eq:taylor} \Delta \mathcal{L}(w_i) \approx \frac{\partial \mathcal{L}}{\partial w_i} w_i, \end{equation} and from \eqref{eq:taylor} we can define how to remove the parameters according to \begin{equation} \label{eq:taylorprune} w_i=\left\{ \begin{array}{ll} w_i & \text{if } |\Delta \mathcal{L}(w_i)| > \mathcal{Q}_{|\Delta \mathcal{L}(w)|}(\gamma)\\ 0 & \text{otherwise.} \end{array} \right . \end{equation} It is a known fact, however, that both gradient and weight magnitudes change depending on the type of layer taken into consideration~\cite{lee2018snip}; a minimal sketch of the removal criteria above is given below.
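To make these criteria concrete, the following sketch shows how the magnitude score of \eqref{eq:magprune} and the first-order score of \eqref{eq:taylorprune} can be computed and thresholded with a quantile in a PyTorch-like setting. It is an illustrative sketch only: the function name \texttt{remove\_by\_importance} and the \texttt{use\_taylor} flag are our own naming, not the actual Re:NeRF implementation.
\begin{verbatim}
import torch

def remove_by_importance(model, loss, gamma=0.5, use_taylor=True):
    # Zero out the fraction `gamma` of parameters with the lowest
    # importance score: |dL/dw * w| (first-order Taylor score) or
    # |w| (magnitude score). Assumes gradients start zeroed.
    loss.backward()

    def score(w):
        return (w.grad * w).abs() if use_taylor else w.abs()

    all_scores = torch.cat([score(w).flatten()
                            for w in model.parameters()])
    threshold = torch.quantile(all_scores, gamma)
    with torch.no_grad():
        for w in model.parameters():
            w[score(w) <= threshold] = 0.0  # hard-set to zero
\end{verbatim}
In Re:NeRF, the scores are additionally normalized per layer before the global quantile is taken, as detailed next.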
Indeed, to obtain a parameter-removing strategy that can be applied globally (i.e., removing a given fraction of the parameters of the whole model, without imposing uniformity in the removal across layers), the quantile function should be evaluated on the layer-normalized quantity \begin{equation} \label{eq:taylornorm} \Delta \hat{\mathcal{L}}(w_i) = \frac{\frac{\partial \mathcal{L}}{\partial w_i} w_i}{\max_{j}\left| \frac{\partial \mathcal{L}}{\partial w_j} w_j \right|}, \quad w_j \text{ in the same layer as } w_i. \end{equation} Consequently, \eqref{eq:taylorprune} becomes \begin{equation} \label{eq:taylorprunenormalized} w_i=\left\{ \begin{array}{ll} w_i & \text{if } |\Delta \hat{\mathcal{L}}(w_i)| > \mathcal{Q}_{|\Delta \hat{\mathcal{L}}(w)|}(\gamma)\\ 0 & \text{otherwise.} \end{array} \right . \end{equation} This strategy, however, evaluates the loss variation for each parameter $w_i$ independently, as in \eqref{eq:taylor}, which is known to be sub-optimal, since there are dependencies between the parameters inside the model. How can we correct a potentially ``excessive'' removal of parameters? \subsection{Removing only?} Removing parameters from a model is always a delicate matter: if the parameters are removed too quickly, at some point the performance can no longer be recovered; on the contrary, if they are removed too slowly, the training complexity becomes large. Furthermore, the strategy identifying which parameters can be removed from the model is, for a matter of efficiency, limited to the first-order approximation in~\eqref{eq:taylor}, making the parameter removal mechanism potentially prone to approximation errors. How can we identify which of the removed parameters should be added back in order not to degrade the performance excessively?\\ Let us consider the subset of parameters $\overline{\mathcal{W}}$ which have been removed. Since these parameters are now zero, according to \eqref{eq:taylor} we have $\Delta \mathcal{L}(w_i) = 0 \; \forall w_i \in \overline{\mathcal{W}}$, meaning that this metric cannot be used to eventually re-include parameters in the model. In order to determine whether the re-inclusion of a previously removed parameter will enhance the performance (or, in other words, contribute to the minimization of the evaluated loss function), we can, for instance, look at the value of its gradient. If the gradient is above a given threshold, the parameter is added back. A simple threshold can be defined by the distribution of the gradient magnitudes of the remaining parameters $\mathcal{W}$: \begin{equation} \label{eq:reinc} \left|\frac{\partial \mathcal{L}}{\partial w_i}\right| \geq \mathcal{Q}_{\left|\frac{\partial \mathcal{L}}{\partial w}\right|, w\in \mathcal{W}}(\delta) \Rightarrow w_i \in \mathcal{W} , \end{equation} where $\delta\in [0;1]$ determines the relative threshold for the re-inclusion. \begin{figure}[t] \centering \begin{subfigure}{.45\columnwidth} \includegraphics[width=\textwidth]{figures/before_rnf-min.png} \caption{~} \label{fig:beforeRE} \end{subfigure} \hfill \begin{subfigure}{.45\columnwidth} \includegraphics[width=\textwidth]{figures/after_rnf-min.png} \caption{~} \label{fig:afterRE} \end{subfigure} \caption{Effect of RE-INCLUDE before (a) and after (b) one iteration. In red: voxels already in the model; in green: non-neighbor voxels satisfying the re-inclusion rule; in blue: neighbor voxels satisfying the re-inclusion rule.
} \label{fig:neighb} \end{figure} Although \eqref{eq:reinc} is a general rule, potentially applicable to all the layers of an EVG-NeRF, we can leverage the voxel grid structure, imposing a prior on the 3D manifold representation of the object itself: we expect it to be \emph{compact} and \emph{as little sparse as possible}. Towards this end, we add, as an additional constraint to \eqref{eq:reinc}, that a parameter $w_i \in \overline{\mathcal{W}}$, in order to be re-included, must also be connected to the model, i.e., it must be a \emph{neighbor} of some $w_j \in \mathcal{W}$. Hence, the re-inclusion rule becomes \begin{equation} \label{eq:reinc-final} \left. \begin{array}{l} \left|\frac{\partial \mathcal{L}}{\partial w_i}\right| \geq \mathcal{Q}_{\left|\frac{\partial \mathcal{L}}{\partial w}\right|, w\in \mathcal{W}}(\delta) \\[2pt] \wedge\; \exists w_j \in \mathcal{W} \,|\, w_j \in \Omega(w_i) \end{array} \right\} \Rightarrow w_i \in \mathcal{W}, \end{equation} where $\Omega(w_i)$ is the subset of parameters that are neighbors of $w_i$. Fig.~\ref{fig:beforeRE} displays a practical case where there are some voxels not included (white space), voxels in the model (red), removed voxels which satisfy \eqref{eq:reinc-final} (blue), and voxels which satisfy the condition on the gradient but are not neighbors of any voxel in the model (green). After one re-inclusion iteration, the blue voxels are included, and some green voxels (the neighbors of the blue ones) become the new candidates for re-inclusion (Fig.~\ref{fig:afterRE}). In order to find the whole subset of voxels to be added back, it is necessary to iterate the re-inclusion mechanism until there are no blue voxels left to add. An overview of Re:NeRF follows. \subsection{Overview on the Re:NeRF~scheme} \begin{algorithm}[ht] \caption{Re:NeRF.} \label{alg:ourmethod} \begin{algorithmic}[1] \Procedure{Re:NeRF ($\mathcal{W}_{beg}$, $\gamma$, $\delta$)}{} \State $T_{rem} \gets \mathcal{Q}_{|\Delta\hat{\mathcal{L}}(w)|, w\in \mathcal{W}_{beg}}(\gamma)$ \State $\mathcal{W}, \overline{\mathcal{W}} \gets \text{REMOVE}(\mathcal{W}_{beg}, T_{rem})$\label{line:rem} \State $T_{inc} \gets \mathcal{Q}_{\left|\frac{\partial \mathcal{L}}{\partial w}\right|, w\in \mathcal{W}}(\delta)$ \State $\mathcal{W}_{end} \gets \text{RE-INCLUDE}(\mathcal{W}, \overline{\mathcal{W}}, T_{inc})$\label{line:add} \State \Return $\mathcal{W}_{end}$ \EndProcedure \Procedure{REMOVE($\mathcal{W}_{beg}$, $T_{rem}$)}{} \State $\mathcal{W} \gets \emptyset$ \State $\overline{\mathcal{W}}\gets \emptyset$ \For{$w_i \in \mathcal{W}_{beg}$} \If{$\left|\Delta\hat{\mathcal{L}}(w_i) \right|\geq T_{rem}$} \State $\mathcal{W} \gets \mathcal{W} \cup \{w_i\}$ \Else \State $\overline{\mathcal{W}} \gets \overline{\mathcal{W}} \cup \{w_i\}$ \EndIf \EndFor \State \Return $\mathcal{W}, \overline{\mathcal{W}}$ \EndProcedure \Procedure{RE-INCLUDE($\mathcal{W}$, $\overline{\mathcal{W}}$, $T_{inc}$)}{} \State $one\_added \gets True$ \While{$one\_added$}\label{line:oneadd} \State $one\_added \gets False$ \For{$w_i \in \overline{\mathcal{W}}$} \If{$\left|\frac{\partial \mathcal{L}}{\partial w_i}\right|\geq T_{inc}$} \State $\Omega \gets \text{NEIGHBORS}(w_i)$ \If{$\Omega \cap \mathcal{W} \neq \emptyset$}\label{line:neightest} \State $\mathcal{W} \gets \mathcal{W} \cup \{w_i\}$ \State $one\_added \gets True$ \EndIf \EndIf \EndFor \EndWhile \State \Return $\mathcal{W}$ \EndProcedure \end{algorithmic} \end{algorithm}
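Algorithm~\ref{alg:ourmethod} details the two phases just described, but leaves the NEIGHBORS($w_i$) test abstract: on a voxel grid, it amounts to inspecting the cells adjacent to $w_i$. The sketch below illustrates the iterative RE-INCLUDE phase on a 3D boolean occupancy mask; the 6-connectivity choice and all names are our own illustrative assumptions, not necessarily the authors' implementation.
\begin{verbatim}
import numpy as np

FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
         (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def touches_model(idx, in_model):
    # True if voxel `idx` (a 3-tuple) shares a face with a voxel
    # of the current model mask `in_model` (3D boolean np.ndarray).
    for d in FACES:
        n = tuple(i + o for i, o in zip(idx, d))
        if all(0 <= c < s for c, s in zip(n, in_model.shape)):
            if in_model[n]:
                return True
    return False

def re_include(in_model, grad_mag, removed, t_inc):
    # Iterate the re-inclusion rule: a removed voxel comes back if
    # its gradient magnitude exceeds t_inc AND it touches the model;
    # repeat until a full pass over `removed` adds nothing.
    one_added = True
    while one_added:
        one_added = False
        for idx in list(removed):
            if grad_mag[idx] >= t_inc and \
                    touches_model(idx, in_model):
                in_model[idx] = True
                removed.discard(idx)
                one_added = True
    return in_model
\end{verbatim}
As in Algorithm~\ref{alg:ourmethod}, the outer loop re-scans the removed set after every successful pass, since each re-included voxel can turn its green neighbors into new blue candidates.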
\begin{figure} \includegraphics[width=\columnwidth]{figures/overview-min.png} \caption{Overview on Re:NeRF. The dashed arrows indicate the usage of a specific dataset/hyper-parameter at each stage.} \label{fig:overview} \end{figure} Fig.~\ref{fig:overview} displays an overview of Re:NeRF. Given a pre-trained model, we perform a one-epoch fine-tuning with the same policy used for the original NeRF model, and then move to the parameter removal/re-inclusion step to determine the subset $\mathcal{W}$ of parameters belonging to the model. Every step of parameter removal follows Algorithm~\ref{alg:ourmethod}. In particular, the procedure takes as input the current subset of model parameters $\mathcal{W}_{beg}$ and two hyper-parameters $\gamma\in [0;1]$ and $\delta\in [0;1]$: while $\gamma$ determines how many parameters are (tentatively) removed at every step, $\delta$ determines how many parameters are (eventually) added back. Hence, we distinguish two phases in Re:NeRF: one (REMOVE) splits the model parameters into those dropping below and those staying above a given threshold (line~\ref{line:rem}), and the other (RE-INCLUDE) re-includes the tentatively removed parameters that both have a high derivative and are neighbors of other parameters in the model, which favors a lower loss (line~\ref{line:add}). In particular, the latter might need to be run multiple times, as long as at least one parameter has been re-included in the previous pass (line~\ref{line:oneadd}): this is necessary because, every time a new parameter is added to $\mathcal{W}$, the neighbor test in line~\ref{line:neightest} can give a different outcome.\\ We iterate this procedure until the performance drops by more than some pre-fixed threshold $\Delta T$ with respect to the original performance: when this happens, we end the training process. To save the model, the state dictionary is first quantized to 8 bits with a uniform quantizer and then compressed using LZMA; a minimal sketch of this saving step is given below.
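The sketch below shows one way to realize this saving step in PyTorch; the per-tensor quantization range, the \texttt{pickle} container, and the name \texttt{save\_compressed} are our own assumptions rather than the authors' exact implementation.
\begin{verbatim}
import lzma, pickle
import torch

def save_compressed(model, path):
    packed = {}
    for name, t in model.state_dict().items():
        t = t.detach().cpu().float()
        lo, hi = t.min().item(), t.max().item()
        scale = (hi - lo) / 255.0 if hi > lo else 1.0
        # 8-bit uniform quantization; (lo, scale) is kept so that
        # each value can later be recovered as lo + scale * q.
        q = ((t - lo) / scale).round().clamp(0, 255).to(torch.uint8)
        packed[name] = (q.numpy().tobytes(), tuple(t.shape),
                        lo, scale)
    with lzma.open(path, "wb") as f:  # LZMA-compressed container
        pickle.dump(packed, f)
\end{verbatim}
Quantization maps every surviving parameter to one byte; LZMA then exploits the regularity of the sparsified grids.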
In the next section, we present the results obtained on some common benchmarks for NeRFs. \section{Results} \label{sec:results} In this section, we present the empirical results obtained on state-of-the-art datasets and three different EVG-NeRF approaches, on top of which Re:NeRF~has been executed in order to reduce the storage memory. For all the experiments, the models have been pre-trained using the hyper-parameter setup indicated in the respective original work. As a common stop criterion, we impose a maximum worsening in performance $\Delta T$ of $1$dB on the original model's PSNR. All the other hyper-parameters have been optimized using a grid-search algorithm. Although every technique requires a specific CUDA and PyTorch version, the Re:NeRF~code is compatible with PyTorch~1.12 and back-compatible with PyTorch~1.6. For all the experiments, an NVIDIA A40 equipped with 40~GB has been used.\footnote{The source code will be made available by the conference dates.} \begin{table*} \caption{Results obtained in the low compressibility regime (LOW) and in the high compressibility regime (HIGH). For each approach, the first line indicates the baseline. For every dataset, the metrics are averaged over its samples.} \label{tab:results} \renewcommand{\arraystretch}{1.2} \resizebox{\textwidth}{!}{ \centering \begin{tabular}{@{\hskip1pt}c c c c c c c c c c c c c c @{\hskip1pt}} \toprule \multirow{3}{*}{\bf \large Approach} & \multirow{3}{*}{\bf \large Compress} &\multicolumn{3}{c}{\bf \large Synthetic-NeRF}&\multicolumn{3}{c}{\bf \large Synthetic-NSVF} &\multicolumn{3}{c}{\bf \large Tanks\&Temples} &\multicolumn{3}{c}{\bf \large LLFF-NeRF}\\ & &\bf PSNR & \bf SSIM & \bf Size &\bf PSNR & \bf SSIM & \bf Size&\bf PSNR & \bf SSIM & \bf Size &\bf PSNR & \bf SSIM & \bf Size\\ & & [dB]($\uparrow$) &($\uparrow$)& [MB]($\downarrow$)& [dB]($\uparrow$) &($\uparrow$)& [MB]($\downarrow$)& [dB]($\uparrow$) &($\uparrow$)& [MB]($\downarrow$)& [dB]($\uparrow$) &($\uparrow$)& [MB]($\downarrow$) \\ \midrule NSVF~\cite{liu2020neural} & - & 31.74 & 0.953& $\sim$ 16 & 35.13& 0.979 & $\sim$16&28.40 &0.900 & $\sim$16 & - & - & - \\ Instant-NGP~\cite{mueller2022instant} & - & 33.04 & 0.934& 28.64& 36.11& 0.966 & 46.09&28.81 &0.917 & 46.09& 20.18 & 0.662 & 46.09 \\ \midrule & - &31.92 &0.957&160.09 & 35.42 &0.979&104.12& 28.26 &0.909&106.48 & - & - & -\\ DVGO~\cite{sun2022direct} & LOW & 31.47& 0.952& 3.99& 35.29&0.974& 4.37& 28.22 &0.910& 4.69& - & - & -\\ & HIGH & 31.08&0.944 & 2.00& 34.90&0.969& 2.46& 27.90 &0.894& 1.62& - & - & -\\ \midrule & - &33.14 &0.963&69.26 &36.52 & 0.982& 69.05 & 28.56&0.920 &64.04& 26.73 & 0.839&151.79 \\ TensoRF~\cite{chen2022tensorf} & LOW & 33.26&0.962 &11.47 & 36.44 &0.982&11.60 & 28.50 & 0.916& 9.99& 26.80 & 0.820 & 32.34\\ & HIGH & 32.81&0.956&7.94& 36.14&0.978&8.52& 28.24 & 0.907 &6.70 & 26.55 & 0.797 & 20.27\\ \midrule & - & 31.48 & 0.956 & 189.08 & - & - & - & 27.37 & 0.904 & 147.96 & 25.90 & 0.838 & 1484.96\\ Plenoxels~\cite{fridovich2022plenoxels} & LOW & 31.52 & 0.952 & 91.77 & - & - & - & 27.66 & 0.909 & 102.26 & 26.24 & 0.838 & 457.23\\ & HIGH & 30.97 &
0.944 & 54.68 & - & - & - & 27.34 & 0.896 & 85.47& 25.95 & 0.828 & 338.02\\ \bottomrule \end{tabular} } \end{table*} \begin{figure*} \begin{subfigure}{0.24\textwidth} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth, trim={50 100 150 100},clip]{figures/lego/gt-min.png} \includegraphics[width=1.01\textwidth]{figures/mic/gt-min.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/lego/gt_a-min.png} \includegraphics[width=\textwidth]{figures/lego/gt_b-min.png} \includegraphics[width=\textwidth]{figures/mic/gt_a-min.png} \includegraphics[width=\textwidth]{figures/mic/gt_b-min.png} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{figures/TensoRF/llff/gt/fern-min.png} \end{subfigure} \caption{Ground truth.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth, trim={50 100 150 100},clip]{figures/lego/baseline-min.png} \includegraphics[width=1.01\textwidth]{figures/mic/baseline-min.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/lego/baseline_a-min.png} \includegraphics[width=\textwidth]{figures/lego/baseline_b-min.png} \includegraphics[width=\textwidth]{figures/mic/baseline_a-min.png} \includegraphics[width=\textwidth]{figures/mic/baseline_b-min.png} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{baseline-fern-min.png} \end{subfigure} \caption{TensoRF (baseline).} \end{subfigure} \begin{subfigure}{0.24\textwidth} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth, trim={50 100 150 100},clip]{figures/lego/low_lego-min.png} \includegraphics[width=1.01\textwidth]{figures/mic/low-min.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/lego/low_lego_a-min.png} \includegraphics[width=\textwidth]{figures/lego/low_lego_b-min.png} \includegraphics[width=\textwidth]{figures/mic/low_a-min.png} \includegraphics[width=\textwidth]{figures/mic/low_b-min.png} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{figures/TensoRF/llff/low/fern-min.png} \end{subfigure} \caption{TensoRF low compress.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\textwidth, trim={50 100 150 100},clip]{figures/lego/high_lego-min.png} \includegraphics[width=1.01\textwidth]{figures/mic/high-min.png} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/lego/high_lego_a-min.png} \includegraphics[width=\textwidth]{figures/lego/high_lego_b-min.png} \includegraphics[width=\textwidth]{figures/mic/high_a-min.png} \includegraphics[width=\textwidth]{figures/mic/high_b-min.png} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{figures/TensoRF/llff/high/fern-min.png} \end{subfigure} \caption{TensoRF high compress.} \end{subfigure} \caption{Qualitative results for ``lego'' (top), ``mic'' (middle) and ``fern'' (bottom).} \label{fig:highfreq} \end{figure*}
\subsection{Setup} \textbf{Datasets.} We have evaluated our approach on four datasets. Synthetic-NeRF~\cite{mildenhall2020nerf} and Synthetic-NSVF~\cite{liu2020neural} are two popular datasets, each containing 8 different realistic objects, synthesized for NeRF (\textit{chair, drums, ficus, hotdog, lego, materials, mic} and \textit{ship}) and NSVF (\textit{bike, lifestyle, palace, robot, spaceship, steamtrain, toad} and \textit{wineholder}), respectively.
For both, the image resolution has been set to $800 \times 800$ pixels, with 100 views for training, 100 for validation, and 100 for testing. The third dataset we have tested is Tanks\&Temples~\cite{knapitsch2017tanks}: our choice fell on this dataset as it is a collection of real-world images. Here we use a subset of the provided samples (namely: \textit{ignatius, truck, barn, caterpillar} and \textit{family}). We use FullHD resolution and, also in this case, $10\%$ of the images for validation and $10\%$ for testing. Finally, the fourth dataset on which we run our experiments is LLFF-NeRF~\cite{mildenhall2019local}. Differently from the other three, this dataset contains realistic images with non-blank backgrounds. Each scene consists of 20 to 60 forward-facing images with resolution $1008 \times 756$. In this case, we have used all the 8 available samples (\textit{fern, flower, fortress, horns, leaves, orchids, room} and \textit{trex}).\\ \textbf{Architectures and compressibility configuration.} We have tested Re:NeRF~on three very different EVG-NeRF approaches: DVGO~\cite{sun2022direct}, TensoRF~\cite{chen2022tensorf} and Plenoxels~\cite{fridovich2022plenoxels}.\footnote{Although Plenoxels is a method for learning radiance fields and does not have any ``neural network'', it still leverages the same optimization tools. We include it in our experimental setup to show the even broader adaptability of Re:NeRF~to any approach minimizing a differentiable loss function.} DVGO models are trained using the same configuration as in the original paper, with the $160^3$ voxel grid size configuration. TensoRF models were obtained with the default VM-192 configuration, which factorizes the tensors into 192 low-rank components and optimizes the model for 30k steps. Plenoxels models have been trained first on a $128^3$ grid, then up-sampled to $256^3$, and finally to $512^3$. For all the architectures and datasets we have used $\gamma=0.5$ and $\delta=0.5$, except for Plenoxels trained on the Synthetic-NeRF and LLFF-NeRF datasets, where $\gamma=0.66$ has been used. For comparison with other efficiency-oriented approaches, we also report the results of NSVF~\cite{liu2020neural} and Instant-NGP~\cite{mueller2022instant}. \subsection{Discussion} All the results are reported in Table~\ref{tab:results}. Here, ``LOW'' compressibility refers to the compression achieved at the best PSNR evaluated on the validation set, while ``HIGH'' refers to the model obtained right before reaching the stop criterion (a worsening of at most 1dB with respect to the original PSNR). Some qualitative results are also displayed in Fig.~\ref{fig:highfreq}. In general, we observe that Re:NeRF~effectively reduces the size of the models in all the combinations of tested datasets/EVG-NeRFs, with different impacts depending on the EVG-NeRF it is applied to. The approach having higher average sizes, while also having slightly worse performance, is Plenoxels~\cite{fridovich2022plenoxels}, which is an EVG approach with no neural elements in it. Nevertheless, Re:NeRF~is able to compress it effectively: in particular, in the low compressibility setup, the performance is improved alongside an overall size reduction. The compression of DVGO~\cite{sun2022direct}, consisting of a voxel grid and an MLP component, is massive, achieving for example compression ratios of $80\times$ for Synthetic-NeRF and $65\times$ for Tanks\&Temples within the 1dB performance loss.
The approach generally achieving the best performance is TensoRF~\cite{chen2022tensorf}, where the low compression setup maintains almost the same performance while still enabling $6\times$ compression. Compared to DVGO, TensoRF occupies less memory, as it relies on factorized neural radiance fields over the 4D voxel grid, i.e., it is by design more efficient at training time; conversely, in order to maintain such higher performance, the achievable compressibility of the model is relatively limited. When compared with the other approaches, we observe in general a significant improvement in performance for a similar model size (LLFF-NeRF), or a significantly lower memory footprint for similar performance (in the other three cases). \subsection{Ablation study} \begin{figure*}[t] \centering \begin{subfigure}{0.3\textwidth} \includegraphics[width=\textwidth, trim={120 30 60 60},clip]{figures/DVGO_occupacy-min.png} \caption{~} \label{fig:abloccbase} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\textwidth, trim={120 30 60 60},clip]{figures/DVGO-min.png} \caption{~} \label{fig:bbloccbase} \end{subfigure} \begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{figures/DVGO_hist-min.png} \caption{~} \label{fig:cbloccbase} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\textwidth, trim={120 30 60 60},clip]{figures/DVGO_pruned_occupacy-min.png} \caption{~} \label{fig:abloccmethod} \end{subfigure} \begin{subfigure}{0.3\textwidth} \includegraphics[width=\textwidth, trim={120 30 60 60},clip]{figures/DVGO_pruned-min.png} \caption{~} \label{fig:bbloccmethod} \end{subfigure} \begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{figures/DVGO_pruned_hist-min.png} \caption{~} \label{fig:cbloccmethod} \end{subfigure} \caption{Visualization of the \texttt{density.grid} layer for DVGO~\cite{sun2022direct} trained on ``Mic'' (Synthetic-NeRF). Top: baseline model; bottom: Re:NeRF~applied. Visualized are the non-empty voxels (a, d), their effective values (b, e), and the distribution of their values, in log scale (c, f). For visualization purposes, the values in (b) have been amplified by a factor of 10$\times$.} \label{fig:ablation} \end{figure*}
\begin{table*} \caption{Ablation study conducted on ``Mic'' from Synthetic-NeRF. The approach used here is DVGO~\cite{sun2022direct}. The first line is the reference baseline.} \label{tab:ablation} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{c c c c c c c c c} \toprule &&&&\multicolumn{2}{c}{\bf LOW compressibility}&\multicolumn{2}{c}{\bf HIGH compressibility}\\ \bf Layers & \bf Remove & \bf Re-include & \bf Quantization & \textbf{PSNR}[dB] & \textbf{Size}[MB]& \textbf{PSNR}[dB] & \textbf{Size}[MB]\\ \midrule \xmark &\xmark & \xmark & \xmark &33.15&67.69 &- &- \\ All & \cmark & \xmark & \xmark & 26.72&6.88 & 25.09 & 0.87\\ Voxels & \cmark & \xmark & \xmark & 33.19 & 7.02 & 27.67 & 1.08\\ Voxels & \cmark & \cmark & \xmark &33.26 & 7.12 & 29.54 & 1.24 \\ Voxels & \cmark & \cmark & \cmark & 33.10 & 1.48 & 29.41 & 0.34 \\ \bottomrule \end{tabular} \end{table*} In this section, we present the ablation study for Re:NeRF. In particular, we want to highlight the individual contributions of the proposed technique, emphasizing their effect. Towards this end, we have conducted experiments on ``Mic'' from the Synthetic-NeRF dataset, using DVGO~\cite{sun2022direct} as the EVG-NeRF approach. The summary of the ablation study is reported in Table~\ref{tab:ablation}. All the measures reported here are averaged over 3 different runs.\\ \textbf{Remove all the layers or a subset of them?} Considering the heterogeneity of the layers in the EVG-NeRF approaches, it is not obvious that removing parameters from all the layers is the best approach. Indeed, we observe that focusing on the layers with explicit voxel representation (indicated as ``Voxels'') leads to a PSNR similar to the baseline (33.19 dB) with a very high size reduction (from 67.69MB to 7.02MB).
Focusing on all the layers of the model, as would be done in a generic model pruning scheme~\cite{han2015learning, tartaglione2018learning, Frankle2019TheLT}, leads to a very high drop in performance (26.72dB, namely -6.42dB when compared to the baseline). This shows how important it is to focus on voxels and to design specific solutions, rather than relying on generic approaches.\\ \textbf{Re-including helps.} The proposed strategy needs a ``balancing'' for the voxel removal phase, which can be extreme. Towards this end, re-including some removed voxels slightly increases the size of the model, which however translates into performance recovery. In particular, by adding just 0.10MB we gain 0.07dB: notice that the baseline PSNR is even lower than the performance achieved with remove+re-include. This phenomenon is more evident in the high compressibility regime, where we gain approximately 2dB with just 0.16MB added.\\ \textbf{Effect of quantization.} In traditional NeRF models, quantization is a delicate process, requiring non-uniform, custom quantization strategies~\cite{shi2022distilled}. In our case, however, quantizing to 8 bits maintains high PSNR values (losing 0.16 dB without additional fine-tuning) while significantly reducing the size of the compressed model (from 7.12MB to 1.48MB). This is even more evident in the high compressibility result, where we move from 1.24MB to 0.34MB only. \subsection{A deeper view on Re:NeRF's effect} As a final analysis, we wish to inspect what happens in the voxel grid of a baseline model and of the same model with Re:NeRF~applied. Fig.~\ref{fig:ablation} visualizes the content of the \texttt{density.grid} layer for the baseline (top, in red) and for the compressed model (bottom, in blue). Looking at the spatial occupancy of the density grid, without Re:NeRF~we evidently have a much higher voxel occupation than necessary (Fig.~\ref{fig:abloccbase}), which Re:NeRF~trims down to the real object shape (Fig.~\ref{fig:abloccmethod}). Looking at the effective value of each voxel (here normalized and rendered as transparency), we can easily guess the structure of the object in the Re:NeRF~case (Fig.~\ref{fig:bbloccmethod}), while in the baseline case the density is so spread out in space that the object is almost impossible to distinguish (Fig.~\ref{fig:bbloccbase}). This has a clear effect on the distribution of the parameter values of the layer: while in the baseline case we observe very different behaviors for positive and negative values, making problems like compression and quantization harder (Fig.~\ref{fig:cbloccbase}), the distribution tends to be more symmetric when applying Re:NeRF~(Fig.~\ref{fig:cbloccmethod}). This is due both to the suppression of irrelevant parameters in the model and to the exclusive re-inclusion of parameters having as neighbors others already included. \section{Conclusion \& future works} \label{sec:conclusion} In this work we have presented Re:NeRF, an approach to compress NeRF models that utilize explicit voxel grid representations. The approach removes parameters from the model while at the same time ensuring that performance does not drop excessively. This is achieved through a re-inclusion mechanism, which allows previously removed parameters that are neighbors of the remaining ones to be re-included if they show a high loss gradient. Re:NeRF~is easily deployable to models with different architectures, training strategies, or objective functions.
For this reason, we have tested its effectiveness on three very different approaches: DVGO~\cite{sun2022direct}, where a part of the model learns the density and the other maps complex voxel dependencies with an MLP, TensoRF~\cite{chen2022tensorf}, which learns a 4D grid and performs low-rank decomposition on the radiance fields, and Plenoxels~\cite{fridovich2022plenoxels}, which optimizes the voxel grid directly with no MLP supporting the learning. These approaches have been tested on four popular datasets, two synthetic and two from real images.\\ In all the cases, Re:NeRF~is able to compress the models, with compression rates scaling up to $80\times$. Reducing the storage memory required by these models, which are designed mainly to improve training and inference time but sacrifice storage memory when compared to the original NeRF~\cite{mildenhall2020nerf}, further emphasizes the benefits of EVG-NeRFs and pushes towards their large-scale deployment in memory-constrained or bandwidth-limited applications. Interestingly, in the low compressibility setup, the performance is essentially unharmed while the model is effectively compressed. This opens the road towards model budget re-allocation (like efficient ensembling) for further performance enhancement under specific memory constraints. \section{Detailed results} Here follow the detailed tables for Synthetic-NeRF (Table~\ref{tab:syntheticnerf}), Synthetic-NSVF (Table~\ref{tab:syntheticnsvf}), Tanks \& Temples (Table~\ref{tab:tandt}), and the forward-facing LLFF-NeRF scenes (Table~\ref{tab:llff_nerf}). For these, we also include, as an output quality metric, the LPIPS score evaluated on the VGG backbone.\\ Besides, we also provide evaluations on the BlendedMVS dataset for DVGO (Table~\ref{tab:blendedmvf}), whose configuration follows. \subsection{Configuration for BlendedMVS} Although BlendedMVS is a synthetic dataset, differently from Synthetic-NeRF and Synthetic-NSVF it has more realistic ambient lighting, obtained from real image blending. In this case, following the same approach as \cite{sun2022direct}, we have used a subset of 4 objects (namely: \textit{jade, fountain, character} and \textit{statue}). We have used an image resolution of $768 \times 576$ pixels; $10\%$ of the images are used for validation and $10\%$ for testing. For Re:NeRF, we have used $\gamma=0.5$ and $\delta=0.5$, with $\Delta T=1$dB.
\begin{table*} \caption{Results for Synthetic-NeRF.} \label{tab:syntheticnerf} \renewcommand{\arraystretch}{1.2} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c c c c c c c c c c c c c c } \toprule &\multicolumn{10}{c}{\bf Synthetic-NeRF}\\ &Architecture & Pruning & Chair & Drums & Ficus & Hotdog & Lego & Materials & Mic & Ship &\bf Avg\\ \hline \multirow{9}{*}{PSNR(dB) ($\uparrow$)} & & - &34.11 & 25.48 & 32.59& 36.77 & 34.69 & 29.52 & 33.16 &29.04& \bf 31.92\\ &DVGO~\cite{sun2022direct} & LOW & 33.75 & 25.34 &32.36 &36.00 &34.30&29.26&33.16 &28.69& \bf 31.61\\ && HIGH & 33.45 & 24.98& 32.11 & 35.44 & 33.90 &28.38&32.77&28.60& \bf 31.20\\ \cline{2-12} & & - &33.98 & 25.35 & 31.83& 36.43 & 34.10 & 29.14 & 33.26 &27.78 &\bf 31.48\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW & 34.35 & 25.09 &31.69 &36.33 &34.40&28.73&33.92 &27.71& \bf 31.52\\ & & HIGH & 33.65 & 24.96& 31.21 & 35.44 & 33.91 & 28.15&33.14&27.31& \bf 30.97\\ \cline{2-12} & & - &35.76& 26.01& 33.99& 37.41& 36.46& 30.12 &34.61 &30.77& \bf 33.14 \\ &TensoRF~\cite{chen2022tensorf} & LOW &36.00&26.01&34.12&37.58&36.73&30.01&34.67&30.92&\bf33.26\\ && HIGH& 35.66&25.59&33.57&37.35&36.38&29.64&34.00&30.31&\bf32.81 \\ \hline \multirow{9}{*}{Size(MB)($\downarrow $)} & & - & 103.994 & 92.06 &108.71 &130.01&124.201& 171.36& 49.41 &100.98&\bf 110.09\\ &DVGO~\cite{sun2022direct} & LOW & 4.44 & 2.62 & 2.67 &5.19& 5.38&7.47&1.50 & 5.43 &\bf 4.33\\ && HIGH &2.53 & 1.21 & 1.82 &2.41&2.85 &3.06&0.89 & 2.85&\bf 2.20\\ \cline{2-12} & & - & 187.04 &160.83 &108.39 &290.64&291.65& 196.15& 80.83 &197.08&\bf 189.08\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW & 85.59&56.34&38.51&179.26&183.48&91.25&37.69&62.07&\bf 91.77\\ & & HIGH &47.26&42.34&28.8&96.81&97.78&66.4&21.07&36.94&\bf 54.68\\ \cline{2-12} & & - &65.46&65.62&67.87&77.61&65.75&80.06&64.45&67.12&\bf69.24\\ &TensoRF~\cite{chen2022tensorf} & LOW &12.78&8.47&13.30&14.77&12.82&15.65&7.67&8.77&\bf11.78\\ && HIGH& 8.51&6.11&8.88&9.47&8.41&10.31&5.58&6.26&\bf7.94 \\ \hline \multirow{9}{*}{SSIM($\uparrow$)} & & - &0.976 & 0.930 &0.977& 0.986& 0.976 & 0.950 & 0.983 &0.878& \bf 0.957 \\ &DVGO~\cite{sun2022direct} & LOW & 0.974 & 0.924 &0.975 &0.973&0.973 &0.943&0.982&0.871 & \bf 0.952 \\ & & HIGH & 0.971& 0.916& 0.962 & 0.969 & 0.967 &0.915&0.981&0.867& \bf 0.943 \\ \cline{2-12} & & - &0.977&0.933&0.976&0.98&0.975&0.949&0.985&0.869& \bf 0.956\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW &0.978&0.922&0.97&0.979&0.976&0.939&0.985&0.867&\bf0.952\\ && HIGH &0.971&0.915&0.97&0.972&0.972&0.923&0.977&0.854&\bf0.944\\ \cline{2-12} && - &0.985 &0.937& 0.982& 0.982& 0.983& 0.952& 0.988 &0.895&\bf0.963\\ &TensoRF~\cite{chen2022tensorf} & LOW &0.985&0.931&0.982&0.982&0.983&0.947&0.987&0.894&\bf0.962\\ & & HIGH& 0.983&0.917&0.979&0.980&0.981&0.939&0.984&0.882&\bf0.956 \\ \hline \multirow{9}{*}{LPIPS$_{VGG}$($\downarrow$)} && - &0.027 & 0.079 &0.025& 0.034& 0.027 & 0.059 & 0.018 &0.161& \bf 0.054\\ &DVGO~\cite{sun2022direct} & LOW & 0.035 & 0.090 &0.033 &0.060 &0.032&0.070&0.022 &0.167& \bf 0.064\\ & & HIGH & 0.038& 0.103& 0.037 & 0.067 & 0.039 &0.102&0.026&0.171& \bf 0.073\\ \cline{2-12} && - &0.031&0.067&0.026&0.037&0.028&0.057&0.015&0.178&\bf0.055\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW & 0.026&0.081&0.038&0.043&0.027&0.074&0.019&0.18&\bf0.061\\ && HIGH & 0.033&0.088&0.044&0.062&0.033&0.090&0.031&0.193&\bf0.072\\ \cline{2-12} & & - &0.022& 0.073 &0.022 &0.032 &0.018 &0.058& 0.015& 0.138& 0.047\\ &TensoRF~\cite{chen2022tensorf} & LOW 
&0.022&0.103&0.026&0.035&0.018&0.067&0.022&0.138&\bf0.054\\ & & HIGH& 0.028&0.157&0.042&0.045&0.022&0.081&0.042&0.159&\bf0.072 \\ \bottomrule \end{tabular} } \end{table*} \begin{table*} \caption{Results for Synthetic-NSVF.} \label{tab:syntheticnsvf} \renewcommand{\arraystretch}{1.2} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c c c c c c c c c c c c c c } \toprule &\multicolumn{11}{c}{\bf Synthetic-NSVF}\\ &Architecture & Pruning &Bike &Lifestyle &Palace &Robot&Spaceship & Steamtrain &Toad &Wineholder &\bf Avg\\ \hline \multirow{6}{*}{PSNR(dB)($\uparrow$)} & & - &38.13 & 33.64 & 34.32& 36.23 &37.56 & 36.47 & 33.02 &30.21& \bf 34.95 \\ & DVGO~\cite{sun2022direct} & LOW & 38.16 & 33.68& 34.47 &36.29 &37.26&36.10&32.98 &30.11& \bf 34.88\\ & & HIGH & 37.97 & 33.16& 33.88 & 36.00 & 36.82 &35.79&32.39&29.75& \bf 34.47\\ \cline{2-12} & & -&39.23&34.51&37.56&38.26&38.6&37.87&31.32&34.85&\bf36.53 \\ &TensoRF~\cite{chen2022tensorf}& LOW &39.38&34.68&37.92&38.72&38.58&38.06&34.85&31.77&\bf36.75\\ & & HIGH & 38.90&34.33&37.53&38.40&38.10&37.40&33.20&31.23& \bf36.14 \\ \hline \multirow{6}{*}{Size(MB)($\downarrow$)} & & - &104.10 & 97.12 & 105.00& 97.17 &128.31 &144.64 & 128.30 &97.71& \bf 112.79 \\ &DVGO~\cite{sun2022direct} & LOW & 3.55 &3.53& 4.84 &3.76&4.95&5.41&5.67 &3.30& \bf 4.38 \\ & & HIGH & 2.52 & 2.38& 2.63 & 2.55 &2.78 &2.77&1.87&1.65& \bf2.39 \\ \cline{2-12} & & - &70.92&65.46&64.94&68.63&68.01&80.02&68.71&65.74&\bf69.05 \\ &TensoRF~\cite{chen2022tensorf}& LOW &13.74&12.84&12.67&13.15&13.38&15.39&14.24&12.77&\bf13.52\\ & & HIGH & 9.11&8.47&8.36&8.80&8.83&7.12&9.12&8.35&\bf8.52 \\ \hline \multirow{6}{*}{SSIM($\uparrow$)} & & - &0.991 & 0.964 &0.992& 0.992 &0.987 &0.989 & 0.965 &0.949& \bf 0.979\\ &DVGO~\cite{sun2022direct} & LOW &0.991 & 0.963 &0.961&0.991&0.985&0.986 &0.966& 0.950&\bf 0.974\\ & & HIGH &0.990 & 0.958& 0.953 & 0.991 &0.982 &0.982&0.957&0.942& \bf0.969\\ \cline{2-12} & & - &0.993&0.968&0.979&0.994&0.989&0.991&0.961&0.978&\bf0.982\\ &TensoRF~\cite{chen2022tensorf}& LOW &0.993&0.968&0.980&0.995&0.988&0.991&0.978&0.963&\bf0.982\\ & & HIGH & 0.992&0.963&0.978&0.994&0.985&0.988&0.966&0.957&\bf0.978 \\ \hline \multirow{6}{*}{LPIPS$_{VGG}$($\downarrow$)} & & - &0.011 & 0.055 &0.045& 0.013&0.020 &0.019& 0.047 &0.059& \bf 0.034\\ &DVGO~\cite{sun2022direct} & LOW &0.015 & 0.056&0.043&0.013&0.024&0.027 &0.045& 0.055&\bf 0.035\\ & & HIGH &0.015 & 0.063& 0.050 & 0.013 &0.028 &0.036&0.054&0.067& \bf 0.041\\ \cline{2-12} & & - & 0.003&0.021&0.011&0.003&0.009&0.006&0.024&0.016&\bf0.012\\ &TensoRF~\cite{chen2022tensorf}& LOW &0.011&0.049&0.020&0.010&0.022&0.017&0.035&0.054&\bf0.027\\ & & HIGH &0.016&0.061&0.022&0.011&0.027&0.029&0.059&0.077&\bf0.038 \\ \bottomrule \end{tabular} } \end{table*} \begin{table*} \caption{Results for BlendedMVS} \label{tab:blendedmvf} \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{c c c c c c c c c c c c c c c c c c c } \toprule &\multicolumn{7}{c}{\bf BlendedMVS}\\ &Architecture & Pruning& Character & Fountain & Jade & Statue&\bf Avg\\ \hline \multirow{3}{*}{PSNR(dB)($\uparrow$)} && - &30.26 & 28.27 &27.75& 26.41 &\bf28.17 \\ &DVGO~\cite{sun2022direct} & LOW &30.05 & 28.21&27.48& 26.08 &\bf 27.86 \\ & & HIGH &29.78 &27.90&27.08& 25.97 &\bf 27.68\\ \hline \multirow{3}{*}{Size(MB)($\downarrow$)} & & - &131.73 & 72.33 &158.03& 97.08 & \bf114.79 \\ &DVGO~\cite{sun2022direct} & LOW &5.43 & 3.54&5.24& 2.81 &\bf 4.26 \\ & & HIGH &2.92 &1.70&1.85& 1.85 &\bf 2.08\\ \hline \multirow{3}{*}{SSIM($\uparrow$)} & & - & 0.963 & 0.923 & 0.916& 
0.887 &\bf 0.922 \\ &DVGO~\cite{sun2022direct} & LOW & 0.960 & 0.921& 0.909& 0.876 & \bf 0.917 \\ & & HIGH & 0.957 & 0.910& 0.887& 0.867 & \bf 0.908\\ \hline \multirow{3}{*}{LPIPS$_{VGG}$($\downarrow$)} && - & 0.046 & 0.116 & 0.106& 0.137 &\bf 0.101\\ &DVGO~\cite{sun2022direct} & LOW & 0.048 & 0.114& 0.107& 0.142 & \bf 0.103 \\ & & HIGH & 0.052 & 0.126& 0.129& 0.151 & \bf 0.115 \\ \bottomrule \end{tabular} \end{table*} \begin{table*} \caption{Results for Tanks \& Temples.} \label{tab:tandt} \small \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{c c c c c c c c c c c c c c c c c c c } \toprule &\multicolumn{8}{c}{\bf Tanks \& Temples}\\ &Architecture & Pruning &Barn&Caterpillar&Family& Ignatius & Truck & \bf Avg\\ \hline \multirow{6}{*}{PSNR(dB)($\uparrow$)} & & - &26.84 & 25.70 &33.68& 28.00 &27.09 & \bf 28.26\\ &DVGO~\cite{sun2022direct} & LOW &26.76 & 25.67&33.60& 28.06&27.04& \bf 28.23\\ & & HIGH &26.32 &25.22&33.36& 27.86&26.78 &\bf 27.91 \\ \cline{2-9} & & - &25.95&24.63&32.25&27.49&26.52&\bf27.37\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW & 26.58&24.78&32.86&27.22&26.87&\bf27.66\\ & & HIGH &26.31&24.40&32.29&27.00&26.69 &\bf27.34\\ \cline{2-9} & & - &28.34&27.14&27.22&26.19&33.92& \bf 28.56\\ &TensoRF & LOW &27.28&26.09&33.75&28.06& 27.32&\bf28.50\\ & & HIGH &26.99&25.77&33.36&27.86 &27.20 &\bf 28.40\\ \hline \multirow{6}{*}{Size(MB)($\downarrow$)} & & - & 128.21 & 109.94 &92.72&95.10&106.43 &\bf 106.48\\ &DVGO~\cite{sun2022direct} & LOW &5.52 & 5.23&3.85& 3.51&5.35& \bf4.69 \\ & & HIGH &1.75 &1.89&2.37& 1.08&1.87 &\bf 1.79 \\ \cline{2-9} & & - &282.85&133.43&103.64&115.81&104.08&\bf147.96\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW &213.45&87.08&73.16&93.94&43.65&\bf102.26\\ & & HIGH &181.09&66.67&60.53&84.06&35.01&\bf85.47\\ \cline{2-9} & & - &73.95&64.56&60.06&61.25&65.36&\bf65.04\\ &TensoRF & LOW &9.39&8.16&7.40&12.13& 12.87 & \bf9.99\\ & & HIGH &6.62&5.69&5.17&7.66& 8.34 & \bf6.70\\ \hline \multirow{6}{*}{SSIM($\uparrow$)} & & - & 0.836 & 0.904 & 0.961& 0.941 & 0.905 &\bf0.909 \\ &DVGO~\cite{sun2022direct} & LOW & 0.838 & 0.904& 0.962& 0.941& 0.904& \bf0.910 \\ & & HIGH & 0.826 & 0.859& 0.958& 0.931 & 0.895 &\bf0.894 \\ \cline{2-9} & & - &0.828&0.899&0.954&0.942&0.899&\bf0.904\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW &0.856&0.894&0.959&0.935&0.902&\bf0.909\\ & & HIGH &0.844&0.871&0.95&0.923&0.892&\bf0.896\\ \cline{2-9} & & - &0.948&0.914&0.864&0.912&0.965&\bf0.920\\ &TensoRF & LOW &0.862&0.901&0.961&0.941&0.913 & \bf0.916\\ & & HIGH &0.852&0.888&0.956&0.934& 0.907 &\bf0.907\\ \hline \multirow{6}{*}{LPIPS$_{VGG}$($\downarrow$)} & & - & 0.297 & 0.171 & 0.071& 0.089 & 0.162& \bf 0.158\\ &DVGO~\cite{sun2022direct} & LOW & 0.291 & 0.172& 0.079& 0.091& 0.161& \bf 0.159 \\ & & HIGH & 0.312 & 0.194& 0.073& 0.107 & 0.174 &\bf 0.172 \\ \cline{2-9} & & - &0.306&0.169&0.081&0.102&0.167&\bf0.165\\ &Plenoxels~\cite{fridovich2022plenoxels} & LOW &0.263&0.174&0.071&0.112&0.155&\bf0.155\\ & & HIGH &0.285&0.200&0.081&0.126&0.165&\bf0.171\\ \cline{2-9} & & - &0.078&0.145&0.252&0.159&0.064 &\bf0.140\\ &TensoRF & LOW &0.258&0.187&0.067&0.087& 0.149 & \bf0.149\\ & & HIGH &0.277&0.211&0.077&0.096& 0.169 & \bf0.166\\ \bottomrule \end{tabular} \end{table*} \begin{table*} \caption{Results for forward-facing scenes from NeRF} \label{tab:llff_nerf} \renewcommand{\arraystretch}{1.2} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c c c c c c c c c c c c c c } \toprule &\multicolumn{11}{c}{\bf Forward-facing-NeRF}\\ &Architecture & Pruning 
&Fern&Flower&Fortress&Horns&Leaves&Orchids&Room&Trex&\bf Avg\\ \hline \multirow{6}{*}{PSNR(dB)($\uparrow$)} & & - &24.57&27.64&30.17&27.10&21.56&20.54&29.20&26.41& \bf 25.90 \\ & Plenoxels~\cite{fridovich2022plenoxels}& LOW & 25.45&27.82&30.57&27.51&21.38 &20.37&30.33&26.50 & \bf 26.24\\ & & HIGH & 25.23&27.65&30.18&26.98&21.12&20.32&29.76&26.27& \bf 25.94\\ \cline{2-12} & & -&25.27&28.60&31.36&28.14&21.30&19.87&32.35&26.97&\bf26.73 \\ &TensoRF~\cite{chen2022tensorf}& LOW &24.50&28.64&31.30&28.87&21.22&19.31&33.33&27.26&\bf26.80\\ & & HIGH & 24.40&28.29&31.09&28.49&20.78&19.08&33.10&27.16&\bf26.55 \\ \hline \multirow{6}{*}{Size(MB)($\downarrow$)} & & - &1658.40&1471.78&1407.66&1726.17&1851.71&720.01&1421.66&1622.31&\bf1484.96 \\ & Plenoxels~\cite{fridovich2022plenoxels} & LOW & 407.02&726.00&383.61&432.01&438.90 &221.51&495.84&552.94& \bf 457.23\\ & & HIGH & 305.97&523.17&291.59&321.62&327.51&163.36&366.04&404.93& \bf338.02 \\ \cline{2-12} & & - &148.49&152.44&149.99&152.51&151.72&159.80&151.14&148.20&\bf151.79\\ &TensoRF~\cite{chen2022tensorf}& LOW &19.26&30.72&19.69&31.29&19.66&21.96&86.23&29.93&\bf32.34\\ & & HIGH & 12.97&19.44&13.36&20.17&13.38&15.29&48.47&19.09&\bf20.27\\ \hline \multirow{6}{*}{SSIM($\uparrow$)} & & - &0.830&0.863&0.884&0.857&0.763&0.681&0.937&0.890&\bf0.838\\ & Plenoxels~\cite{fridovich2022plenoxels} & LOW &0.831&0.862&0.880&0.859& 0.758&0.684&0.938&0.895&\bf 0.838\\ & & HIGH &0.821&0.858&0.873&0.840&0.734&0.681&0.927&0.889& \bf0.828\\ \cline{2-12} & & - &0.814&0.871&0.897&0.877&0.752&0.649&0.952&0.900&\bf0.839\\ &TensoRF~\cite{chen2022tensorf}& LOW &0.764&0.864&0.891&0.892&0.725&0.570&0.955&0.900&\bf0.820\\ & & HIGH & 0.744&0.842&0.880&0.873&0.677&0.520&0.950&0.889&\bf0.797 \\ \hline \multirow{6}{*}{LPIPS$_{VGG}$($\downarrow$)} & & - &0.225&0.177&0.180&0.230&0.194&0.271&0.194&0.237&\bf0.213\\ & Plenoxels~\cite{fridovich2022plenoxels}& LOW &0.225&0.177&0.183&0.228&0.197 &0.264&0.199&0.234&\bf 0.213\\ & & HIGH &0.241&0.178&0.188&0.254& 0.226&0.265&0.234&0.250& \bf 0.229\\ \cline{2-12} & & - & 0.237&0.169&0.148&0.196&0.217&0.278&0.167&0.221&\bf0.204\\ &TensoRF~\cite{chen2022tensorf}& LOW &0.299&0.158&0.144&0.158&0.299&0.383&0.149&0.203&\bf0.224\\ & & HIGH &0.337&0.200&0.172&0.193&0.364&0.449&0.168&0.227&\bf0.264 \\ \bottomrule \end{tabular} } \end{table*} \begin{table*} \caption{Examples generated from the Synthetic-NeRF dataset with TensoRF.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c} \toprule \bf Ground Truth & \bf Baseline & \bf LOW compression & \bf HIGH compression\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/gt/chair-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/baseline/chair-results-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/low/chair-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/high/chair-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/gt/drums-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/baseline/drums-results-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/low/drums-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/high/drums-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/gt/ficus-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/baseline/ficus-results-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/low/ficus-min.png}&
\includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/high/ficus-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/gt/lego-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/baseline/lego-results-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/low/lego-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/high/lego-min.png}\\ \bottomrule \end{tabular} } \end{table*} \iffalse \begin{table*} \caption{Examples generated from the Synthetic-NeRF dataset with TensoRF.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c} \toprule \bf Ground Truth & \bf Baseline & \bf LOW compression & \bf HIGH compression\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/gt/materials-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/baseline/materials-results-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/low/materials-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/high/materials-min.png}\\ \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/gt/mic-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/baseline/mic-results-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/low/mic-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/high/mic-min.png}\\ \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/gt/ship-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/baseline/ship-results-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/low/ship-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nerf/high/ship-min.png}\\ \bottomrule \end{tabular} } \end{table*} \fi \begin{table*} \caption{Examples generated from the Synthetic-NSVF dataset with TensoRF.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c} \toprule \bf Ground Truth & \bf Baseline & \bf LOW compression & \bf HIGH compression\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/bike-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Bike-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/bike-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/bike-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/lifestyle-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Lifestyle-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/lifestyle-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/lifestyle-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/palace-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Palace-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/palace-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/palace-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/robot-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Robot-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/robot-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/robot-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/spaceship-min.png}&
\includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Spaceship-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/spaceship-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/spaceship-min.png}\\ \bottomrule \end{tabular} } \end{table*} \iffalse \begin{table*} \caption{Examples generated from the Synthetic-NSVF dataset with TensoRF.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c} \toprule \bf Ground Truth &\bf Baseline & \bf LOW compression &\bf HIGH compression\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/steamtrain-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Steamtrain-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/steamtrain-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/steamtrain-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/toad-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Toad-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/toad-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/toad-min.png}\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/gt/wineholder-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/baseline/Wineholder-results-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/low/wineholder-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/nsvf/high/wineholder-min.png}\\ \bottomrule \end{tabular} } \end{table*} \fi \begin{table*} \caption{Examples generated from the Tanks\&Temples dataset with TensoRF.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c} \toprule \bf Ground Truth &\bf Baseline &\bf LOW compression &\bf HIGH compression\\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/gt/farm-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/baseline/farm-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/low/farm-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/high/farm-min.png} \\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/gt/cat-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/baseline/cat-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/low/cat-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/high/cat-min.png} \\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/gt/fam-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/baseline/fam-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/low/fam-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/high/fam-min.png} \\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/gt/ig-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/baseline/ig-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/low/ig-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/high/ig-min.png} \\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/gt/truck-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/baseline/truck-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/low/truck-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/tnt/high/truck-min.png} \\
\bottomrule \end{tabular} } \end{table*} \begin{table*} \caption{Examples generated from the LLFF dataset with TensoRF.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c} \toprule \bf Ground Truth & \bf Baseline & \bf LOW compression &\bf HIGH compression\\ \midrule \includegraphics[width=0.24\textwidth]{figures/fix/2-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/baseline/leaves-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/low/leaves-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/high/leaves-min.png} \\ \midrule \includegraphics[width=0.24\textwidth]{figures/fix/4-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/baseline/orchids-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/low/orchids-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/high/orchids-min.png} \\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/gt/room-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/baseline/room-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/low/room-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/high/room-min.png} \\ \midrule \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/gt/trex-min.png}& \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/baseline/trex-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/low/trex-min.png} & \includegraphics[width=0.24\textwidth]{figures/TensoRF/llff/high/trex-min.png} \\ \bottomrule \end{tabular} } \end{table*}
\section{Introduction} \label{sec:intro} The interior of neutron stars is, to a very good approximation, formed by pure neutron matter \cite{shapiro, glendenning}. At the very initial stages after their formation, these objects are very hot, with temperatures as high as $T \sim 40$ MeV \cite{prakash97}. The Equation of State (EoS) of pure neutron matter in a wide range of densities and temperatures is therefore a crucial ingredient to describe the structure and the evolution of neutron stars. The evaluation of both the neutron matter and the symmetric nuclear matter EoS starting from realistic models of the nucleon-nucleon (NN) interaction is still a major challenge in nuclear physics. The short-range and tensor components of realistic NN forces induce correlations which substantially modify the many-nucleon wave function as compared to the free Fermi gas (FFG) Slater determinant. This is particularly important for symmetric matter, where the $^3S_1$-$^3D_1$ channel plays a pivotal role. In neutron matter, Pauli effects block this tensor channel, but short-range correlations still need to be accounted for appropriately. Several theoretical approaches have been developed over the years to treat these correlations in zero-temperature neutron matter: variational techniques within correlated basis functions \cite{akmal97,fantoni98,fantoni02}; Auxiliary Field \cite{gandolfi08} or Quantum Monte Carlo \cite{carlson03} calculations with simplified interactions; and the popular Brueckner--Bethe--Goldstone hole-line expansion \cite{day67} in its lowest order form, the so-called Brueckner--Hartree--Fock (BHF) approximation \cite{baldo01}. At finite temperature, fewer efforts have been devoted to this problem: the well-known variational calculation of Friedman and Pandharipande \cite{friedman81} and recent similar calculations \cite{kanzawa07}, as well as BHF extensions at finite temperature \cite{cugnon87,bombaci94}. The latter approximation takes into account particle-particle correlations by solving the Bethe--Goldstone equation, which leads to the so-called $G$-matrix. Nevertheless, a minimal consistent treatment of correlations in nuclear systems requires the inclusion not only of particle-particle (pp) intermediate states, but also of the hole-hole (hh) ones. The propagation of particles and holes can be treated on the same footing by means of the Self-Consistent Green's Function (SCGF) approach \cite{muther00}. The SCGF approach gives direct access to the single-particle spectral function and therefore to all the single-particle properties of the system. Great progress in the application of the SCGF method to nuclear matter has been achieved in recent years, both at zero \cite{dewulf03} and at finite temperature \cite{bozek99,bozek02,frick03,frick05,rios06}. The solution of the SCGF equations is a rather demanding numerical problem due to the complete treatment of off-shell energy dependences. As a consequence, the SCGF method has so far been applied in only a few extensive analyses of dense nuclear systems. The studies at zero temperature have been mainly aimed at providing the appropriate theoretical support for the interpretation of $(e,e'p)$ experiments, while those at finite temperature focus on a correlated description of matter to be used in studies of heavy-ion collision dynamics or in astrophysical environments. In particular, temperature effects might substantially modify several astrophysical observables.
As an example, the cooling curve of a neutron star depends on the interior temperatures and the possible transition to a superfluid regime \cite{yakovlev04}. Also, the gravitational wave signature of a supernova explosion might be sensitive to the EoS and might even be able to distinguish thermal effects \cite{janka07}. In this line, we want to study the microscopic and thermodynamical properties of hot pure neutron matter within the SCGF framework. The SCGF method, as formulated here, cannot be used below the critical temperature of the pairing transition \cite{bozek99a,dickhoff05} and therefore all our results apply only to the normal phase. Although this is not the first time that the SCGF approach has been used to study pure neutron matter \cite{bozek99,dewulf03a}, it is, to the best of our knowledge, the first time that a systematic study of the microscopic and thermodynamical properties of pure neutron matter at finite temperature is performed within this approach. Moreover, we shall perform our calculations with two different realistic nucleon-nucleon interactions, the meson-exchange CD-Bonn potential \cite{cdbonn} and the local Argonne V18 potential \cite{av18}. Together with the comparison to other many-body approaches, this can be used to highlight the model dependence in hot neutron matter calculations. Lately, the problem of neutron matter has also attracted growing interest due to its connection with experimental studies of ultracold fermionic systems \cite{carlson03,baldo08}. Dilute strongly-interacting fermionic systems with large scattering lengths (such as neutron matter, with a scattering length $a=-18$ fm to be compared with $k_F =1.68$ fm$^{-1}$ for $\rho=0.16$ fm$^{-3}$) lie in the so-called \emph{unitary regime}. As a consequence of the lack of any characteristic energy scale, these systems show a universal behavior in their zero- and finite-temperature dynamics, with scalings that are related to the non-interacting case \cite{ho04}. We shall not treat this particular problem here, but one should mention that the SCGF method is able to tackle the unitary regime above the pairing phase transition \cite{kohler08}. When properly complemented with pairing effects \cite{bozek99a,dickhoff05}, this method should also be able to describe the unitary regime below it. After a brief description of the SCGF formalism in Section \ref{sec:form}, we discuss in Section \ref{sec:micro} our results for the microscopic properties of hot pure neutron matter. Section \ref{sec:macro} is devoted to the analysis of the thermodynamical properties and the comparison of our results with those obtained within other approaches. Finally, a brief summary and our main conclusions are presented in Section \ref{sec:conclu}. \section{Self-Consistent Green's Functions method at finite temperature} \label{sec:form} A crucial step in the microscopic description of nuclear many-body systems is the determination of the effective in-medium nucleon-nucleon (NN) interaction.
The ladder approximation to the in-medium $T$-matrix is well suited for strongly interacting low density systems \cite{fetter} and has the following structure: \begin{align} \left\langle \mathbf{k}_1 \mathbf{k}_2 | T (\Omega_+) | \mathbf{k}_3 \mathbf{k}_4 \right\rangle & = \left\langle \mathbf{k}_1 \mathbf{k}_2 | V | \mathbf{k}_3 \mathbf{k}_4 \right\rangle \nonumber \\ & + \int \frac{\textrm{d}^3 k_5}{(2 \pi)^3} \frac{\textrm{d}^3 k_6}{(2 \pi)^3} \left\langle \mathbf{k}_1 \mathbf{k}_2 | V | \mathbf{k}_5 \mathbf{k}_6 \right\rangle \mathcal{G}^0_{II}(k_5,k_6; \Omega_+) \left\langle \mathbf{k}_5 \mathbf{k}_6 | T (\Omega_+) | \mathbf{k}_3 \mathbf{k}_4 \right\rangle \, , \label{eq:lippschw} \end{align} where $\mathcal{G}^0_{II}$ is associated with the propagation of two dressed but non-interacting single-particle lines: \begin{align} \mathcal{G}^0_{II}(k,k'; \Omega_+) = \int_{-\infty}^{\infty} \frac{\textrm{d} \omega}{2 \pi} \frac{\textrm{d} \omega'}{2 \pi} \mathcal{A}(k,\omega) \mathcal{A}(k',\omega') \frac{1 - f(\omega) - f(\omega')}{\Omega_+ - \omega -\omega'} \, , \label{eq:g20} \end{align} with $f(\omega)=\left[ e^{\beta (\omega - \mu)} + 1 \right]^{-1}$ the Fermi-Dirac distribution and $\mathcal{A}(k,\omega)$ the single-particle spectral function. The notation $\Omega_\pm$ stands for $\Omega \pm i \eta$, with $\eta$ infinitesimally small. $\mathcal{G}^0_{II}$ can be interpreted as a Pauli blocking factor at finite temperature, analogous to the one that appears in zero temperature BHF calculations \cite{muther00}. In contrast to BHF, however, the zero temperature version of the SCGF formalism accounts for the intermediate propagation of both pp and hh states. The interaction of a nucleon with the remaining nucleons in the medium is described within the Green's functions formalism in terms of the self-energy \cite{kadanoff}. Its imaginary part is related to the in-medium $T$-matrix: \begin{align} \textrm{Im} \Sigma(k,\omega) = \int \frac{\textrm{d}^3 k'}{(2 \pi)^3} \int_{-\infty}^{\infty} \frac{\textrm{d} \omega'}{2 \pi} \left\langle \mathbf{k} \mathbf{k}' | \textrm{Im} T (\omega+\omega'_+) | \mathbf{k} \mathbf{k}' \right\rangle \mathcal{A}(k',\omega') \left[ f(\omega') + b(\omega+\omega') \right] , \label{eq:imself} \end{align} where a Bose-Einstein factor, $b(\Omega) = \left[ e^{\beta(\Omega - 2\mu)} - 1 \right]^{-1}$, appears due to the symmetric treatment of pp and hh states. The real part of the self-energy is determined from its imaginary part by a dispersion relation: \begin{align} \textrm{Re} \Sigma(k,\omega) = \Sigma_{HF}(k) - \mathcal{P} \int \frac{\textrm{d} \omega'}{\pi} \frac{ \textrm{Im} \Sigma(k,\omega'_+)}{\omega-\omega'} \, , \label{eq:reself} \end{align} except for the energy-independent Hartree-Fock contribution: \begin{align} \Sigma_{HF}(k) = \int \frac{\textrm{d}^3 k'}{(2 \pi)^3} \left\langle \mathbf{k} \mathbf{k}' | V | \mathbf{k} \mathbf{k}' \right\rangle n(k') \, , \label{eq:reshf} \end{align} where the momentum distribution includes the effects of correlations via $\mathcal{A}(k,\omega)$: \begin{align} n(k) = \int_{-\infty}^\infty \frac{\textrm{d} \omega}{2 \pi} \mathcal{A}(k,\omega) f(\omega) \, . \label{eq:nk} \end{align} With this normalization, $0 \le n(k) \le 1$; the spin degeneracy of neutron matter, $\nu=2$, enters explicitly in the density normalization introduced below.
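To make the evaluation of Eq.~(\ref{eq:nk}) concrete, the short sketch below integrates a toy spectral function against the Fermi-Dirac distribution. The unit-normalized Lorentzian ansatz for $\mathcal{A}(k,\omega)$ and the values of $T$, $\mu$ and the width are illustrative assumptions only, not the self-consistent SCGF output; the snippet merely shows how the energy integration is carried out in practice.
\begin{verbatim}
import numpy as np

# Sketch of n(k) = Int dw/2pi A(k,w) f(w) for a toy spectral function.
# The Lorentzian A (unit-normalized: Int dw/2pi A = 1) and the values
# of T, mu and gamma are assumptions for illustration.
T, mu = 5.0, 30.0          # MeV
hbar2_2m = 20.72           # hbar^2/2m for the neutron, MeV fm^2

def fermi(w):
    x = np.clip((w - mu) / T, -60.0, 60.0)
    return 1.0 / (np.exp(x) + 1.0)

def spectral(k, w, gamma=10.0):
    eps = hbar2_2m * k**2            # free quasi-particle energy (toy)
    return gamma / ((w - eps)**2 + (gamma / 2.0)**2)

w = np.linspace(-400.0, 600.0, 5001)   # energy grid, MeV
for k in (0.0, 1.0, 1.68, 2.5):        # fm^-1
    nk = np.trapz(spectral(k, w) * fermi(w), w) / (2.0 * np.pi)
    print(f"n(k = {k:4.2f} fm^-1) = {nk:.3f}")
\end{verbatim}
In the full calculation, the toy Lorentzian is of course replaced by the dressed spectral function obtained from the Dyson equation below, with its fragmented strength and extended tails.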
Finally, one can make use of Dyson's equation to close this set of equations by determining the single-particle spectral function from the real and imaginary parts of the self-energy: \begin{align} \mathcal{A}(k,\omega) = \frac{-2 \textrm{Im} \Sigma(k,\omega)}{ \left[\omega - \frac{k^2}{2m} - \textrm{Re} \Sigma(k,\omega) \right] ^2 + \left[ \textrm{Im} \Sigma(k,\omega) \right]^2 } \, . \label{eq:sf} \end{align} The previous equations are derived within the grand-canonical picture, where the two external, fixed variables are the temperature, $T=\frac{1}{\beta}$, and the chemical potential, $\mu$. For dense matter studies, it is more convenient to fix the density $\rho$ and therefore we supplement the previous set of equations with the normalization condition: \begin{align} \rho = \nu \int \frac{\textrm{d}^3 k}{(2 \pi)^3} \int_{-\infty}^\infty \frac{\textrm{d} \omega}{2 \pi} \mathcal{A}(k,\omega) f(\omega,\tilde \mu) \, , \label{eq:rho} \end{align} which determines a ``microscopic'' chemical potential, $\tilde \mu$. Here, $\nu=2$ accounts for the spin degeneracy of neutron matter. In a thermodynamically consistent approximation (such as the ladder approximation), $\tilde \mu$ should coincide with the macroscopic chemical potential, $\mu$, obtained from the bulk properties by taking the derivative of the free energy density, $F$: \begin{align} \mu = \left. \frac{\partial F}{\partial \rho} \right|_T \, . \label{eq:mu} \end{align} Thermodynamically non-consistent many-body approximations, such as BHF, lead to $\tilde \mu \neq \mu$ \cite{baym62}. Equations (\ref{eq:lippschw}-\ref{eq:rho}) form a closed self-consistent set of equations in terms of the in-medium interaction, the self-energy and the single-particle spectral function that can be solved iteratively. The numerical details associated with the solution of these equations are rather involved and we refer the reader to Refs.~\cite{frick03,frickphd,riosphd} for further details. It is important to note that the numerical solution of the SCGF method, when available (see the following paragraph), accounts for the full ladder approximation. The bosonic factor appearing in Eq.~(\ref{eq:imself}) presents a pole for $\Omega=2 \mu$, which is generally cancelled by an associated zero in $\textrm{Im} T(\Omega=2 \mu)$. However, below a certain critical temperature, $T_c$, the state with center of mass momentum $P=0$ and energy $\Omega=2 \mu$ does not cancel the bosonic factor and an instability occurs, reminiscent of the formation of Cooper pairs. This signals the onset of superfluidity, according to the so-called Thouless criterion \cite{thouless60,alm96}, and imposes a limit on the lowest temperatures we can reach within our numerical calculations. All the results presented in the following are obtained for $T>T_c$, thus neglecting the effect of pairing correlations but guaranteeing the convergence of the approach. So far, we have discussed the determination of the microscopic properties of the system. The Green's function formalism can also be used to obtain the bulk properties of neutron matter. For the case of two-body interactions, one can evaluate the energy per particle by means of the Galitskii-Migdal-Koltun (GMK) sum rule \cite{migdal58,koltun74}: \begin{align} \frac{E}{A} = \frac{\nu}{\rho} \int \frac{\textrm{d}^3 k}{(2 \pi)^3} \int_{-\infty}^\infty \frac{\textrm{d} \omega}{2 \pi} \frac{1}{2} \left\{ \frac{k^2}{2m} + \omega \right\} \mathcal{A}(k,\omega) \, f(\omega) \, , \label{eq:gmk} \end{align} from the spectral function evaluated in the SCGF approach.
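To illustrate how the pieces fit together numerically, the following runnable sketch replaces the ladder $T$-matrix by a toy Gaussian $\textrm{Im}\,\Sigma$ and then follows the same chain as the full code: dispersion relation, Eq.~(\ref{eq:reself}); Dyson equation, Eq.~(\ref{eq:sf}); normalization, Eq.~(\ref{eq:rho}), solved for $\tilde \mu$; and GMK sum rule, Eq.~(\ref{eq:gmk}). The Gaussian profile, the constant Hartree--Fock shift and all grid choices are assumptions for illustration; none of the printed numbers are results of this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Toy chain: Im Sigma -> Re Sigma -> A -> mu~ -> E/A.
hbar2_2m = 20.72                      # hbar^2/2m (neutron), MeV fm^2
T, rho, nu = 10.0, 0.16, 2            # MeV, fm^-3, spin degeneracy
k = np.linspace(1e-3, 4.0, 60)[:, None]        # fm^-1
w = np.linspace(-400.0, 800.0, 2401)           # MeV
eps0 = hbar2_2m * k**2                         # free spectrum

im_sig = -30.0 * np.exp(-((w - eps0) / 150.0)**2)   # toy model, < 0

# Dispersion relation: crude principal value on the uniform w-grid
# (the singular point is simply dropped -- adequate for a sketch).
dw = w[1] - w[0]
den = w[:, None] - w[None, :]
with np.errstate(divide='ignore'):
    kern = np.where(np.abs(den) < 0.5 * dw, 0.0, 1.0 / den)
re_sig = -50.0 - (im_sig @ kern.T) * dw / np.pi     # Sigma_HF ~ -50 MeV

spec = -2.0 * im_sig / ((w - eps0 - re_sig)**2 + im_sig**2)   # Dyson

def density(mu):                      # normalization condition
    f = 1.0 / (np.exp(np.clip((w - mu) / T, -60, 60)) + 1.0)
    nk = np.trapz(spec * f, w, axis=1) / (2.0 * np.pi)
    return nu * np.trapz(k[:, 0]**2 * nk, k[:, 0]) / (2.0 * np.pi**2)

mu_t = brentq(lambda m: density(m) - rho, -300.0, 300.0)

f = 1.0 / (np.exp(np.clip((w - mu_t) / T, -60, 60)) + 1.0)
ek = np.trapz(0.5 * (eps0 + w) * spec * f, w, axis=1) / (2.0 * np.pi)
E_A = (nu / rho) * np.trapz(k[:, 0]**2 * ek, k[:, 0]) / (2.0 * np.pi**2)
print(f"mu~ = {mu_t:6.1f} MeV,  E/A = {E_A:6.1f} MeV  (toy model)")
\end{verbatim}
In the actual calculation the toy $\textrm{Im}\,\Sigma$ is replaced by Eq.~(\ref{eq:imself}) and the whole chain is iterated until the spectral function is self-consistent.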
To obtain the free energy, $F=E- TS$, and have a complete thermodynamical description of the system, one still needs to compute the entropy within a correlated approximation. This can be obtained by using the Luttinger-Ward (LW) formalism \cite{luttinger60b,carneiro75,rios06}. Within this approach, an expression for the grand-canonical potential in terms of dressed single-particle propagators can be obtained by means of a Legendre transformation. The entropy can then be computed from the derivative $S= - \left. \frac {\partial \Omega}{\partial T} \right|_{\mu}$, which gives a closed expression in two terms, $S=S_{DQ}+S'$. The first one corresponds to the dynamical quasi-particle (DQ) entropy per particle: \begin{align} \frac{S_{DQ}}{A} & = \frac{\nu}{\rho} \int \frac{\textrm{d}^3 k}{(2 \pi)^3} \int_{-\infty}^\infty \frac{\textrm{d} \omega}{2 \pi} \sigma(\omega) \mathcal{B}(k,\omega) \, , \label{eq:sqp} \end{align} given by the convolution of a statistical factor, \mbox{$\sigma(\omega)=-f(\omega) \ln f(\omega) - \left[ 1 - f(\omega) \right] \ln \left[ 1-f(\omega) \right]$,} and a spectral function, $\mathcal{B}(k,\omega)$: \begin{align} \mathcal{B}(k,\omega) = \left[ 1 - \frac{\partial \textrm{Re} \Sigma(k,\omega)}{\partial \omega} \right] \mathcal{A}(k,\omega) - 2 \frac{\partial \textrm{Re} \mathcal{G}(k,\omega)}{\partial \omega} \textrm{Im} \Sigma(k,\omega) \, , \end{align} which can be computed from the single-particle quantities obtained in the SCGF approach. This $\mathcal{B}$-spectral function accounts for the effect of the dynamical (\emph{i.e.} interaction-induced) correlations that fragment the quasi-particle peak \cite{rios06}. In this paper, we will consider that the second term, $S'$, is negligible due to phase-space constraints at relatively low temperatures \cite{carneiro75}. This approach leads to thermodynamically consistent results for neutron matter as well as for symmetric nuclear matter \cite{rios06,riosphd}. In order to assess the dependence of our results on the many-body approximation employed in the description of neutron matter, we shall compare the SCGF calculations to a finite temperature generalization of the BHF method. A proper finite temperature extension of the BHF approach is given by the Bloch--de Dominicis theory \cite{nicotraphd,baldo99} but, instead of using the latter, our calculations will rely on a simpler, often-used generalization \cite{bombaci94,rios05}. This extension can be obtained from the SCGF equations by assuming that the spectral function has no width and that its full strength is concentrated at the BHF quasi-particle energy: \begin{align} \mathcal{A}(k,\omega) = (2 \pi) \delta \left[\omega - \varepsilon_{BHF}(k) \right] \, . \end{align} In addition, one eliminates the bosonic factor of Eq.\ (\ref{eq:imself}) and modifies the in-medium two-body propagator to include only intermediate particle-particle propagation: \begin{align} \mathcal{G}^0_{II}(k,k'; \Omega_+) = \frac{\left[ 1 - f(\varepsilon_{BHF}(k)) \right] \left[ 1 - f(\varepsilon_{BHF}(k')) \right]}{\Omega_+ - \varepsilon_{BHF}(k) -\varepsilon_{BHF}(k') } \, . \label{eq:g20BHF} \end{align} The set of equations thus obtained mimics the zero temperature BHF formalism with the replacement of the step-function momentum distributions at $T=0$ by Fermi-Dirac distributions at $T \neq 0$. This guarantees that in the $T \to 0$ limit the results will coincide with BHF. One can prove that this extension coincides with the Bloch--de Dominicis results at low temperatures \cite{baldo99}.
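The difference between the two intermediate-state propagators can be made explicit in a few lines. The snippet below (with purely illustrative values of $T$ and $\mu$) compares the phase-space factor $1-f-f'$ of Eq.~(\ref{eq:g20}) with the BHF factor $[1-f][1-f']$ of Eq.~(\ref{eq:g20BHF}): well below the chemical potential the ladder factor becomes negative, signaling the propagation of hh pairs, a contribution that is absent by construction in BHF.
\begin{verbatim}
import numpy as np

# Phase-space factors for a pair of quasi-particle energies (e, e'):
#   ladder:  1 - f(e) - f(e')       (pp minus hh; can be negative)
#   BHF:     [1 - f(e)][1 - f(e')]  (pp only; always non-negative)
T, mu = 10.0, 30.0   # MeV (assumed values)

def f(e):
    return 1.0 / (np.exp((e - mu) / T) + 1.0)

for e1, e2 in [(-20.0, -10.0), (25.0, 35.0), (60.0, 80.0)]:
    ladder = 1.0 - f(e1) - f(e2)
    bhf = (1.0 - f(e1)) * (1.0 - f(e2))
    print(f"e, e' = ({e1:6.1f}, {e2:6.1f}) MeV:"
          f"  ladder = {ladder:+.3f},  BHF = {bhf:+.3f}")
\end{verbatim}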
A few other approaches exist that can be used to study neutron matter at finite temperature starting from realistic NN potentials. The benchmark variational calculations of Friedman and Pandharipande (hereafter FP) \cite{friedman81} relied on a frozen correlation approximation, \emph{i.e.} they used as a starting point the Jastrow-like correlation functions obtained at zero temperature. This is of course an additional approximation, possibly only suitable for low enough temperatures and large densities, where matter can be considered degenerate. The variational approach has only recently been extended to finite temperatures so as to include thermal correlations appropriately \cite{mukherjee07}. Alternatively, the case of low densities and high temperatures can be studied by means of the model-independent virial expansion \cite{huang87,schwenk06}. In this approximation, the thermodynamical properties are expanded in terms of the fugacity, $z=e^{\beta \mu}$. The first term in this expansion leads to the thermodynamics of a classical free gas, while the first-order correction is given in terms of a virial coefficient that can be computed from the experimental NN phase shifts in free space. Since neutron matter is not expected to form clusters at low densities, this approximation will hold for extremely dilute and hot matter. Recently, another method has been proposed to study neutron matter at nonzero temperatures by making use of renormalized low-momentum two- and three-nucleon interactions whose short-range components have been properly eliminated \cite{tolos08}. The thermal properties of neutron matter have been computed up to second order in finite-temperature many-body perturbation theory, including contributions from normal and anomalous diagrams. \section{Microscopic properties of neutron matter} \label{sec:micro} In this Section we will discuss the microscopic single-particle properties of neutron matter as obtained from the SCGF approach. To address the model dependence of our calculations, we will show results using two different realistic nucleon-nucleon interactions, namely, the meson-exchange CD-Bonn \cite{cdbonn} and the local Argonne V18 potentials \cite{av18}. Partial waves up to $J=8$ have been considered, with the Born approximation for $J \ge 5$ in both SCGF and BHF calculations. We start by showing in Fig.~\ref{fig:asf} the density and temperature dependence of the neutron spectral function in dense neutron matter. Due to the similarity of the results for the two interactions, we will only show the results obtained with the Argonne V18 potential. The spectral function for densities ranging from $\rho=0.04$ fm$^{-3}$ to $\rho=0.32$ fm$^{-3}$ at a fixed temperature of $T=5$ MeV is shown in the left panels for three momenta: $k=0$ (top panel), $k=k_F$ (middle panel), and $k=2k_F$ (bottom panel). $k_F$ corresponds to the Fermi momentum associated with each density. The right panels show the results for a fixed density, $\rho=0.16$ fm$^{-3}$, and temperatures from $T=5$ to $20$ MeV. The qualitative features of these figures are already well-known (see {\it e.g.} Ref.\ \cite{frickphd}). There is an important quasi-particle peak, which contains roughly $70-80 \%$ of the total strength for all momenta.
The position of this peak changes with momentum and is determined by the self-consistent equation: \begin{align} \varepsilon_{qp}(k) = \frac{k^2}{2m} + \textrm{Re} \, \Sigma[k,\varepsilon_{qp}(k)] \, , \label{eq:qpe} \end{align} which defines the quasi-particle spectrum for neutrons in the medium. With increasing density, the quasi-particle peak at zero momentum shifts to lower energies with respect to the chemical potential. It turns out that neutrons at low momenta are more bound at higher densities. The situation is the opposite for high momenta ($k \sim 2 k_F$), where the peak shifts to higher energies when density increases. At the Fermi surface, $k=k_F$, the quasi-particle peak is approximately centered around $\omega \sim \mu$ and its width decreases as $\rho$ increases. At zero temperature and in the absence of pairing correlations, the spectral function would actually have a delta-like quasi-particle peak. The effect of density is particularly large in the low- and high-energy tails of the spectral function. For both large removal ($\omega \ll \mu$) and large addition ($\omega \gg \mu$) energies, the strength increases with density. These off-shell components of the spectral function are populated mainly due to the action of the short-range core of the nuclear interaction and it is therefore reasonable that they increase when the mean separation between neutrons decreases. In other words, the high-energy strength of the spectral function is a good measure of the correlations induced by density effects. The influence of temperature on the spectral function is less pronounced. Both the position of the quasi-particle peak and the strength at low and high energies are almost unaffected by changes in temperature. The only region that is slightly modified by temperature corresponds to the range of energies $\omega \sim \mu$, which is particularly sensitive to variations in phase space \cite{luttinger61}. It seems fair to say that the structure of the spectral function is mainly determined by the in-medium renormalization associated with the density, while temperature effects play a minor role. This is no longer the case close to and below $T_c$, where a relatively small decrease in temperature can lead to the appearance of superfluidity and thus to an important change in the properties of the spectral function. In particular, the onset of pairing results in a double quasi-particle peak structure close to the Fermi surface \cite{bozek99a,dickhoff05}. To learn more about the effect of hh propagation on the microscopic properties of neutron matter, one can compare the quasi-particle peak described by Eq.~(\ref{eq:qpe}) with the single-particle spectrum obtained within the BHF approach. The first includes both pp and hh effects, while the second only accounts for pp states. In Fig.~\ref{fig:qp} we compare the real part of the on-shell self-energy, $\textrm{Re} \Sigma[k,\varepsilon_{qp}(k)]$, for both approaches at densities $\rho=0.08$, $0.16$ and $0.24$ fm$^{-3}$ (left, central and right panels, respectively) and temperatures $T=5$ MeV (solid lines) and $T=20$ MeV (dashed lines). The results displayed in this Figure have been obtained with Argonne V18, but similar conclusions are reached with CD-Bonn. In all cases, the SCGF spectra are more repulsive than the BHF ones at all momenta. The effect of hh propagation in the on-shell self-energy is therefore of a repulsive nature.
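Returning to Eq.~(\ref{eq:qpe}), its numerical solution is a simple fixed-point problem. The sketch below iterates it for an invented, energy-dependent model of the on-shell self-energy (its derivative with respect to $\omega$ is smaller than one in magnitude, so the plain iteration converges); the functional form and all parameters are assumptions for illustration only.
\begin{verbatim}
import numpy as np

# Fixed-point iteration for e_qp(k) = k^2/2m + Re Sigma[k, e_qp(k)].
hbar2_2m = 20.72   # MeV fm^2

def re_sigma(k, w):
    # toy on-shell self-energy: attractive at low k and low w
    return -80.0 * np.exp(-0.1 * k**2) / (1.0 + np.exp((w + 60.0) / 80.0))

for k in (0.0, 1.0, 1.68, 2.5):           # fm^-1
    e = hbar2_2m * k**2                   # start from the free spectrum
    for _ in range(100):
        e_new = hbar2_2m * k**2 + re_sigma(k, e)
        if abs(e_new - e) < 1e-9:
            break
        e = e_new
    print(f"k = {k:4.2f} fm^-1  ->  e_qp = {e:8.2f} MeV")
\end{verbatim}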
This repulsive effect is larger at low momenta, in accordance with the idea that the dressing induced by hh states is irrelevant for high-momentum, particle states. The repulsive effect of SCGF with respect to BHF increases with density and the differences can be as large as $25$ MeV for $k=0$ at $\rho=0.24$ fm$^{-3}$. The temperature behavior of the quasi-particle spectra shows some interesting features. On the one hand, the BHF single-particle spectrum becomes more repulsive with increasing temperature at all momenta. This is usually attributed to the presence of thermal Fermi-Dirac factors in the self-energy. The repulsive high relative momentum components of the interaction are not accessible at zero temperature and they only become available once the thermal distribution populates high momentum single-particle states. The overall effect is then repulsive. The same reasoning applies to particle states in the SCGF case, which also become more repulsive with increasing temperature. Hole states, on the other hand, become more attractive with increasing temperature. Presumably, this behavior can be attributed to the fact that hole states are renormalized in the SCGF approach, which results in a quenching of the attractive long-range components of the NN interaction in the zero temperature case. The inclusion of thermal effects leads to a somewhat weaker renormalization that increases the attractive component of the spectrum for $k<k_F$. A similar effect has been observed in extended BHF (where the repulsive contribution of holes is taken into account by the $M_2$ rearrangement term in the self-energy \cite{zuo06}) as well as in SCGF calculations of symmetric nuclear matter \cite{riosphd}. Among the one-body properties of interest for correlated many-body systems, the momentum distribution of Eq.~(\ref{eq:nk}) is particularly sensitive to dynamical corrections. At zero temperature, for instance, the momentum distribution of the FFG is just a step function, with complete population below $k_F$ and empty states above. In contrast, the correlated momentum distribution at $T=0$ displays a substantial depletion of hole states and a non-zero population of high momentum states. Unfortunately, the FFG at finite temperature also shows these features, since all states become partially populated by the thermal distribution. As a consequence, the correlated $n(k)$ at finite temperature has both thermal and dynamical components. To disentangle these two components appropriately, extensive studies of the temperature and density dependence of the momentum distribution are needed. This analysis is presented in Fig.\ \ref{fig:nk}, where the density (top panels) and temperature (bottom panels) dependence of the momentum distribution is shown for both CD-Bonn (left panels) and Argonne V18 (right panels) potentials. Interesting analogies between the density and temperature dependence are observed: decreasing temperature has a somewhat similar effect to increasing density. This is in stark contrast to the effects of density and temperature on the spectral function, which, as already discussed, are rather different. In the case of the momentum distribution, these dependences can be interpreted in terms of degeneracy arguments: in both the low temperature and the high density case, the system approaches a degenerate limit, where thermal effects are unimportant and the depletion is essentially governed by dynamical effects.
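Since the FFG serves as the baseline of this comparison, it is instructive to see how purely thermal depletion arises there. The sketch below solves the FFG density equation for $\mu$ at fixed $T$ and evaluates $n(0)=f(\varepsilon=0)$; dynamical correlations are absent by construction, so any deviation of $n(0)$ from one is a purely thermal effect.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Free Fermi gas: solve rho = nu Int d3k/(2pi)^3 f(k^2/2m) for mu,
# then n(0) = f(0).  Pure thermal depletion.
hbar2_2m = 20.72                  # MeV fm^2
nu, T = 2, 5.0                    # spin degeneracy, temperature (MeV)
k = np.linspace(0.0, 8.0, 4000)   # fm^-1

def rho_ffg(mu):
    x = np.clip((hbar2_2m * k**2 - mu) / T, -60.0, 60.0)
    return nu * np.trapz(k**2 / (np.exp(x) + 1.0), k) / (2.0 * np.pi**2)

for rho in (0.01, 0.04, 0.08, 0.16, 0.32):
    mu = brentq(lambda m: rho_ffg(m) - rho, -500.0, 1000.0)
    n0 = 1.0 / (np.exp(np.clip(-mu / T, -60.0, 60.0)) + 1.0)
    print(f"rho = {rho:4.2f} fm^-3:  mu = {mu:7.2f} MeV,  n(0) = {n0:.3f}")
\end{verbatim}
At $T=5$ MeV this gives $n(0) \simeq 1$ for $\rho \gtrsim 0.08$ fm$^{-3}$ and a rapidly decreasing $n(0)$ at lower densities, i.e. the purely thermal behavior against which the correlated results discussed next are contrasted.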
This degenerate regime is the one of interest for understanding the influence of short-range correlations on the system. The opposite limit (low densities, high temperatures) leads to a momentum distribution which is controlled by thermal effects. Some particular details, however, differ depending on how the degenerate limit is approached. On the one hand, fixing the density and progressively decreasing the temperature leads to a monotonic increase (decrease) of $n(k)$ below (above) the Fermi surface. In particular, $n(k=0)$ saturates to a value different from $1$ when $T \to 0$. For $\rho=0.16$ fm$^{-3}$ at the lowest temperature available ($T=4$ MeV), one finds $n(0)=0.974$ for CD-Bonn and $n(0)=0.959$ for Argonne V18. The differences in the short-range components of the two interactions explain the discrepancies in $n(0)$: Argonne V18 has a harder short-range core than CD-Bonn and thus leads, in general, to lower occupations for $k<k_F$ at high densities. On the other hand, fixing the temperature and increasing the density, one finds a different scenario, where $n(0)$ is no longer a monotonic function due to the competition of thermal and dynamical effects. This behavior is observed in detail in Fig.\ \ref{fig:depl}, where the occupation of the lowest momentum state, $n(0)$, is shown as a function of density for several temperatures. The density dependence of $n(0)$ indeed has features which can be attributed to both thermal and dynamical effects. For all temperatures, there is a steep decrease of $n(0)$ when $\rho \to 0$. The FFG $n(0)$, shown as a double-dot-dashed line, has a similar behavior, which can be explained in terms of the system approaching the classical limit ($\mu \to -\infty$). In the non-interacting case, dynamical correlations are absent and therefore thermal effects are responsible for the strong decrease of $n(0)$ at low densities. The analogous behavior in the correlated $n(0)$ is basically driven by thermal correlations. The high density behavior of $n(0)$, on the other hand, is totally different from that of the FFG. While the latter always equals $1$ above $\rho \sim 0.08$ fm$^{-3}$ at $T=5$ MeV, the correlated $n(0)$ at this temperature tends to have values which are about $10 \%$ lower. One actually observes a decrease in $n(0)$ as density increases in this low temperature range. This dependence can be understood in terms of dynamical correlations: an increase in density results in a decrease of the mean distance between particles and, consequently, in an enhanced importance of short-range effects. The depletion therefore increases with density, as observed. Again, this effect depends on the particular short-range structure of the NN force, which explains the differences observed between the left and right panels. Finally, let us note once again that the temperature dependence of $n(0)$ is monotonic: larger temperatures lead to lower values of $n(0)$ at all densities. The changes induced by temperature on $n(0)$ are however density dependent and, as expected from degeneracy arguments, they are almost negligible at high densities. \section{Thermodynamical properties of neutron matter} \label{sec:macro} The SCGF approach, complemented with the Luttinger-Ward formalism, can be used to obtain the thermodynamical properties of neutron matter including the effect of correlations.
In this Section we shall analyze these properties and compare the SCGF results with those of other approaches, such as the variational calculation of FP, the finite temperature extension of BHF and the virial expansion. The energy per particle, obtained from the GMK sum rule of Eq.\ (\ref{eq:gmk}), is shown in Fig.~\ref{fig:ener} as a function of density for two temperatures, $T=10$ and $T=20$ MeV. CD-Bonn (Argonne V18) results are displayed in the left (right) panel. The SCGF results (circles) are compared with those obtained with the finite temperature generalization of the BHF approach (triangles), and with those of the variational calculation of FP (crosses). Note that the results for the energy per particle are not quoted in the original publication and have been reconstructed here from the free energy and the entropy. At low densities, we also compare our results with the model-independent virial approximation for fugacities up to $z=0.5$ \cite{schwenk06}. These correspond to densities $\rho=0.0035, 0.0098, 0.0181$ and $0.0279$ fm$^{-3}$ at temperatures $T=5, 10, 15$ and $20$ MeV, respectively. Comparing the SCGF and BHF approaches for a single NN interaction, one finds that the inclusion of hh correlations leads to a more repulsive energy per particle for almost all densities. As expected from phase space considerations, this repulsive effect is more important at higher densities. Moreover, the repulsion induced by hh propagation is more important for Argonne V18 than for CD-Bonn. As mentioned previously, the Argonne V18 interaction has a strong short-range core and therefore the hh renormalization on top of the pp propagation still has an important effect. In particular, at a temperature of $T=10$ MeV, the inclusion of hh propagation leads to a $1.6$ MeV ($4.0$ MeV) increase of the energy per particle at $\rho=0.16$ fm$^{-3}$ ($0.32$ fm$^{-3}$). In contrast, the weaker short-range structure of CD-Bonn is already well treated with pp correlations and the inclusion of the hh component has a smaller effect, of only $0.6$ MeV ($1.5$ MeV). These results are in agreement with the zero temperature calculations of the Ghent group, which showed almost no difference between SCGF and BHF at $\rho=0.16$ fm$^{-3}$ for the Reid93 interaction \cite{dewulf03a}. However, these findings seem to disagree with those of the Krakow group \cite{bozek02a}, which suggest differences between the SCGF bulk energies and continuous-choice BHF calculations of about $5$ MeV under the same conditions. Note, however, that those results were obtained with a simpler separable NN interaction and that different numerical procedures were used in the solution of the SCGF equations. The recent calculation of Ref.\ \cite{tolos08} leads to more repulsive results than ours at low densities, even when three-body effects are not considered. This is curious since, by construction, $V_{\mathrm{low}\,k}$ does not include short-range cores and thus one would have naively expected their EoS to be softer than that obtained by renormalizing interactions with hard cores. The differences in energy between the many-body approaches for a given potential that we have just discussed are a consequence of differences in the treatment of dynamical and thermal correlations. In contrast, the discrepancies within the same many-body approach for two NN interactions are a reflection of the different structure of the two potentials and, in particular, of their short-range behavior.
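As an aside, the virial densities quoted above can be reproduced with a back-of-the-envelope evaluation: to second order, $\rho = \frac{2}{\lambda^3}\left(z + 2 b_n z^2\right)$, with $\lambda$ the thermal wavelength of the neutron and $b_n$ its second virial coefficient. Taking $b_n \simeq 0.3$, roughly temperature independent, from the phase-shift analysis of Ref.~\cite{schwenk06} (an external input, not a result of this work), the sketch below recovers the four densities listed above to within rounding.
\begin{verbatim}
import numpy as np

# Second-order virial estimate: rho(z) = (2/lambda^3) (z + 2 b_n z^2),
# with b_n ~ 0.3 taken from Ref. [schwenk06] (assumed input).
hbarc, m = 197.327, 939.565       # MeV fm, neutron mass (MeV)
z, b_n = 0.5, 0.3

for T in (5.0, 10.0, 15.0, 20.0):
    lam = hbarc * np.sqrt(2.0 * np.pi / (m * T))   # thermal wavelength, fm
    rho = 2.0 / lam**3 * (z + 2.0 * b_n * z**2)    # fm^-3
    print(f"T = {T:4.1f} MeV:  rho(z = 0.5) = {rho:.4f} fm^-3")
\end{verbatim}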
Concerning the dependence on the interaction, the results with Argonne V18 for both the SCGF and the BHF approaches are in general more repulsive than those of CD-Bonn. In the SCGF approach at $T=10$ MeV and $0.16$ fm$^{-3}$, for instance, $E/A=18.0$ MeV for Argonne V18, while $E/A=16.9$ MeV for CD-Bonn. This discrepancy increases with density and, for $\rho=0.32$ fm$^{-3}$, it becomes as large as $6.4$ MeV. In contrast, the differences in energy per particle between the two potentials for the BHF approximation are rather small. For $T=10$ MeV and at $\rho=0.16$ fm$^{-3}$, they are less than $0.5$ MeV, while at $\rho=0.32$ fm$^{-3}$ they are only about $\sim 3$ MeV. This indicates that the inclusion of hh correlations in the energy per particle increases the dependence of the results on the short-range structure of the potential. Let us also note that the discrepancy in the energy per particle due to the use of different NN potentials is somewhat larger than that associated with the use of different many-body approaches, particularly in the high density regime. At low densities, short-range effects are weakened and the SCGF results agree with the virial expansion independently of the NN interaction. This is particularly well observed in the inset of Fig.~\ref{fig:ener}. This agreement provides for the first time, to the best of our knowledge, a model-independent verification of the numerics of the SCGF approach. Let us also note the relatively large differences between the FP and the SCGF results below $0.08$ fm$^{-3}$. In addition, we would like to stress that the various approximations reach the correct classical limit, $E/A \to 3T/2$ for $\rho \to 0$, but the way this limit is reached depends on the approach under consideration. In all cases, the energy per particle shows a well-defined minimum. This is a consequence of the competition between thermal effects, which are dominant at low densities and tend to make the energy more repulsive, and interaction effects, which are attractive and important at intermediate densities. Remarkably, our SCGF results for Argonne V18 agree well with those of FP at high densities. Both calculations are based on local NN potentials, but the Urbana V14 interaction of Ref.~\cite{friedman81} includes a density-dependent quenching of the two-pion exchange contribution to account for the repulsive effect of a three-body force in a phenomenological way. Naively, one would have expected the inclusion of such a contribution to yield more repulsive results than ours, especially at high densities. Note that, if the contribution of the three-body effects were indeed negligible, the observed agreement could signal an unprecedented consistency between the variational and SCGF approaches in a wide range of temperatures and densities. To clarify this issue, it would be interesting to compare our SCGF results with finite temperature variational calculations with the Argonne V18 interaction. Alternatively, we have performed some preliminary SCGF calculations with the Urbana V14 interaction (together with the density-dependent quenching). These indicate that the energy per particle is $\sim 3$ MeV more repulsive than the SCGF energy with Argonne V18 at $\rho=0.16$ fm$^{-3}$ and $T=10$ MeV. The discrepancy increases to $\sim 10$ MeV at $0.32$ fm$^{-3}$.
All in all, this seems to indicate that the agreement between the FP calculations with the Urbana V14 force and our SCGF results with Argonne V18 is a coincidence, possibly caused by a cancellation between the differences induced by the two underlying interactions and those associated with the different many-body approaches. The discrepancies are substantially smaller in the case of the entropy per particle, shown as a function of density for two temperatures, $T=10$ and $T=20$ MeV, in Fig.~\ref{fig:entro}. In particular, the changes arising from the use of different potentials (left and right panels) are smaller than those due to the use of different many-body methods. At $T=10$ MeV, the different approximations (DQ entropy from SCGF results, BHF entropy, FP, FFG) are quite consistent with each other. The deviations above $0.16$ fm$^{-3}$ are at most $0.15$ Boltzmann units, which would have a maximum impact on the free energy per particle of $T \times \delta S/A \sim 1.5$ MeV. At higher temperatures ($T=20$ MeV), the differences between approaches are somewhat larger, of at most $0.25$ units. All in all, these results support the idea that the entropy is mostly determined by thermal correlations and is rather unaffected by dynamical correlations. This is confirmed by the extremely narrow quasi-particle peak of the $\mathcal{B}$ spectral function. The many-body effects that fragment the quasi-particle peak, which are extremely important in the calculation of the energy, are almost negligible for the entropy \cite{rios06,riosphd}. This partially explains the good agreement between the SCGF and BHF entropies. The latter are obtained by using the quasi-particle approximation to the entropy: \begin{align} \frac{S_{BHF}}{A} = \frac{\nu}{\rho} \int \frac{\textrm{d}^3 k}{(2 \pi)^3} \sigma \left[ \varepsilon_{BHF}(k) \right] \, . \end{align} Although, as observed in Fig.~\ref{fig:qp}, the quasi-particle energies of the two approaches are quite different, the change in chemical potential between BHF and SCGF shifts the entropy to values very close to those of $S_{DQ}$. The similarity between both entropies had already been observed for symmetric matter \cite{rios06}. Compared to the FP entropy, we find that both SCGF and BHF predict slightly larger entropies at large densities for both temperatures and interactions. A similar effect was observed in Ref.\ \cite{tolos08} and attributed to an anomalously low effective mass in variational calculations. The restriction to a quadratic spectrum in variational approaches is indeed a limitation, especially in view of the clearly non-quadratic momentum dependences of the BHF and SCGF quasi-particle spectra (see Fig.~\ref{fig:qp}). The entropies in the latter approaches go beyond such an approximation and, in the SCGF case, they even go beyond the assumption of a single quasi-particle peak. In any case, close to the degenerate limit, all the calculated entropies show a Fermi-liquid-like behavior (see Fig.~\ref{fig:temp}): \begin{align} \frac{S}{A}= a_s T \, , \label{eq:sfl} \end{align} where the parameter $a_s=\frac{\pi^2 m^*}{\hbar^2 k_F^2}$ is given in terms of the effective mass $m^*$, calculated at the Fermi surface at zero temperature. The discrepancy between the $S_{DQ}$ and the FP entropies at large densities (close to the degenerate limit) suggests that the effective mass in the DQ entropy is larger than that of the variational entropy.
Indeed, the $m^*$ in the DQ density of states is the product of the $m^*_k$ and the $m^*_\omega$ effective masses \cite{negele}. The latter is associated with the energy dependence of the self-energy and is believed to be absent in the variational approach. Since $m^*_\omega$ is strongly peaked around the Fermi surface, it leads to larger values of the total effective mass and therefore increases the DQ entropy at large densities with respect to the variational one. In this direction, it is important to note that, in all cases, the entropy at high densities is smaller than that predicted by the FFG. This is in accordance with the idea that, in this regime, the entropy is dominated by the effective mass, for which $m^*/m$ is always smaller than one, and therefore leads to lower entropies. Finally, let us stress that, as expected, the interaction has little influence on the entropy near the classical regime. At low densities, all the approximations to the entropy converge to similar values and no differences are observed between the FFG and the virial entropies (see inset of Fig.~\ref{fig:entro}). The free energy obtained from the GMK sum rule complemented with the dynamical quasi-particle entropy is shown in Fig.\ \ref{fig:free} as a function of density for several temperatures. Let us first note that the calculations yield well-behaved results in a large range of densities and temperatures for both CD-Bonn (left panel) and Argonne V18 (right panel). In particular, the low-density, high-temperature regime agrees well with the virial results. This agreement is directly related to the similarity of the DQ and the virial entropies since, in this regime, the entropy dominates over the energy contribution in $F/A$. Comparing the two panels, one observes that for densities higher than $0.08$ fm$^{-3}$ the Argonne results are more repulsive than the CD-Bonn ones. In addition, the Argonne SCGF and the FP results are quite close to each other for all densities, with differences (mostly coming from the entropy) smaller than $3$ MeV for the highest density considered here. In general, one can say that, for low densities, $F/A$ is well determined and all the approaches agree well with each other independently of the potential. Above $\sim 0.08$ fm$^{-3}$, however, differences appear due to the sensitivity of the many-body approach to the short-range structure of each NN interaction. Let us also note that the results of Ref.~\cite{tolos08} within the two-body case are about $5-10$ MeV more repulsive than ours. Once more, we would like to stress the fact that the SCGF approach, complemented with the LW formalism, yields thermodynamically consistent results. To this end, we show in Fig.~\ref{fig:chem} the microscopic chemical potential, $\tilde \mu$, together with the macroscopic one, $\mu$, as a function of density for two temperatures, $T=10$ and $20$ MeV. Left panels correspond to SCGF results for both the CD-Bonn potential (upper panel) and the Argonne V18 interaction (lower panel). For the two temperatures, there is a good agreement between the microscopic chemical potential, obtained from the normalization condition of Eq.~(\ref{eq:rho}), and the macroscopic chemical potential, coming from the numerical derivative of the free energy density, Eq.~(\ref{eq:mu}). For the latter, a centered two-point formula has been used. Let us stress that this agreement confirms \emph{a posteriori} the good behavior of the dynamical quasi-particle entropy as a function of the density and also the negligible role of the $S'$ term.
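The numerical side of this consistency check is elementary. The sketch below applies the centered two-point formula to $F = \rho \, (F/A)$ for a smooth stand-in parametrization of the free energy per particle; the parametrization is invented purely for illustration, and in practice the derivative is applied to the actual SCGF table of $F/A$ versus $\rho$ at fixed $T$.
\begin{verbatim}
import numpy as np

# mu = dF/drho at fixed T, with F = rho * (F/A), centered differences.
def F_A(rho):
    x = rho / 0.16
    return -5.0 + 10.0 * x**(2.0 / 3.0) - 20.0 * x + 18.0 * x**2  # MeV (toy)

def mu_macro(rho, h=1.0e-4):
    F = lambda r: r * F_A(r)          # free energy density
    return (F(rho + h) - F(rho - h)) / (2.0 * h)

for rho in (0.08, 0.16, 0.24, 0.32):
    print(f"rho = {rho:4.2f} fm^-3:  mu = {mu_macro(rho):7.2f} MeV")
\end{verbatim}
The microscopic $\tilde \mu$ of Eq.~(\ref{eq:rho}) is obtained through a completely independent route, so the coincidence of the two numbers is a genuine test of the approximation.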
The approximations involved in the calculation of $S_{DQ}$ do not spoil the consistency of the ladder approximation. Results for $\tilde \mu$ and $\mu$ within the BHF approach are presented in the right panels. Both chemical potentials agree at low densities ($\rho < 0.08$ fm$^{-3}$) for both temperatures. Above this density, however, discrepancies appear due to the increasing importance of the rearrangement contribution to the self-energy \cite{zuo06}. At $0.16$ fm$^{-3}$, the difference is $\sim 4-5$ MeV for both potentials and temperatures, and it becomes as large as $15-20$ MeV at $0.30$ fm$^{-3}$. These differences show the lack of consistency of the BHF approach at finite temperature, even though the effect is smaller than in symmetric nuclear matter \cite{rios06}. The EoS of neutron matter is shown for different temperatures in Fig.\ \ref{fig:pres}. For the SCGF approach this quantity is computed from the thermodynamical relation $p=\rho \left( \tilde \mu - \frac{F}{A} \right)$, with $\tilde \mu$ the microscopic chemical potential obtained from Eq.~(\ref{eq:rho}). Note that in thermodynamically non-consistent approaches the pressure has to be computed from numerical derivatives of the free energy with respect to the density. Once again, a remarkable agreement with FP is found for the Argonne V18 results, while the CD-Bonn interaction leads to a softer EoS. In the low density regime, both results agree well with the virial expansion. The effect of temperature decreases as density increases and eventually the curves for different temperatures seem to collapse to a single (density-dependent) value, as expected from degeneracy arguments. This high density regime, however, will be mostly affected by the inclusion of three-body forces in the calculations. So far, we have plotted all our results as functions of density. To get a more accurate insight into the temperature dependence of the different thermodynamical properties of the system, we show in Fig.~\ref{fig:temp} the energy (left panel), entropy (central panel) and free energy (right panel) per particle as a function of temperature for a fixed density, $\rho=0.16$ fm$^{-3}$. The results correspond to the Argonne V18 interaction. The agreement between the SCGF and the FP energy per particle is confirmed for all temperatures (the FP results have been interpolated to this particular density). The SCGF results are about $2$ MeV more repulsive than the BHF ones and this repulsive effect is almost temperature-independent. The entropy, as expected, is well determined by all the approaches and only some small differences can be observed at large temperatures. These small differences, however, translate into slight disagreements between the FP and SCGF results for the free energy per particle. As observed in the right panel, the SCGF results are about $0.5$ MeV more attractive than the FP ones. The BHF results are about $1$ MeV more bound than the SCGF ones. Again, the differences between the approaches are rather temperature-independent. This suggests that the effect of dynamical correlations on the macroscopic properties is rather insensitive to thermal effects. It would be interesting to study the effect that sophisticated many-body calculations have on the temperature dependence of the different thermodynamical properties. This would provide a reliable test of the usually assumed quadratic (linear) temperature dependence of the energy (entropy).
A detailed study of these temperature dependences would, however, require reliable extrapolations to the low-temperature regime, which in our present approach is not possible due to the presence of pairing effects. A thorough analysis of this low-temperature regime will be discussed elsewhere. At the moment, using the present data, we have parametrized the different thermodynamical quantities in terms of simple fits to study the quality of the commonly used approximations. The energy per particle of the FFG is well fitted by a quadratic temperature dependence, $e \sim e_0 + a_e T^2$, inspired by the Sommerfeld expansion \cite{ashcroft}. This expansion is only valid for $\frac{T}{\varepsilon(k_F)} \ll 1$, \emph{i.e.} temperatures close to zero. In the fits to the SCGF results, however, we have to use the available data between $T=4$ and $8$ MeV. Fitting a quadratic dependence for the energy per particle of the FFG in this temperature range yields a deviation from the exact result, $a_e = \frac{\pi^2 m}{2 \hbar^2 k_F^2} = 0.0422$ MeV$^{-1}$, of only $5 \%$. Assuming that this procedure is also valid for the SCGF energies per particle, we find $a_e=0.0339$ MeV$^{-1}$. This $20 \%$ difference seems too large to be explained simply in terms of the effective mass, which in this regime is $m^* \sim 0.9 m$. The naive replacement $a_e=\frac{\pi^2 m^*}{2 \hbar^2 k_F^2}$, for instance, does not agree with the previous value. The more accurate prediction $\frac{\pi^2 m^*}{2 \hbar^2 k_F^2} \frac{m^*+m}{2m}$ \cite{grange87}, although closer, is also somewhat too large to coincide with the fit to SCGF data. Alternatively, one could have tried to obtain an analytic expression for $a_e$ from the low temperature expansion of the GMK sum rule formula, but this is difficult due to the non-trivial temperature dependence of $\mathcal{A}(k,\omega)$. Let us also stress that the quadratic thermal dependence of the energy per particle comes essentially from the kinetic energy term. The potential energy is rather temperature independent and decreases by less than $2$ MeV when going from $T=20$ to $4$ MeV. According to Fermi liquid theory, the behavior of the entropy at low temperatures should be linear in $T$. For the FFG, a fit of Eq.~(\ref{eq:sfl}) in the $T=4$ to $8$ MeV regime gives a very accurate value of $a_s=2 a_e=0.0844$ MeV$^{-1}$. A similar one-parameter fit to the SCGF yields a slope, $a_s \sim 0.0772$ MeV$^{-1}$, in agreement with the Fermi liquid prediction for an effective mass, $m^* \sim 0.92 m$. This coincides with the value that we obtain for the effective mass at $k=k_F$ at low temperatures. The entropy, however, shows a clear deviation from this linear behavior above $12$ MeV. In addition, the FFG prediction $a_s = 2 a_e$ is partially violated. Finally, a quadratic fit to the SCGF data for the free energy per particle, $f=f_0 + a_f T^2$, leads to $a_f=-0.0428$ MeV$^{-1}$. This is a somewhat low value, rather close to the FFG prediction. In contrast, the FFG relation $a_e=-a_f$ is not well fulfilled. Moreover, the non-linear behavior of the entropy for $T>12$ MeV leads to a non-quadratic behavior of $F/A$ above this temperature. A consistency check of these low temperature fits is the relation $a_e - a_s \sim a_f$ as well as the fact that the zero-temperature extrapolations of $E/A$, $e_0 = 14.51$ MeV, and of $F/A$, $f_0 = 14.49$ MeV, do coincide. 
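The fitting procedure itself is elementary. The sketch below (Python, with hypothetical $E/A$ values in the $T=4$--$8$ MeV window) performs the quadratic fit and also reproduces the exact FFG Sommerfeld coefficient quoted above:
\begin{verbatim}
import numpy as np

hbarc, m = 197.327, 939.565            # [MeV fm], nucleon mass [MeV]
rho = 0.16                             # [fm^-3]
kF  = (3 * np.pi**2 * rho)**(1.0/3.0)  # neutron-matter Fermi momentum [fm^-1]

# Exact FFG Sommerfeld coefficient a_e = pi^2 m / (2 hbar^2 kF^2)
a_e_FFG = np.pi**2 * m / (2 * (hbarc * kF)**2)
print(a_e_FFG)                         # ~0.0422 MeV^-1, as quoted in the text

# Quadratic fit e(T) = e0 + a_e*T^2 on hypothetical data points
T = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
e = np.array([15.1, 15.4, 15.8, 16.2, 16.7])  # placeholder E/A values [MeV]
a_e, e0 = np.polyfit(T**2, e, 1)              # linear fit in the variable T^2
print(e0, a_e)
\end{verbatim}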
Note, however, that the accuracy of the fits depends on the exact position and the number of points considered at low temperatures, and these are the most sensitive to numerical uncertainties within our approach. Finally, we would like to stress that the convergence of our results down to $T=4$ MeV implies that $T_c < 4$ MeV. \section{Conclusions} \label{sec:conclu} We have presented the first systematic study of hot neutron matter within the Self-Consistent Green's Function formalism in the ladder approximation for two realistic NN interactions, the CDBONN and the Argonne V18 potentials. The calculations cover a wide range of densities and temperatures and show the adequacy of this method to account for correlations in the microscopic properties of dense, hot hadronic matter. The effect of short-range correlations on the thermodynamical properties is correctly described by the Luttinger-Ward formalism. At the microscopic level, short-range effects are particularly important for the spectral functions and are manifested in their low- and high-energy tails. Our results indicate that both the location of the quasi-particle peak and the amount of strength in the energy tails change substantially with density. In contrast, thermal effects are very small and only affect the region around the chemical potential. The momentum dependence of the real part of the on-shell self-energy in the SCGF approach has been compared to the single-particle spectrum in BHF-like descriptions. The propagation of hh pairs in the intermediate states has a repulsive effect with respect to BHF results. This difference grows with density and is larger for momenta below the Fermi momentum, with maximum differences of $\sim 25$ MeV in the range of densities explored here. In the hole-momentum region, in addition, BHF and SCGF show different thermal behaviors, with the latter becoming more attractive as temperature increases. A careful study of the momentum distribution of the system has also been performed and the important interplay between thermal and dynamical correlations has been highlighted. These effects are well exemplified by $n(0)$, which is customarily used as a measure of correlation effects. For a given temperature and decreasing density, the system approaches the classical limit and the depletion of the momentum distribution increases. For larger densities, closer to the degenerate limit, dynamical correlations play a more important role and $n(0)$ decreases with increasing density. In general, correlation effects, as measured by the depletion, are larger for Argonne V18 than for CDBONN. Moreover, the SCGF energy per particle is more repulsive than the BHF one, independently of the interaction. The magnitude of these differences is governed by the density and by the particular structure of the NN interaction, and it is at most $5$ MeV for the range of densities explored (up to $0.32$ fm$^{-3}$). The sensitivity to the NN interaction within the SCGF approach appears to be larger than this, with differences of up to $6$ MeV in the same range. In contrast, BHF results are relatively potential-independent. In any case, in the low density regime the energies for both approaches compare very well with the virial expansion. In addition, there is a very good agreement between the SCGF results for Argonne V18 and those of FP for Urbana V14 above a density of $0.08$ fm$^{-3}$. This is possibly due to a cancellation between the potential and the many-body dependence of the energy per particle in this regime. 
The entropy has been computed within the dynamical quasi-particle approximation, which takes into account the effects of correlations in the width of the quasi-particle peak. The discrepancies between different approximations to the entropy are rather small, thus revealing that the entropy is only mildly affected by correlations. In general, all the approaches lead to somewhat lower values than those predicted by the FFG. The free energy for Argonne V18 and CDBONN shows substantial differences at large densities due to the different structures of the potentials. The free energy obtained from the GMK energy and the DQ entropy leads to a thermodynamically consistent result, with a good agreement between the microscopic and the macroscopic chemical potentials. The differences for the BHF approach can be as large as $20$ MeV, although in general they are less important than for the nuclear matter case. The EoS, which has been computed in a wide range of densities and temperatures, also shows a similar potential dependence. In the low density regime, however, all the thermodynamical quantities show a very good agreement with the virial expansion. The stability of our results in this regime shows the robustness of the numerical techniques involved in the calculations. The temperature dependence of the energy, the entropy and the free energy has also been explored. A quadratic temperature dependence is compatible with the energy per particle at $\rho=0.16$ fm$^{-3}$ for both the SCGF and BHF approaches. The entropy is proportional to the temperature only below $T \sim 10$ MeV, which in turn translates into a non-quadratic temperature dependence of the free energy above this temperature. Moreover, the differences in energy and free energy between BHF and SCGF remain constant with temperature, indicating that the effect of temperature on dynamical correlations is rather small. In addition, the convergence of the results down to $T=4$ MeV indicates that no superfluidity appears above this temperature. In conclusion, the calculations performed show the potential of the SCGF method to describe accurately the properties of dense and hot matter. The inclusion of pairing effects and three-body forces within this formalism will improve the predictions for the microscopic and the bulk properties and will provide a very complete description of neutron star matter. \section{Acknowledgments} This work was partially supported by the NSF under Grant No. PHY-0555893, the MEC (Spain) and FEDER under Grant No. FIS2005-03142, and by the Generalitat de Catalunya (Spain) under Grant No. 2005SGR-00343. \bibliographystyle{apsrev}
\section{Introduction} Grand unification theories (GUTs) based on a single gauge coupling such as $SU(5)$ \cite{GeorgiGlashow} predict the existence of a topologically stable magnetic monopole which carries one quantum ($2\pi/e$) of Dirac magnetic charge \cite{dokos,daniel}. In contrast to the 't Hooft-Polyakov monopole \cite{monopole}, the $SU(5)$ monopole also carries an appropriate amount of color magnetic flux that is screened because of color electric confinement. Unification models based on product groups such as $SU(4)_c \times SU(2)_L\times SU(2)_R$ \cite{pati} predict the existence of a topologically stable monopole that carries two quanta ($4\pi/e$) of magnetic charge \cite{magg}. One straightforward way to see this is by noting that the underlying group allows the existence of color singlet states that carry electric charges $\pm e/2$ and colored triplets with charges $\pm e/6$. A more explicit realization of this doubly charged monopole was demonstrated in Ref.~\cite{TopDef}, where it was shown to arise from the merger of two distinct (``confined'') monopoles, with each one carrying some Coulomb flux and a magnetic flux tube. This demonstration also reveals the existence of ``magnetic dumbbells'' in a variety of unified theories. Very interestingly, following Ref.~\cite{TopDef}, Volovik has shown \cite{volovik} how topological structures similar to this doubly charged construction may arise in superfluid $^3{\rm He}$. Furthermore, the existence of a class of topological structures called ``walls bounded by strings'' \cite{kibble} was verified in experiments with superfluid $^3{\rm He}$ \cite{volovik2019}. Motivated by these recent developments and especially the interplay between topological structures in high energy and condensed matter physics, we explore some interesting topological structures that arise in the framework of the trinification gauge symmetry $G=SU(3)_c \times SU(3)_L\times SU(3)_R$ \cite{TopDef,kephart}. In contrast to $SU(5)$ and $SU(4)_c\times SU(2)_L\times SU(2)_R$, the topologically stable monopole in the trinification model is purely electromagnetic in nature, with no color magnetic field accompanying it. It carries three quanta of magnetic charge ($6\pi/e$) in order to satisfy the Dirac quantization condition, and its mass may be light enough to make it accessible at high energy colliders. To identify the variety of topological substructures potentially associated with this monopole, we assume that the trinification symmetry breaking to the Standard Model (SM) proceeds through a series of steps. This deconstruction procedure allows us to identify the building blocks that make up the triply charged monopole. The latter, it turns out, consists of three distinct constituent monopoles which are bound together by flux tubes. We may thus refer to the triply charged monopole as a ``magnetic baryon,'' and to its confined constituent components as ``magnetic quarks.'' It is clear that other bound states such as ``magnetic mesons'' are also present in this trinification model. We display an example of a somewhat more elaborate topological configuration referred to as a ``fang necklace.'' \section{Triply Charged Monopole} The trinification symmetry $G$ is a well known subgroup of $E_6$ \cite{ramond}, and a variety of topological structures that arise when the latter breaks to the SM have been discussed in Ref.~\cite{TopDef}. 
In this paper we do not insist on this relationship between the two groups, which allows us to contemplate the spontaneous breaking of $G$ at scales lying in the TeV range. Because $G$ implements electric charge quantization, its spontaneous breaking to the SM and subsequently to $SU(3)_c\times U(1)_{em}$ yields a topologically stable magnetic monopole that carries three quanta of Dirac magnetic charge, namely $6\pi/e$ \cite{kephart}. Recall that in the presence of fractionally charged quarks, say $d$ or $s$, one naively expects the magnetic monopole to carry this amount ($6\pi/e$) of magnetic charge from the Dirac quantization condition ($\mathsf{q}\mathsf{g}/4\pi=n/2$, where $\mathsf{q}$, $\mathsf{g}$ denote the electric and magnetic charges respectively and $n$ is an integer) \cite{dirac}. However, the topologically stable magnetic monopole in $SU(5)$ carries just a single quantum ($2\pi/e$) of magnetic charge. This is compatible with the Dirac quantization condition because the monopole also carries an appropriate amount of color magnetic charge \cite{daniel}. In the trinification case there is no such accompanying color magnetic charge, and so the magnetic charge carried by the monopole is $6\pi/e$. A simple way to see this is to note that $G$ allows, in principle, color singlet states in the representations ${\bf (1,3,1)}$ + h.c., which carry electric charge $\pm 1/3$, and therefore the magnetic monopole must carry a magnetic charge of $6\pi/e$. (Fractionally charged color singlet states accompanied by multiply charged monopoles also appear in string theories \cite{WenWitten}.) Recall that the observed quarks and leptons reside in bifundamental representations of $G$ such as ${\bf (1, \bar{3}, 3)}$, etc. The discussion regarding the monopole charge is a bit more subtle if $G$ is embedded in $E_6$, but the outcome remains intact \cite{TopDef,kephart}. The monopole is topologically stable because the second homotopy group of the vacuum manifold is nontrivial, $\pi_2(G/H)=\mathbb{Z}=\{n=0,\pm 1,\pm2, \pm 3, ...\}$, with $G$ being the trinification group and $H=SU(3)_c \times U(1)_{em}$. We now turn to the breaking of $G$ to $SU(3)_c\times SU(2)_L\times U(1)_{Y_L}\times SU(2)_R\times U(1)_{Y_R}$ at an intermediate scale, which can approach the TeV scale if desired. This is achieved by the vacuum expectation values (VEVs) of the {\bf (1,8,1)} and {\bf (1,1,8)} components of a Higgs {\bf 78}-plet under $E_6$. (It is sometimes convenient to follow the $E_6$ notation.) Recall that the $SU(3)_{L(R)}$ octet under the $SU(2)_{L(R)}\times U(1)_{Y_L(Y_R)}$ subgroup decomposes as ${\bf 8=1_0+3_0+2_3+ 2_{-3}}$, where the subscripts denote the charges with respect to the generator $T^8_{L(R)}\equiv {\rm diag}(1,1,-2)$ of $U(1)_{Y_L(Y_R)}$. We can further break $SU(2)_R$ to $U(1)_R$ by a VEV along the ${\bf 3_0}$ component of the $SU(3)_R$ octet. The generator of $U(1)_R$ is $T^3_R={\rm diag}(1,-1)$. The unbroken subgroup is then $SU(3)_c\times SU(2)_L\times U(1)_{Y_L}\times U(1)_R\times U(1)_{Y_R}$. To be a bit more explicit, the potential for the breaking of $SU(3)_R$ to $SU(2)_R\times U(1)_{Y_R}$, assuming a discrete symmetry $\phi\to -\phi$, with $\phi$ being the scalar octet, is given by \begin{equation} V=-\frac{1}{2}m^2\Tr\phi^2+\frac{a}{4}(\Tr\phi^2)^2 + \frac{b}{2}\Tr\phi^4, \end{equation} where $m$ is a mass parameter and $a$, $b$ are dimensionless parameters. 
The $3\times 3$ matrix $\phi_i^j$ can be diagonalized by an $SU(3)_R$ rotation, and for suitable choices of $a$ and $b$, $\phi$ acquires a VEV \begin{equation} \vev{\phi}\propto \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}, \end{equation} which breaks $SU(3)_R$ to $SU(2)_R\times U(1)_{Y_R}$. With a second scalar octet, it is then straightforward to break $SU(2)_R$ to $U(1)_{R}$. More details will not be provided here. At this stage, three types of intermediate scale magnetic monopoles are generated. Two of them result from the breaking of $SU(3)_L$ and $SU(3)_R$ to $SU(2)_L\times U(1)_{Y_L}$ and $SU(2)_R \times U(1)_{Y_R}$ and carry one unit of Coulomb magnetic flux along the generators $T^3_L/2+T^8_L/2$ and $T^3_R/2+T^8_R/2$ respectively, where $T^3_{L(R)}\equiv {\rm diag}(1,-1)$. This is because the element $(-1,-1)\in SU(2)_{L(R)}\times U(1)_{Y_L(Y_R)}$ coincides with the identity element as it leaves all the representations of $SU(3)_{L(R)}$ unchanged. Consequently, a rotation by $2\pi$ along the generator $T^3_{L(R)}/2+T^8_{L(R)}/2$, which interpolates between $(1,1)$ and $(-1,-1)$, is a closed loop generating the second homotopy group $\pi_2 (SU(3)_{L(R)}/SU(2)_{L(R)}\times U(1)_{Y_L(Y_R)})=\pi_1 (SU(2)_{L(R)}\times U(1)_{Y_L(Y_R)})=\mathbb{Z}$ of the vacuum manifold. The breaking of $SU(2)_R$ to $U(1)_R$ generates a third monopole which carries one unit of $T^3_R$ magnetic flux corresponding to a $2\pi$ rotation along this generator. We should further break $U(1)_{Y_L} \times U(1)_R\times U(1)_{Y_R}$ to $U(1)_Y$, where $Y=T^3_R/2+(T^8_L+T^8_R)/6$ is the weak hypercharge. (The electric charge operator is given by $Q=T^3_L/2+Y$.) First consider the breaking of $U(1)_{Y_L} \times U(1)_{Y_R}$ to $U(1)_{B-L}$, where $B-L=(T^8_L+T^8_R)/3$ is the baryon minus lepton number. This symmetry breaking is achieved by a Higgs field in the fundamental representation of $E_6$, \begin{equation} {\bf 27}={\bf (1,\bar{3},3)}+{\bf (3,3,1)}+ {\bf (\bar{3},1,\bar{3})}\equiv\lambda+\mathbb{Q}+ \mathbb{Q}^c, \end{equation} where \begin{equation} \lambda= \begin{pmatrix} h_u & e^c \\ & \\ h_d & \nu^c \\ & \\ l & N \end{pmatrix} \end{equation} with the rows being ${\bf \bar{3}}$'s of $SU(3)_L$ and the columns {\bf 3}'s of $SU(3)_R$, and \begin{equation} \mathbb{Q}= \begin{pmatrix} q \\ & \\ g \end{pmatrix} \quad {\rm and}\quad \mathbb{Q}^c= \begin{pmatrix} u^c, &d^c, &g^c \end{pmatrix}, \end{equation} denote an $SU(3)_L$ triplet and an $SU(3)_R$ antitriplet, respectively. For simplicity, we use here for the various components of the Higgs {\bf 27}-plet the same symbols as for the corresponding components of the fermion {\bf 27}-plets which contain the ordinary quarks and leptons. The reader should keep this in mind to avoid any confusion. The Higgs {\bf 27}-plet acquires a VEV along its $N$ component which is an $SU(3)_c\times SU(2)_L\times SU(2)_R$ singlet and has $T^8_L=2$, $T^8_R=-2$. Consequently, the generator $T^8_L+T^8_R=3(B-L)$ remains unbroken \cite{TopDef}. A rotation by $2\pi/4$ along the orthogonal broken generator \beq \mathcal{B}\equiv T^8_L-T^8_R \eeq leaves the VEV of $N$ invariant. Consequently, the cosmic string generated by the breaking of $U(1)_\mathcal{B}$ is a tube with magnetic flux corresponding to this rotation, namely it carries magnetic flux $(T^8_L-T^8_R)/4$. 
We next consider the breaking $U(1)_R\times U(1)_{B-L}$ to $U(1)_Y$, where $Y=T^3_R/2+(B-L)/2$ is the SM weak hypercharge, by a VEV along the $\nu^c$ component of the Higgs {\bf 27}-plet which has $T^3_R=-1$ and $B-L=1$. The normalized generators corresponding to $T^3_R$ and $(B-L)$ are $T^3_R/2$ and $\sqrt{3/8}(B-L)$ and, thus, the orthogonal broken generator is \beq 2T^3_R-3(B-L). \label{orthgen1} \eeq This generator is left unbroken by the VEV of $N$, but is broken by the VEV of $\nu^c$. However, the charges of $\nu^c$ imply that a rotation by $2\pi/5$ along this generator leaves its VEV invariant, and the associated string carries magnetic flux \beq \frac{2}{5}T^3_R-\frac{3}{5}(B-L). \label{tube} \eeq Revisiting the tube with magnetic flux $(T^8_L-T^8_R)/4$, we see that as we go around it the VEV of $\nu^c$ acquires a factor $\exp(2i\pi/4)$ since its relevant charges are $T^8_L=2$, $T^8_R=1$. To cancel this factor, we should add along the tube an additional magnetic flux $(1/4)\{2T^3_R/5-3(B-L)/5\}$ so that $\nu^c$ acquires an extra factor $\exp(-2i\pi/4)$. This additional flux does not affect the VEV of $N$ since its relevant charges are $T^3_R=0$, $B-L=0$. In conclusion, we obtain a tube with a combined magnetic flux \beq \frac{1}{4}(T^8_L-T^8_R)+\frac{1}{4}\{\frac{2}{5}T^3_R- \frac{3}{5}(B-L)\}. \label{combined} \eeq In Ref.~\cite{TopDef}, it has been shown that the only intermediate scale topological defect which survives in this model, where the symmetry breaking employs the $\nu^c$ component of a Higgs ${\bf 27}$-plet rather than the $\nu^c\nu^c$ component of a Higgs $\overline{{\bf 351}'}$, is a triply charged ($6\pi/e$) magnetic monopole. Therefore, one expects that the three types of intermediate scale monopoles and the two types of magnetic flux tubes mentioned above must combine to generate this monopole. Indeed, when the trinification group is broken to the SM gauge group, the magnetic flux $T^3_R/2+T^8_R/2$ emerging from the $SU(3)_R$ monopole splits into two parts, one equal to minus the flux in Eq.~(\ref{combined}), which forms a tube, and one Coulomb flux equal to $6Y/5$. Similarly, the magnetic flux $T^3_R$ of the $SU(2)_R$ monopole forms a tube with flux given in Eq.~(\ref{tube}) and a Coulomb magnetic field with flux $6Y/5$. This tube is absorbed by an $SU(3)_L$ monopole with flux $T^3_L/2 +T^8_L/2$, which also emits the tube with magnetic flux as in Eq.~(\ref{combined}) terminating on the $SU(3)_R$ monopole. The remaining magnetic flux $T^3_L/2+3Y/5$ forms a Coulomb magnetic field emerging from the $SU(3)_L$ monopole. At this point, it is convenient -- for reasons to become apparent in the next paragraph -- to add to the Coulomb fields of the $SU(3)_R$ and the $SU(2)_R$ monopoles and subtract from the magnetic field of the $SU(3)_L$ monopole a magnetic flux $T^3_L$. This is legitimate since a rotation by $2\pi$ around $T^3_L$ is homotopically trivial. The sum of the Coulomb magnetic fluxes emerging from the three monopoles is then \beq \frac{3}{2}T^3_L+3Y=3Q, \eeq where $Q$ is the electric charge operator. Consequently, the three constituent magnetic monopoles (magnetic quarks) are pulled together by the strings to create a triply charged ($6\pi/e$) magnetic monopole. Next we consider the effect of the electroweak symmetry breaking on the two tubes with magnetic fluxes given in Eqs.~(\ref{tube}) and (\ref{combined}). 
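(Before doing so, we note that the single-valuedness bookkeeping above is easily verified mechanically. The short sketch below, written in Python for concreteness, encodes each VEV by its charges under $(T^8_L, T^8_R, T^3_R, B-L)$ as given in the text, and checks that transport around the tube with the combined flux of Eq.~(\ref{combined}) produces a trivial phase, i.e. an integer multiple of $2\pi$, for both $\vev{N}$ and $\vev{\nu^c}$:
\begin{verbatim}
from fractions import Fraction as F

# Charges (T8_L, T8_R, T3_R, B-L) of the relevant VEVs, as given in the text
charges = {"N":    (F(2), F(-2), F(0),  F(0)),
           "nu^c": (F(2), F(1),  F(-1), F(1))}

# Combined tube flux: (1/4)(T8_L - T8_R) + (1/4)[(2/5) T3_R - (3/5)(B-L)]
def winding(t8l, t8r, t3r, bml):
    return F(1, 4)*(t8l - t8r) + F(1, 4)*(F(2, 5)*t3r - F(3, 5)*bml)

for name, ch in charges.items():
    w = winding(*ch)                     # phase around the tube, in units of 2*pi
    print(name, w, w.denominator == 1)   # single-valued iff w is an integer
\end{verbatim}
Both windings indeed come out integer, $w=1$ for $\vev{N}$ and $w=0$ for $\vev{\nu^c}$.)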
The relevant charges of the VEVs $\vev{h_u}$ and $\vev{h_d}$ of the electroweak doublets $h_u$ and $h_d$, which couple to the up-type and down-type quarks, are $T^3_L=-1$, $T^3_R=1$, $T^8_L=-1$, $T^8_R=1$, and $T^3_L=1$, $T^3_R=-1$, $T^8_L=-1$, $T^8_R=1$, respectively. Consequently, as we go around the string with magnetic flux as in Eq.~(\ref{combined}), the phase of $\vev{h_u}$ changes by $(-2/5)2\pi$ and that of $\vev{h_d}$ by $(-3/5)2\pi$. The tube must then acquire an extra magnetic flux $-2T^3_L/5$ so that the phase of $\vev{h_d}$ changes by $-2\pi$ and $\vev{h_u}$ remains constant around the string. Similarly, as we go around the string with magnetic flux as in Eq.~(\ref{tube}), the phases of $\vev{h_u}$ and $\vev{h_d}$ change by $(2/5)2\pi$ and $(-2/5)2\pi$ respectively. Thus, we must add an extra magnetic flux $2T^3_L/5$ along this tube so that both $\vev{h_u}$ and $\vev{h_d}$ remain constant around the string. This choice is energetically favored since it minimizes the magnetic energy along the strings -- see Ref.~\cite{TopDef}. The Coulomb magnetic fluxes emerging from the $SU(3)_R$ and $SU(2)_R$ monopoles are $(6/5)(T^3_L/2+Y)=6Q/5$ each, and from the $SU(3)_L$ monopole this flux is equal to $(3/5)(T^3_L/2+Y)=3Q/5$, in total $3Q$ -- see Fig.~\ref{fig:TripMon}. \begin{figure}[t] \centerline{\epsfig{file=TripMon_fig.eps,width=10.6cm}} \caption{Emergence of the topologically stable triply charged monopole from the symmetry breaking $G \to SU(3)_c\times SU(2)_L\times U(1)_{Y_L}\times U(1)_{Y_R}\times U(1)_R \to SU(3)_c\times SU(2)_L\times U(1)_Y\to SU(3)_c\times U(1)_{em}$. An $SU(2)_R$ (green) monopole is connected by a flux tube to an $SU(3)_L$ (blue) monopole which, in turn, is connected to an $SU(3)_R$ (red) monopole by a superconducting flux tube. The constituent monopoles are pulled together to form the triply charged monopole. The fluxes along the tubes and around the monopoles are indicated.} \label{fig:TripMon} \end{figure} The Coulomb magnetic charges accompanying the $SU(3)_R$, $SU(3)_L$, and $SU(2)_R$ constituent magnetic monopoles are, respectively, $(6/5)2\pi/e$, $(3/5)2\pi/e$, and $(6/5)2\pi/e$. These magnetic charges, by construction, are compatible with the Dirac quantization condition because of their accompanying magnetic flux tubes. (Magnetic monopoles carrying a mixture of Coulomb magnetic flux and $Z$-magnetic flux have been considered in the past \cite{magg,Ztube}. For a recent discussion see Refs.~\cite{TopDef,hung}.) Clearly, each of the three types of constituent magnetic monopoles (magnetic quarks) can alternatively be connected to its own magnetic antiquark by the appropriate flux tube(s). This produces a magnetic meson in the case of the $SU(2)_R$ and $SU(3)_R$ monopoles, with a single flux tube connecting each to its antimonopole, or a new type of magnetic meson in the case of the $SU(3)_L$ magnetic quark, with two flux tubes connecting it to its magnetic antiquark. In all three cases, the magnetic quarks and antiquarks eventually annihilate by being pulled together. Let us briefly discuss the mass of the triply charged magnetic monopole. This mass depends, of course, on the breaking scale $M$ of the trinification symmetry. Since the latter is not a grand unified theory without additional assumptions such as gauge coupling constant unification, there is nothing, in principle, that prevents the scale $M$ from lying in the TeV range, in which case the magnetic monopole mass is also of order $M$ or somewhat larger. 
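Returning briefly to the electroweak dressing of the flux tubes discussed above: the same mechanical check, now including the $T^3_L$ charges listed there, verifies that $\vev{h_u}$ and $\vev{h_d}$ are single-valued around both dressed tubes. In the sketch below the fluxes are written as coefficients of $(T^3_L, T^3_R, T^8_L, T^8_R, B-L)$:
\begin{verbatim}
from fractions import Fraction as F

# Charges (T3_L, T3_R, T8_L, T8_R) of the electroweak VEVs, as listed above
h_u = (F(-1), F(1),  F(-1), F(1))
h_d = (F(1),  F(-1), F(-1), F(1))

def phase(flux, ch):
    t3l, t3r, t8l, t8r = ch
    bml = (t8l + t8r) / 3       # B-L = (T8_L + T8_R)/3
    return sum(f*c for f, c in zip(flux, (t3l, t3r, t8l, t8r, bml)))

# Eq. (combined) dressed with the extra -(2/5) T3_L flux
flux_LR = (F(-2, 5), F(1, 10), F(1, 4), F(-1, 4), F(-3, 20))
# Eq. (tube) dressed with the extra +(2/5) T3_L flux
flux_RR = (F(2, 5), F(2, 5), F(0), F(0), F(-3, 5))

for flux in (flux_LR, flux_RR):
    print([phase(flux, ch) for ch in (h_u, h_d)])  # integers: single-valued
\end{verbatim}
The output is $(0,-1)$ for the first tube and $(0,0)$ for the second, confirming the flux assignments stated above.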
Such a low symmetry breaking scale would make the topologically stable trinification monopole accessible at the LHC \cite{moedal} and its planned upgrades. For completeness, let us note that the size of the core of each magnetic monopole is determined by $g M^{-1}$, where $g$ and $M$ denote the relevant gauge coupling constant and symmetry breaking scale. Also, the mass per unit length of the magnetic flux tubes is of order $\mu^2$, with $\mu$ being the corresponding symmetry breaking scale. These flux tubes are practically stable even for a relatively small hierarchy between $M$ and $\mu$. Finally, some remarks regarding the observability of this topologically stable triply charged monopole at the LHC are in order here. It has been recognized for quite some time now that the production cross section of a composite coherent quantum state such as this monopole is expected to be exponentially suppressed in Drell-Yan processes involving elementary particles -- for a recent review and additional references, see Ref.~\cite{DrellYan}. This is somewhat analogous to the exponential suppression encountered in tunneling phenomena in quantum mechanics. This suppression of monopole production in Drell-Yan processes does not depend on whether the semi-classical monopole solution is spherically symmetric or not. More recently, it has been suggested that this challenge may be overcome at colliders by exploiting the magnetic analogue of the Schwinger mechanism. In the presence of sufficiently strong magnetic fields the (dual) Schwinger mechanism may lead to an observable cross section for monopole pair production in heavy ion collisions -- for a recent discussion and additional references, see Ref.~\cite{Schwinger}. It is fair to state that the production mechanisms at colliders of more complex topological structures such as necklaces require additional studies well beyond the scope of this paper. \section{Strings and Necklaces} \label{sec:Neck} Around the string that connects the $SU(3)_L$ and $SU(3)_R$ monopoles, $\vev{h_u}$ remains constant, implying that there are no transverse zero modes in the up-type quark sector. However, the phases of $\vev{h_d}$ and $\vev{N}$ change by $-2\pi$ and $2\pi$ respectively. The masses of the down-type quarks can be written as \beq \mathcal{M}_d= \begin{pmatrix} g^c, & d^c \end{pmatrix} \begin{pmatrix} \vev{N}, & 0\\ & \\ \vev{\nu^c}, & \vev{h_d} \end{pmatrix} \begin{pmatrix} g \\ &\\ d \end{pmatrix}. \label{matrix} \eeq Three of the four $3\times 3$ blocks in the mass matrix are of the order of $\vev{N}$, $\vev{\nu^c}$, and $\vev{h_d}$ as indicated, with constant unsuppressed coefficients. The fourth block is suppressed by powers of the Planck mass since the relevant direct trilinear Yukawa coupling is forbidden by $E_6$. Applying the results of Ref.~\cite{ganoulis}, we then see that there exist nine right-moving and nine left-moving zero modes (one for each family and color). A very similar analysis can be done for the charged leptons. We conclude that these strings are superconducting. In contrast, the string that connects the $SU(2)_R$ and $SU(3)_L$ monopoles is not superconducting since $\vev{N}$, $\vev{h_u}$, and $\vev{h_d}$ remain constant as we go around it. It is worth mentioning that the fact that the phase of $\vev{\nu^c}$ changes by $-2\pi$ around this string does not imply the existence of zero modes in this case. 
In order to see this, we employ a theorem given in Ref.~\cite{ganoulis} which states that, if a particular mass matrix element remains constant around the string, we can remove from the mass matrix the row and the column that contain it when calculating the number of transverse zero modes. In our case $\vev{N}$ and $\vev{h_d}$ remain unaltered around the string, so all rows and columns can be removed and no zero modes appear. \begin{figure}[t] \centerline{\epsfig{file=FangNeck_fig.eps,width=10.6cm}} \caption{Necklace configuration with alternating $SU(3)_L$ (blue) and $SU(2)_R$ (green) monopoles from the symmetry breaking $G \to SU(3)_c\times SU(2)_L\times U(1)_{Y_L}\times U(1)_{Y_R}\times U(1)_R \to SU(3)_c\times SU(2)_L\times U(1)_Y\times Z_2\to SU(3)_c\times U(1)_{em}\times Z_2$. These are connected by half flux tubes along the necklace as indicated. Each $SU(3)_L$ (blue) monopole in the necklace is also connected by a flux tube to an $SU(3)_R$ (red) monopole hanging outside the necklace. We display explicitly only the Coulomb magnetic flux of three of the constituent monopoles and the flux along two of the tubes.} \label{fig:FangNeck} \end{figure} Let us now turn to the alternative case where the symmetry breaking of $E_6$ employs the $\nu^c\nu^c$ component of a Higgs $\overline{{\bf 351}'}$. In this case, intermediate scale $Z_2$ topologically stable strings are produced \cite{TopDef,z2string} in addition to the superheavy Dirac and the intermediate scale triply charged monopoles. A rotation by $2\pi/10$ around the generator in Eq.~(\ref{orthgen1}) now leaves the VEV of $\nu^c\nu^c$ invariant, since its relevant charges are $T^3_R=-2$, $B-L=2$. Consequently, the flux tube from the $SU(2)_R$ to $SU(3)_L$ monopole splits into two equivalent tubes with magnetic flux \beq \frac{2}{10}T^3_R-\frac{3}{10}(B-L). \label{halftube} \eeq After the electroweak symmetry breaking, this tube acquires an extra magnetic flux $T^3_L/5$ so that $\vev{h_u}$, $\vev{h_d}$ remain constant around it. One can show that this ``half flux tube'' is not superconducting. The combined flux tube, though, is not affected. We can imagine that we break one of the two strings from the $SU(2)_R$ to $SU(3)_L$ monopole, which leaves the two monopoles connected by one string and two ``loose'' strings attached to the two monopoles. One can then connect these latter strings to other similar monopole-string structures in series to form ``fang necklaces'' -- see Fig.~\ref{fig:FangNeck}. More complex fang necklaces can be contemplated where each $SU(3)_L$ monopole (antimonopole) in the necklace is connected by a half tube either to its own antimonopole or an $SU(2)_R$ monopole (antimonopole), and each $SU(2)_R$ monopole (antimonopole) either to its own antimonopole or to an $SU(3)_L$ monopole (antimonopole). Each $SU(3)_L$ monopole (antimonopole) in the necklace is also connected by a flux tube to an $SU(3)_R$ monopole (antimonopole) hanging outside the necklace, or to its own antimonopole which participates in a different necklace. \section{Conclusions} \label{sec:concl} The trinification group $SU(3)_c\times SU(3)_L\times SU(3)_R$ implements charge quantization and predicts the existence of a topologically stable monopole of magnetic charge $6\pi/e$. The trinification symmetry breaking to the SM may occur in a number of steps, and we have discussed a scenario in which this monopole may be regarded as a magnetic baryon, in rough analogy with the QCD baryon. 
It is composed of three confined monopoles (magnetic quarks), each of which carries some Coulomb magnetic flux accompanied by a magnetic flux tube. These confined monopoles can yield more elaborate topological configurations, and we have displayed one such example, a fang necklace. In contrast to the superheavy GUT monopoles, the trinification monopole discussed here may be accessible at high energy colliders. \section*{Acknowledgments} This work is supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the ``First Call for H.F.R.I. Research Projects to support Faculty Members and Researchers and the procurement of high-cost research equipment grant'' (Project Number: 2251). Q.S. is supported in part by the DOE Grant DE-SC-001380. We thank Joey Betz for preparing the figures for this paper. \def\ijmp#1#2#3{{Int. Jour. Mod. Phys.} {\bf #1},~#3~(#2)} \def\plb#1#2#3{{Phys. Lett. B }{\bf #1},~#3~(#2)} \def\zpc#1#2#3{{Z. Phys. C }{\bf #1},~#3~(#2)} \def\prl#1#2#3{{Phys. Rev. Lett.} {\bf #1},~#3~(#2)} \def\rmp#1#2#3{{Rev. Mod. Phys.} {\bf #1},~#3~(#2)} \def\prep#1#2#3{{Phys. Rep. }{\bf #1},~#3~(#2)} \def\prd#1#2#3{{Phys. Rev. D }{\bf #1},~#3~(#2)} \def\npb#1#2#3{{Nucl. Phys. }{\bf B#1},~#3~(#2)} \def\np#1#2#3{{Nucl. Phys. B }{\bf #1},~#3~(#2)} \def\npps#1#2#3{{Nucl. Phys. B (Proc. Sup.)} {\bf #1},~#3~(#2)} \def\mpl#1#2#3{{Mod. Phys. Lett.} {\bf #1},~#3~(#2)} \def\arnps#1#2#3{{Annu. Rev. Nucl. Part. Sci.} {\bf #1},~#3~(#2)} \def\sjnp#1#2#3{{Sov. J. Nucl. Phys.} {\bf #1},~#3~(#2)} \def\jetp#1#2#3{{JETP Lett. }{\bf #1},~#3~(#2)} \def\app#1#2#3{{Acta Phys. Polon.} {\bf #1},~#3~(#2)} \def\rnc#1#2#3{{Riv. Nuovo Cim.} {\bf #1},~#3~(#2)} \def\ap#1#2#3{{Ann. Phys. }{\bf #1},~#3~(#2)} \def\ptp#1#2#3{{Prog. Theor. Phys.} {\bf #1},~#3~(#2)} \def\apjl#1#2#3{{Astrophys. J. Lett.} {\bf #1},~#3~(#2)} \def\apjs#1#2#3{{Astrophys. J. Suppl.} {\bf #1},~#3~(#2)} \def\n#1#2#3{{Nature }{\bf #1},~#3~(#2)} \def\apj#1#2#3{{Astrophys. J.} {\bf #1},~#3~(#2)} \def\anj#1#2#3{{Astron. J. }{\bf #1},~#3~(#2)} \def\mnras#1#2#3{{MNRAS }{\bf #1},~#3~(#2)} \def\grg#1#2#3{{Gen. Rel. Grav.} {\bf #1},~#3~(#2)} \def\s#1#2#3{{Science }{\bf #1},~#3~(#2)} \def\baas#1#2#3{{Bull. Am. Astron. Soc.} {\bf #1},~#3~(#2)} \def\ibid#1#2#3{{\it ibid. }{\bf #1},~#3~(#2)} \def\cpc#1#2#3{{Comput. Phys. Commun.} {\bf #1},~#3~(#2)} \def\astp#1#2#3{{Astropart. Phys.} {\bf #1},~#3~(#2)} \def\epjc#1#2#3{{Eur. Phys. J. C} {\bf #1},~#3~(#2)} \def\nima#1#2#3{{Nucl. Instrum. Meth. A} {\bf #1},~#3~(#2)} \def\jhep#1#2#3{{J. High Energy Phys.} {\bf #1},~#3~(#2)} \def\jcap#1#2#3{{J. Cosmol. Astropart. Phys.} {\bf #1},~#3~(#2)} \def\lnp#1#2#3{{Lect. Notes Phys.} {\bf #1},~#3~(#2)} \def\jpcs#1#2#3{{J. Phys. Conf. Ser.} {\bf #1},~#3~(#2)} \def\aap#1#2#3{{Astron. Astrophys.} {\bf #1},~#3~(#2)} \def\mpla#1#2#3{{Mod. Phys. Lett. A} {\bf #1},~#3~(#2)}
\section{Introduction.} The origin of large (galactic) black holes, present already in the early Universe, has been a long-standing puzzle; see {\em e.g.} \cite{GiantBH} for information on the most recently discovered behemoth black hole, \cite{R0} for a generally accessible update and overview, and \cite{R1,R2,R3} and references therein for more recent work. It seems generally agreed that such large black holes cannot form by the usual stellar processes ({\em i.e.} gravitational collapse of stars and subsequent accretion of mass), but must have originated from some other source. One possible explanation is that black holes were already present from the very beginning of the matter dominated period, and in sufficient numbers and with sufficiently large masses to be able to grow further by accretion to very large sizes already a few hundred million years after the Big Bang. Various mechanisms have been proposed and discussed towards solving this problem, most of them based on extrapolations of known physics, such as {\em e.g.} large random density fluctuations in the early universe; see \cite{CKSY} for a comprehensive recent review with many further references. That review also discusses different observational consequences and constraints, while emphasizing that ``the limits are constantly changing as a result of both observational and theoretical developments''. From a more theoretical perspective, a mechanism based on bubble formation during inflation was recently put forward in \cite{V,HD}, but differs essentially from the one presented here, because there the substantive part of black hole growth must take place {\em before} the onset of the radiation phase. At any rate, the crucial question remains whether an explanation can be found in terms of known physics, or whether an explanation necessarily involves essentially new physics. In this paper we present a new proposal towards addressing this problem which can complement existing proposals in that it does not rely on random processes, such as density fluctuations or bubble formation, but invokes {\em new} physics. It is based on the conjectured existence of certain supermassive particles (gravitinos) that allow for the formation of black holes already during the early radiation phase, well before decoupling. There are two necessary prerequisites for a mechanism based on the `condensation' of superheavy particles to work, namely \begin{enumerate} \item the supermassive particles must be absolutely stable against decay into Standard Model matter; and \item they must be subject to sufficiently strong attractive forces to enable them to rapidly cluster in sufficient amounts to undergo gravitational collapse. \end{enumerate} Although ans\"atze towards fundamental physics, in particular Kaluza-Klein theory and string theory, abound in massive excitations that might serve as candidates for such a scenario, such excitations usually fail to meet the first requirement (with decay lifetimes on the order of the Planck time $t_{\rm Pl}$), which is why they are often assumed to play no prominent role in the cosmology of the very early universe. Here we will argue that, by contrast, the superheavy gravitinos proposed in our previous work \cite{MN2,MN0} can meet both requirements. That the requisite particles should be gravitinos, rather than some other particle species, is perhaps unusual, so let us first explain the reasons for this claim. 
Our proposal has its origin in our earlier attempt to understand the observed spin-$\frac12$ fermion content of the Standard Model, with three generations of quarks and leptons (including three right-chiral neutrinos). It relies on a unification scenario based on a still hypothetical extension of maximally extended $N\!=\!8$ supergravity involving the infinite-dimensional duality symmetries ${\rm E}_{10}$ and ${{\rm K}(\EE})$ \cite{MN0,MN2,KN} (this proposal itself has its origins in much earlier work \cite{GM,NW}). The enlargement of the known duality symmetries of supergravity and M theory to the infinite-dimensional symmetries ${\rm E}_{10}$ and ${{\rm K}(\EE})$ is absolutely essential here, because without this extension neither the charge assignments of the quarks and leptons, nor those of the gravitinos in (\ref{GravCharges}) below could possibly work, and stability of the gravitinos against decay could not be achieved. A key feature of our proposal, and one that sets it apart from all other unification schemes, is that besides the 48 spin-$\frac12$ fermions of the Standard Model, the {\em only} other fermions are the eight supermassive gravitinos corresponding to the spin-$\frac32$ states of the $N=8$ supermultiplet. It is thus a {\em prediction} that the spin-$\frac12$ fermion content of the Standard Model will remain unaltered up to the Planck scale -- a prediction that is (at least so far) supported by the absence of any signs of new physics from the LHC, and by the fact that the currently known Standard Model couplings can be consistently evolved all the way to the Planck scale. Indeed, the detection of any new fundamental spin-$\frac12$ degree of freedom (such as a sterile fourth neutrino, or a fourth generation of quarks and leptons, or any of the `{\em -ino}' fermions predicted by low energy supersymmetry) would immediately falsify the present scheme. Evidence for infinite-dimensional duality symmetries of Kac-Moody type comes from an earlier BKL-type analysis of cosmological singularities in general relativity \cite{DH,DHN1}. This has led to the conjecture that M theory in the `near singularity limit' is governed by the dynamics of an ${\rm E}_{10}/{{\rm K}(\EE})$ non-linear $\sigma$-model \cite{DHN2}. In this scenario space-time, and with it space-time based quantum field theory and space-time symmetries, would have to be emergent, in the sense that all the relevant information about space-time physics gets encoded in and `spread over' a hugely infinite-dimensional hyperbolic Kac-Moody algebra. In particular, this scheme goes {\em beyond} supergravity in that the infinite-dimensional ${\rm E}_{10}$ duality symmetry replaces, and quite possibly disposes of, supersymmetry as a guiding principle towards unification. The fermionic sector of the theory is then governed by the `maximal compact' (or more correctly, `involutory') subgroup ${{\rm K}(\EE})\subset{\rm E}_{10}$, which can be regarded as an infinite-dimensional generalization of the usual R-symmetries of extended supergravity theories. While an analysis of the bosonic sector of the ${\rm E}_{10}/{{\rm K}(\EE})$ model and its dynamics beyond the very first few levels is severely hampered by the fact that a full understanding of ${\rm E}_{10}$ remains out of reach, a remarkable property of its involutory subgroup ${{\rm K}(\EE})$ is the existence of {\em finite}-dimensional (unfaithful) spinorial representations \cite{DKN,dBHP,KNV}. 
The combined spin-$\frac12$ and spin-$\frac32$ fermionic degrees of freedom at any given spatial point are then no longer viewed as fermionic members of the $N\!=\!8$ supermultiplet, but rather as belonging to an (unfaithful) irreducible representation of the generalized R-symmetry ${{\rm K}(\EE})$ \cite{DKN,dBHP,KNV}. The link with the physical fermion states is then made by identifying the known ${{\rm K}(\EE})$ representation with the Standard Model fermions at a given spatial point, in the spirit of a BKL-type expansion in spatial gradients, as explained for the bosonic sector in \cite{DHN2}. A crucial feature is now that the gravitinos are predicted to participate in strong and electromagnetic interactions (unlike the sterile gravitinos of MSSM-like models with low energy supersymmetry), and that they carry fractional charges. More precisely, as a consequence of the group theoretic analysis in \cite{MN0,MN2,KN}, the eight massive gravitinos are assigned to the following representations of the residual unbroken SU(3)$_c \,\times\,$U(1)$_{em}$ symmetry \begin{equation}\label{GravCharges} \left({\bf 3}_c\,,\,\frac13\right) \oplus \left(\bar{\bf 3}_c\,,\,-\frac13\right) \oplus \left({\bf 1}_c\,,\,\frac23\right) \oplus \left({\bf 1}_c\,,\, -\frac23\right). \end{equation} These assignments follow from an SU(3)$\,\times\,$U(1) $\subset$ SO(8) decomposition of the $N\!=\!8$ supergravity gravitinos, {\em except} for the `spurion' shift of the U(1) charges by $\pm\frac16$ that was originally introduced in \cite{GM} for the spin-$\frac12$ members of the $N\!=\!8$ supermultiplet, in order to make their electric charge assignments agree with those of three generations of quarks and leptons (including right-chiral neutrinos). As shown in \cite{MN0,MN2,KN}, it is this latter shift which requires enlarging the R-symmetry to ${{\rm K}(\EE})$, and which takes the construction {\em beyond} $N\!=\!8$ supergravity and {\em beyond} the confines of space-time based field theory. All gravitinos are assumed to be superheavy, with masses just below the Planck mass. This assumption is plausible because in any scheme avoiding low energy supersymmetry and in the absence of grand unification the Planck scale is the natural scale for symmetry breaking. Despite their large mass, {\em all gravitinos are stable against decays into Standard Model matter}, as a consequence of their peculiar quantum numbers: there is simply no final state in the Standard Model into which they could possibly decay in compliance with (\ref{GravCharges}) and the residual unbroken SU(3)$_c \,\times\,$U(1)$_{em}$ symmetry. This feature is essentially tied to the replacement of the usual R-symmetry by ${{\rm K}(\EE})$, because in a standard supergravity context a supermassive gravitino would not be protected against decay into other particles. In the present paper we take a more pragmatic approach by simply proceeding with the assignments (\ref{GravCharges}) as the starting point, but keeping in mind that this scheme is strongly motivated by unification and a possible explanation of the observed pattern of quark and lepton charge quantum numbers, and thus not based on {\em ad hoc} choices. In \cite{MN1,MN3} we have already begun to explore the possible astrophysical implications of supermassive gravitinos with the above assignments. 
More specifically, in \cite{MN1} we have proposed the color singlet gravitinos as novel dark matter candidates, and discussed possible avenues to search for them (in fact, even within the present scenario, this proposal would hold up, in that the supermassive gravitinos could make up a large part, or even all, of dark matter, via the black holes into which they would have been swallowed). In subsequent work \cite{MN3} we showed that the color triplet states in (\ref{GravCharges}) can potentially explain the observed ultra-high energy cosmic ray events with energies of up to $10^{21} \,\rm eV$ via gravitino anti-gravitino annihilation in the crust of neutron stars. In this paper we now turn our attention again to the {\em color singlet} gravitinos of charge $\pm\frac23$, to argue that they can in addition play a key role in shedding light on the origin of giant black holes in the early universe. The structure of this paper, then, is as follows: in section II we show that quantum mechanically the wave function of a multi-gravitino bound state is highly unstable against gravitational collapse. In the following two sections we study the formation and evolution of mini-black holes during the radiation era, also deriving numerical estimates. For the evolution we employ a generalization of the McVittie solution (on which there is already an ample literature, see {\em e.g.} \cite{McV0,McV1,McV2,McV3,McV4,McV5,McV6} and references therein). In the last section we analyze the energy-momentum tensor for this solution, and show that it has the right form expected for a radiation dominated universe. We also argue that the `blanket' surrounding the primordial black hole can further enhance the growth of massive black holes. These last two sections may be of interest in their own right, independently of the main line of development of this paper. \section{Formation of multi-gravitino bound states} The main new feature of our proposal is that, as a result of the assumed large mass of the gravitinos, the combined gravitational and electric force between any arrangement of gravitinos and anti-gravitinos is {\em universally attractive}. In natural units we define the BPS-mass $M_{\rm BPS}$ for the (anti-)gravitino to be the one for which the electrostatic force between two gravitinos with charges $\pm Q_g$ equals their gravitational attraction (modulo sign) \begin{equation}\label{BPS} Q_g^2 \,=\, G M_{\rm BPS}^2 \; ; \end{equation} we refer to $M_{\rm BPS}$ as the `BPS-mass' because it is the one relevant for extremal Reissner-Nordstr\"om or Kerr-Newman solutions. This equality is written in units where $4\pi\epsilon_0=\mu_0/(4\pi)=c=1$ (here it is worthwhile to recall that these units, with the addition of $e=M_{\rm BPS}=1$, were introduced already in 1881 by George Stoney, probably the first physicist who seriously contemplated quantization of charge \cite{Stoney}; the electron was discovered only 16 years later, while Planck units were introduced 18 years later). As is well known, $M_{\rm BPS}$ is {\em not} the same as the (reduced) Planck mass $M_{\rm Pl}$, but differs from it by a factor $\sqrt{\alpha}$ involving the fine structure constant $\alpha$ (always with $c=1$ from now on): \begin{equation} M_{\rm BPS}^2 \,=\, \frac{Q_g^2}{G} \,=\, \frac{Q_g^2}{\hbar} \cdot \frac{\hbar}{G} \,\equiv \, \alpha M_{\rm Pl}^2\,. \end{equation} where $\alpha$ differs from the usual fine structure constant $\alpha_{em}$ by a factor $\frac49$ because of the fractional charge, see below. 
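For orientation, these two mass scales are easily evaluated numerically. The sketch below (Python; the value quoted for the reduced Planck mass in kg is the standard one) gives $M_{\rm BPS}/M_{\rm Pl}=\sqrt{\alpha}\approx 0.057$:
\begin{verbatim}
import math

alpha_em = 1 / 137.036         # fine structure constant
alpha    = (4/9) * alpha_em    # rescaled by (2/3)^2 for the charge Q_g = 2e/3
M_Pl_kg  = 4.34e-9             # reduced Planck mass [kg] (standard value)

ratio    = math.sqrt(alpha)    # M_BPS / M_Pl
M_BPS_kg = ratio * M_Pl_kg
print(ratio, M_BPS_kg)         # ~0.057 and ~2.5e-10 kg
\end{verbatim}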
We will assume that the gravitino mass lies between these two values, {\em i.e.} \begin{equation}\label{Mgrav} M_{\rm BPS} < M_g < M_{\rm Pl} \end{equation} The first of these inequalities is needed to ensure that the force between same-charge gravitinos remains attractive; for $M_g < M_{\rm BPS}$ we would have repulsion [because $(1-\beta^2)$ in (\ref{V}) becomes negative], and the proposed mechanism would no longer work. Denoting the usual elementary charge by $e$ we can thus write for the gravitino charges \begin{equation} Q_g = \pm \frac23 e = \pm \beta G^{\frac12} M_g \end{equation} with the `BPS-parameter' $\beta$ obeying $0 < \beta < 1$; we will denote the (fixed) gravitino mass by $M_g$ throughout this paper, whereas generic black hole masses will be designated by the letter $m$, where $m$ can also vary with time. The total force between two (anti-)gravitinos is thus determined by the combined electric and gravitational charges $(1 \pm \beta^2) GM_g^2 > 0$, so that even for like charges the force remains attractive because the gravitational attraction overwhelms the electrostatic repulsion (reflecting the `almost BPS-like' nature of the gravitinos). In this paper we hypothesize that it is this universal attraction that leads to the formation of multi-gravitino bound states inside the plasma of the radiation dominated phase, starting from small inhomogeneities in analogy with cluster formation of galaxies. The main difference with the latter is that, prior to gravitational collapse, we are here initially dealing with a {\em quantum mechanical} bound state, not one that can be understood in terms of Newtonian physics. For two gravitinos the bound state would be somewhat analogous to positronium, however with the crucial difference that `gravitinium' can be a longer-lived state because the annihilation cross section between two oppositely charged (color singlet) gravitinos is very small, of the order $\sim M_g^{-2}$ (as follows from inspection of the standard tree level Feynman diagram for annihilation into, say, a pair of gravitons, with one intermediate gravitino propagator). Note that in principle positronium can also be long-lived, provided the bound state is formed in a state of very large radial quantum number \cite{PM} (see (\ref{relax}) below). We wish to study the formation of bound states of gravitinos during the radiation era in the very early universe. For a proper analysis, and as a first step, we would now have to go through a first quantized analysis of the massive Rarita-Schwinger equation in such a homogeneously and isotropically expanding background. This task is substantially simplified by our main assumption (\ref{Mgrav}) which allows us to resort to the non-relativistic limit, and by the fact that this inequality also implies \begin{equation}\label{MH} M_g \,>\, H(t) \end{equation} for the Hubble parameter during the radiation era, whence we can also drop the usual friction term $\propto H(t) = \dot a(t)/a(t)$ that would normally have to be included in the equation of motion. 
It is therefore enough to consider the free Rarita-Schwinger equation for a massive spin-$\frac32$ complex vector spinor, which reads \begin{equation}\label{RS} i\gamma^{\mu\nu\rho} \partial_\nu \psi_\rho + M_g\gamma^{\mu\nu} \psi_\nu \,=\, 0 \end{equation} From this one immediately deduces the Dirac and constraint equations \begin{equation} (i\gamma^\lambda \partial_\lambda -M_g) \psi_\mu = 0 \quad \mbox{and} \quad \gamma^\mu \psi_\mu \,=\, \partial^\mu \psi_\mu \,=\, 0 \end{equation} (see {\em e.g.} \cite{dWF} for a more complete account). The latter two equations imply a halving of the available degrees of freedom, and tell us that the vector spinor $\psi_\mu$ carries altogether four helicity degrees of freedom, with labels $\sigma,\tau \in \left\{\pm \frac12,\pm \frac32\right\}$ for both gravitino and anti-gravitino. The relevant expansion reads \begin{eqnarray} \psi_\mu (x) \,&=&\, \int \frac{{\rm d}^3 {\bf{p}}}{(2\pi)^{3/2} \sqrt{2E({\bf{p}})}} \Big[ e^{ipx} f_\mu^+(p) u_+(p) + e^{ipx} f_\mu^-(p) u_-(p) \,+\, \nonumber\\[2mm] && \hspace{3.5cm} + \; e^{-ipx} g_\mu^+(p) v_+(p) \,+\, e^{-ipx} g_\mu^-(p) v_-(p) \Big] \end{eqnarray} where, of course, $p^2 +M_g^2=0$, and $u_\pm(p)$ and $v_\pm(p)$ are the two positive and negative energy solutions of the Dirac equation. The last constraint equation is solved by \begin{equation} f_\mu^\pm(p) = \sum_{\text{i}} b_{\text{i}}^\pm (p) \varepsilon^{\text{i}}_\mu(p) \quad , \quad g_\mu^\pm(p) = \sum_{\text{i}} d_{\text{i}}^\pm(p) \varepsilon^{\text{i}}_\mu(p) \end{equation} with the three linearly independent polarization vectors $\varepsilon^{\text{i}}_\mu(p)$ satisfying $p^\mu \varepsilon^{\text{i}}_\mu(p) = 0$. For the other constraint equation we need to impose \begin{eqnarray} \sum_{\text{i}} \gamma^\mu \varepsilon^{\text{i}}_\mu(p) \Big[ b_{\text{i}}^+(p) u_+(p) \,+\, b_{\text{i}}^-(p) u_-(p)\Big] \,&\stackrel{!}{=}& \, 0 \nonumber\\[2mm] \sum_{\text{i}} \gamma^\mu \varepsilon_\mu^{\text{i}}(p) \Big[d_{\text{i}}^+(p) v_+(p) \,+\, d_{\text{i}}^-(p) v_-(p)\Big] \,&\stackrel{!}{=}& \, 0 \end{eqnarray} thus eliminating four out of the 12 free coefficients $b_{\text{i}}^\pm(p)$ and $d_{\text{i}}^\pm(p)$, respectively, leaving us with four helicity wave functions for gravitino and anti-gravitino each. As the spin interactions are not relevant for our approximation, there is no need here to be any more specific about the parametrization of the helicity wave functions. However, each gravitino degree of freedom is exposed to the gravitational and electric background generated by the other gravitinos (as well as the surrounding plasma, which we can neglect). In order to incorporate these interactions in lowest order, one performs the standard Foldy-Wouthuysen transformation on each component of $\psi_\mu$, which yields a non-relativistic one-particle Hamiltonian for each gravitino component. 
The corresponding multi-particle Schr\"odinger Hamiltonian therefore reads \begin{equation}\label{H} H = - \frac{\hbar^2}{2M_g}\sum_i \big( \triangle_{{\bf{x}}_i} + \triangle_{{\bf{y}}_i}\big) \,+\, V({\bf{x}},{\bf{y}}) \end{equation} with the universally attractive potential (for $\beta^2 < 1$) \begin{equation}\label{V} V({\bf{x}},{\bf{y}}) = - (1-\beta^2) \left( \sum_{i\neq j} \frac{GM_g^2}{|{\bf{x}}_i - {\bf{x}}_j|} \,+\, \sum_{i\neq j} \frac{GM_g^2}{|{\bf{y}}_i - {\bf{y}}_j|}\right) \,-\, (1+\beta^2) \sum_{i,j} \frac{GM_g^2}{|{\bf{x}}_i - {\bf{y}}_j|} \end{equation} where the positions of the gravitinos and anti-gravitinos are designated by ${\bf{x}}_i$ and ${\bf{y}}_j$, respectively. This Hamiltonian acts on a fermionic wave function $\Psi({\bf{x}}_1,\sigma_1,\dots,{\bf{x}}_p,\sigma_p;{\bf{y}}_1,\tau_1,\dots,{\bf{y}}_q,\tau_q)$ which is antisymmetric under simultaneous interchange of the position and spin labels of the gravitinos and anti-gravitinos, respectively. In writing this Hamiltonian we have also neglected the fluctuating external electric and magnetic fields in the radiation plasma. Likewise, as we already explained, we ignore subleading spin-orbit and spin-spin interactions that would follow from the Rarita-Schwinger equation in a fully relativistic treatment (and which would be very complicated). Finally, we can neglect the effect of the protons and electrons from the surrounding plasma (as well as all other Standard Model particles): for them, the gravitational interactions are governed by the factors $GM_g m_e\,,\, GM_g m_p\, ,... \ll GM_g^2$, whence their interactions are completely dominated by the purely electromagnetic forces. The latter are, however, screened out because of the overall electric neutrality of the plasma, and can thus be ignored. Evidently the above considerations only apply to superheavy particles obeying (\ref{Mgrav}) and (\ref{MH}), and would not make any sense at all for ordinary (Standard Model) particles. For the latter all masses and binding energies are far below the temperature of the surrounding plasma, that is $m_e, m_p,... \ll T_{rad}$, and also below the Hubble parameter, $m_e, m_p,...\ll H$. In that case, the stationary Schr\"odinger equation would have to be replaced by a relativistic equation in a time-dependent background, and the friction term involving the Hubble parameter $H$ would lead to immediate decay of the wave function (as unitarity in the naive sense is violated in a time-dependent background). We will not attempt here to investigate in any detail the multi-particle Schr\"odinger equation based on (\ref{H}), which would amount to a quantum analog of the computations performed in connection with galaxy structure formation. Nevertheless, we can still make some rigorous statements relying on well-known estimates (see {\em e.g.} \cite{LS}). Namely, it is a rigorous result \cite{LL} that for a system of fermions (that is, particles obeying the Pauli principle with a fully antisymmetric wave function) the lowest energy eigenvalue of the $N$-particle Hamiltonian \begin{equation}\label{E0} E_0(N) := \inf_{|\!|\Psi |\!| =1} \langle \Psi | H | \Psi \rangle \end{equation} (where $N$ is the combined number of gravitinos and anti-gravitinos) is subject to the upper and lower bounds \begin{equation}\label{E1} - AN(N-1)^{\frac43} G^2M_g^5 \hbar^{-2} \,\leq\, E_0(N) \, \leq \, - BN^{\frac13}(N-1)^2 G^2M_g^5 \hbar^{-2} \end{equation} with strictly positive constants $A > B > 0$. 
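As an aside, the universal attractiveness of the potential (\ref{V}) is easily confirmed numerically. The minimal sketch below (arbitrary units, random configuration) evaluates $V$ term by term and finds it negative for any $\beta<1$:
\begin{verbatim}
import itertools, random

random.seed(0)
beta, GM2 = 0.6, 1.0   # illustrative BPS parameter and G*M_g^2 (arbitrary units)

def dist(a, b):
    return sum((u - v)**2 for u, v in zip(a, b))**0.5

x = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(4)]  # gravitinos
y = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(4)]  # anti-gravitinos

# The factor 2 restores the double-counted i != j sums of Eq. (V)
V  = -(1 - beta**2)*GM2*2*sum(1/dist(a, b) for a, b in itertools.combinations(x, 2))
V += -(1 - beta**2)*GM2*2*sum(1/dist(a, b) for a, b in itertools.combinations(y, 2))
V += -(1 + beta**2)*GM2*sum(1/dist(a, b) for a in x for b in y)
print(V)   # negative: every pairwise term is attractive for beta < 1
\end{verbatim}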
Consequently the lowest energy per particle $E_0(N)/N$ decreases as $\propto - N^{4/3}$ with $N$, signaling an instability. For a bosonic wave function the fall-off would be even faster with $E_0(N)/N \propto - N^2$ \cite{LL}. Therefore the inclusion of spin degrees of freedom (where one combines a partially symmetric wave function in the space coordinates with an anti-symmetric wave function in spin space) cannot improve the situation. The estimate (\ref{E1}) tells us that the system is unstable, and for sufficiently large $N$ will thus undergo gravitational collapse, as the fermionic degeneracy pressure is not enough to sustain the system in a stable equilibrium. Because of (\ref{MH}) the basic instability estimate (\ref{E1}) is not affected by the cosmological expansion either. Now if we consider a bound state of just two gravitinos (a hydrogen-like system) the associated `Bohr radius' is only a few orders of magnitude away from the Planck length, to wit \begin{equation} a_B \,\sim \, \frac{\hbar^2}{G M_g^3} \end{equation} which is not too far from the Schwarzschild radius. If the formation of such bound states took place in vacuum, and the relaxation to the ground state proceeded too fast, the resulting mini-black holes would immediately evaporate by Hawking radiation according to the well-known formula (see {\em e.g.} \cite{TD}) \begin{equation}\label{evap} t_{evap} \,\sim \, t_{\rm Pl} \left(\frac{m}{M_{\rm Pl}}\right)^3 \,\sim \, 10^{-42}\, {\rm s} \left(\frac{m}{10^{-9}{\rm kg}}\right)^3 \end{equation} which follows from the Stefan-Boltzmann law upon substitution of the Hawking temperature \begin{equation}\label{BHtemp} T_{Hawking} = \frac{\hbar}{8\pi Gm} \end{equation} In order to prevent this from happening, and to create bigger black holes that can survive for longer and start growing, it is therefore necessary for the bound states to persist long enough to accrete a sufficiently large number of gravitinos {\em before} gravitational collapse. Meta-stability can be ensured if the initial energy of the bound state is much larger than $E_0(N)$, and consequently its overall extension stays well above its Schwarzschild radius for a sufficiently long time. Of course, the bound state will eventually relax to lower-lying bound states by the spontaneous emission of photons and gravitons, but this process will take some time. For instance, for positronium the average lifetime $\tau$ of a bound state as a function of the principal quantum number $n$ scales as (see {\em e.g.} \cite{LL0}, eqs. (7)--(9)) \begin{equation}\label{relax} \tau \sim \, n^4 \end{equation} In comparison with positronium, which has a large annihilation cross section, the mutual annihilation of (color singlet) gravitinos and anti-gravitinos is further delayed by their small annihilation cross section $\sim M_g^{-2}$, which was already highlighted above. Extrapolating the above formula to the present case thus suggests that, with sufficiently large $n$ at the time of formation, we can get lifetimes long enough to bind a large number of gravitinos into a meta-stable configuration before the collapse can occur. We also note that at this stage (that is, prior to the formation of a black hole) the absorption of protons and electrons from the ambient plasma plays no role, as these particles, unlike the gravitinos, will be only very weakly bound.
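For orientation, these scales are easily made explicit. The following lines (purely illustrative) evaluate $a_B$, the corresponding Schwarzschild radius and the Planck length; the value $M_g \sim 10^{-9}\,$kg used here is an assumption, namely the scale implied by the estimates of the next section, where $10^{12}\, M_g \sim 10^3\,$kg.

\begin{verbatim}
# Order-of-magnitude check of the 'Bohr radius' a_B ~ hbar^2/(G*M_g^3).
# M_g ~ 1e-9 kg is an assumed scale (cf. M_g ~ M_BPS below); SI units.
hbar, G, c = 1.05e-34, 6.67e-11, 3.0e8
l_Pl = 1.6e-35                   # Planck length in m

M_g = 1e-9
a_B = hbar**2 / (G * M_g**3)     # ~ 1.7e-31 m
r_S = 2 * G * M_g / c**2         # ~ 1.5e-36 m (Schwarzschild radius)
print(a_B, a_B / l_Pl, r_S)      # a_B lies ~4 orders above l_Pl
\end{verbatim}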
\section{Collapse of gravitino lumps and mini-black holes} At this point we have lumps, each corresponding to a quantum mechanical multi-gravitino bound state, which are scattered throughout the radiation plasma. Because of the density fluctuations and inhomogeneities in the plasma, and as a result of their strong gravitational attraction, these lumps will eventually coalesce before collapsing into small black holes, a microscopic analogue of the clumping of dust into galaxies and stellar matter. In a first approximation the ensemble of massive lumps can be treated classically ({\em i.e.} need not be considered as a single coherent wave function). In order to arrive at a rough estimate of the initial mass of the resulting black holes we first estimate the total number of gravitinos contained in a coalesced lump of gravitino matter. Treating them classically with an average kinetic energy per particle equal to the temperature of the plasma, we have \begin{equation}\label{Ekin} \langle E_{kin} (t) \rangle \,\sim \, N T_{rad}(t) = N T_{eq} \left(\frac{t_{eq}}{t}\right)^{1/2} \end{equation} where $T_{eq}\sim 1\,$eV and $t_{eq}\sim 40\,000\,{\rm yr} \sim 10^{12}$ s (we find it convenient to refer all quantities to the equilibrium time $t_{eq}$ rather than Planck units). The potential energy of $N$ gravitinos and anti-gravitinos is given by \begin{equation} \langle E_{pot}(t) \rangle \sim - \, N^2 \frac{GM_g^2}{\langle d(t) \rangle} \end{equation} where for numerical estimates we take $M_g \sim M_{\rm BPS}$. The average separation $\langle d(t)\rangle$ between gravitinos and anti-gravitinos at time $t$ is given by \begin{equation} \langle d(t) \rangle \sim \left( \frac{M_g}{\rho(t)} \right)^{1/3} \, \sim \, (10^2 \, {\rm m}) \, \left(\frac{t}{t_{eq}} \right)^{1/2} \end{equation} where we estimate the gravitino density $\rho(t)$ at time $t$ by scaling back the known density at the equilibrium time $t_{eq}$ (with $8\pi G\rho_{rad}=8\pi G\rho_{mat} \sim 4\cdot 10^{-25}$ s$^{-2}$), with the further assumption that at $t=t_{eq}$, most of the matter consisted of supermassive gravitinos, in line with our previous dark matter proposal \cite{MN1}. For this estimate we also need to keep in mind that matter density scales as $a(t)^{-3}$ also during the radiation era (while the radiation density scales as $a(t)^{-4}$). Gravitational collapse is expected to occur if the total energy is negative: \begin{equation}\label{N} \langle E_{kin} (t) \rangle \,+ \, \langle E_{pot} (t) \rangle \, < \, 0 \quad \Rightarrow \quad N \,>\, \frac{T_{eq}\cdot 10^2\,{\rm m}}{G M_g^2} \,\sim \, 10^{12} \end{equation} Importantly, the time $t$ drops out of this relation because the temperature and the inverse average distance decrease in the same manner as a function of $t$ during the radiation era. Let us stress that this is only a very rough estimate: if the bound state is meta-stable, the collapse can be delayed in such a way that a larger number of (anti-)gravitinos can be accrued. With (\ref{N}) the mass of the resulting mini-black hole comes out to be \begin{equation}\label{Mlump} m_{initial} \,\sim\, 10^{12} M_g \,\sim \, 10^{12} M_{\rm BPS} \, \sim \, 10^3\, {\rm kg} \end{equation} By formula (\ref{evap}) the Hawking evaporation time for a black hole of this mass would be \begin{equation}\label{evap1} t_{evap} (m_{initial}) \,\sim \, 10^{-7}\, {\rm s} \end{equation}
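The estimates (\ref{N}) and (\ref{Mlump}) amount to simple arithmetic and are easily reproduced; a sketch in SI units, again with the assumed scale $M_g \sim 10^{-9}\,$kg:

\begin{verbatim}
# Collapse criterion (N): N > T_eq * d / (G * M_g^2),
# with T_eq ~ 1 eV and <d> ~ 1e2 m at t = t_eq.
G    = 6.67e-11          # m^3 kg^-1 s^-2
T_eq = 1.6e-19           # J, i.e. ~1 eV
d    = 1.0e2             # m, average separation at t_eq
M_g  = 1.0e-9            # kg, assumed gravitino mass scale

N_min = T_eq * d / (G * M_g**2)
print(N_min)             # ~2e11, i.e. of order 1e12 as in (N)
print(N_min * M_g)       # ~2e2 kg, of the order of (Mlump)
\end{verbatim}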
However, it is important to note that Hawking evaporation is now not the only process that must be taken into account. There is a competing process which can in fact stabilize the mini-black holes and their further evolution: it is the presence of the dense and hot plasma surrounding the black hole that can feed the growth of small black holes. More precisely, Hawking evaporation competes with accretion according to the following equation: \begin{equation}\label{Mdot} \frac{dm(t)}{dt} \,=\, C_0 G^2 \rho_{rad}(t)\cdot m^2(t) \, -\, C_1 \frac{M_{\rm Pl}^3}{t_{\rm Pl}}\cdot \frac1{m^2(t)} \end{equation} where $C_0$ and $C_1$ are constants of ${\mathcal O}(1)$. The first term on the r.h.s. originates from the flux of the infalling radiation from the surrounding plasma, which is $\propto 4\pi R^2(t) \rho_{rad}(t) c$ (with $c=1$) for a (time-dependent) black hole of radius $R(t) =2 Gm(t)$ (a `fudge factor' $C_0 ={\mathcal O}(4\pi)$ can be included to account for the fact that not all the surrounding radiation falls in radially, but this is not essential for our argument). The second term in (\ref{Mdot}) governs Hawking evaporation. For Hawking evaporation taking place in empty space we can ignore the first term on the r.h.s. of (\ref{Mdot}), and formula (\ref{evap}) follows directly. In that case any microscopic black hole would disappear, and not be able to grow into a macroscopic black hole. The crucial difference with this standard scenario is embodied in the first term on the r.h.s. of (\ref{Mdot}) (which is usually disregarded in discussions of Hawking evaporation). This term takes into account the fact that the decay takes place in an extremely hot surrounding plasma whose density varies with time as $8\pi G\rho_{rad}(t) = 3/(4t^2)$. At the initially extremely high temperatures of the radiation era the accretion can thus out-compete Hawking evaporation {\em even for very small black holes}. In terms of temperature, the break-even point lies at $T_{rad} = T_{Hawking}$, where the radiation temperature $T_{rad}(t)$ at time $t$ can be read off from (\ref{Ekin}). The simple criterion for black hole accretion to overcome Hawking radiation thus reads \begin{equation}\label{Trad} T_{rad} \,>\, T_{Hawking} \end{equation} This inequality is easy to achieve in the initially very dense and hot plasma where $T_{rad} \sim 10^{17} \,\rm GeV$. At later times the balance becomes delicate, because from (\ref{Mdot}) it follows that $m(t)$ can run away in either direction. This can also be directly seen by setting to zero the r.h.s. of (\ref{Mdot}): at time $t$ the break-even point occurs for \begin{equation}\label{even} m_0^4(t) \,\sim \, \frac{M_{\rm Pl}^3}{t_{\rm Pl}} \cdot \frac1{G^2 \rho_{rad}(t)} \,\propto \, t^2 \end{equation} where we have used $\rho_{rad} = \frac{3}{32\pi G} t^{-2}$. Hence, a mini-black hole formed at time $t$ with initial mass $m_{initial} > m_0(t)$ will be able to survive and can start growing, whereas those of smaller mass decay. Consequently, the earlier the bound state is formed, the smaller its initial mass can be. From these considerations and the time-independent estimate (\ref{Mlump}) we can also derive a rough upper bound on the formation time, after which the radiation temperature is too low to stabilize mini-black holes against Hawking evaporation. The maximal time $t_{max}$ is found by setting $m_0(t_{max}) \sim 10^3\,{\rm kg}$ from (\ref{Mlump}), which yields the value \begin{equation}\label{tmax} t_{max} = 10^{-20}\,{\rm s} \end{equation} Mini-black holes formed after this time can be expected to decay by Hawking radiation because $T_{rad}(t) < T_{Hawking}$ for $t > t_{max}$.
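The value (\ref{tmax}) indeed follows from (\ref{even}) once ${\mathcal O}(1)$ coefficients are restored. In the sketch below we choose, for definiteness, radial infall for the accretion term and the standard photon-only Hawking rate for the evaporation term; these choices of $C_0$ and $C_1$ are ours, and different ${\mathcal O}(1)$ choices shift $t_{max}$ only by an ${\mathcal O}(1)$ factor.

\begin{verbatim}
# Break-even mass m_0(t) of eq. (even), where accretion balances
# Hawking evaporation, in SI units:
#   accretion:   dm/dt = +16*pi*G^2*rho(t)*m^2/c^3
#   evaporation: dm/dt = -hbar*c^4/(15360*pi*G^2*m^2)
# with the radiation-era mass density rho(t) = 3/(32*pi*G*t^2).
import math
from scipy.optimize import brentq

hbar, G, c = 1.05e-34, 6.67e-11, 3.0e8

def rho(t):
    return 3.0 / (32 * math.pi * G * t**2)

def m0(t):   # equate the two rates and solve for m
    return (hbar * c**7 / (245760 * math.pi**2 * G**4 * rho(t)))**0.25

# latest formation time for which a 1e3 kg hole (Mlump) still grows:
t_max = brentq(lambda t: m0(t) - 1.0e3, 1e-30, 1.0)
print(t_max)   # ~3e-20 s, cf. (tmax)
\end{verbatim}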
In summary, the usual argument that small black holes would quickly decay via (\ref{evap}) no longer applies as long as the inequality (\ref{Trad}) is obeyed. Note that we invoke the `empirical' formula (\ref{Mdot}) mainly to argue that mini-black holes can form in such a way as to remain stable against Hawking evaporation at early times. In fact, this reasoning can be made more quantitative by substituting $\rho_{rad} = \frac{3}{32\pi G} t^{-2}$ into (\ref{Mdot}), which turns this equation into a simple differential equation that can be studied numerically. However, because this formula is only approximate, once the stability of the mini-black hole is ensured we can switch to a classical description by means of an exact solution of Einstein's equations describing a Schwarzschild black hole in a radiation dominated universe, in order to follow its further evolution. This will be explained in the next section. \section{Growth of black holes in radiation dominated universe} Having motivated the assumption that small black holes stable against Hawking evaporation have formed in sufficient numbers early in the radiation dominated era, we can proceed to study their evolution in this background. For this purpose we employ an {\em exact} solution of the Einstein equations, rather than the `phenomenological' formula (\ref{Mdot}). This solution can be regarded as a variant of the so-called McVittie solution \cite{McV0}; for more recent literature, see {\em e.g.} \cite{McV1,McV2,McV3,McV4,McV5,McV6} and references therein. The solution that we require here is conveniently presented in terms of conformal coordinates, by starting from the general ansatz \begin{equation}\label{metric0} {\rm d} s^2\,=\, a(\eta)^2\left[- C(\eta,r){\rm d} \eta^2+ \frac{{\rm d} r^2}{C(\eta,r)}+r^2{\rm d}\Omega^2\right] \end{equation} where $\eta$ is conformal time, which we use from now on as the time coordinate. $a(\eta)$ is the scale factor and $C=C(\eta,r)$ some function to be specified. We will discuss the equations for the general ansatz elsewhere, but for the present purposes it is enough to restrict to the special case where $C$ depends only on the radial coordinate, {\em i.e.} $C(\eta,r) \equiv C(r)$. Furthermore, since we are here mainly interested in perfect fluids, for which $a(t) \sim t^{2/(3(w+1))} \sim \eta^{2/(3w+1)}$, and more specifically, a radiation dominated universe, we right away specialize the scale factor to be \begin{equation} a(\eta)=A\eta\quad \Longleftrightarrow \quad t=\frac12 A \eta^2 \, . \end{equation} where in our Universe $A \sim 4\cdot 10^{-5}\,$s$^{-1}$ (while $a(\eta)$ is dimensionless).
With these assumptions it is straightforward to compute the non-vanishing components of the Einstein tensor, and hence the components of the energy-momentum tensor, with the result \begin{eqnarray}\label{Tmn} 8\pi G \,T_{tt}(\eta,r) \,&=& \, - \frac1{\eta^2 r^2} \Big( C(r)C'(r) r\eta^2 + C^2(r) \eta^2 - C(r) \eta^2 - 3r^2 \Big) \nonumber\\[2mm] 8\pi G\, T_{tr}(\eta,r) \,&=&\, \frac{C'(r)}{\eta C(r)} \nonumber\\[2mm] 8\pi G\, T_{rr}(\eta,r) \,&=&\, \frac1{C(r)^2 r^2\eta^2} \Big( C(r) C'(r) r\eta^2 + C(r)^2 \eta^2 - C(r)\eta^2 + r^2\Big) \nonumber\\[2mm] 8\pi G\, T_{\theta\theta}(\eta,r) \,&=&\, \frac1{2C(r)\eta^2} \Big( C(r) C''(r) r^2 \eta^2 + 2C(r) C'(r) r\eta^2 + 2r^2 \Big) \nonumber\\[2mm] 8 \pi G\, T_{\varphi\vp}(\eta,r) \,&=&\, 8\pi G\, \sin^2\!\theta \, T_{\theta\theta}(\eta,r) \end{eqnarray} where, of course, $C'(r) \equiv dC(r)/dr$, {\em etc.} At this point, this is just an identity (the so-called `Synge trick' \cite{George}); in fact, such solutions trivially exist for {\em any} profile of the scale factor $a(\eta)$. The non-trivial part of the exercise is therefore in ascertaining that the energy-momentum tensor resulting from this calculation does make sense {\em physically}. The requisite condition for a radiation dominated universe, stated in the most general and coordinate-independent way, is the vanishing of the trace of the energy-momentum tensor, {\em viz.} \begin{equation}\label{Tmm} T^\mu{}_\mu(\eta,r) \,=\, \frac1{A^2\eta^2 r^2} \left[ \frac{{\rm d}^2}{{\rm d} r^2} \big( r^2 C(r) \big) - 2 \right] \, \stackrel{!}{=}\, 0 \end{equation} This condition is solved by \begin{equation}\label{Cr} C(r) \,=\, 1 - \frac{2Gm}{r} + \frac{G{\mathcal Q}^2}{r^2} \end{equation} with two integration constants $m$ (mass) and ${\mathcal Q}$ (charge). Remarkably, the metric (\ref{metric0}) comes out to be conformal to the Reissner-Nordstr\"om metric not as a result of imposing the Einstein equations with an electromagnetic point charge source, but with the weaker and more general conformality constraint (\ref{Tmm})! Taking ${\mathcal Q} =0$ for simplicity (and also because we do not expect these black holes to carry significant amounts of electrical charge), the resulting solution describes the exterior region ($r>2Gm$) of a Schwarzschild black hole in a radiation dominated universe. We emphasize that there is absolutely no issue with the causal structure of this solution, because the conformal equivalence ensures that (for $\eta >0$) the global structure of the space-time outside the would-be horizon $r=2Gm$ is the same as for the Schwarzschild solution, and the tracelessness of the energy momentum tensor holds right up to the would-be horizon (the black hole interpretation is also supported by the arguments in \cite{McV4,McV5}). However, there are some subtleties (apart from issues related to de Sitter space and cosmological horizons discussed in \cite{McV1,McV2,McV3,McV4,McV5}, which are of no concern here) which have to do with the structure of the energy-momentum tensor. Namely, as we show in the following section, closer inspection reveals the existence of an apparent `superluminal barrier' surrounding the surface $r=2Gm$, and shielding the would-be horizon from the outside observer.
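Incidentally, the condition (\ref{Tmm}) and its solution (\ref{Cr}) can be checked mechanically; an illustrative sketch using sympy:

\begin{verbatim}
# Verify that C(r) = 1 - 2*G*m/r + G*Q^2/r^2 solves the tracelessness
# condition (Tmm), i.e. d^2/dr^2 [ r^2 * C(r) ] = 2.
import sympy as sp

r, G, m, Q = sp.symbols('r G m Q', positive=True)
C = 1 - 2*G*m/r + G*Q**2/r**2
print(sp.simplify(sp.diff(r**2 * C, r, 2)))   # prints: 2
\end{verbatim}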
For the physical mass of the black hole we take the formula \begin{equation}\label{mass} \frac1{2\pi} \int {\rm d}\theta a(\eta) r \Big|_{r=2Gm}\,=\, 2 Gm a(\eta)\quad \Rightarrow \quad m(\eta)=m a(\eta) \end{equation} keeping in mind that the observer at infinity will in addition measure the integrated matter density outside the apparent horizon, so the above formula is really a lower bound on the total mass accretion. The total mass therefore grows (at least) linearly with the scale factor, and this is also consistent with the fact that $T_{tr} \neq 0$. The formula (\ref{mass}) gives (with $\eta = \eta_{initial}$) \begin{equation} m=\frac{m_{initial}}{a_{initial}} \end{equation} where $m_{initial}$ is any value compatible with the lower bound following from (\ref{even}), and $a_{initial}$ is the scale factor at the time when the black hole forms. The mass accretion described by (\ref{mass}) is also evident from the non-vanishing mixed component $T_{tr}$ in (\ref{Tmn}) which states that there is energy flow into the black hole from the surrounding radiative medium. During the radiation era there is, in fact, an unlimited supply of `food' for the black hole to swallow. This supply will dry up only when inhomogeneities are formed, after which accretion proceeds in the more standard fashion. Evolving the initial mass (\ref{Mlump}) with the formula (\ref{mass}) we calculate the final mass at the equilibrium time (assuming that $\eta_{initial}\sim \eta_{\rm Pl}$) \begin{equation}\label{mfinal} m_{final} \,\sim \, m_{initial} \left(\frac{\eta_{eq}}{\eta_{\rm Pl}} \right) \,\sim\, 10^{30}\ {\rm kg} \,\sim\, M_{\odot} \end{equation} with $\eta_{eq} \sim 2\cdot 10^8\,$s and $\eta_{\rm Pl} \sim 10^{-19}\,$s. This estimate applies to mini-black holes formed very early in the radiation era (for $\eta\ll \eta_{max}$). The same calculation for a mini-black hole at the latest possible time $\eta_{max}$ given by (\ref{tmax}) also yields a lower bound for the final mass of the primordial black hole upon exit from the radiation era, \begin{equation}\label{Mfinal} \sqrt{m_{initial} M_{\rm Pl}} \,<\, m_{final} \,<\, M_{\rm Pl} \left(\frac{\eta_{eq}}{\eta_{\rm Pl}}\right) \end{equation} or \begin{equation} 10^{11}\,{\rm kg} \,<\, m_{final} \, < \, M_\odot \end{equation} This inequality restricts the possible mass range for primordial black holes at the equilibrium time. The above analysis can be repeated for matter dominated and exponentially expanding universes, respectively. In these cases we need the angular Killing vectors $k^\mu_\theta\partial_\mu $ and $k^\mu_\varphi\partial_\mu$ to state the pertinent conditions in a generally covariant way. In the matter dominated era we have \begin{equation} a(\eta)=B^2\eta^2\;\; ,\quad T_{\mu\nu}k_\theta^\mu k_\theta^\nu\,=\, T_{\mu\nu}k_\varphi^\mu k_\varphi^\nu\,=\,0\quad \Rightarrow\; C(r)\,=\,1-\frac{2Gm}{r} \end{equation} where we utilize the Killing vectors to state the condition of vanishing pressure. Because this solution does not allow for a non-vanishing charge, this provides another reason for setting ${\mathcal Q} = 0$ in (\ref{Cr}), in order to allow for a smooth transition from the radiation dominated to the matter dominated phase. From this we see that the primordial black holes will continue to grow with the scale factor also in the early part of the matter dominated phase, absorbing radiation {\em and} matter, as long as there are no significant inhomogeneities.
After the distribution of matter develops inhomogeneities, the further evolution of black holes proceeds in the standard fashion. In other words, the range of mass values in (\ref{Mfinal}), corresponding to time $t= t_{eq}$, only represents a lower limit, as the black holes will continue to accrete mass in significant amounts until inhomogeneities start forming. Finally, for an exponentially expanding universe we have \begin{equation} a(\eta)=\frac{1}{H(\eta_\infty - \eta)} \;\;, \quad T^\mu{}_\mu \,=\, 2\big(T_{\mu\nu}k_\theta^\mu k_\theta^\nu+T_{\mu\nu}k_\varphi^\mu k_\varphi^\nu \big) \quad \Rightarrow\; C(r)\,=\, 1-\frac{2Gm}{r}-C_Hr^2 \end{equation} Note that for $C_H \neq 0$ this is {\em not} the well-known Kottler solution (that is, de Sitter space in static coordinates). We stress again that for $C(\eta,r) = C(r)$ and with ${\mathcal Q}=0$ and $C_H = 0$ the causal structure of the space-time is the same as for an ordinary black hole space-time, and only in this case can we have a smooth transition between all phases. \section{Energy-momentum tensor} To gain further insight into the physical properties of our solution let us examine the energy-momentum tensor (\ref{Tmn}) a bit more closely. Following \cite{Weinberg} we parametrize the latter as \begin{eqnarray}\label{Tmn1} T_{\mu\nu} \,&=&\, p g_{\mu\nu} + (p+\rho) u_\mu u_\nu - \Pi_{\mu\rho} Q^\rho u_\nu - \Pi_{\nu\rho} Q^\rho u_\mu \nonumber\\[2mm] && - \, \zeta_1 \, \Pi_\mu{}^\rho \Pi_\nu{}^\sigma \left (\nabla_\rho u_\sigma + \nabla_\sigma u_\rho - \frac23 g_{\rho\sigma} \nabla^\lambda u_\lambda \right) \, - \, \zeta_2 \, \Pi_{\mu\nu} \nabla^\lambda u_\lambda \end{eqnarray} where $u^\mu u_\mu = -1$, $Q^\mu$ is the heat flow, and $\zeta_1$ and $\zeta_2$ are the shear and bulk viscosity, respectively. All variables are assumed to depend on $\eta$ and $r$ only. The projector is defined by \begin{equation} \Pi_{\mu\nu} = g_{\mu\nu} + u_\mu u_\nu \end{equation} We will now match the energy-momentum tensor (\ref{Tmn}) to this formula. For simplicity we assume \begin{equation}\label{visc} \zeta_1 = \zeta_2 = 0 \end{equation} We also write $q_\mu \equiv \Pi_{\mu\nu} Q^\nu$ (so that $u^\mu q_\mu = 0$), whereby the energy-momentum tensor simplifies to \begin{equation}\label{Tmn3} T_{\mu\nu} \,=\, p g_{\mu\nu} + (p+\rho) u_\mu u_\nu - q_\mu u_\nu - q_\nu u_\mu \end{equation} The assumption of vanishing viscosity coefficients (\ref{visc}) is certainly justified after baryogenesis (that is $t > 10^{-12}\,$s), when the number of photons by far exceeds the number of other particles in the plasma (for instance, $n_\gamma \sim 10^{10}\, n_{b}$). While the condition (\ref{Tmm}) leaves $\zeta_1$ undetermined, we could in principle also admit a non-vanishing $\zeta_2 \neq 0$, that is, self-interacting conformal matter ({\em e.g.} self-interacting massless scalar fields). In that case the relation $\rho=3p$ derived below would no longer hold even with vanishing $T^\mu{}_\mu$.
For the comparison we write out (\ref{Tmn}) explicitly for the solution (\ref{Cr}) (with ${\mathcal Q}=0$), which gives \begin{eqnarray}\label{Tmn2} 8\pi G \, T_{tt} \,&=&\, \frac{3\,\dot a^2}{a^2} \,=\, \frac3{\eta^2} \nonumber\\[2mm] 8\pi G \, T_{rr} \,&=&\, \frac{r^2( -2a \ddot a + \dot a^2)}{a^2(r-2Gm)^2} \,=\, \frac{r^2}{\eta^2(r- 2Gm)^2} \nonumber\\[2mm] 8 \pi G \, T_{rt} \,&=&\, \frac{2Gm \dot a}{ar(r-2Gm)} \,=\, \frac{2Gm}{r\eta(r-2Gm)} \nonumber\\[2mm] T_{\theta\theta} \,&=&\, r^2 T_{rr} \quad , \quad T_{\varphi\vp} = r^2 \sin^2\!\theta\, T_{rr} \end{eqnarray} Comparing (\ref{Tmn2}) and (\ref{Tmn3}) we read off the unknown quantities on the r.h.s. of (\ref{Tmn3}); we find \begin{eqnarray}\label{uq} u_\mu(\eta,r) \,&=&\, A\eta \left(\sqrt{\frac{r - 2Gm}{r}} \cosh\xi\,,\, \sqrt{\frac{r}{r -2Gm}}\sinh\xi\,,\,0\,,\,0\right) \nonumber\\[2mm] q_\mu (\eta,r)\,&=&\, A\eta q(\eta,r) \left( \sqrt{\frac{r - 2Gm}{r}}\sinh\xi \,,\, \sqrt{\frac{r}{r -2Gm}}\cosh\xi\,,\,0\,,\,0\right) \end{eqnarray} where \begin{equation} \tanh \xi \,=\, \frac{Gm\eta}{r^2} \qquad\quad (\Rightarrow \; \xi>0) \end{equation} and \begin{equation}\label{q} q(\eta,r) \,=\, 2p(\eta,r) \tanh\xi \end{equation} (with $m \equiv m_{initial}$). The density and pressure are given by \begin{equation}\label{rho} \rho(\eta,r) = 3 p(\eta,r) \qquad \mbox{with} \quad p(\eta,r) \,=\, \frac{r}{A^2 \eta^2 (r- 2Gm)} \end{equation} as expected for a radiation dominated universe. We stress that there are no pathologies here of the kind encountered in some of the previous literature on McVittie-type solutions. In particular, the energy density $\rho(\eta,r)$ is strictly positive for $r> 2Gm$ and at all times $\eta > 0$. Moreover, because $q$ is positive from (\ref{q}), the radial component of $q^\mu$ in (\ref{uq}) is also positive, which means that the radial heat flow is {\em inward directed}, explaining why the mass of the black hole {\em grows} with time. To keep $\xi$ real we must demand \begin{equation} \tanh \xi = \frac{Gm\eta}{r^2} \,< \,1 \quad \Rightarrow \quad r\, >\, \sqrt{ Gm\eta} \quad (> 2Gm ) \end{equation} For $r^2 \rightarrow Gm\eta$ the average velocity of the infalling matter reaches the speed of light, and the expansion (\ref{Tmn1}) in powers of $u_\mu$ and its derivatives breaks down. Consequently, while the solution (\ref{metric0}) remains valid down to $r=2Gm$, the expressions (\ref{uq}), (\ref{q}) and (\ref{rho}) become meaningless in the region $2Gm < r < \sqrt{Gm\eta}\,$ because of apparently superluminal propagation (similar conclusions regarding superluminality were already reached in \cite{McV1}). Likewise the components of the heat flow $q^\mu$ diverge for $\tanh\xi\rightarrow 1$, indicating an apparent divergence of the temperature in this limit. This is also an unphysical feature in view of the breakdown of the expansion (\ref{Tmn1}). Physically it is tempting to interpret this result as implying that the would-be horizon is shielded from the outside observer by a `blanket' at $r=\sqrt{Gm\eta}$, whose extension grows with cosmic time $\eta$. However, in recent work \cite{HS} it is argued that the gradient expansion (\ref{Tmn1}) must be replaced by a different expansion; adapting these arguments to the present case we conclude that the solution can, in fact, remain meaningful all the way down to $r=2Gm$. 
Because of the breakdown of the expansion (\ref{Tmn1}), the apparent `firewall' ($\equiv$ divergent energy density $\rho$) on the would-be horizon $r=2Gm$ is likewise an unphysical feature (we have checked that by re-instating the $\eta$-dependence in the metric coefficient $C(\eta,r)$ and setting up an appropriate expansion near the would-be horizon one can eliminate this divergence). This is just as well, because otherwise the total mass at infinity (which includes the integrated energy density for $r>2Gm$) would diverge, as $\rho(\eta,r)$ has a non-integrable singularity at $r = 2Gm$. At any rate these arguments show that the actual mass value for the black hole will exceed the estimated value (\ref{mfinal}) if the matter contributions outside the horizon are taken into account, thus further enhancing the growth of primordial black holes. \section{Conclusions} In this paper we have proposed a new mechanism to explain the emergence of supermassive primordial black holes during the radiation period. The key element here is the conjectured existence of very massive particles stable against decay into Standard Model matter, that can `condense' into bound states sufficiently early in the radiation period, which can subsequently collapse to black holes. Our proposal is chiefly motivated by the possible explanation of the observed spectrum of 48 spin-$\frac12$ fermions in the Standard Model that was put forward in our previous work \cite{MN2,KN,MN0}, and is thus subject to independent falsification if {\em any} new fundamental spin-$\frac12$ fermions were to show up in future collider searches. In addition, we have derived a new solution of Einstein's equations describing the growth of black holes in a dense and hot plasma through inflow of radiation. This exact solution could also be useful in other contexts. \vspace{1cm} \noindent {\bf Acknowledgments:} We would like to thank B.F. Schutz for correspondence and helpful comments, and the referee for suggesting several improvements in the original version. K.A.M. thanks AEI for hospitality and support; he was partially supported by the Polish National Science Center grant DEC-2017/25/B/ST2/00165. The work of H.N. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 740209). \vspace{0.8cm}
\section{Introduction} Networks are a convenient abstraction in many different areas, such as social sciences, biology, and the world wide web. A common structure in these real-world networks is a community, a group of nodes that are tightly connected and often share other properties, for example, biological function in a protein interaction network. Imagine that you are trying to find such a community of nodes in a network. If the network is very large, it becomes too expensive to look at all nodes and edges in the network. Therefore local methods are needed. Local community detection aims to find only one community around a given set of seed nodes, by relying on local computations involving only nodes relatively close to the seed \citep{KloumannKleinberg2014,Andersen2006local}, in contrast to global community detection, where all communities in a network have to be found. For global community detection, it is possible to treat the problem of finding all communities in a network as a probabilistic inference problem. This puts global community detection on a solid foundation, and makes it clear what a community is, and how these communities manifest in the network structure. But most algorithms for local community detection operate by optimizing an ad-hoc objective function such as conductance \citep{Andersen2006local,Yang2012,Li2015}. In this paper we will fill this gap, and propose a probabilistic model for local community detection. Our contributions can be summarized as follows: \begin{enumerate} \item We introduce an approximation technique for using global models to perform local community detection. \item We introduce the first method for local community detection based on a generative model by using this approximation. \item We propose two algorithms for local community detection, based on approximations of the stochastic block model and of the degree-corrected stochastic block model. \item We provide a probabilistic interpretation of conductance, as limit behavior of the approximate degree-corrected stochastic block model. \item We show that the approximate stochastic block model is a highly competitive algorithm, which outperforms the state-of-the-art on three of five real-life benchmark datasets. \end{enumerate} \subsection{Related work} \subsubsection{Local community detection} Local network community detection methods have largely focused on optimizing conductance, which is a measure of the quality of a graph cut. Empirically, conductance has been shown to be a good quality metric for communities in real-world networks \citep{Yang2012}, in the sense that real-world communities often have a lower conductance than other sets of nodes. Because community detection is computationally hard \citep{Fortunato2010,ShiMalik2000NormalizedCut}, several different heuristics have been developed. A common approach is to use spectral partitioning, which involves finding the dominant eigenvector of a random walk kernel. \citet{Andersen2006local} showed that communities with a good conductance can be computed efficiently in this way. In their method nodes are added to the community in order of decreasing personalized pagerank score, and the community along this `sweep' with the lowest conductance is returned. Computing this personalized pagerank is a global operation, but efficient local approximations are possible that only involve nodes near the seed. Several variants of this sweep method have been proposed.
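To make the sweep concrete, the following sketch (our illustration, not the implementation of \citet{Andersen2006local}, which moreover computes the personalized pagerank scores with an efficient local push procedure) orders nodes by decreasing score and returns the prefix with the lowest conductance, here taken as cut size divided by volume:

\begin{verbatim}
# Minimal sweep cut: order nodes by decreasing score (e.g. an
# approximate personalized pagerank vector) and return the prefix
# of the ordering with the lowest conductance cut(S)/vol(S).
def sweep_cut(adj, scores):
    """adj: dict node -> set of neighbours; scores: dict node -> float."""
    order = sorted(scores, key=scores.get, reverse=True)
    comm, cut, vol = set(), 0, 0
    best, best_phi = set(), float('inf')
    for u in order:
        inside = len(adj[u] & comm)      # edges from u into the prefix
        cut += len(adj[u]) - 2 * inside  # new cut edges minus absorbed
        vol += len(adj[u])
        comm.add(u)
        if cut / vol < best_phi:
            best, best_phi = set(comm), cut / vol
    return best, best_phi
\end{verbatim}

Since only nodes with a non-zero approximate pagerank score enter the ordering, such a sweep touches only the neighborhood of the seed, which is what makes the method local.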
\Citet{Kloster2014} propose an alternative to the personalized pagerank score, based on the heat kernel instead of random walks. \Citet{Yang2012} propose to find the first local optimum of conductance instead of a global optimum. Other heuristics involve trying to find multiple pagerank-like vectors and restarting the method from different neighborhoods around the seed \citep{Li2015}. However, since conductance is based on graph cuts, a good value is often achieved by cutting a network into roughly equal-sized parts, which is undesirable. To limit the size of communities, a cut-off on the personalized pagerank score can be used \citep{Andersen2006local}; also, variations of the sweep methods have been proposed that stop at earlier local optima \citep{Yang2012}. \subsubsection{Global community detection} Many different global community detection methods have been developed for different classes of networks and different community structures. For a complete overview, we refer the reader to the surveys by \citet{Fortunato2010,Xie2013OverlappingSurvey}. Here we focus on probabilistic models for global community detection. The simplest are the stochastic block models \citep{Holland1983sbm,Anderson1992sbm}, which partition the nodes into communities, with varying probabilities of edges. These block models produce networks that are very different from real networks; in particular, the distribution of degrees is very different. To more accurately model the node degrees, \citet{Karrer2011} have proposed the degree-corrected stochastic block model (DC-SBM). In this model, they add an extra parameter to each node, which controls the likelihood of edges to that node, and hence the node's degree. This extra complexity comes at the cost of making the model more difficult to fit, and so degree correction might not be appropriate for all networks \citep{Yan2014DCBM}. An issue with the stochastic block model is that the number of communities has to be fixed because with more communities there are more parameters in the model, which makes it impossible to compare likelihoods. The Infinite Relational Model \citep{Kemp2006IRM} solves this problem by assuming an infinite number of communities in combination with a Chinese restaurant prior over community structures. The model of \citet{NewmanLeicht2007mixture} goes one step further, and has a parameter for each combination of node and community, indicating the likelihood of edges from nodes in that community to a particular other node. This is similar to models based on non-negative matrix factorization \citep{Psorakis2011NMF,Ball2011}. These more complex models allow for nodes to be in more than one community. Aside from these flat models, hierarchical models have also been developed for probabilistic network community detection \citep{Zhang2007,Blundell2013BayesianHierarchical}. With all probabilistic models there is the question of inference, that is, how to find the parameters or distribution of parameters that accurately model the observed data. Common approaches are to use maximum likelihood \citep[used by e.g.][]{Karrer2011,Yan2014DCBM}, variational Bayes \citep[used by][]{Hofman2008}, and Markov chain Monte Carlo sampling \citep[used by e.g.][]{Kemp2006IRM}. More recently, there has been work using loopy belief propagation for inference in stochastic block models \citep{Decelle2011BeliefPropagationBlockModel,ZhangMooreNewman2016UnequalGroups}.
There it has also been noted that these models exhibit a phase transition: beyond a certain point in parameter space it is impossible to recover the true community structure, even if it was drawn from the model itself. Furthermore, this transition is sharp for large networks. \subsection{Problem description} Before continuing, we will formalize the problem. The network of interest is represented as an unweighted undirected graph without self-loops. Let $N$ be the number of nodes in this graph, and $M$ the number of edges. The graph can be represented by an adjacency matrix $A$, where $a_{ij}=a_{ji}=1$ if there is an edge between nodes $i$ and $j$, and $a_{ij}=0$ otherwise. Furthermore $\sum_{i=1}^N \sum_{j=1}^N a_{ij} = 2M$. The local community detection problem is now to find the community $c_s$ that contains a given seed node $s$, while inspecting only nodes and edges in or near that community. We will only concern ourselves with a single seed node in this paper, but this is not essential. When working in a probabilistic setting, the goal becomes to find the most likely community that contains the seed, $\argmax_{c_s} \P(c_s \mid A,s)$. Even computing this probability entails a marginalization over all the other clusters in the graph. So as a first simplification we will instead search for the most likely clustering, $\argmax_{C} \P(C \mid A,s)$, and report the community $c_s$ in this clustering. We can assume that the seed is chosen independently from the graph, and also independently from the clustering, so we have that $\P(C \mid A,s) \propto \P(C) \P(A \mid C)$. To find the community containing the seed we only need to maximize this quantity, and then to find the community $c_s \in C$ that contains the seed. \section{The Stochastic Block Model} We first model the community structure of all nodes in the graph using the stochastic block model \citep{Karrer2011}. In this global model, each node $i$ is in exactly one community, which we denote as $c_i$, so the communities form a partition of the set of nodes. The edges of the graph are generated independently based on these communities. In the standard stochastic block model the probability of an edge between nodes $i$ and $j$ is $\P(a_{ij}=1) = \pi_{c_ic_j}$, where the $\pi$ are parameters. For simplicity, we only consider two different values for $\pi$, $\pi_{c,c}={\lambda_\subscripti}$ for edges inside clusters and $\pi_{cc'}={\lambda_\subscriptb}$ for edges between different clusters $c \neq c'$, so \begin{align*} \P(a_{ij}=1) =\pi_{c_ic_j} &= \begin{cases} {\lambda_\subscripti} & \text{ if } c_i = c_j \\ {\lambda_\subscriptb} & \text{ if } c_i \neq c_j. \end{cases} \end{align*} Since we do not know the value of these parameters ${\lambda_\subscripti}$ and ${\lambda_\subscriptb}$, we put a conjugate Beta prior on them, \begin{align*} {\lambda_\subscripti},{\lambda_\subscriptb} &\sim \beta({\alpha^+},{\alpha^-}). \end{align*} In this simpler variant of the stochastic block model, the number of parameters of the model does not depend on the number of communities. Hence, in contrast to most other work on stochastic block models, we do not have to fix the number of communities. Instead, we use a prior over partitions, which allows for a varying number of communities of varying sizes. It is well known that community sizes in real-life networks follow a power-law distribution \citep{Palla2005-clique-percolation}.
Hence we adopt the prior \begin{align} \label{eq:powerlaw} \P(C) \propto \prod_{c \in C} (\gamma-1)|c|^{-\gamma}, \end{align} where $C$ ranges over all possible partitions of the set of $N$ nodes. Note that the particular choice of prior distribution is not critical to the rest of this work, and for other applications other priors might make sense. In particular, a common alternative choice is the Chinese Restaurant Process. \subsection{Inference} In the basic stochastic block model \begin{multline*} \P(A \mid C) = \expect_{{\lambda_\subscripti},{\lambda_\subscriptb}}\Bigl[ \prod_{i<j} \pi_{c_ic_j}^{a_{ij}} (1-\pi_{c_ic_j})^{1-a_{ij}} \Bigr] \\ = \expect_{{\lambda_\subscripti},{\lambda_\subscriptb}}\Bigl[ {\lambda_\subscripti}^{{\alpha_\subscripti^+}} (1-{\lambda_\subscripti})^{{\beta_\subscripti}} {\lambda_\subscriptb}^{{\alpha_\subscriptb^+}} (1-{\lambda_\subscriptb})^{{\beta_\subscriptb}} \Bigr], \end{multline*} where ${\alpha_\subscripti^+} = \sum_{i<j,c_i=c_j} a_{ij}$, ${\beta_\subscripti} = \sum_{i<j,c_i=c_j} (1-a_{ij})$ and ${\alpha_\subscriptb^+}$ and ${\beta_\subscriptb}$ are the corresponding sums over $c_i \neq c_j$. This likelihood has the same shape as the beta distribution, so we can calculate the expectation exactly, to get \begin{multline} \label{eq:sbm-lh} \P(A \mid C) = \\ \quad \frac{\Beta({\alpha^+}+{\alpha_\subscripti^+},{\alpha^-}+{\beta_\subscripti})\Beta({\alpha^+}+{\alpha_\subscriptb^+},{\alpha^-}+{\beta_\subscriptb})}{\Beta({\alpha^+},{\alpha^-})^2}. \end{multline} Multiplying this by the prior on clusterings \eqref{eq:powerlaw} gives the posterior probability $\P(C|A)$ up to a normalizing constant. \subsection{Local approximation} \label{sec:asbm} The likelihood in equation~\eqref{eq:sbm-lh} is still a function of the entire clustering, so to find the most likely cluster containing the seed we would need to consider clusterings of the entire graph. To obtain a local model we make an approximation based on the assumption that the graph is uniform: all clusters of the graph are similar to each other. We make this idea concrete by assuming that all clusters in the graph have approximately the same volume, the same size, and the same fraction of within community edges. Now, if the community containing the seed has $n$ nodes, while the graph has $N$ nodes, this means that there are approximately $k=N/n$ communities that are all similar to $c_s$. Furthermore, if this community has $w$ within community edges, then the parameters for the stochastic block model can be approximated as \begin{align*} \tilde\alpha_\subscripti^+ &= k w, \\ \tilde\alpha_\subscripti^- &= k n(n-1)/2 - \tilde\alpha_\subscripti^+,\\ \tilde\alpha_\subscriptb^+ &= M - \tilde\alpha_\subscripti^+, \\ \tilde\alpha_\subscriptb^- &= N(N-1)/2 - \tilde\alpha_\subscripti^+ - \tilde\alpha_\subscripti^- -\tilde\alpha_\subscriptb^+. \end{align*} With these quantities we can use equation~\eqref{eq:sbm-lh} to approximate the likelihood of the network given the community that contains the seed, and hence also the posterior probability of a community given the network, \begin{multline} \tilde P_\text{SBM}(c_s,C \mid A) = (\gamma-1)^kn^{-k\gamma} \\ \frac{\Beta({\alpha^+}+\tilde\alpha_\subscripti^+,{\alpha^-}+\tilde\alpha_\subscripti^-)\Beta({\alpha^+}+\tilde\alpha_\subscriptb^+,{\alpha^-}+\tilde\alpha_\subscriptb^-)}{\Beta({\alpha^+},{\alpha^-})^2} .
\end{multline} Note that instead of taking $k=N/n$, we might reason that, if the volume of the community containing the seed is $v$ and the graph has $M$ edges, there are approximately $k=2M/v$ communities. This is, in general, a different estimate. For the stochastic block model, it makes more sense to use $k=N/n$, because then there is no dependence on the volume of the community. \section{Degree-corrected block model} The degree distribution of the stochastic block model is not very realistic, because nodes inside a cluster have similar degrees. In many real-world networks, there are hub nodes, which have a much higher than average degree, as well as leaf nodes with a very low degree. To accurately model these phenomena, \citet{Karrer2011} have proposed the degree-corrected stochastic block model (DC-SBM). In this model, they assign an extra parameter $\deg_i$ to each node, which controls the likelihood of edges to that node, and hence the node's degree. We can then model the edges as being drawn from a Poisson distribution with mean $\deg_i \deg_j \pi_{c_ic_j}$, \begin{align*} a_{ij} &\sim P(\deg_i \deg_j \pi_{c_ic_j}) \text{ for all } i < j. \end{align*} Note that we use a Poisson distribution, which allows for weighted edges with weight larger than 1, instead of the Bernoulli distribution, because the mean might be larger than $1$. We again place conjugate priors on all parameters, which in this case follow a gamma distribution, \begin{align*} {\lambda_\subscripti},{\lambda_\subscriptb} &\sim \Gamma({\alpha},{\theta}) \\ \deg_i &\sim \Gamma({\alpha},{\theta}) \text{ for all } i. \end{align*} \subsection{Inference} For this degree-corrected model the likelihood of the network given the clustering depends on parameters $\deg$ and $\lambda$. It is not possible to integrate over these parameters analytically, and other authors have therefore chosen to maximize over the parameters instead \citep{Karrer2011,Yan2014DCBM}. Here we use a variational approximation, \begin{multline} \log \P(A \mid C) \ge L(A,C) \\= \expect_{\deg,\lambda \sim Q}\bigl[\log \P(A,\deg,\lambda \mid C) - \log Q(\deg,\lambda)\bigr] . \end{multline} As is standard, we take $Q$ to be the factorized distribution $Q(\deg,\lambda) = Q_{{\lambda_\subscripti}}({\lambda_\subscripti})Q_{{\lambda_\subscriptb}}({\lambda_\subscriptb}) \prod_{i=1}^N Q_{\deg_i}(\deg_i),$ where each component has a gamma distribution, $Q_{\deg_i}(\deg_i) = \Gamma(d_i;\alphad{i},\thetad{i})$, $Q_{{\lambda_\subscripti}}({\lambda_\subscripti}) = \Gamma({\lambda_\subscripti};{\alpha_\subscripti},{\theta_\subscripti})$. This gives us the following variational lower bound \begin{multline} L(A,C) = \sum_{i<j} \bigl(a_{ij}\log(\deg_i \deg_j \pi_{c_ic_j}) -\deg_i \deg_j \pi_{c_ic_j} \bigr)\\ - \sum_{i=1}^N D_{KL}(\alphad{i},\thetad{i} || {\alpha},{\theta}) \\ - D_{KL}({\alpha_\subscripti},{\theta_\subscripti} || {\alpha},{\theta}) \\ - D_{KL}({\alpha_\subscriptb},{\theta_\subscriptb} || {\alpha},{\theta}) , \label{eq:vb} \end{multline} where $D_{KL}(\alpha,\theta || \alpha',\theta')$ is the Kullback-Leibler divergence between two gamma distributions, and we have assumed that $a_{ij} \in \{0,1\}$, which implies that $a_{ij}!=1$.
We find that the parameters that maximize $L(A,C)$ are \begin{align*} \alphad{i} &= {\alpha} - 1 + \sum_{j=1}^N a_{ij} \\ \thetad{i} &= \Bigl( {\theta}^{-1} + \sum_{j\neq i} \pi_{c_i,c_j}\alphad{j}\thetad{j} \Bigr)^{-1} \end{align*} and \begin{align*} {\alpha_\subscripti} &= {\alpha} - 1 + \sum_{i<j,c_i=c_j} a_{ij} \\ {\theta_\subscripti} &= \Bigl( {\theta}^{-1} + \sum_{i<j,c_i=c_j} \alphad{i}\thetad{i}\alphad{j}\thetad{j} \Bigr)^{-1}, \end{align*} and similarly for ${\alpha_\subscriptb}$ and ${\theta_\subscriptb}$. There is a mutual dependence between these variables, so in practice we use several iterations of the above equations to find a good approximation of the parameters. The variational approximation gives a lower bound on the log-likelihood $\log \P(A \mid C)$. In contrast, maximum likelihood would give an upper bound. This upper bound is similar to $L(A,C)$, but it does not include the Kullback-Leibler terms. For large networks the first term of $L(A,C)$ dominates, and so the variational lower bound, the true likelihood and the maximum likelihood upper bound will all be close. \subsection{Local approximation} As before, we will make a local approximation of $L$, which depends only on the community that contains the seed. It will be convenient to define \begin{align*} \hat v &= \sum_{i \in c_s} \alphad{i} = w+n({\alpha}-1),\\ \hat M &= \sum_{i=1}^N \alphad{i} = 2M+N({\alpha}-1), \text{ and}\\ \hat K^2 &= \sum_{i \in c_s} \alphad{i}^2. \end{align*} First of all, we can approximate $\thetad{i}$ by changing the sum over all other nodes $j\neq i$ to a sum over all nodes $j$. Then $\thetad{i}$ becomes the same for all nodes in the same community, and under the assumption that all communities are the same, $\thetad{i}={\theta_d}$ is the same for all nodes, \begin{align*} {\theta_d} = \Bigl( {\theta}^{-1} + {\theta_d}{\lambda_\subscripti}\hat v +{\theta_d}{\lambda_\subscriptb}(\hat M-\hat v) \Bigr)^{-1}. \end{align*} Furthermore, because $\alphad{i}$ does not depend on the clustering, we can make the following approximation ${\tilde L}$ of $L$, \begin{multline} \label{eq:lapprox} L(A,C) \approx {\tilde L}(A,c) = 2M \log({\theta_d}) + kw (\psi({\alpha_\subscripti})+\log({\theta_\subscripti})) \\ + (M-kw)(\psi({\alpha_\subscriptb})+\log({\theta_\subscriptb})) \\ - k(\hat v^2 - \hat K^2) {\theta_d}^2 {\alpha_\subscripti}{\theta_\subscripti} - (\hat M^2 - k\hat v^2) {\theta_d}^2 {\alpha_\subscriptb}{\theta_\subscriptb} \\ + N {\alpha}\log{\theta_d} - \hat M{\theta_d}/{\theta} \\ - D_{KL}({\alpha_\subscripti},{\theta_\subscripti} || {\alpha},{\theta}) - D_{KL}({\alpha_\subscriptb},{\theta_\subscriptb} || {\alpha},{\theta}) + \kappa, \end{multline} where $\kappa$ is a constant that depends only on the network and on the priors. The likelihood of the degree-corrected model is based on the degrees of nodes and the volume of communities. So in contrast to the previous section, here it makes sense to estimate the number of communities as $k=2M/v$ instead of $N/n$. As before, we multiply this approximate likelihood by the prior, which we approximate as $\P(C)\approx (\gamma-1)^k n^{-k\gamma}$, to obtain the posterior \begin{align*} \tilde P_\text{DCBM}(c_s, C \mid A) = e^{{\tilde L}(A,c)} (\gamma-1)^k n^{-k\gamma}.
\end{align*} \section{Limiting behavior} \label{sec:limiting-behavior} The premise of local community detection is to find a community without considering the entire graph, or even a significant portion of the graph. This is only possible if the community is small compared to the graph. We can take this assumption one step further, and consider what happens if the graph becomes infinitely large compared to the cluster. Therefore we take the limit of the approximate likelihood as $N \to \infty$, assuming that the average degree $M/N$ remains constant. With the stochastic block model from \secref{sec:asbm} we get that \begin{equation*} \lim_{N \to \infty} \frac{\log \tilde P_\text{SBM}(c_s,C\mid A)}{N \log N} = \frac{w}{n}. \end{equation*} With the degree-corrected model we obtain \begin{equation*} \lim_{N \to \infty} \frac{2 \log \tilde P_\text{DCBM}(c_s,C\mid A)}{N \log N} = \frac{w}{v} - 1, \end{equation*} which is exactly equal to the negation of conductance. In other words, under this model, and in the limit of an infinitely large graph, the a posteriori most likely cluster corresponds to the cluster of minimum conductance. Note that conductance has a global optimum with a large community that contains all nodes, since in that case $w=v$. Even in the non-limiting case, as the network becomes larger, so does the optimal community. And for very large networks it becomes impossible to recover small communities. This phenomenon is called the resolution limit \citep{Fortunato2007ResolutionLimit}, and is shared by many network community detection methods. To avoid the resolution limit, a parameter must be introduced into the objective function, for instance by replacing the graph size or graph volume by a parameter \citep{Reichardt2004}. In our case, we could take $N$ as a formal parameter, instead of using the actual number of nodes in the network (keeping the average degree $M/N$ fixed). In this way, the search for a community is in effect performed in a subnetwork of a given size. \section{Experiments} In this section, we experimentally evaluate the proposed models and approximations. We use the following experimental protocol: \begin{enumerate} \item pick a random community from the set of all communities. \item pick a random seed from this community. \item run the method(s) with this seed. \item compare the recovered community to the true one using the $F_1$ score. For sets of nodes $c$, $d$ the $F_1$ score amounts to $F_1(c,d) = 2|c\cap d|/(|c|+|d|)$, which is $1$ if the communities are identical, and $0$ if they are disjoint. We exclude the seed from this comparison since it always occurs in both communities, and we would otherwise see a good $F_1$ score for the trivial community containing only the seed. \end{enumerate} \subsection{Methods} We compare three classes of methods. \subsubsection{Global generative models} We optimize the likelihood of the global clustering models with a Louvain method \citep{Blondel2008}. This yields a partition of the nodes. In this partition, there is always a single community that contains the seed.
We denote this method as gSBM (global stochastic block model) and gDCBM (global degree-corrected block model). We use uninformative priors for all parameters, $\beta(1,1)$ in the stochastic block model and $\Gamma(1,1)$ in the degree-corrected model. For the power-law prior on community sizes we use $\gamma=2$. Note that it is somewhat unfair to compare local and global models. A global method has access to more information. The goal of local community detection is not to outperform global methods, but rather to achieve comparable results faster while looking at only a small part of the network. \subsubsection{Local approximations} We have implemented a simple greedy algorithm to optimize $\tilde \P(C\mid A)$ for the stochastic and degree-corrected block models. The algorithm starts from the community $\{s\}$ that contains only the seed. Then we consider all neighboring nodes of the current community in a random order, and for each node we add it to the community if doing so would improve the approximate likelihood. This optimization procedure is then repeated, until none of the neighboring nodes are added. We further restart this search 10 times, and pick the community with the highest approximate likelihood (a code sketch of this procedure is given below). We denote this method as aSBM (local approximate stochastic block model) and aDCBM (local approximate degree-corrected block model). Each iteration of this greedy optimization procedure takes time proportional to the volume of the retrieved community, and the total number of iterations is bounded by the diameter of the community $D$, which is very small in practice. This makes the total runtime $O(vD)$. We also consider a variant of aDCBM with an explicit parameter $N$, as discussed in \secref{sec:limiting-behavior}. We report results with $N=1000$, and set $M$ to the average node degree times $N$ (aDCBM-1k). The supplementary material includes results for different values of $N$. \subsubsection{State-of-the-art methods for local community detection} \begin{itemize} \item PPR. The algorithm by \citet{Andersen2006local} based on the Personalized Page Rank graph diffusion. We use the implementation included with the {HK} method. \item HK. The algorithm by \citet{Kloster2014}, using a heat kernel. Code is available at \url{https://www.cs.purdue.edu/homes/dgleich/codes/hkgrow}. \item YL. The algorithm by \citet{Yang2012} with conductance as scoring function. This method uses a different stopping condition compared to PPR, selecting a local optimum of conductance, instead of searching for a more global optimum. This introduces a bias towards finding smaller communities. \item LEMON. The Local Expansion via Minimum One Norm algorithm by \citet{Li2015}. Instead of considering a single probability vector as in HK or YL, this method uses the space spanned by several short random walks. Communities are found by solving an $l_1$-penalized linear programming problem. The algorithm includes a number of heuristic post-processing steps. Code is available at \url{https://github.com/yixuanli/lemon}. \end{itemize} \subsection{Artificial datasets} We first look at artificial datasets, by using the LFR benchmark \citep{Fortunato2008BenchmarkGraphs} to generate networks with a known community structure. We used the parameter settings \texttt{N=5000 k=10 maxk=50 t1=2 t2=1 minc=20 maxc=100}, which means that the graph has 5000 nodes with an average degree of 10, and communities of between 20 and 100 nodes each.
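Before presenting the results, we give a minimal sketch of the greedy search described above, combined with the approximate SBM objective from \secref{sec:asbm} (simplified, illustrative code using the uninformative priors and $\gamma=2$ from above; not our exact implementation):

\begin{verbatim}
# Approximate log-posterior (aSBM) of a community with n nodes and w
# internal edges, in a graph with N nodes and M edges.
import math, random
from scipy.special import betaln

def log_post_asbm(n, w, N, M, gamma=2.0, ap=1.0, am=1.0):
    k = N / n                           # assumed number of communities
    ai_p = k * w                        # within-community edge counts
    ai_m = k * n * (n - 1) / 2 - ai_p
    ab_p = M - ai_p                     # between-community edge counts
    ab_m = N * (N - 1) / 2 - ai_p - ai_m - ab_p
    if min(ai_m, ab_p, ab_m) < 0:       # outside the model's domain
        return float('-inf')
    return (k * math.log(gamma - 1) - k * gamma * math.log(n)
            + betaln(ap + ai_p, am + ai_m)
            + betaln(ap + ab_p, am + ab_m) - 2 * betaln(ap, am))

def greedy_community(adj, seed, N, M, restarts=10):
    best, best_score = {seed}, float('-inf')
    for _ in range(restarts):
        comm, w = {seed}, 0
        score = log_post_asbm(1, 0, N, M)
        improved = True
        while improved:                 # add neighbours while it helps
            improved = False
            frontier = set().union(*(adj[u] for u in comm)) - comm
            for u in random.sample(sorted(frontier), len(frontier)):
                dw = len(adj[u] & comm)
                s = log_post_asbm(len(comm) + 1, w + dw, N, M)
                if s > score:
                    comm.add(u); w += dw; score = s; improved = True
        if score > best_score:
            best, best_score = comm, score
    return best
\end{verbatim}

Each pass touches only the current community and its neighborhood, in line with the $O(vD)$ bound mentioned above; for aDCBM the scoring function is replaced by the approximation based on ${\tilde L}$.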
We vary the LFR mixing parameter (\texttt{mu}), which determines what fraction of the edges are between different communities. More mixing makes the problem harder. The LFR benchmark is very similar to the degree-corrected block model. The differences are that node degrees follow a power-law distribution in the LFR model, while we used a gamma distribution, and that in the LFR benchmark the edges are not completely independent because the degree of each node must match a previously drawn value. Nevertheless, we expect the DCBM to give a good fit to these networks. The results of these experiments are shown in \tblref{tbl:f1}. Here we see that the global degree-corrected model performs better than the simple stochastic block model. This is not surprising, since the LFR benchmark has nodes with varying degrees. These results carry over to the local approximations, which perform significantly worse than the global models on these datasets. Out of the local methods, the aDCBM and LEMON models achieve the best results. \subsection{Real-world networks} \begin{table} \caption{ Overview of the SNAP datasets used in the experiments; $\overline{|c|}$ denotes the average size of the ground-truth communities. } \label{tbl:datasets} \small \begin{center} \begin{tabular}{lrr@{\hspace*{5mm}}r@{\hspace*{3mm}}r} \hline\noalign{\vspace\tblskipamount} Dataset & \#nodes & \#edges & \#comms & $\overline{|c|}$ \\ \noalign{\vspace\tblskipamount}\hline\noalign{\vspace\tblskipamount} Amazon & 334863 & 925872 & 151037 & 19.4\\ DBLP & 317080 & 1049866 & 13477 & 53.4\\ Youtube & 1134890 & 2987624 & 8385 & 13.5\\ LiveJournal & 3997962 & 34681189 & 287512 & 22.3\\ Orkut & 3072441 & 117185083 & 6288363 & 14.2\\ \noalign{\vspace\tblskipamount}\hline \end{tabular} \end{center} \end{table} We use five social and information network datasets with ground-truth from the SNAP collection \citep{snapnets}. These datasets are summarized in \tblref{tbl:datasets}. We consider all available ground-truth communities with at least 3 nodes. All experiments were performed on a random subsample of 1000 communities. \Citet{Yang2012} also defined a set of top 5000 communities for each dataset. These are communities with a high combined score for several community goodness metrics, among which is conductance. We therefore believe that communities in this set are biased to be easier to recover by optimizing conductance. In addition to the SNAP datasets, we also include the Flickr social network \citep{Wang-etal12-flickr-dataset}, as well as some classical datasets with known communities: Zachary's karate club \citep{Zachary1977}; Football: A network of American college football games \citep{GirvanNewman2002}; Political books: A network of books about US politics \citep{Krebs2004polbooks}; and Political blogs: Hyperlinks between weblogs on US politics \citep{Adamic2005polblogs}. These datasets might not be very well suited for local community detection since they have very few communities. We see in \tblref{tbl:f1} that, while on the artificial benchmark networks the global model significantly outperforms the local approximation, on the real-world networks this is not the case. We believe that this is because the ground-truth communities on these networks are much smaller, and the considered local methods tend to find smaller communities. Additionally, the simple stochastic block model outperforms the degree-corrected model on all SNAP datasets except for the Amazon dataset. We found this surprising, because all these datasets do have nodes with widely varying degrees.
However, the number of within-community edges varies much less. For instance, a node in the DBLP dataset with degree $d_i$ will have on the order of $\sqrt{d_i}$ within-community edges, which means that the truth is in between the aSBM model (which assumes $O(1)$ within-community edges) and the aDCBM model (which assumes $O(d_i)$ edges). An issue with the local approximation is that nodes inside communities are not representative of the entire network. For instance, on the Youtube network the average node degree is $5.3$, while the average degree of nodes that are inside at least one community is $33.7$. When using an explicit $N=1000$, the results on the SNAP networks improve, again because the ground-truth communities on these networks tend to be small. For the three largest datasets, different values of $N$ have a large influence on the size of the recovered community. This is likely because it is possible to find communities at all scales in these networks. See the supplementary material for the results for different values of $N$. The results using the top 5000 communities are much better, which is not surprising, since these top communities were selected to be easier to find. The trend between the different methods is similar, with the aSBM and aDCBM methods performing best in most cases. Surprisingly, the local approximations outperform the global community detection methods on most of the real-world datasets. The likely reason is that the ground-truth communities in the SNAP datasets are relatively small, and because of the greedy optimization strategy, the local methods tend to find smaller communities. The global methods, in contrast, find larger communities that better fit the model, but which likely combine several ground-truth communities. This means that the local methods achieve a much better precision at the cost of a somewhat lower recall, resulting in an overall higher $F_1$ score. Results on the smaller networks are mixed. All these datasets, except for Football, have very large clusters for their size, which are easier to recover with the HK and PPR methods. The aDCBM method with $N=1000$ is also sometimes better able to find good communities on these datasets, because those networks have fewer than 1000 nodes, so increasing $N$ increases the size of the found community. We were unable to run LEMON on the large SNAP datasets due to its memory usage. \begin{table*}[t] \caption{ $F_1$ score between recovered communities and ground-truth (excluding the seed node). The best result for each dataset is indicated in bold, as are the results not significantly worse according to a paired $t$-test (at significance level $0.01$).
} \label{tbl:f1} \small \begin{center} \begin{tabular}{l|cc|ccc|cccc} \hline\noalign{\vspace\tblskipamount} & \multicolumn{2}{|c|}{Global methods} & \multicolumn{7}{|c}{Local methods} \\ \noalign{\vspace\tblskipamount} Dataset & gSBM & gDCBM & aSBM & aDCBM & aDCBM-1k & {YL} & LEMON & {HK} & {PPR} \\ \noalign{\vspace\tblskipamount}\hline\noalign{\vspace\tblskipamount} LFR (mu=0.1) & 0.999 & \textbf{1.000} & 0.613 & 0.911 & 0.895 & 0.307 & 0.925 & 0.881 & 0.352\\ LFR (mu=0.2) & 0.998 & \textbf{1.000} & 0.583 & 0.812 & 0.799 & 0.274 & 0.859 & 0.090 & 0.136\\ LFR (mu=0.3) & 0.958 & \textbf{0.997} & 0.534 & 0.800 & 0.726 & 0.168 & 0.587 & 0.039 & 0.040\\ LFR (mu=0.4) & 0.920 & \textbf{0.990} & 0.466 & 0.659 & 0.529 & 0.121 & 0.533 & 0.039 & 0.040\\ LFR (mu=0.5) & 0.756 & \textbf{0.911} & 0.368 & 0.458 & 0.322 & 0.085 & 0.427 & 0.039 & 0.041\\ LFR (mu=0.6) & \textbf{0.433} & \textbf{0.426} & 0.258 & 0.138 & 0.093 & 0.069 & 0.279 & 0.037 & 0.039\\ \noalign{\vspace\tblskipamount}\hline\noalign{\vspace\tblskipamount} Amazon & 0.330 & 0.245 & 0.395 & \textbf{0.431} & \textbf{0.447} & 0.381 & 0.229 & 0.221 & 0.119\\ DBLP & 0.287 & 0.220 & \textbf{0.406} & 0.349 & 0.344 & 0.245 & 0.240 & 0.199 & 0.194\\ Youtube & 0.040 & 0.054 & \textbf{0.099} & 0.071 & 0.084 & 0.082 & 0.075 & 0.031 & 0.052\\ LiveJournal & 0.041 & 0.025 & \textbf{0.072} & 0.043 & 0.052 & 0.054 & - & 0.028 & 0.035\\ Orkut & 0.010 & 0.007 & 0.020 & 0.014 & 0.016 & \textbf{0.034} & - & \textbf{0.032} & 0.019\\ \noalign{\vspace\tblskipamount}\hline\noalign{\vspace\tblskipamount} Amazon (top 5000) & 0.792 & 0.614 & 0.844 & \textbf{0.903} & \textbf{0.895} & 0.780 & 0.397 & 0.709 & 0.527\\ DBLP (top 5000) & 0.442 & 0.334 & \textbf{0.647} & 0.567 & 0.571 & 0.419 & 0.339 & 0.342 & 0.329\\ Youtube (top 5000) & 0.083 & 0.085 & 0.140 & 0.195 & 0.172 & \textbf{0.241} & 0.098 & 0.067 & 0.116\\ LiveJournal (top 5000) & 0.507 & 0.354 & 0.666 & \textbf{0.715} & 0.672 & 0.521 & - & 0.569 & 0.478\\ Orkut (top 5000) & 0.233 & 0.195 & \textbf{0.334} & \textbf{0.312} & 0.119 & 0.097 & - & \textbf{0.312} & 0.260\\ \noalign{\vspace\tblskipamount}\hline\noalign{\vspace\tblskipamount} Karate & 0.165 & 0.101 & 0.379 & 0.448 & 0.740 & 0.562 & 0.683 & 0.799 & \textbf{0.908}\\ Football & \textbf{0.790} & \textbf{0.817} & 0.727 & 0.682 & 0.769 & \textbf{0.784} & 0.322 & 0.452 & 0.260\\ Pol.Blogs & 0.192 & 0.039 & 0.103 & 0.040 & 0.090 & 0.015 & 0.151 & \textbf{0.661} & 0.535\\ Pol.Books & 0.274 & 0.429 & 0.295 & 0.243 & 0.451 & 0.175 & 0.605 & \textbf{0.629} & \textbf{0.653}\\ Flickr & \textbf{0.204} & 0.090 & 0.164 & 0.050 & 0.066 & 0.012 & 0.025 & 0.054 & 0.118\\ \noalign{\vspace\tblskipamount}\hline \end{tabular} \end{center} \end{table*} \section{Discussion} The local approximations are based on the assumption that all communities are alike. We needed to make this assumption to be able to say something about the global properties of the network given only a single community. In real-world networks there is often a large variation in the size of the communities. Our approximation might work well if the community is close to the average size, but for very large or very small communities it is not accurate. Better approximations might be possible by finding more than one community (but still a small subset of the network), or by modeling the distribution of community sizes. When we are interested in local community structure, it would seem to make sense to consider models that only have this local structure. 
For instance, models with a single community that stands apart from a background. But to fit such a model to an observed graph we would also need a good model of the background, and so we would have to model the community structure of the background as well. In other words, we would also need to find communities in the rest of the graph, and the method would not be local. If we were to instead use a background without further structure, then the closest fit to the observed network would be obtained by using the structure of the single community to explain the largest variances in the entire network, so the obtained `community' would cover roughly half of the nodes. Our approach of assuming that the background is similar to the community containing the seed is a good compromise, as illustrated by the experiments. In this work we maximize over clusters. It would be interesting and useful to estimate the marginals instead, that is, the probability that a node $i$ is inside the same cluster as the seed, conditioned on the graph. While Variational Bayes gives a decent approximation to the log-likelihood, it does not give approximations to the marginals. Indeed, trying to use a variational bound to marginalize over all but one of the cluster membership indicators leads to an objective that is identical to ${\tilde L}$, except for a relatively small entropy term. It remains to be seen if other inference methods can be used to estimate the marginals, and thus, in some sense, to find \emph{all} possible communities containing a given seed. The assumption used to derive the local approximations is not particular to the stochastic block models that we have used here, and the same technique can be used for other global community detection methods that are based on a partition of the nodes, such as the models of \citet{Kemp2006IRM} or \citet{NewmanLeicht2007mixture}. However, in practice it is often assumed that some nodes can belong to more than one community. There exist several global models that include overlapping communities \citep[see e.g.][]{McDaid2010moses,Ball2011}. In the derivation we used the fact that if all communities are identical and each node is in exactly one community, then there are $N/n$ communities. But when nodes can be in more than one community, it is no longer clear how many communities there are. We would need a reliable estimate of the average number of communities that cover a node. It therefore remains an open problem how the local approximation can be applied to models with overlapping clusters. \section*{Acknowledgements} This work has been partially funded by the Netherlands Organization for Scientific Research (NWO) within the EW TOP Compartiment 1 project 612.001.352.
\section{Introduction} During the past few years the Atacama Large Millimeter/submillimeter Array (ALMA) has demonstrated its remarkable power by exploring the interstellar medium (ISM) of galaxies in the reionisation era. In addition to studies of extreme and rare dusty sub-millimetre galaxies at redshifts $z\simeq$5-6 (e.g. \citealt{Capak2015}, \citealt{Pavesi2018}), the array has become the most reliable tool for spectroscopic confirmation of more typical distant star-forming galaxies (\citealt{Inoue2016}, \citealt{Laporte2017}, \citealt{Carniani2017}, \citealt{Smit2018}, \citealt{Hashimoto2018,Hashimoto2019}, \citealt{Tamura2018}). The two most prominent emission features targeted by ALMA for normal star-forming galaxies are the [O{\sc iii}]88$\mu$m and [C{\sc ii}]158$\mu$m fine structure lines, both of which are redshifted into the sub-mm atmospheric window in the reionisation era. [C{\sc ii}]158$\mu$m is the dominant coolant of neutral gas in the ISM of local star-forming galaxies and its luminosity is observed to correlate closely with star formation rate (SFR - \citealt{DeLooze2014}). Early work exploring this relation at high redshift revealed increased scatter compared to that seen in local samples. Whereas studies of luminous Lyman break galaxies selected at $z\simeq$5-6 (e.g. \citealt{Capak2015}, \citealt{Willott2015}), as well as of some Lyman-alpha emitters at $z\sim$6 (\citealt{Matthee2017}, \citealt{Carniani2018}, \citealt{Matthee2019}), found trends similar to those seen locally, other star-forming galaxies at $z>6$ often showed weak or no [C{\sc ii}]158$\mu$m detections (e.g. \citealt{Ota2014}, \citealt{Pentericci2016}). This so-called `[C{\sc ii}]-deficit' has been the subject of much debate and was earlier discussed in the context of thermal saturation in ultra-luminous infrared galaxies \citep{Munoz2016}. While [C{\sc ii}]158$\mu$m is not affected by dust attenuation, it is sensitive to metallicity \citep{Olsen2017}, the ionisation state of the gas \citep{Vallini2017} and CMB attenuation. In addition, in a survey of three $z\simeq$7 sources, \cite{Maiolino2015} discovered [C{\sc ii}]158$\mu$m emission with significant spatial offsets from the UV and Ly$\alpha$ emission, suggesting that the cores of young galaxies are disrupted by stellar feedback, with line emission occurring only in external clumps of neutral gas. Although high-redshift data remain sparse and some non-detections are likely due to inadequate sensitivity, it remains of interest to pursue the topic to gain insight into the morphology and physical conditions of rapidly assembling young galaxies. \begin{figure*} \centering \includegraphics[width=15cm]{JD1_CII_several_offset.pdf} \caption{\label{fig1} Search for the [C{\sc ii}]158$\mu$m emission line near MACS1149\_JD1. Each stamp shows the flux contours (drawn from 2$\sigma$) at different velocity offsets (from -500 km/s to +500 km/s) with respect to the [O{\sc iii}]88$\mu$m redshift. The HST F160W image is shown at the bottom right of the figure with [O{\sc iii}]88$\mu$m (green) and [C{\sc ii}]158$\mu$m (blue) contours. The shape of the ALMA beam is shown at the bottom right of each ALMA stamp. No [C{\sc ii}]158$\mu$m emission is evident.} \end{figure*} [O{\sc iii}]88$\mu$m emission also correlates with the star formation rate in local galaxies \citep{DeLooze2014} but, as a line with a higher ionisation potential, it is generated within H II regions rather than in photo-dissociation regions. The motivation for targeting [O{\sc iii}]88$\mu$m at high redshift is two-fold.
\textit{Herschel} observations of dwarf galaxies suggested that it is a stronger line than [C{\sc ii}]158$\mu$m in low metallicity systems \citep{Madden2013}. Additionally, the line is well-placed observationally in the ALMA bands at the very highest redshifts for which targets are available from deep \textit{Hubble} imaging. The line was prominently detected in two gravitationally-lensed targets, A2744\_YD4 at $z=8.38$, for which a dust continuum detection was also secured \citep{Laporte2017}, and MACS1149\_JD1 at $z=9.11$ \citep{Hashimoto2018}. The two sources represent the highest redshift spectroscopically-confirmed sources accessible to ALMA and, in this paper, we exploit the newly-available band 5 receiver to present new observations targeting [C{\sc ii}]158$\mu$m in each source, with the goal of further examining the relationship between [C{\sc ii}]158$\mu$m, [O{\sc iii}]88$\mu$m and various probes of star formation in early sources. Throughout the paper, we adopt a $\Lambda$-dominated, flat Universe with $\Omega_{\Lambda}$ = 0.7, $\Omega_M$ = 0.3 and $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$. \section{Observations} Observations were carried out in band 5 during ALMA Cycles 5 and 6 under a regular proposal (2017.1.00697 - PI: N. Laporte) and DDTs (2017.A.00026 and 2018.A.0004 - PI: N. Laporte). The lower spectral window used to observe A2744\_YD4 is centred on the frequency where [C{\sc ii}]158$\mu$m is expected at $z=$8.38, and its width covers the redshift range 8.26 $< z <$ 8.43. The total exposure time on source was 3.8 hrs. A similar setup was used for the MACS1149\_JD1 observations, with a redshift range 8.96 $\leq z \leq$ 9.16 and a total exposure time of 6.2 hrs. Observations of A2744\_YD4 were made with the C43-2 configuration, yielding a beam size of 1.3''$\times$0.79''. For MACS1149\_JD1, we used the C43-4 configuration to achieve a beam size of 0.75''$\times$0.63''. Data were reduced using version 5.4.0 of the CASA pipeline \citep{CASA}; Briggs weighting was applied in the \textit{tclean} task in both cases. For consistency, we re-reduced the ALMA band 7 data for A2744\_YD4 following the same procedures (2015.1.00594 - PI: N. Laporte). \begin{figure} \centering \includegraphics[width=7cm, angle=0]{YD4_CII_OIII_dust.pdf} \caption{\label{fig2} An ALMA view of A2744\_YD4 showing the respective positions of the dust detection in ALMA band 7 (red), the [O{\sc iii}]88$\mu$m emission line (green) and the rest-frame UV continuum (HST/F160W image). The shape of each ALMA beam is shown at the bottom right. Contours are plotted from $2\sigma$. No [C{\sc ii}]158$\mu$m emission (blue contours) is detected at more than $3\sigma$ near the rest-frame UV position of this galaxy. } \end{figure} We do not detect any band 5 continuum for either target. We measure 3$\sigma$ upper limits using several beam-size apertures distributed at the centre of the field where our targets are located, and find $f_{\nu}^{158\mu m}$< 21 $\mu$Jy/beam for A2744\_YD4 and $f_{\nu}^{158\mu m}$< 15 $\mu$Jy/beam for MACS1149\_JD1 (not corrected for magnification). We also searched for line emission in a 1.5'' radius circle around the rest-frame UV position of our targets (corresponding to a physical size of 13.2 and 14.1 kpc respectively for MACS1149\_JD1 and A2744\_YD4), allowing a velocity offset relative to the [O{\sc iii}]88$\mu$m redshift ranging from -500 km/s to +500 km/s (e.g. \citealt{Hashimoto2019}).
We rebinned the data assuming a FWHM of 100 km/s for [C{\sc ii}]158$\mu$m (as previously found, for example, in \citealt{Carniani2017}, \citealt{Smit2018}, \citealt{Bradac2017}). No emission is detected in either target (Figure~\ref{fig1} and Figure~\ref{fig2}). The corresponding 3$\sigma$ upper limits on the [C{\sc ii}]158$\mu$m luminosity are $L_{CII}^{JD1}$< 3.98$\times$10$^6$$\times$(10/$\mu$)L$_{\odot}$ and $L_{CII}^{YD4}$< 2.0$\times$10$^7$$\times$(2/$\mu$)L$_{\odot}$, with the rms measured in several beam-size apertures ($\theta_{min}$=0.63'' and $\theta_{maj}$=0.75'' for JD1, and $\theta_{min}$=0.73'' and $\theta_{maj}$=1.21'' for YD4) distributed in a 1.5'' radius circle around the rest-frame UV position, and taking into account the best magnification for the two targets ($\mu$=2 and $\mu$=10 respectively for YD4 and JD1 - see \citealt{Laporte2017} and \citealt{Hashimoto2018} for details). We also applied the same method to more finely binned data (FWHM=50 km/s), taking into account the FWHM of the [O{\sc iii}]88$\mu$m line found in A2744\_YD4, but no emission line was found in either dataset. We summarise the salient properties of A2744\_YD4 and MACS1149\_JD1 in Table~\ref{tab1}. A similar non-detection of [C{\sc ii}]158$\mu$m was reported by \cite{Inoue2016} for a Lyman-$\alpha$ emitter at $z$ = 7.2 with [O{\sc iii}]88$\mu$m emission and, in the following analysis, we include those measurements. \begin{table} \centering \begin{tabular}{l|cc} & A2744\_YD4 & MACS1149\_JD1 \\ \hline $z_{OIII}$ & 8.382$^a$ & 9.1096$^b$ \\ L$_{OIII}$ ($\times$10$^7$L$_{\odot}$) & 7.0$\pm$1.7$^a$ & 7.4$\pm$1.6$^b$ \\ L$_{FIR}$ ($\times$10$^{10}$L$_{\odot}$) & 12.6$\pm$5.5$^a$ & $<$ 0.77$^b$ \\ L$_{CII}$ ($\times$10$^7$L$_{\odot}$) & $<$ 2.0 (3$\sigma$) & $<$ 0.4 (3$\sigma$) \\ $S_{\nu}^{158\mu m}$ ($\mu$Jy/beam) & $<$ 10.5 (3$\sigma$) & $<$ 1.5 (3$\sigma$) \\ $S_{\nu}^{88\mu m}$ ($\mu$Jy/beam) & 99 $\pm$ 23 $^a$ & $<$ 5.3$^b$ (3$\sigma$) \\ \hline SFR (M$_{\odot}$/yr) & 20.4$^{+17.6}_{-9.5}$$^a$ & 4.2$^{+0.8}_{-1.1}$$^b$ \\ M$_{\star}$ (10$^9$M$_{\odot}$) & 2.0$^{+1.5}_{-0.7}$$^a$ & 1.1$^{+0.5}_{-0.2}$$^b$ \end{tabular} \caption{ \label{tab1} Properties of the two $z>$8 galaxies reported in this paper. All values are corrected for magnification assuming $\mu$=2 for A2744\_YD4 and $\mu$=10 for MACS1149\_JD1. \\ $^a$ \protect\cite{Laporte2017} \\ $^b$ \protect\cite{Hashimoto2018} \\ } \end{table} \section{Analysis} In Figure~\ref{fig3} we compare the location of the two objects discussed in this paper, plus that of \cite{Inoue2016}, in the [C{\sc ii}]--SFR relation traced at lower redshift. The apparent trend towards a [C{\sc ii}] deficit in the reionisation era is striking. Likewise, Figure~\ref{fig4} shows the [O{\sc iii}]/[C{\sc ii}] line ratio in the context of lower redshift metal-poor dwarf galaxies \citep{Madden2013} and recent numerical simulations of high-redshift galaxies targeting both emission lines \citep{Katz2019}. The gas-phase metallicity in these simulations is 0.1 solar, comparable to that observed in the local dwarfs. Reducing the metallicity by a further factor of 10 would be required to explain the absence of [C{\sc ii}]158$\mu$m, although at that point [O{\sc iii}]88$\mu$m emission would be similarly reduced. Although it is possible that the [C{\sc ii}]158$\mu$m and [O{\sc iii}]88$\mu$m emission regions are physically distinct in some of our sources, these comparisons suggest that a low metallicity may be insufficient to explain the deficit.
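For reference, the scaling of such limits follows the standard line-luminosity relation $L_{\rm line} \simeq 1.04\times10^{-3}\, S\Delta v \, \nu_{\rm obs}\, D_L^2$ (Solomon \& Vanden Bout 2005), with $S\Delta v$ in Jy km/s, $\nu_{\rm obs}$ in GHz, $D_L$ in Mpc and $L_{\rm line}$ in L$_{\odot}$. The short sketch below (in Python) illustrates this conversion under the simplifying assumption of a top-hat line profile; the function and its example inputs are our own placeholders, not the measured values, and the limits quoted above use the full aperture and binning procedure described in Section 2.
\begin{verbatim}
def line_lum_limit_lsun(rms_jy_beam, fwhm_kms, nu_rest_ghz, z, dl_mpc, mu):
    # 3-sigma line luminosity upper limit in L_sun, assuming a
    # top-hat profile of width fwhm_kms (illustrative only).
    nu_obs = nu_rest_ghz / (1.0 + z)            # observed frequency [GHz]
    s_dv = 3.0 * rms_jy_beam * fwhm_kms         # 3-sigma line flux [Jy km/s]
    lum = 1.04e-3 * s_dv * nu_obs * dl_mpc**2   # apparent luminosity [L_sun]
    return lum / mu                             # correct for magnification

# Example call with placeholder values ([CII] rest frequency 1900.54 GHz):
# line_lum_limit_lsun(1.5e-4, 100.0, 1900.54, 9.1096, 9.7e4, 10.0)
\end{verbatim}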
Additionally, the strongest likely attenuation of [C{\sc ii}]158$\mu$m by cosmic microwave background radiation \citep{Lagache2018} seems unable to explain the size of the discrepancy (see dashed lines in Figure~\ref{fig4}). Energetic feedback from intermittent star formation may be capable of expelling neutral gas and thereby suppressing [C{\sc ii}]158$\mu$m emission. Although the presence of a significant dust mass in A2744\_YD4 might then be considered surprising, the possibility of a spatial offset between the [O{\sc iii}]88$\mu$m emission and the dust continuum (Figure~\ref{fig2}) may imply regions with different physical conditions or represent the result of some feedback process. One way to understand whether a deficit of neutral gas is expected at high redshift is to determine the range of [C{\sc ii}]158$\mu$m emission expected in simulations. Examining a recent semi-analytical model of galaxy evolution \citep{Lagache2018} with over 10$^3$ simulated objects at $z\simeq$8 (Figure~\ref{fig5}), and focusing now only on the two highest-redshift sources, A2744\_YD4 and MACS1149\_JD1, we find 75 simulated objects that have extreme properties similar to A2744\_YD4 (i.e. SFR from 1 to 35 $M_{\odot}$ yr$^{-1}$; L$_{[CII]}<$2.0$\times$10$^7$L$_{\odot}$; $\log$(M$_{\star}$ [M$_{\odot}$]) from 8.8 to 9.7), with mean properties <M$_{\star}$>=1.3$\times$10$^9$ M$_{\odot}$, <L$_{[CII]}$>=9.4$\times$10$^6$ L$_{\odot}$ and gas-phase metallicity <Z$_g$>=0.20. Furthermore, only 8 simulated sources have [C{\sc ii}]158$\mu$m properties similar to MACS1149\_JD1 (i.e. SFR from 0.9 to 6.6 $M_{\odot}$ yr$^{-1}$; L$_{[CII]}<$0.4$\times$10$^7$L$_{\odot}$; $\log$(M$_{\star}$ [M$_{\odot}$]) from 8.7 to 9.4), with mean properties <M$_{\star}$>=7.7$\times$10$^8$ M$_{\odot}$, <L$_{[CII]}$>=2.7$\times$10$^6$ L$_{\odot}$ and <Z$_g$>=0.25. Since our observational upper limits are 3$\sigma$, this demonstrates the difficulty of reproducing our first glimpse of the weak [C{\sc ii}]158$\mu$m emission in $z>8$ sources. A further explanation may be a trend towards higher ionisation parameters at early times \citep{Katz2016}, for which there is some evidence in rest-frame UV spectroscopy of similar $z>7$ sources \citep{Mainali2018}. Such a trend may arise from a moderate non-thermal component or an increasing contribution from metal-poor massive stars. The original motivation for this study was to assemble multi-line data using ALMA for sources in the reionisation era, largely to test such hypotheses. Our discovery of a surprising [C{\sc ii}]158$\mu$m deficit argues for continuing this effort, including further diagnostic lines sensitive to the nature of the radiation field, the gas-phase metallicity and the presence of neutral gas. Finally, utilising the non-detection of the continuum of A2744\_YD4 in ALMA band 5, we have the opportunity to re-analyse the SED of this object. We include data from a previous ALMA band 6 programme covering the position of this target (2015.1.00463.S - PI : M. Ouchi). In this dataset, A2744\_YD4 is also not detected, and we measured in a beam-size aperture a 2$\sigma$ upper limit flux of 30 $\mu$Jy/beam (not corrected for magnification). Using \textit{MAGPHYS} \citep{MAGPHYS}, we can place a first constraint on the dust temperature in this object: $T_{dust}$ > 55 K. This value contrasts with the value generally used to determine dust properties at high-$z$ (T$\sim$30 K), but is consistent with recent simulations (e.g. \citealt{Behrens2018}) which predict higher dust temperatures at high redshifts.
Using the 3$\sigma$ upper limits for both the band 5 and band 6 observations decreases the minimum dust temperature to T$>$43 K. \begin{figure} \hspace{-1cm} \includegraphics[width=7.5cm, angle=-90.]{CII_SFR.pdf} \caption{\label{fig3} Relation between L$_{CII}$ and the SFR for the two galaxies studied in this letter plus that of \protect\citet{Inoue2016} (red), and previous $5.5<z<7.5$ galaxy studies from \protect\cite{Capak2015}, \protect\cite{Carniani2017}, \protect\cite{Carniani2018}, \protect\cite{Smit2018}, \protect\cite{Pentericci2016}, \protect\cite{Hashimoto2019}, \protect\cite{Kanekar2013}, \protect\cite{Ota2014}, \protect\cite{Bradac2017} and \protect\cite{Matthee2017}, grouped according to redshift. Open circles show the location of local metal-poor dwarf galaxies \protect\citep{Madden2013}. We also plot the relation predicted by \protect\cite{Lagache2018} at $z\sim$6 (blue), 7 (black) and 8 (red).} \end{figure} \begin{figure} \hspace{-1cm} \includegraphics[width=8.0cm,angle=-90]{OIII_CII_SFR.pdf} \caption{\label{fig4} The [O{\sc iii}]/[C{\sc ii}] emission line ratio for high redshift galaxies. Our work on MACS1149\_JD1 and A2744\_YD4, together with the $z=7.2$ LAE \citep{Inoue2016}, indicates ratios well above those seen in local metal-poor dwarfs (\citealt{Madden2013}, grey circles) as well as in numerical simulations capable of predicting both lines (\citealt{Katz2019}, black open symbols). The maximum effect of CMB attenuation is indicated by dashed lines below the current limits (see text for details).} \end{figure} \section{Summary} The recent commissioning of the ALMA band 5 receiver has opened a new window to study the ISM of the two most distant gravitationally-lensed galaxies detected with ALMA band 7, namely A2744\_YD4 ($z=$8.38) and MACS1149\_JD1 ($z$=9.11). We have used this capability to search for the FIR emission line [C{\sc ii}]158$\mu$m, the primary coolant of the ISM at low redshift, which should give valuable insight into the metallicity and neutral gas content of systems with known SFR. However, despite data of adequate sensitivity given the [C{\sc ii}]--SFR relation observed at lower redshifts (e.g. $z<$6), neither of these targets is detected in band 5, either in the dust continuum or in line emission. Noting the magnification of these two targets ($\mu\sim$2 and 10 for A2744\_YD4 and MACS1149\_JD1 respectively), these non-detections imply [C{\sc ii}]158$\mu$m luminosities well below what is observed for $z\sim$0 metal-poor dwarfs, reviving the discussion of a `[C{\sc ii}] deficit' previously considered at lower redshift. Likewise, when studying the [O{\sc iii}]88$\mu$m/[C{\sc ii}]158$\mu$m line ratio, we find anomalously high values. We examine this line ratio in the context of a recent hydrodynamical simulation of the ISM in early galaxies \citep{Katz2019} and suggest that a low gas-phase metallicity may not be the sole explanation for this [C{\sc ii}] deficit. Other hypotheses include a high ionisation parameter, consistent with trends seen in UV spectroscopy of similar $z>7$ sources, or the suppression of neutral gas, and hence of [C{\sc ii}]158$\mu$m emission, via energetic feedback from intermittent star formation. Using a semi-analytical model of galaxy evolution \citep{Lagache2018}, we demonstrate that such faint [C{\sc ii}]158$\mu$m luminosities are rarely expected at $z\geq$8. Further multi-line data on $z>8$ sources will be helpful in resolving this puzzle.
\textbf{Our study emphasises the importance of gathering multi-line ALMA data for sources in the reionisation era to robustly study the physical conditions in their interstellar media.} \begin{figure} \hspace{-1.00cm} \includegraphics[width=10cm, angle=0]{LCII_SFR_z=9_v2.pdf} \vspace{-0.50cm} \caption{\label{fig5} As Fig~\ref{fig3}, with the location of the two $z\geq$8 galaxies discussed in this paper represented by red arrows. Black dots show the distribution of all the simulated galaxies from \protect\cite{Lagache2018}, extrapolated from their highest redshift $z=$7.6 to redshift $z=$9 by estimating a mean CMB attenuation factor on the [C{\sc ii}]158$\mu$m luminosity between these two redshifts. The red line displays the mean relation between the SFR and L$_{CII}$ and the yellow region shows the mean dispersion (0.45 dex according to Fig.~8 of \protect\citealt{Lagache2018}) of the simulated galaxies. Clearly both galaxies are extreme outliers in the relation.} \end{figure} \section*{Acknowledgements} We thank Morgane Cousin for providing CMB attenuation estimates at $z\sim$9. NL and RSE acknowledge funding from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant agreement No 669253). FEB acknowledges support from CONICYT-Chile (Basal AFB-170002, Programa de Cooperaci{\'{o}}n Cient{\'{\i}}fica ECOS-CONICYT C16U02, FONDO ALMA 31160033) and the Ministry of Economy, Development, and Tourism's Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS. AKI and TH acknowledge funding from NAOJ ALMA Scientific Research Grant number 2016-01 A and JSPS KAKENHI Grant Number 17H01114. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2015.A.00463, ADS/JAO.ALMA\#2017.1.00697, ADS/JAO.ALMA\#2017.A.00026 and ADS/JAO.ALMA\#2018.A.0004. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. \bibliographystyle{mnras}
\section{Introduction} \label{sec:introduction} Reinforcement learning~\cite{Sutton1998} (RL) is a discipline of artificial intelligence seeking to find optimal behavioral policies that enable agents to collect maximal reward while interacting with the environment. A popular RL algorithm is Q-learning \cite{Watkins1989}, which operates by estimating expected cumulative rewards (Q-values). Although successful in numerous applications~\cite{Busoniu2010}, standard Q-learning suffers from two drawbacks. First, due to its tabular nature in representing Q-values, it is not readily applicable to high-dimensional environments with large state and/or action spaces. Second, it initially overestimates Q-values, introducing a bias at early stages of training \cite{Fox2016}. This bias has to be ``unlearned'' as training proceeds, thus decreasing sample efficiency. To address the first problem, Q-learning has been extended to high-dimensional environments by using parametric function approximators instead of Q-tables~\cite{Busoniu2010}. One particularly appealing class of approximators is deep neural networks, which learn ``complex'' relationships between high-dimensional inputs (e.g. images) and low-level actions. Building on this idea, deep Q-networks (DQNs) \cite{Mnih2015} were proposed, attaining state-of-the-art results in large-scale domains, e.g. the Arcade Learning Environment for Atari games \cite{Bellemare2013}. Though successful, DQNs fail to address the overestimation problem, and are therefore rather sample-inefficient~\cite{vanHasselt2016}. One way of addressing Q-value overestimation is to introduce an intrinsic penalty signal in addition to instantaneous rewards. The intrinsic penalty affects the learned Q-values, eventually leading to lower estimates. Information theory provides a principled method to formalize such a penalty by interpreting the agent as an information-theoretic channel with limited transmission rate~\cite{Sims2010,Ortega2013}. Specifically, the state of the environment is interpreted as channel input, the action as channel output and the agent's reward as the quality of information transmission~\cite{Genewein2015}. Interestingly, in the RL setting, limits in transmission rate reflect limits in the ``information resources'' the agent can spend to deviate from a given reference policy. The instantaneous deviation between the agent's current policy and such a reference policy directly results in an intrinsic penalty to be subtracted from the reward. Information-theoretic RL approaches \cite{Azar2012,Rawlik2012,Fox2016} have been designed for the tabular setting but do not readily apply to high-dimensional environments that require parametric function approximators. Since we are interested in improving the sample complexity of RL in high-dimensional state spaces, we contribute by adapting information-theoretic concepts to phrase a novel optimization objective for learning Q-values with deep parametric function approximators. The resultant algorithm encompasses a wide range of learning outcomes that can be demonstrated by tuning a Lagrange multiplier. We show that DQNs arise as a special case of our proposed approach. We further contribute by introducing a dynamic scheduling scheme for adapting the magnitude of intrinsic penalization based on temporal Bellman error evolution. This allows us to outperform DQN and other methods, such as double DQN \cite{vanHasselt2016} and soft Q-learning~\cite{Schulman2017}, by large margins in terms of game score and sample complexity in the Atari domain.
At the same time, our approach leads to decreased Q-value estimates, confirming our hypothesis that overestimation leads to poor performance in practice. Finally, we show a further performance increase by adopting the dueling architecture from \cite{Wang2016}. In short, our contributions are: \begin{enumerate} \item applying information-theoretic concepts to large state spaces with function approximators; \item proposing a novel information-theoretically inspired optimization objective for deep RL; \item demonstrating a wide range of learning outcomes for deep RL with DQN as a special case; \item and outperforming DQN, double DQN, and soft Q-learning in the Atari domain. \end{enumerate} \section{Reinforcement Learning} \label{sec:overestimations} In RL, an agent, being in a state $\bm{s} \in \mathcal{S}$, chooses an action $\bm{a} \in \mathcal{A}$ sampled from a behavioral policy $\bm{a} \sim \pi_{\text{behave}}(\bm{a}|\bm{s})$, where $\pi_{\text{behave}}: \mathcal{S}\times \mathcal{A} \rightarrow [0,1]$. Resulting from this choice is a transition to a successor state $\bm{s}^{\prime}\sim \mathcal{P}\left(\bm{s}^{\prime}|\bm{s},\bm{a}\right)$, where $\mathcal{P}:\mathcal{S}\times \mathcal{A}\times \mathcal{S} \rightarrow [0,1]$ is the unknown state transition model, and a reward $r = \mathcal{R}(\bm{s},\bm{a})$ that quantifies instantaneous performance. After subsequent interactions with the environment, the goal of the agent is to optimize for a $\pi^{\star}_{\text{behave}}$ that maximizes the expected cumulative return $\mathbb{E}_{\pi_{\text{behave}},\mathcal{P}}\left[\sum_{t=0}^{\infty} \gamma^{t}r_{t}\right]$, with $t$ denoting time and $\gamma \in (0,1)$ the discount factor. Clearly, to learn an optimal behavioral policy, the agent has to reason about the long-term consequences of instantaneous actions. Q-learning, a classical RL algorithm, estimates these effects using state-action value pairs (Q-values) that quantify the performance of the policy. In Q-learning, updates are conducted online after each interaction $(\bm{s},\bm{a},r,\bm{s'})$ with the environment using \begin{equation} \label{Eq:QLearning} Q\left(\bm{s},\bm{a}\right) \leftarrow Q(\bm{s},\bm{a}) + \alpha \left(r + \gamma \max_{\bm{a}^{\prime}} Q(\bm{s}^{\prime},\bm{a}^{\prime}) - Q\left(\bm{s},\bm{a}\right) \right), \end{equation} with $\alpha >0$ being a learning rate. Equation~\eqref{Eq:QLearning} takes an old value, i.e. the prediction $Q(\bm{s},\bm{a})$, and corrects its estimate based on new information, i.e. the target $r + \gamma \max_{\bm{a}^{\prime}} Q(\bm{s}^{\prime},\bm{a}^{\prime})$. \paragraph{Optimistic Overestimation:} Upon careful investigation of Equation~\eqref{Eq:QLearning}, one comes to recognize that Q-learning updates introduce a bias to the learning process caused by an overestimation of the optimal cumulative rewards~\cite{vanHasselt2010,Azar2011,Lee2012,Bellemare2016,Fox2016}. Specifically, the usage of the maximum operator assumes that current guesses for Q-values reflect optimal cumulative rewards. Of course, this assumption is violated, especially early in the learning process, when a relatively small number of updates has been performed. Due to the correlative effect of ``bad'' estimations between different state-action pairs, these mistakes tend to propagate rapidly through the Q-table and have to be unlearned in the course of further training. Though such an optimistic bias is eventually unlearned, the convergence speed (in terms of environmental interactions, i.e.
sample complexity) of Q-learning is highly dependent on the quality of the initial Q-values. The problem of optimistic overestimation only worsens in large state spaces, such as images in Atari. As mentioned earlier, high-dimensional representations are handled by generalizing tabular Q-learning to use parametric function approximators, e.g. deep neural networks~\cite{Mnih2015}. Learning then commences by fitting the weights of the approximators using stochastic gradients to minimize \begin{equation} \label{Eq:dqlearning} \mathbb{E}_{\bm{s},\bm{a},r,\bm{s}^{\prime}} \left[ \left( r + \gamma \max_{\bm{a}^{\prime}} Q_{\bm{\theta}^{-}}(\bm{s}^{\prime},\bm{a}^{\prime}) - Q_{\bm{\theta}}(\bm{s},\bm{a}) \right)^2 \right]. \end{equation} Here, the expectation $\mathbb{E}$ refers to samples drawn from a replay memory storing state transitions \cite{Lin1993}, and $Q_{\bm{\theta}^{-}}(\bm{s}^{\prime},\bm{a}^{\prime})$ denotes a DQN at an earlier stage of training. The minimization objective in Equation~\eqref{Eq:dqlearning} bears similarities to that used in the tabular setting. Again, old value estimates are updated based on new information, while introducing the $\max$-operator bias. Although DQNs generalize well over a wide range of input states, they are ``unaware'' of the aforementioned overestimation problem \cite{Thrun1993}. However, when compared with the tabular setting, this problem is even more severe due to the lack of any convergence guarantees to optimal Q-values when using parametric approximators, and the inability to explore the whole state-action space. Hence, the number of environmental interactions needed to unlearn the optimistic bias can become prohibitively expensive. \section{Addressing Optimistic Overestimation} \label{sec:info_core} A potential solution to optimistic overestimation in Q-learning is to add an intrinsic penalty to instantaneous rewards, thus reducing Q-value estimates. A principled way to introduce such a penalty is provided by the framework of information theory for decision-making. The rationale is to interpret the agent as an information-theoretic channel with limited transmission rate \cite{Sims2010,Tishby2011,Ortega2013,Genewein2015}. The environmental state $\bm{s}$ is considered as channel input, the agent's action $\bm{a}$ as channel output, and the quality of information transmission is expressed by some reward or utility function $U(\bm{s}, \bm{a})$. According to Shannon's noisy-channel coding theorem \cite{Shannon1948}, the transmission rate is upper-bounded by the average Kullback-Leibler (KL) divergence between the behavioral policy $\pi_{\text{behave}}$ and any arbitrary reference policy with support in $\mathcal{A}$ \cite{Csiszar1984,Tishby1999}. In the following, the reference policy is denoted as the prior policy $\pi_{\text{prior}}$. The KL-divergence, therefore, plays the role of a limited resource and may not exceed a maximum $K>0$, such that $\text{KL}\left(\pi_{\text{behave}}||\pi_{\text{prior}}\right) \leq K$. The intuition behind the information-theoretic viewpoint is that the channel aims to map input $\bm{s}$ to output $\bm{a}$, measuring the quality of the mapping in terms of $U(\bm{s},\bm{a})$. Since the transmission rate is limited, the agent has to discard information in $\bm{s}$ that has little impact on $U$ to obtain a utility-maximizing $\bm{a}$ without exceeding the transmission limit $K$.
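As a brief aside before formalizing the penalty: the optimistic bias of the $\max$-operator discussed in the previous section is easy to reproduce numerically. The toy sketch below (our own construction, not part of the paper's experiments) shows that maximizing over noisy, zero-mean Q-estimates yields a strictly positive value estimate even though all true Q-values are zero.
\begin{verbatim}
import numpy as np

# True Q-values are all zero, but the estimates carry zero-mean noise.
# The max over noisy estimates is biased upward:
#   E[max_a Qhat(s,a)] > max_a E[Qhat(s,a)] = 0.
rng = np.random.default_rng(0)
n_trials, n_actions = 100_000, 10
q_hat = rng.normal(0.0, 1.0, size=(n_trials, n_actions))
print(q_hat.max(axis=1).mean())  # roughly 1.54, not 0
\end{verbatim}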
Importantly, the constraint in transmission rate directly translates into an instantaneous penalty signal leading to reduced utility, as outlined next for a one-step decision-making problem. In a one-step scenario, we obtain the following \begin{equation*} \label{Eq:BoundedOne} \max_{\pi_{\text{behave}}} \sum_{\bm{a} \in \mathcal{A}} \pi_{\text{behave}}(\bm{a}|\bm{s})U(\bm{s},\bm{a}) \ \ \text{s.t.} \ \ \text{KL}\left(\pi_{\text{behave}}||\pi_{\text{prior}}\right) \leq K, \end{equation*} where $\log \frac{\pi_{\text{behave}}(\bm{a}|\bm{s})}{\pi_{\text{prior}}(\bm{a}|\bm{s})}$ reflects the instantaneous penalty\footnote{Note that although we use a state-independent prior in this work, the theoretical framework for Q-value reduction remains valid for state-conditioned $\pi_{\text{prior}}(\bm{a}|\bm{s})$.}. The above constrained optimization problem can be expressed as a concave unconstrained objective by introducing a Lagrange multiplier $\lambda > 0$: \begin{equation} \label{Eq:BoundedTwo} \mathcal{L}^{\star}\left(\bm{s},\pi_{\text{prior}},\lambda\right) = \max_{\pi_{\text{behave}}} \sum_{\bm{a} \in \mathcal{A}} \pi_{\text{behave}}(\bm{a}|\bm{s})U(\bm{s},\bm{a}) -\frac{1}{\lambda} \text{KL}\left(\pi_{\text{behave}}||\pi_{\text{prior}}\right), \end{equation} where $\lambda$ trades off utility versus closeness to prior information. The optimum has a closed form: \begin{equation} \label{Eq:opt_pol} \pi^{\star}_{\text{behave}}(\bm{a}|\bm{s}) = \frac{\pi_{\text{prior}}(\bm{a}|\bm{s})\exp\left(\lambda U(\bm{s},\bm{a})\right)}{\sum_{\bm{a}^{\prime}}\pi_{\text{prior}}(\bm{a}^{\prime}|\bm{s})\exp\left(\lambda U(\bm{s},\bm{a}^{\prime})\right)}. \end{equation} Note that we are not the first to propose such information-theoretic principles within the context of RL (and planning), where the utility function is usually assumed to be the expected cumulative reward, i.e. $U(\bm{s}, \bm{a}) = Q(\bm{s}, \bm{a})$. In fact, similar principles have recently received increased attention within policy search and the identification of optimal cumulative reward values, as outlined next. In policy search, information-theoretic principles similar to Equation~\eqref{Eq:BoundedOne} can be categorized into three classes depending on the choice of the prior $\pi_{\text{prior}}(\bm{a}|\bm{s})$. The first class adopts a fixed prior that remains unchanged during learning. Entropy regularisation~\cite{Williams1991,Mnih2016} is a special case within this class (assuming a uniform prior policy). The second class uses a marginal prior policy obtained by averaging the behavioral policy over all environmental states. The information-theoretic intuition, here, is to encourage the agent to neglect reward-irrelevant information in the environment~\cite{Leibfried2015,Leibfried2016,Peng2017}. The third class assumes an adaptive prior (e.g. a policy learned at an earlier stage of training) to ensure incremental improvement steps in on-policy settings as learning proceeds~\cite{Bagnell2003,Peters2008,Peters2010,Schulman2015}. In optimal cumulative reward value identification, the KL-penalty is directly incorporated into the Q-value estimates rather than being used for regularization. There are two distinct categories for value identification that utilize KL-constraints in different ways. The first category considers a restricted class of Markov decision processes (MDPs), where instantaneous rewards incorporate a KL-penalty that explicitly discourages deviations from uncontrolled environmental dynamics.
Such restricted MDPs enable efficient optimal value computation, as outlined in~\cite{Todorov2009,Kappen2012}. The second category comprises MDPs with intrinsic penalty signals similar to Equation~\eqref{Eq:BoundedOne}, where deviations from a prior policy are penalized. Optimal values are either computed with generalized value iteration schemes \cite{Tishby2011,Rubin2012,Grau-Moya2016}, or in an RL setting similar to Q-learning~\cite{Azar2012,Rawlik2012,Fox2016}. Closest to our work are the recent approaches in~\cite{Haarnoja2017,Haarnoja2017b,Schulman2017}. It is worth mentioning that, apart from the discrete action and high-dimensional state space setting, we tackle two additional problems not addressed previously. First, we consider \emph{dynamic} adaptation for trading off rewards versus intrinsic penalties, as opposed to the static scheme presented in~\cite{Haarnoja2017,Haarnoja2017b,Schulman2017}. Second, we deploy a robust computational approach that incorporates value-based advantages to ensure bounded exponentiation terms. Our approach also fits into the line of work showing how utilising entropy in reinforcement learning connects policy search to optimal cumulative reward value identification \cite{Haarnoja2017,Nachum2017,Donoghue2017,Schulman2017}. In this paper, however, we focus on deep value-based approaches, which show improved performance, as demonstrated in the experiments. Due to the intrinsic penalty signal, information-theoretic Q-learning algorithms provide a principled way of reducing Q-value estimates and are hence suited for addressing the overestimation problem outlined earlier. Although successful in the tabular setting, these algorithms are not readily applicable to high-dimensional environments that require parametric function approximators. In the next section, we adapt information-theoretic concepts to high-dimensional state spaces with function approximators and demonstrate that other deep learning techniques (e.g. DQNs) emerge as a special case. \subsection{Addressing Overestimation in Deep RL} \label{sec:deep_info_rl} We aim to methodologically reduce optimistic overestimation in deep RL by leveraging ideas from information theory. Since Q-value overestimations are a source of sample-inefficiency, we improve large-scale reinforcement learning where current techniques exhibit high sample complexity \cite{Mnih2015}. To do so, we introduce an intrinsic penalty signal in line with the methodology put forward earlier. Before commencing, however, it is instructive to gather more insight into the range of possible learners obtained while tuning such a penalty. Plugging the optimal behavior policy $\pi^{\star}_{\text{behave}}$ from Equation~\eqref{Eq:opt_pol} back into Equation~\eqref{Eq:BoundedTwo} yields \begin{equation*} \label{Eq:FreeEnergy} \mathcal{L}^{\star}(\bm{s},\pi_{\text{prior}},\lambda) = \frac{1}{\lambda} \log \sum_{\bm{a} \in \mathcal{A}}\pi_{\text{prior}}(\bm{a}|\bm{s})\exp\left(\lambda U(\bm{s},\bm{a})\right). \end{equation*} The Lagrange multiplier $\lambda$ steers the magnitude of the penalty and thus leads to different learning outcomes. If $\lambda$ is large, little penalization from the prior is introduced. As such, one would expect a learning outcome that mostly considers maximizing utility. This is confirmed as $\lambda \rightarrow \infty$, where \begin{equation*} \lim_{\lambda \rightarrow \infty} \mathcal{L}^{\star}(\bm{s},\pi_{\text{prior}}, \lambda) = \max_{\bm{a} \in \mathcal{A}} U(\bm{s},\bm{a}).
\end{equation*} On the other hand, for small $\lambda$ values, the deviation penalty is significant and the prior policy should dominate. This is again confirmed when $\lambda \rightarrow 0$, where we recover the expected utility under $\pi_{\text{prior}}$: \begin{equation*} \lim_{\lambda \rightarrow 0} \mathcal{L}^{\star}(\bm{s},\pi_{\text{prior}},\lambda) = \sum_{\bm{a} \in \mathcal{A}} \pi_{\text{prior}}(\bm{a}|\bm{s})U(\bm{s},\bm{a}) = \mathbb{E}_{\pi_{\text{prior}}}\left[U(\bm{s},\bm{a})\right]. \end{equation*} Carrying this idea to deep RL by setting $U(\bm{s},\bm{a})=Q_{\bm{\theta}}(\bm{s},\bm{a})$, where $Q_{\bm{\theta}}(\bm{s},\bm{a})$ represents a deep Q-network, we notice that incorporating a penalty signal in the context of large-scale Q-learning with parameterized function approximators leads to \begin{equation*} \mathcal{L}_{\bm{\theta}}^{\star}(\bm{s},\pi_{\text{prior}},{\lambda}) = \frac{1}{\lambda} \log \sum_{\bm{a} \in \mathcal{A}} \pi_{\text{prior}}(\bm{a}|\bm{s}) \exp\left(\lambda Q_{\bm{\theta}}(\bm{s},\bm{a})\right). \end{equation*} We use this operator to phrase an information-theoretic optimization objective for deep Q-learning: \begin{equation} \label{Eq:Short} \mathcal{J}_{\lambda}(\bm{\theta}) = \mathbb{E}_{\bm{s},\bm{a},r,\bm{s}^{\prime}} \left[\left(r + \gamma \mathcal{L}^{\star}_{\bm{\theta}^{-}}(\bm{s}^{\prime},\pi_{\text{prior}},{\lambda}) - Q_{\bm{\theta}}( \bm{s},\bm{a})\right)^{2}\right], \end{equation} where $\mathbb{E}_{\bm{s},\bm{a},r,\bm{s}^{\prime}}$ refers to samples drawn from a replay memory in each iteration of training, and $\bm{\theta}^{-}$ to the parameter values at an earlier stage of learning. The above objective leads to a wide variety of learners and can be considered a generalization of current methods, including deep Q-networks~\cite{Mnih2015}. Namely, if $\lambda \rightarrow \infty$, we recover the approach in~\cite{Mnih2015} that poses the problem of optimistic overestimation: \begin{equation*} \mathcal{J}_{\lambda \rightarrow \infty} (\bm{\theta}) = \mathbb{E}_{\bm{s},\bm{a},r,\bm{s}^{\prime}}\Bigg[\Bigg(r + \gamma \max_{\bm{a}^{\prime} \in \mathcal{A}}Q_{\bm{\theta}^{-}}(\bm{s}^{\prime},\bm{a}^{\prime}) - Q_{\bm{\theta}}(\bm{s},\bm{a})\Bigg)^{2} \Bigg]. \end{equation*} On the contrary, if $\lambda \rightarrow 0$, we obtain the following \begin{equation} \label{Eq:ObjectiveLambda0} \mathcal{J}_{\lambda \rightarrow 0}(\bm{\theta}) = \mathbb{E}_{\bm{s},\bm{a},r,\bm{s}^{\prime}}\Bigg[\Bigg(r + \gamma \sum_{\bm{a}^{\prime}\in \mathcal{A}}\pi_{\text{prior}}(\bm{a}^{\prime}|\bm{s}^{\prime})Q_{\bm{\theta}^{-}}(\bm{s}^{\prime},\bm{a}^{\prime}) - Q_{\bm{\theta}}(\bm{s},\bm{a}) \Bigg)^{2}\Bigg]. \end{equation} Effectively, Equation~\eqref{Eq:ObjectiveLambda0} estimates future cumulative rewards using the prior policy, as can be seen in the term $\sum_{\bm{a}^{\prime}\in \mathcal{A}}\pi_{\text{prior}}(\bm{a}^{\prime}|\bm{s}^{\prime})Q_{\bm{\theta}^{-}}(\bm{s}^{\prime},\bm{a}^{\prime})$. From the above two special cases, we recognize that our formulation allows for a variety of learners, where $\lambda$ steers outcomes between these two limiting cases. Note, however, that setting low values for $\lambda$ instead introduces a pessimistic bias~\cite{Fox2016}. Since low $\lambda$-values introduce a pessimistic bias and large $\lambda$-values an optimistic bias, there must be a $\lambda$-value in between encouraging unbiased estimates.
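To make the interpolation concrete, the following minimal sketch (our own, in Python) evaluates the operator $\mathcal{L}^{\star}$ for a fixed Q-vector and a uniform prior; it already uses the max-subtraction trick for numerical stability that is derived as robust value computation in the next section.
\begin{verbatim}
import numpy as np

def soft_operator(q, prior, lam):
    # (1/lam) log sum_a prior(a) exp(lam q(a)), stabilised by
    # subtracting v = max_a q(a) inside the exponent.
    v = q.max()
    return v + np.log(np.sum(prior * np.exp(lam * (q - v)))) / lam

q = np.array([1.0, 2.0, 3.0])
prior = np.ones(3) / 3.0
print(soft_operator(q, prior, 1e-6))  # ~2.0: expectation under the prior
print(soft_operator(q, prior, 1e6))   # ~3.0: the max operator
\end{verbatim}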
Unfortunately, it is not possible to compute such a $\lambda$ in closed form, which is why we propose a dynamic scheduling scheme based on temporal Bellman error evolution in the next section. Note that we assume a fixed prior $\pi_{\text{prior}}$ and aim at scheduling $\lambda$. Another possibility would be to fix $\lambda$ and schedule the prior action probabilities instead. The latter is, however, practically less convenient than scheduling a scalar. \section{Dynamic \& Robust Deep RL} \label{sec:deep_info_rl_continued} A fixed hyperparameter $\lambda$ is undesirable in the course of training, as the effect of the intrinsic penalty would remain unchanged. Since overestimations are more severe at the start of the learning process, a dynamic scheduling scheme for $\lambda$ with small values at the beginning (incurring strong penalization) and larger values towards the end (leading to less penalization) is preferable. \paragraph{Adaptive $\lambda$:} A suitable candidate for dynamically adapting $\lambda$ in the course of training is the average squared loss (over replay memory samples) $\mathcal{J}_{\text{squared}} (t,p) = (t-p)^{2}$ between target values $t=r + \gamma \mathcal{L}^{\star}_{\bm{\theta}^{-}}(\bm{s}^{\prime},\pi_{\text{prior}}, \lambda)$ and predicted values $p=Q_{\bm{\theta}}(\bm{s},\bm{a})$. The rationale here is that $\lambda$ should be inversely proportional to the average squared loss. If $\mathcal{J}_{\text{squared}}(t,p)$ is high on average, as is the case during early episodes of training, low values of $\lambda$ are favored. However, if $\mathcal{J}_{\text{squared}}(t,p)$ is low on average later in training, then high $\lambda$ values are more suitable for the learning process. We therefore propose to adapt $\lambda$ with a running average over the loss between targets and predictions. The running average $\mathcal{J}_{\text{avg}}$ should emphasize recent history as opposed to samples that lie further in the past, since the parameters $\bm{\theta}$ of the Q-value approximator change over time. This is achieved with an exponential window and the online update \begin{equation} \label{eq:beta_updates} \mathcal{J}_{\text{avg}} \leftarrow \left(1-\frac{1}{\tau}\right) \mathcal{J}_{\textrm{avg}} + \frac{1}{\tau} \mathbb{E}_{t,p} \left[ \mathcal{J}_{\text{squared}}(t,p) \right] , \end{equation} where $\tau$ is a time constant referring to the window size of the running average, and $\mathbb{E}_{t,p} \left[ \mathcal{J}_{\text{squared}}(t,p) \right]$ is a shorthand notation for Equation~\eqref{Eq:Short}. This running average allows one to dynamically assign $\lambda = \frac{1}{\mathcal{J}_{\textrm{avg}}}$ at each training iteration. The squared loss $\mathcal{J}_{\text{squared}}(t,p)$ can, however, impede the stability of deep Q-learning, where the parametric approximator is a deep neural net and parameters are updated with gradients and backpropagation. To prevent loss values from growing too large, the squared loss is in practice replaced with an absolute loss if $|t-p|>1$~\cite{Mnih2015}, referred to as the Huber loss $\mathcal{J}_{\text{Huber}}(t,p)$. The Huber loss leads to a more robust adaptation of $\lambda$, as it uses an absolute loss for large errors instead of a squared one. Furthermore, the squared loss is more sensitive to outliers and might penalize the learning process unreasonably in the presence of sparse but large error values.
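The listing below is a minimal sketch (our own, in Python) of this adaptive scheme; the class name, the initialisation of $\mathcal{J}_{\text{avg}}$ and the particular Huber form are illustrative choices rather than the exact implementation.
\begin{verbatim}
import numpy as np

def huber(t, p):
    # Squared loss for small errors, absolute loss beyond |t - p| = 1.
    err = np.abs(t - p)
    return np.where(err <= 1.0, 0.5 * err**2, err - 0.5)

class LambdaScheduler:
    def __init__(self, tau=1000.0, j_init=1.0):
        self.tau = tau       # window size of the exponential average
        self.j_avg = j_init  # running average J_avg

    def update(self, targets, preds):
        # J_avg <- (1 - 1/tau) J_avg + (1/tau) E[J_Huber(t, p)]
        batch_loss = float(huber(targets, preds).mean())
        self.j_avg += (batch_loss - self.j_avg) / self.tau
        return 1.0 / self.j_avg  # lambda = 1 / J_avg
\end{verbatim}
The returned $\lambda$ would then be used when computing the targets $r + \gamma \mathcal{L}^{\star}_{\bm{\theta}^{-}}(\bm{s}^{\prime},\pi_{\text{prior}},\lambda)$ in the next training iteration.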
\paragraph{Robust Value Computation:} The dynamic adaptation of $\lambda$ encourages learning of unbiased estimates of the optimal cumulative reward values. Presupposing $Q_{\bm{\theta}}(\bm{s},\bm{a})$ is bounded, $\mathcal{L}^{\star}_{\bm{\theta}}(\bm{s},\pi_{\text{prior}},\lambda)$ is also bounded in the limits of $\lambda$: \begin{equation*} \mathbb{E}_{\pi_{\text{prior}}}\left[Q_{\bm{\theta}}(\bm{s},\bm{a})\right] \leq \mathcal{L}^{\star}_{\bm{\theta}}(\bm{s},\pi_{\text{prior}},\lambda) \leq \max_{\bm{a} \in \mathcal{A}} Q_{\bm{\theta}}(\bm{s},\bm{a}). \end{equation*} In practice, however, this operator is prone to computational instability for large $\lambda$ due to the exponential term $\exp\left(\lambda Q_{\bm{\theta}}(\bm{s},\bm{a})\right)$. We address this problem by multiplying with the identity term $\frac{\exp\left(\lambda V_{\bm{\theta}}(\bm{s})\right)}{\exp\left(\lambda V_{\bm{\theta}}(\bm{s})\right)}$, where $V_{\bm{\theta}}(\bm{s}) = \max_{\bm{a}}Q_{\bm{\theta}}(\bm{s},\bm{a})$: \begin{equation*} \begin{split} \mathcal{L}^{\star}_{\bm{\theta}} (\bm{s},\pi_{\text{prior}},\lambda) & = \frac{1}{\lambda} \log \sum_{\bm{a} \in \mathcal{A}} \pi_{\text{prior}}(\bm{a}|\bm{s}) \exp\left(\lambda Q_{\bm{\theta}}(\bm{s},\bm{a})\right) \frac{\exp\left(\lambda V_{\bm{\theta}}(\bm{s})\right)}{\exp\left(\lambda V_{\bm{\theta}}(\bm{s})\right)} \\ &=V_{\bm{\theta}}(\bm{s})+ \frac{1}{\lambda} \log \sum_{\bm{a} \in \mathcal{A}} \pi_{\text{prior}}(\bm{a}|\bm{s}) \exp\left(\lambda \left(Q_{\bm{\theta}}(\bm{s},\bm{a}) - V_{\bm{\theta}}(\bm{s})\right)\right). \end{split} \end{equation*} The first term represents the maximum operator as in vanilla deep Q-learning. The second term is a log-partition sum with computationally stable elements due to the non-positive exponents $\lambda (Q_{\bm{\theta}}(\bm{s},\bm{a}) -V_{\bm{\theta}}(\bm{s})) \leq 0$. As a result, the log-partition sum is non-positive and subtracts a portion from $V_{\bm{\theta}}(\bm{s})$ that reflects cumulative reward penalization. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.7\columnwidth]{3games}} \caption{Q-values and episodic rewards for Asterix, Road Runner and Up'n Down for both normal and dueling architectures. Each plot shows three pairs of graphs, reporting the outcomes of two different random seeds, in black for DQN, purple for double DQN (DDQN) and blue for our information-theoretic approach (DIN). Clearly, our approach leads to lower Q-value estimates resulting in significantly better game play performance.} \label{fig:3games} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.4\columnwidth]{median_games}} \caption{Median normalized episodic rewards across 20 Atari games for normal and dueling architectures. Each plot compares DQN (black), against double DQN (DDQN, purple) and our approach (DIN, blue). Our approach leads to significantly higher median game score.} \label{fig:median_games} \end{center} \vskip -0.2in \end{figure} \section{Experiments \& Results} We hypothesize that addressing the overestimation problem results in improved sample efficiency and overall performance. To this end, we use the Atari domain \cite{Bellemare2013} as a benchmark to evaluate our method. We compare against deep Q-networks \cite{Mnih2015}, which are susceptible to overestimations, and double deep Q-networks \cite{vanHasselt2016}---an alternative proposed to address the precise problem we target.
Our results demonstrate that our proposed method (titled deep information networks, DIN) leads to significantly lower Q-value estimates, resulting in improved sample efficiency and game play performance. We also show that these findings remain valid for the recently proposed dueling architecture~\cite{Wang2016}\footnote{Our approach could be incorporated into the newly released Rainbow framework~\cite{Hessel2018}, which achieves state-of-the-art results by combining several independent DQN improvements over the past few years (one of them being double DQNs, over which our approach achieves superior performance). Although we focus on Q-value identification in this work, ideas similar to DIN also apply to actor-critic methods like A3C~\cite{Mnih2016,Schulman2017}.}. Parameter settings for reproducibility can be found in the appendix. Beyond the comparison against deep Q-networks and double deep Q-networks, we conduct further experiments by replacing the network outputs with the dueling architecture \cite{Wang2016}. The dueling architecture leverages the advantage function $A(\bm{s},\bm{a})=Q(\bm{s},\bm{a})-\max_{\bm{a}}Q(\bm{s},\bm{a})$ and generalizes learning across actions. This results in improved game play performance, as confirmed in our experiments. \subsection{Q-Values and Game Play Performance} \label{q-values} During training, networks are stored every $10^5$ iterations and used for offline evaluation. Evaluating a single network offline comprises $100$ game play episodes lasting for at most $4.5 \times 10^3$ iterations. In evaluation mode, the agent follows an $\epsilon$-greedy policy with $\epsilon=0.05$~\cite{Mnih2015}. We investigate 20 games. Figure~\ref{fig:3games} reports results from the offline evaluation on three individual games (Asterix, Road Runner and Up'n Down), illustrating average maximum Q-values and average episodic rewards as a function of training iterations. Note that episodic rewards are smoothed with an exponential window, similar to Equation~\eqref{eq:beta_updates} with $\tau=10$, to provide a clearer view. On all three games, our approach leads to significantly lower Q-value estimates when compared to DQN and double DQN for both the normal and the dueling architecture (see left plots in Figure~\ref{fig:3games}). At the same time, this leads to significant improvements in game play performance (see right plots of Figure~\ref{fig:3games}). Absolute episodic rewards ($\text{score}$) may vary substantially between different games. To ensure comparability across games, we normalize episodic rewards ($\text{score}_{\text{norm}}$) as $\text{score}_{\text{norm}} = \frac{\text{score} - \text{score}_{\text{random}}}{\text{score}_{\text{human}} - \text{score}_{\text{random}}} \cdot 100\% $, where $\text{score}_{\text{random}}$ and $\text{score}_{\text{human}}$ refer to random and human baselines, see \cite{Mnih2015,Wang2016}. Normalized episodic rewards enable a comparison across all 20 Atari games by taking the median normalized score over games \cite{Hessel2018}. The results of this analysis are depicted in Figure~\ref{fig:median_games} as a function of training iterations (smoothed with an exponential window using $\tau = 10$). Our approach clearly outperforms DQN and double DQN for both normal and dueling architectures. The dueling architecture yields an additional performance increase when combined with DIN. Our approach also yields superior results in terms of the best-performing agent (see the appendix for details).
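For clarity, a short sketch (our own, in Python) of the evaluation metric just defined, i.e. the human-normalized score and its median across games:
\begin{verbatim}
import numpy as np

def normalized_score(score, score_random, score_human):
    # score_norm = (score - random) / (human - random) * 100%
    return 100.0 * (score - score_random) / (score_human - score_random)

def median_normalized_score(scores, randoms, humans):
    # One normalized value per game, then the median over all games.
    norm = [normalized_score(s, r, h)
            for s, r, h in zip(scores, randoms, humans)]
    return float(np.median(norm))
\end{verbatim}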
\subsection{Sample Efficiency} \label{sampe_efficiency} To quantify sample efficiency, we identify the minimal number of training iterations required to attain maximum deep Q-network performance. To this end, we compute the average episodic reward as in Figure~\ref{fig:3games}, but smoothed with an exponential window $\tau=100$. We then identify, for each approach, the number of training iterations at which maximum deep Q-network performance is first attained. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.8\columnwidth]{sampling_complexity}} \caption{Sample efficiency for Asterix, Road Runner and Up'n Down under both normal and dueling architectures (left two panels) and when taking the median over 20 games (right panel). The color code is: DQN (black), double DQN (DDQN, purple) and our approach (DIN, blue). DINs are more sample-efficient for both architectures on the three games depicted and on average across 20 games.} \label{fig:sample_eff} \end{center} \vskip -0.2in \end{figure} The results for Asterix, Road Runner and Up'n Down are shown in the left two panels of Figure~\ref{fig:sample_eff}. It can be seen that our approach leads to significant improvements in sample efficiency when compared to DQN and double DQN. For instance, DINs require only about $2 \times 10^7$ training iterations in Road Runner, compared to about $3 \times 10^{7}$ for double DQNs and about $5\times 10^{7}$ for standard DQNs using the normal architecture. These improvements also hold for the dueling setting. In order to assess sample efficiency across all 20 Atari games, we compute the median sample efficiency over games, see the right panel of Figure~\ref{fig:sample_eff}. This analysis confirms the overall improved sample complexity attained by our approach in a wide range of tasks compared to DQN and double DQN. \begin{figure*}[ht!] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=1\linewidth]{sql_comparison}} \caption{Episodic rewards for Asterix, Beamrider and Road Runner comparing our method to SQL. Clearly, our results show better performance in both the normal and dueling architectures without the necessity of identifying an optimal $\lambda$ in advance.} \label{fig:q-values_and_score_rr_fixedLambda} \end{center} \vskip -0.2in \end{figure*} \subsection{Comparison to Soft Q-Learning (SQL)} The closest work to our approach is that of~\cite{Schulman2017}, where the authors use information-theoretic principles to bridge the gap between Q-learning and policy gradient RL. Our approach goes further by dynamically adapting $\lambda$ in the course of training, and by introducing a robust computation based on value advantages. We compare our method to SQL (where $\lambda$ is fixed) on the games Asterix, Beamrider and Road Runner. The results depicted in Figure~\ref{fig:q-values_and_score_rr_fixedLambda} demonstrate that our method can outperform SQL on these three games by significant margins, without the requirement of pre-specifying $\lambda$. For instance, DINs match the best performance of SQL after about $5 \times 10^6$ iterations on the Road Runner game. \section{Conclusions} In this paper, we proposed a novel method for reducing sample complexity in deep reinforcement learning. Our technique introduces an intrinsic penalty signal by adapting principles from information theory to high-dimensional state spaces. We showed that DQNs are a special case of our proposed approach for a specific choice of the Lagrange multiplier steering the intrinsic penalty.
Finally, in a set of experiments on 20 Atari games, we demonstrated that our technique indeed outperforms competing approaches in terms of performance and sample efficiency. These results remain valid for the dueling architecture from \cite{Wang2016}, which yields a further performance boost.
\section{Introduction} Robots are increasingly adopted in industrial environments to carry out dangerous, repetitive, or stressful tasks. The introduction of robots in production lines has improved a number of key performance indicators, and has addressed a growing, market-driven demand for quality goods \cite{esmaeilian2016evolution}. However, due to well-known limitations of robot perceptual, cognitive, and reasoning capabilities, certain tasks, which require a higher level of awareness or cannot be easily modelled or formalised, are still better handled by human operators. The introduction of collaborative robots (nowadays referred to as \textit{cobots}) in recent years has contributed to relaxing those limitations, and has implicitly improved human working conditions \cite{kock2011robot}. Among the tasks typically considered stressful, \textit{quality control} and \textit{defects inspection} play a key role in defining the quality of a finished or semi-finished product. Currently, trained and expert personnel are tasked with establishing benchmarks and examining product quality, activities which require prolonged focus and continuous attention. In this work, we argue that the collaboration between an experienced human operator and a robot may lead to higher defect-spotting rates, overall productivity, and safety \cite{de2008atlas, lasota2017survey}. Human-robot collaboration (HRC) is defined as the purposeful interaction among humans and robots in a shared space, aimed at a common goal. A natural collaboration requires a robot to perceive and correctly interpret the actions (as well as the intentions) of other humans or robots \cite{adams2005human, steinfeld2006common}. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{fig.png} \caption{A human operator and two robots collaborating in a product defect inspection scenario: the mobile manipulator supplies a human operator and the dual-arm manipulator with objects to inspect.} \label{fig:scene} \end{figure} The main goal of this paper is to extend the human-robot collaboration model proposed in \cite{darvish2018interleaved}, referred to as \textsc{FlexHRC}, along two directions. On the one hand, to allow for a collaboration model taking multiple, heterogeneous robots into account, while the original work in \cite{darvish2018interleaved} considered models with one human operator and one robot. On the other hand, to introduce a use case whereby a human operator and a robot must collaboratively perform a defects inspection, whereas the original work focused on assembly tasks. The scenario we consider is shown in Figure \ref{fig:scene}. A mobile manipulator (in our case, a Kuka youBot) picks up objects to be inspected (wooden pieces) from a warehouse area (a marked region in the workspace), and delivers them to human operators or to another robot (in our case, a dual-arm Baxter manipulator) for inspection \cite{Kattepur2019}. When the object to inspect is delivered to human operators, these undertake the foreman task \cite{Huber2010assist, Glasauer2010interacting}, and the object is then passed to the manipulator for a further vision-based inspection. Afterwards, the manipulator sorts the object as \textit{faulty} or \textit{non faulty} into two different boxes. Scenarios modelling defects inspection impose functional requirements which partially overlap with the ones considered in \cite{darvish2018interleaved} for the assembly of semi-finished products.
The main functional requirement in quality control is the \textit{validation of product quality} with a reliable estimate. In an HRC process, such a requirement can be met by a double-check carried out by an expert operator whenever the defect classification accuracy provided by the robot is below a pre-specified threshold. However, differently from the work in \cite{kawaguchi1995internal, oh2009bridge, cho2013inspection}, whereby a visual inspection alone is carried out by a robot, validating the quality of products is likely to require an integration of auditory, tactile, and visual perception \cite{Spence2006auditory, Garrett2001effects}. Such an integration is still an open issue and is not considered in this paper. This paper introduces and discusses \textsc{ConcHRC}, a framework extending \textsc{FlexHRC} that addresses the need for concurrent, multi human-robot collaboration in industrial environments, and validates the models in an inspection use case. The novelty of the approach is two-fold: (i) the design and development of an AND/OR graph based multi human-robot collaboration model that allows for concurrent, modelled operations in a team made up of multiple human operators and/or robots; (ii) the description of a particular instance of such a cooperation model, implemented within and extending an existing human-robot collaboration architecture, whereby a human operator, a mobile manipulator, and a dual-arm manipulator collaborate for defects inspection. In the paper, the focus is on the concurrent HRC model for the quality control task, and therefore we decided to simplify the robot perception system. The paper is organized as follows. Section \ref{sec:RelatedWork} discusses related work. Section \ref{sec:SystemArchitecture} introduces the \textsc{ConcHRC} architecture, and Section \ref{sec:concurrent} formalises the concurrent model. Section \ref{sec4} describes the experimental scenario and the related discussion. Conclusions follow.
\section{Related Work} \label{sec:RelatedWork} For a natural human-robot collaboration, different aspects such as safety, robot perception, task representation, and action execution must be considered when designing a collaboration-friendly workspace \cite{goodrich2008human, lasota2017survey}. This paper focuses on task representation when multiple human operators and/or robots group as a team to reach a common goal, which is \textit{a priori} known to all collaborators, either humans or robots. The uncertainties in perception, task representation and reasoning that a robot must face increase when collaborating with humans, because a natural cooperation, i.e., one similar to that of human-human teams \cite{meneweger2015working}, may require the robot to \textit{make sense} of or even anticipate human intentions. The need arises to provide robots with reasoning capabilities about the state of the human-robot cooperation process, suitable to be executed online. Although approaches based on offline planning and task allocation fulfil a requirement related to the effectiveness of the collaboration \cite{johannsmeier2016hierarchical, lemaignan2017artificial}, they neither ensure such a natural collaboration nor address its intrinsic uncertainties. Differently, the approaches described in \cite{hawkins2014anticipating, levine2014concurrent, darvish2018flexible, darvish2018interleaved} are aimed at enhancing the naturalness and the flexibility of the collaboration based on online task allocation and/or contingency plans, such that the robot is able to adapt on the spot to human decisions and uncertainties. Such flexibility requires a rich perception for recognising human actions as well as the collaboration state \cite{darvish2018interleaved}. Some of the methods applied for robot action planning in collaboration scenarios include \textit{Markov Decision Processes} \cite{claes2014human, crandall2018cooperating}, \textit{Task Networks} \cite{levine2014concurrent, lemaignan2017artificial}, \textit{AND/OR graphs} \cite{xie2010dynamic, johannsmeier2016hierarchical, darvish2018flexible}, and STRIPS-based \textit{planners} \cite{capitanelli2018manipulation}. Among these methods, the difficulty of finding the priors and the reward function for Markov Decision Processes and the exponential growth of the computational load of STRIPS-based planners make them difficult to adopt in practice. Task Networks and AND/OR graphs ensure that the generated collaboration models are in accordance with domain expert desiderata, hence guaranteeing shared \textit{mental} models between human operators and robots. In order to allocate tasks to human operators or robots, and to meet such collaboration constraints as limited resources, a common approach in the literature is to maximise the overall utility value of the collaboration \cite{tsarouchi2017human} on the basis of multi-objective optimisation criteria. However, in these examples the number of human operators or robots is limited. In order to enhance the efficiency of the collaboration, and to face the inherent limitations owing to workspace constraints, human skills, and robot capabilities, one approach is to raise the number of human operators or heterogeneous robots involved in the collaboration. To this aim, human operators and robots must schedule their actions according to resources, timings, and skill constraints.
An example can be found in \cite{toussaint2016relational}, whereby concurrent cooperation models are formalised as relational activity processes; the authors model the cooperation and predict future actions using a Monte Carlo method along with learning by demonstration. A similar approach is adopted in \cite{smith1999temporal}, whereby a temporal graph plan taking action durations into account has been applied. Another illustration of concurrent HRC, with a probabilistic formulation due to uncertainties, is presented in \cite{weld2008planning}, where a concurrent Markov Decision Process is adopted. Building on our previous work \cite{darvish2018flexible, darvish2018interleaved}, where we demonstrated a flexible collaboration between human operators and robots, this paper extends the notion of AND/OR graph to a concurrent model, and adopts it to model multi human-robot collaboration scenarios. This is further detailed in Section \ref{sec:concurrent}. \section{System's Architecture} \label{sec:SystemArchitecture} Figure \ref{fig:arch} depicts the overall architecture of the \textsc{ConcHRC} framework. The architecture is made up of three layers: a \textit{perception} layer in green, a \textit{representation} layer in blue, and an \textit{action} layer in red. The perception layer provides information regarding the activities carried out by human operators, a part's defect status, and object locations in the robot workspace. The representation layer forms the concurrency model, stores the necessary knowledge, and manages task execution to reach the collaboration goal. The action layer simulates and executes robot actions. \begin{figure*}[t!] \centering \includegraphics[width=0.73\textwidth]{architecture.jpg} \caption{System's architecture for a multi human-robot collaboration model in a defects detection scenario.} \label{fig:arch} \end{figure*} The perception layer encapsulates three modules, which are called \textit{Human Activity Recognition}, \textit{Product Defect Detection}, and \textit{Scene Perception}. The latter two modules provide the \textit{Knowledge Base} module with information about the status of the workspace, human operators, and robots, whereas the former communicates detected human activities to the \textit{Task Planner}. \textit{Human Activity Recognition} obtains inertial data from wearable sensors worn by human operators, and runs a series of algorithms to detect and classify the performed actions, which are modelled using \textit{Gaussian Mixture Modelling} (GMM) and \textit{Regression} \cite{bruno2014using, darvish2018flexible}. In our setup, defects detection is considered as a classification problem: \textit{Product Defect Detection} exploits the images coming from a robot-centric camera to detect defects. The action layer is made up of three modules, namely \textit{Robot Execution Manager}, \textit{Simulator}, and \textit{Controller}. The \textit{Robot Execution Manager} module receives discrete, symbolic commands from the \textit{Task Planner}, maps them to actual values, and drives the behaviour of the \textit{Controller} or the \textit{Simulator}. This module retrieves information about the workspace, human operators and robots from the \textit{Knowledge Base}. The \textit{Robot Execution Manager} is in charge of sequencing robot behaviours on the basis of the plan provided by the \textit{Task Representation} module. It also provides an acknowledgement to the \textit{Task Planner} upon the execution of a command by the robots.
The \textit{Simulator} module is aimed at predicting the outcome of robot behaviours before their actual execution. It simulates a closed-loop model of the robot and the controller, by solving the ordinary differential equations online. The \textit{Controller} receives the configuration (in joint space) or the task space command (in the Cartesian space) from the \textit{Robot Execution Manager}. It computes the joint velocity reference values for the robot at each control time step, while receiving feedback from it \cite{Simetti2015}. The representation layer embeds the \textit{Task Representation}, \textit{Task Planner}, and \textit{Knowledge Base} modules. In \textsc{ConcHRC}, an AND/OR graph with several layers represents the collaborative task \cite{darvish2018flexible}. In order to model concurrency in a multi-agent collaboration scenario, the AND/OR graph based \textsc{FlexHRC} framework has been extended, as described in the next Section. Along with the AND/OR graph, the \textit{Task Planner} module is in charge of decision making and the adaptation of the ongoing parallel tasks. To do so, the \textit{Task Planner} provides a set of achieved cooperation states or transitions between states to the \textit{Task Representation} module, and receives the set of allowed cooperation states and transitions with the associated costs to follow. Later, it associates each state or state transition with an ordered set of actions and, according to the workspace's, human operator's, and robot's status, along with online simulation results, it assigns actions to either human operators or robots. Finally, it informs each human operator or robot involved in the cooperation about the action to follow. Once an action is carried out, it receives the acknowledgement from the action level and updates its internal structure. The \textit{Knowledge Base} stores all relevant information to make the cooperation progress, as better described in \cite{darvish2018flexible}. \section{A Concurrent Model for Multi-agent Cooperation} \label{sec:concurrent} In this Section, we first describe a multi human-robot cooperation model based on a $1$-layer AND/OR graph, then we consider an extended $n$-layer AND/OR graph, and finally a concurrent model based on a constrained $n$-layer configuration, which we refer to as a $c$-layer AND/OR graph. \subsection{$1$-layer AND/OR Graphs} \label{sec:1-layer} In order to formalise the multi human-robot cooperation process in \textsc{ConcHRC} we adopt AND/OR graphs \cite{de1990and, luger2009artificial, russell2010artificial}, as discussed above. An AND/OR graph allows for representing \textit{procedures} to follow, which can be decomposed in subproblems as parts of the graph, as well as the logic \textit{relationships} among them, i.e., the graph interconnectivity. The root node conventionally represents the goal state of the process being modelled, and achieving the goal means traversing the graph from leaf nodes to the root node via intermediate nodes and hyper-arcs according to its structure. A $1$-layer AND/OR graph $G$ can be formally defined as a $2$-tuple $\langle N, H \rangle$ where $N$ is a set of $|N|$ nodes, and $H$ is a set of $|H|$ hyper-arcs. A hyper-arc $h \in H$ induces the set $N_c(h) \subset N$ of its \textit{child} nodes, and the singleton $N_p(h) \subset N$ made up of a \textit{parent} node, such that \begin{equation} h: N_c(h) \rightarrow N_p(h).
\label{eq:hyper_arc_trans_simple} \end{equation} Furthermore, we define $n \in N$ as a \textit{leaf} node if $n$ is not a parent node for any hyper-arc, i.e., if no $h \in H$ exists such that $n \in N_p(h)$, or as a \textit{root} node if it is the only node that is not a child node for any hyper-arc, i.e., if no $h \in H$ exists such that $n \in N_c(h)$. In a multi human-robot cooperation scenario, each node $n \in N$ represents a cooperation \textit{state}, e.g., \textit{faulty object inside box}, whereas each hyper-arc $h \in H$ represents a (possibly) \textit{many-to-one} transition among states, i.e., activities performed by human operators and/or robots, which make the cooperation move forward, such as \textit{the robot puts the faulty object into the box}. The relation among child nodes in hyper-arcs is the logical \textit{and}, whereas the relation between different hyper-arcs inducing on the same parent node is the logical \textit{or}, i.e., different hyper-arcs inducing on the same parent node represent alternative ways for a cooperation process to move on. Each hyper-arc $h \in H$ implements the transition in (\ref{eq:hyper_arc_trans_simple}) by checking the \textit{requirements} defined by nodes in $N_c(h)$, executing \textit{actions} associated with $h$, and generating \textit{effects} compatible with the parent node. Each hyper-arc $h \in H$ executes an ordered set $A(h)$ of \textit{actions}, such that \begin{equation} A(h) = (a_1, \ldots,a_{|A|}; \preceq), \end{equation} where the precedence operator $\preceq$ defines the pairwise expected order of action execution. The sequence can be scripted or planned online \cite{capitanelli2018manipulation}. Before a hyper-arc $h$ is executed, all actions $a \in A(h)$ are marked as \textit{undone}, i.e., ${\textsf{done}(a) \leftarrow false}$. When one action $a$ is executed by any agent, its status changes to ${\textsf{done}(a) \leftarrow true}$. A hyper-arc $h \in H$ is marked as \textit{solved}, i.e., ${\textsf{solved}(h) \leftarrow true}$, \textit{iff} all actions $a \in A(h)$ are done in the expected order. In a similar way, nodes $n \in N$ may be associated with a (possibly ordered) set of \textit{processes} $P(n)$, which are typically \textit{robot} behaviours activated in a cooperation state but not leading to a state transition. It is possible to introduce the notion of feasibility. A node $n \in N$ is \textit{feasible}, i.e., $\textsf{feasible}(n) \leftarrow true$, \textit{iff} a solved hyper-arc $h \in H$ exists for which $n \in N_p(h)$, and $n$ is not yet met, i.e., \begin{equation} \exists h \in H. \left(\textsf{solved}(h) \cap n \in N_p(h) \cap \neg \textsf{met}(n)\right). \label{eq:feasible_node} \end{equation} All leaf nodes in an AND/OR graph are usually feasible at the beginning of the multi human-robot cooperation process, which means that the cooperation can be performed in many ways. A hyper-arc $h \in H$ is \textit{feasible}, i.e., $\textsf{feasible}(h) \leftarrow true$, \textit{iff} for each node $n \in N_c(h)$, $\textsf{met}(n) \leftarrow true$ and $\textsf{solved}(h) \leftarrow false$, i.e., \begin{equation} \forall n \in N_c(h).\left(\textsf{met}(n) \cap \neg \textsf{solved}(h)\right).
\label{eq:feasible_hyper_arc} \end{equation} Once a hyper-arc $h_i \in H$ is solved, all other feasible hyper-arcs $h_j \in H\setminus\{h_i\}$, which share with $h_i$ at least one child node, i.e., $N_c(h_i) \cap N_c(h_j) \neq \emptyset$, are marked as unfeasible, in order to prevent the cooperation process from considering alternative cooperation paths that have become irrelevant. Given an AND/OR graph, the multi human-robot cooperation process is modelled as a \textit{graph traversal} procedure which, starting from a set of leaf nodes, must reach the root node by selecting hyper-arcs and reaching states in one of the available \textit{cooperation paths}, depending on the feasibility statuses of nodes and hyper-arcs. According to the graph structure, multiple cooperation paths may exist, meaning that multiple ways to solve the task may be equally legitimate. The traversal procedure dynamically follows the cooperation path that at any time is characterised by the lowest cost. The entire algorithm has been described in \cite{darvish2018flexible,darvish2018interleaved}. The traversal procedure suggests to human operators the actions in the hyper-arcs that are part of the path, and sends to robots the actions they must execute. Human operators can override the suggestions at any time, executing different actions, which may cause the graph to reach a state not part of the current path. When this happens, \textsc{ConcHRC} tries to progress from that state onwards \cite{darvish2018flexible,darvish2018interleaved}. This mechanism enables \textsc{ConcHRC} to pursue an optimal path leading to the solution, while it allows human operators to choose alternative paths. As the multi human-robot cooperation process unfolds and the AND/OR graph is traversed, we denote by $N_f$ and $H_f$ the sets of \textit{currently} feasible nodes and hyper-arcs, respectively. We say that an AND/OR graph $G$ is \textit{solved}, i.e., $\textsf{solved}(G) \leftarrow true$, \textit{iff} its root node $r \in N$ is met, i.e., $\textsf{met}(r) \leftarrow true$. Otherwise, if $N_f \cup H_f = \emptyset$ holds, i.e., there are neither feasible nodes nor hyper-arcs, then the multi human-robot cooperation process fails, because there is no feasible cooperation path leading to the root node. \subsection{$n$-layer AND/OR Graphs} \label{sec:n-layer} An $n$-layer AND/OR graph $G^n$ can be recursively defined as a $2$-tuple $\langle \Gamma, \Theta \rangle$ where $\Gamma$ is an ordered set of $|\Gamma|$ \textit{up to} $(n-1)$-layer AND/OR graphs, such that: \begin{equation} \Gamma = \left(G_1, \ldots, G_{|\Gamma|}; \preceq \right), \label{eq:gamma_set} \end{equation} and $\Theta$ is a set of $|\Theta|$ pairwise transitions between them. In (\ref{eq:gamma_set}), the AND/OR graphs are ordered according to their layer. Lower-layer AND/OR graphs are characterised by a decreasing level of abstraction, i.e., they are aimed at modelling the HRC process more accurately. Transitions in $\Theta$ define how different AND/OR graphs in $\Gamma$ are connected, and in particular model the relationship between graphs belonging to different layers. If we recall (\ref{eq:hyper_arc_trans_simple}) and contextualise it for an AND/OR graph $G^n = \langle N^n, H^n \rangle$, we observe that a given hyper-arc in $H^n$ represents a mapping between the set of its child nodes and the singleton parent node.
We can think of a generalised version of such a mapping to encompass a whole AND/OR graph $G^{n-1} = \langle N^{n-1}, H^{n-1} \rangle$, where the set of child nodes is constituted by the set $N^{n-1}_L$ of leaf nodes, and the singleton parent node by the graph's root node $r^{n-1} \in N^{n-1}$. As a consequence, a transition $T \in \Theta$ can be defined between a hyper-arc $h \in H^n$ and an entire AND/OR graph $G^{n-1}$, such that \begin{equation} T: h \rightarrow G^{n-1}, \label{eq:graph_transition} \end{equation} subject to the fact that appropriate mappings can be defined between the set of child nodes of $h$ and the set of leaf nodes of the deeper graph, i.e., \begin{equation} M_1: N_c(h) \rightarrow N^{n-1}_L \subset N^{n-1}, \label{eq:mapping_1} \end{equation} and between the singleton set of parent nodes of $h$ and the root node of the deeper graph, i.e., \begin{equation} M_2: N_p(h) \rightarrow r^{n-1} \in N^{n-1}. \label{eq:mapping_2} \end{equation} Mappings $M_1$ and $M_2$ must be such that the corresponding information in different layers is \textit{semantically equivalent}, i.e., it represents the same information with a different representation granularity; the same applies to $N_p(h)$ and the root of $G^{n-1}$. Once these mappings are defined, it is easy to see that $G^n$ has a tree-like structure, where graphs in $\Gamma$ are nodes and transitions in $\Theta$ are edges. An AND/OR graph $G^n$ is feasible, i.e., $\textsf{feasible}(G^n) \leftarrow true$, \textit{iff} it has at least one feasible node or hyper-arc. If a transition $T \in \Theta$ exists in the form (\ref{eq:graph_transition}), a hyper-arc $h \in H^n$ is feasible \textit{iff} the associated AND/OR graph $G^{n-1}$ is feasible, i.e., \begin{equation} \forall T.\left(\textsf{feasible}(h) \leftrightarrow \textsf{feasible}(G^{n-1})\right). \label{eq:feasibility_hierarchical_graph} \end{equation} As a consequence, when the nodes in $N^{n-1}_L$ of $G^{n-1}$ become feasible, the hyper-arc $h$ in $G^n$ becomes feasible as well. Furthermore, the hyper-arc $h$ is solved \textit{iff} the associated AND/OR graph $G^{n-1}$ is solved, i.e., \begin{equation} \forall T.\left(\textsf{solved}(h) \leftrightarrow \textsf{solved}(G^{n-1})\right). \label{eq:solve_hierarchical_graph} \end{equation} \subsection{$c$-layer AND/OR Graphs} \label{sec:c-layer} A concurrent AND/OR graph is modelled as a restriction of an $n$-layer AND/OR graph whereby the $n$-th layer is aimed at modelling the termination condition for the whole hierarchy of $(n-1)$-layer graphs, and the latter model the different, concurrent activities that are part of the HRC process. A $c$-layer AND/OR graph must also specify if and how nodes belonging to separate lower-layer graphs are synchronised. Analogously to an $n$-layer graph, a $c$-layer AND/OR graph $G^c$ can be defined as a $2$-tuple $\langle \Gamma^c, \Theta^c \rangle$ where $\Gamma^c$ is an ordered set of $|\Gamma^c|$ \textit{up to} $(n-1)$-layer AND/OR graphs, such that: \begin{equation} \Gamma^c = \left(G_1, \ldots, G_{|\Gamma^c|}; \preceq \right), \label{eq:concurrent_set} \end{equation} and $\Theta^c$ is a set of $|\Theta^c|$ pairwise transitions between them. Whilst the considerations related to $n$-layer AND/OR graphs apply to $c$-layer AND/OR graphs, the composition of the constituting sets of nodes and hyper-arcs may differ.
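To make the bookkeeping behind these definitions concrete before introducing dependencies between graphs, the following minimal Python sketch (our own illustration under assumed names, not the actual \textsc{ConcHRC} implementation) encodes nodes, hyper-arcs, and the feasibility conditions of Section \ref{sec:1-layer}:
\begin{verbatim}
class Node:
    def __init__(self, name):
        self.name = name
        self.met = False

class HyperArc:
    def __init__(self, children, parent, actions):
        self.children = children       # child nodes N_c(h)
        self.parent = parent           # singleton parent node N_p(h)
        self.actions = list(actions)   # ordered action set A(h)
        self.done = {a: False for a in self.actions}
        self.solved = False

    def feasible(self):
        # A hyper-arc is feasible iff all child nodes are met
        # and the hyper-arc itself has not been solved yet.
        return all(n.met for n in self.children) and not self.solved

    def execute(self, action):
        # h is solved once all its actions have been carried out
        # (the precedence relation is left implicit here).
        self.done[action] = True
        self.solved = all(self.done.values())

def node_feasible(n, hyper_arcs):
    # A node is feasible iff some solved hyper-arc has it as parent
    # and it has not been met yet; the traversal then marks it met.
    return (not n.met) and any(h.solved and h.parent is n
                               for h in hyper_arcs)
\end{verbatim}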
Let us recall that for a generic AND/OR graph $G$ we refer to $N$ as its set of nodes and to $H$ as its set of hyper-arcs, and let us consider two AND/OR graphs $G_i$ and $G_j \in \Gamma^c$. Let us limit ourselves to a weak notion of independence between graphs. We consider $G_i$ and $G_j$ as mutually independent \textit{iff} there is no node in $G_i$ (respectively, $G_j$) that needs to be met before another node of $G_j$ (respectively, $G_i$). If this is the case, $G_i$ (respectively, $G_j$) can be modelled as a generic $n$-layer AND/OR graph $\langle N_i, H_i \rangle$ (respectively, $\langle N_j, H_j \rangle$). Otherwise, if $G_i$ is dependent on $G_j$, i.e., a node $n_j$ in $G_j$ must be met before another node $n_i$ in $G_i$ can be \textsf{met}, we need to formally model it as an external dependence. To this aim, and in general terms, we augment the set of nodes of $G_i$ with a set of dependence nodes, whose associated logic predicates \textsf{met} are entangled with the corresponding nodes in $G_j$, such that their truth values always correspond. A node $n^e$ of an AND/OR graph $G_i$ is said to be \textit{entangled} with a node $n_j$ of an AND/OR graph $G_j$, with $i \neq j$, \textit{iff} for that node \begin{equation} \textsf{met}(n^e) \leftrightarrow \textsf{met}(n_j) \label{eq:entangled_node} \end{equation} and $n^e$ is a leaf node for $G_i$, i.e., it is not a parent node for any hyper-arc in $G_i$. Then, a \textit{dependent} AND/OR graph $G_i$ is defined as a $2$-tuple $\langle N^c_i, H^c_i \rangle$, such that $N^c_i = N_i \cup \{n^e_1, \ldots, n^e_\eta\}$, i.e., the union between the set of nodes $N_i$ as if the graph were not dependent on any other graph, plus the set of the entangled nodes, and $H^c_i = H_i \cup \{h^e_1, \ldots, h^e_\lambda\}$, i.e., the union between the set of hyper-arcs $H_i$ as if the graph were not dependent on any other graph, plus the set of the hyper-arcs reliant on entangled nodes. \section{Experimental Validation} \label{sec4} \subsection{Implementation of the Multi Human-Robot Collaboration Process for Defects Inspection} In order to validate the effectiveness of \textsc{ConcHRC}, we implemented an abstract defects inspection scenario. The scenario has been briefly described in the Introduction, and is represented in Figure \ref{fig:scene}. A Kuka youBot omni-directional mobile manipulator is used to pick up objects from a \textit{warehouse area} and to bring them close to the \textit{defects inspection} cell, where a human operator and a dual-arm Baxter robot are expected to collaborate. The youBot and the objects to be manipulated are localised in the workspace using an external motion capture system based on passive markers, i.e., a system composed of $8$ OptiTrack-Flex $13$ motion capture cameras. Baxter is provided with the standard grippers, and is also equipped with an RGB-D camera mounted on the robot \textit{head} and pointing downward, which is used to acquire images for defects inspection. Since, in our case, the focus is on the multi human-robot collaboration process, we decided to over-simplify the inspection, which is surrogated using QR tags corresponding to \textit{faulty}, \textit{non faulty}, and \textit{NA}. Actions carried out by human operators are perceived through their inertial signature, acquired by an LG G Watch R (W110) smartwatch worn on the right wrist. Data are transmitted through a standard WiFi link to a workstation. The workstation is equipped with an Intel(R) core i7-8700 @ 3.2 GHz $\times$ 12 CPUs and 16 GB of RAM.
The architecture is developed using C++ and Python under ROS Kinetic. The maximum angular velocity of the arm joints is bounded to $0.6$ $rad/s$ for both the Baxter and the youBot. Limits on the youBot's linear and angular velocities are $0.4$ $m/s$ and $0.3$ $rad/s$, respectively. These limits are applied to both simulated and real robots. The action models foreseen for human activity recognition are simply \textit{pick up} and \textit{put down}. Instead, the actions used for the Baxter arms include \textit{approach}, \textit{grasp}, \textit{ungrasp}, \textit{hold on}, \textit{stop}, and \textit{check object status}, whereas for the youBot arm we considered only \textit{approach}. \begin{figure}[t!] \centering \includegraphics[scale=0.31]{concexpr.png} \caption{The collaboration graph for defects inspection.} \label{fig:andorbaxter} \end{figure} Our scenario includes three physical agents, i.e., a human operator, the Baxter, and the youBot, but five \textit{logical} agents, i.e., the operator, the Baxter left arm, the Baxter right arm, the youBot base, and the youBot arm. However, one planner manages both Baxter arms, and likewise one planner manages the youBot base and arm, so they are used sequentially. In the scenario, objects are randomly placed in the warehouse area. Objects are cylinders labelled with three different QR code types (Figure \ref{fig:obj}). The youBot must find each object, move towards it, pick it up, take it to the area where the human operator and the Baxter are located, and hand it over to the operator. This sequence is repeated until all objects are delivered. On the other side of the collaboration scenario, the Baxter starts its operations when the human operator puts down an object on the table in front of the robot. By default, its right arm is used to pick the object up and to check whether it is faulty, non-faulty, or not assessable. If the object is faulty, it is placed in a \textit{faulty} box close to the right arm; if it is non-faulty, it is handed over to the left arm to be placed in a \textit{non-faulty} box. If the defect status cannot be assessed, the object is handed back to the human operator for an \textit{ad hoc} assessment. This process is repeated for all objects. \subsection{Description of the Experiment} Figure \ref{fig:andorbaxter} shows a $c$-layer concurrent AND/OR graph, which is composed of two $1$-layer AND/OR graphs, for the youBot ($G_1$) and the Baxter ($G_2$), respectively. Entangled nodes of both graphs are depicted in red, which makes graph $G_2$ dependent on graph $G_1$. In order for the leaf node of $G_2$ (i.e., \textit{new object}) to be feasible, the root node of $G_1$ (i.e., \textit{obj on table}) must be met. During the HRC process, the human operator is typically close to the Baxter, as shown in Figure \ref{fig:exp}. When the youBot approaches, the operator executes a \textit{discrete gesture}, moving an arm upward, in order to announce a \textit{pick up} action. Once the gesture is detected, the youBot releases the object by opening its end-effector to hand it over. Afterwards, the operator announces via a \textit{put down} gesture that the object to inspect has been placed on the table, so that the Baxter can start the inspection. \begin{figure}[t!]
\centering \includegraphics[scale=0.12]{obj3.jpg} \caption{Four tagged cylinders used in our scenario.} \label{fig:obj} \end{figure} \begin{figure*} \centering \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{initial.jpg} \caption{} \label{fig:sfig1} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{yopick.jpg} \caption{} \label{fig:sfig2} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{up.jpg} \caption{} \label{fig:sfig3} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{down.jpg} \caption{} \label{fig:sfig4} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{baxstart.jpg} \caption{} \label{fig:sfig5} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{inspection.jpg} \caption{} \label{fig:sfig6} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{faulty.jpg} \caption{} \label{fig:sfig7} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{parallel.jpg} \caption{} \label{fig:sfig8} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{handover.jpg} \caption{} \label{fig:sfig9} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{nonfaulty.jpg} \caption{} \label{fig:sfig10} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{na.jpg} \caption{} \label{fig:sfig11} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=3.7cm]{finalcheck.jpg} \caption{} \label{fig:sfig12} \end{subfigure} \caption{A typical sequence of tasks in a defects inspection experiment.} \label{fig:exp} \end{figure*} Figure \ref{fig:exp} shows a typical run of the collaboration process. In the initial configuration, shown in Figure \ref{fig:sfig1}, both the human operator and the robots are in stand-by mode. The youBot moves towards the next object to inspect (\textit{obj in ws} state), according to graph $G_1$. The object is selected on the basis of the time it takes to perform the whole operation in simulation. After approaching the object (\textit{youbot+obj}), the youBot's arm attempts to grasp it (Figure \ref{fig:sfig2}) and then to pick it up (\textit{obj picked}). In the meantime, the Baxter is waiting for human operator actions to start the collaboration. The youBot moves towards the human operator (Figure \ref{fig:sfig3}), and waits for a command to release the object (\textit{youbot+obj+human} state). This command is given by the operator by moving an arm upward (\textit{human ready}), which causes the youBot to open its gripper. The operator takes the object (\textit{human+obj}) and puts it down (\textit{obj on table}) on the table. The operator can then move one arm downward, thereby notifying the Baxter that an object is on the table (Figure \ref{fig:sfig4}). An entangled node (\textit{new object}) becomes feasible after the root node of $G_1$ is met. It is noteworthy that in some cases the youBot was not able to grasp objects properly, or actually dropped them before the handover could occur. Furthermore, it happened that human actions were not recognised, which required the operator to repeat them. In these cases it is the operator's responsibility to handle the situation by taking appropriate actions in order to keep the collaboration fluent.
Upon the notification of the appropriate operator gesture, the Baxter starts grasping the object (Figure \ref{fig:sfig5}) and moves it in order to place it in front of the head-mounted camera, rotating it (\textit{obj checked}) for defects inspection (Figure \ref{fig:sfig6}). Figure \ref{fig:sfig7} shows how an object is recognised as \textit{faulty}, and therefore the right arm places it in the \textit{faulty box} (\textit{obj at box}). While the Baxter is inspecting the object, the youBot continues to look for other objects (Figure \ref{fig:sfig8}). After a while, as shown in Figure \ref{fig:sfig9}, one of the objects is classified as \textit{non faulty}. Since the related box cannot be reached by the Baxter right arm, a handover between the two arms is executed (Figure \ref{fig:sfig10}). In case the assessment cannot be done (this is simulated with a specific QR tag), the graph reaches an \textit{NA} state, which implies that the Baxter requires the human operator to inspect the object directly (Figure \ref{fig:sfig11}). After all objects are inspected (\textit{inspected} state), the human operator performs a final check (Figure \ref{fig:sfig12}). In order to perform a realistic computational assessment of the architecture, the whole scenario has been tested five times. Results can be seen in Table \ref{table:1}, where times refer to the whole experiments\footnote{A video is available at https://youtu.be/0aOOeqCL2So.}. The statistics presented in Table \ref{table:1a} and Table \ref{table:1b} seem to indicate that the representation and planning modules together require less than 1\% of the overall execution time, whereas the major portion of the collaboration time is related to human or robot actions. The standard deviations related to the task planner and representation modules of both robots are low enough to be neglected, and impose no latency in the collaboration process. \begin{table}[t!] \centering \caption{Execution times.} \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{c c c c} Module & Avg. time [s] & Avg. time $[\%]$ & Std. dev. [s] \\ \hline Task Representation & 0.52 & 0.21 & 0.01 \\ Task Planner & 0.02 & 0.008 & 0.003 \\ Simulator & 3.69 & 1.49 & 0.24 \\ Baxter actions & 203.00 & 82.00 & 5.00 \\ Human actions & 39.00 & 15.80 & 6.00 \\ Total & 246.75 & 100.00 & 11.253 \\ [1ex] \end{tabular} \caption{Baxter-related activities.} \label{tab:label subtable A} \label{table:1a} \end{subtable} \vfill \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{c c c c} Module & Avg. time [s] & Avg. time $[\%]$ & Std. dev. [s] \\ \hline Task Representation & 0.43 & 0.13 & 0.02 \\ Task Planner & 0.02 & 0.00 & 0.004 \\ Simulator & 2.74 & 0.79 & 0.40 \\ youBot actions & 268.00 & 86.00 & 14.00 \\ Human actions & 39.00 & 12.50 & 6.00 \\ Total & 310.19 & 100.00 & 20.424 \\ [1ex] \end{tabular} \caption{youBot-related activities.} \label{tab:label subtable B} \label{table:1b} \end{subtable} \label{tab:label all table} \label{table:1} \end{table} \subsection{Discussion} On the basis of the experiments we carried out, it is possible to make two different remarks. The first is related to the robustness associated with the overall process. In spite of faults such as unsuccessful robot grasps, or issues related to false positives or negatives when monitoring the activities carried out by human operators, the inherent flexibility of \textsc{ConcHRC} allows human operators to intervene and manage these issues.
This is even more relevant considering that our current setup does not focus on such a robustness level. The second is the insight that using parallel instances of AND/OR graph representation layers seems to be more efficient than an equivalent, single-instance model. We observed that the adoption of \textsc{ConcHRC} reduces the overall idle time considerably. This is an obvious consequence of the fact that the total time needed for a multi human-robot collaboration process to conclude is determined by the time associated with the longest execution branch in the graph. On the contrary, if the HRC process were implemented as a single, non-concurrent model, then the total time would correspond to the sum of all times associated with the single cooperation paths. As an example, in our scenario \textsc{ConcHRC} allows for a total collaboration time equal to $310.19$ $s$, whereas with an equivalent implementation using \textsc{FlexHRC} the total collaboration time can be up to $866.94$ $s$. \section{Conclusions} \label{sec5} In this paper we present and discuss \textsc{ConcHRC}, a framework aimed at modelling multi human-robot collaboration processes. The framework builds upon \textsc{FlexHRC}, which did not consider concurrent task allocation and execution. \textsc{ConcHRC} has been preliminarily analysed in a use case related to defects inspection, where one human operator and four robot \textit{agents} are present. Two general remarks can be made. The first is a general robustness of the human-robot cooperation flow with respect to issues related to object grasping and manipulation, as well as the recognition of human actions. The second, which is related to best practices in modelling the cooperation scenario, is a tendency towards minimising idle times. Obviously enough, the work can be improved along many directions: (i) evaluating the use of a scheduler instead of a set of concurrent planners, especially considering approaches based on Answer Set Programming or metaheuristics; (ii) improving the gesture recognition module, used to detect and classify human activities, with models able to predict them. These two aspects are the subject of current work. \bibliographystyle{IEEEtran}
\section{Introduction} During the last years, essential progress has been achieved in the investigation of integrable quantum field theories. Such a success owes much to the fact that these models are characterized by infinite-dimensional Hopf algebra symmetries, known as affine quantum group symmetries. These symmetries are generated by non-local conserved currents which in many cases can be constructed explicitly. Such an approach permits obtaining non-perturbative solutions in quantum field theory using algebraic methods \cite{smi}-\cite{bab}. The situation is analogous to the one taking place in Conformal Field Theory (CFT). In particular, in CFT, as a result of the infinite-dimensional Virasoro algebra (or other extended algebras), exact solutions are successfully obtained with the help of the Ward identities \cite{BPZ}. Explicit currents that generate a $q$-deformation of affine Kac-Moody algebras \cite{dri},\cite{jim} were constructed for the Sine-Gordon theory and its generalization to imaginary coupling affine Toda theory in \cite{BL}, and shown to completely characterize the $S$-matrices. At special values of the coupling where these quantum field theories have ordinary Lie group $G$ invariance, the quantum affine symmetry becomes the $G$-Yangian symmetry \cite{ber},\cite{lus}. The affine quantum group invariance fixes the $S$-matrices up to overall scalar factors, which in turn can be fixed using crossing symmetry, unitarity and analyticity. These quantum group invariant $S$-matrices, which are specializations of the $R$-matrices, satisfy the Yang-Baxter equation. In the present work a series of new integrable models is identified and their $q$-deformed structure is studied. In particular, the organization of the paper is as follows. In section \ref{hyperelliptic-surfaces}, a brief description is given of the minimal conformal models on hyper-elliptic surfaces, which can be represented as two-sheeted coverings of the sphere ramified at branch points. In section \ref{new-model}, a model of perturbed CFT is proposed; the relevant perturbation is the highest weight vector of the Virasoro algebra at the branch points. The characters of this model are calculated and the existence of an infinite series of Integrals of Motion (IMs) is proved; the integrability of the model is thus established. Furthermore, the $\beta$-function of the model is calculated and it is shown that the theory is massive. In the last section, section \ref{nonlocal-charges}, the non-local currents are constructed. These are related by non-trivial braiding relations which lead to the $q$-deformed algebra of the conserved charges of the model. \section{CFT on Hyper-Elliptic Surfaces} \label{hyperelliptic-surfaces} Conformal field theories on compact Riemann surfaces, and in particular on hyper-elliptic surfaces, have been considered by many authors. One of the pioneering works on hyper-elliptic surfaces was Zamolodchikov's work on the Ashkin-Teller models \cite{zam87}; another important contribution was Knizhnik's work \cite{kni} on two-loop calculations in string theory. Finally, in \cite{CSS}, the minimal models on hyper-elliptic surfaces were thoroughly discussed. Let $\Gamma$ be a compact Riemann surface of genus $g\geq 1$.
If $\Gamma$ is the Riemann surface of an algebraic function $y=y(z)$ given by the equation \begin{equation} R(y,z)=y^{n}+a_{1}(z)y^{n-1}+\ldots+a_{n}(z)=0~, \end{equation} where $R(y,z)$ is a polynomial of the form shown above, then the affine part of $\Gamma$ coincides with the complex algebraic curve $R(y,z)=0$ in ${\Bbb C}^2$, in case this curve is ordinary (smooth). Of special importance to us is the example of hyper-elliptic curves given by equations of the form \begin{equation} \label{form1} y^2=P_{2g+1}(z)~, \end{equation} or \begin{equation} \label{form2} y^2=P_{2g+2}(z)~, \end{equation} where $P_h(z),~h=2g+1,2g+2,$ is a polynomial of degree $h$ without multiple roots. In both cases, the genus of the corresponding Riemann surface is $g$. It is noteworthy that any Riemann surface of genus $g=1$ or $g=2$ has a representation in one of the forms \calle{form1} or \calle{form2}, while the same statement is not true for surfaces of genus $g=3$. We label the two sheets of the Riemann surface $\Gamma$ by the numbers $l=0,1$: \begin{equation} y^{(l)}(z)= e^{i\pi l}\,P_h^{1/2}(z)= e^{i\pi l}\,\prod_{i=1}^h\, (z-w_i)^{1/2}~. \end{equation} Let $A_a,\, B_a,~a=1,2,\dots,g$ be the basic cycles of the surface. As we encircle the point $w_i$ along the contours $A_a,\, B_a$, in the case of an $A_a$ cycle we stay on the same sheet, while in the case of a $B_a$ cycle we pass from the $l$-th sheet to the $(l+1)$-th one. We shall denote the process of encircling the points $w_i$ on the cycles $A_a, \, B_a$ by the symbols $\hat{\pi}_{A_a}$, $\hat{\pi}_{B_a}$ respectively. These generators form the monodromy group, which in our case of a two-sheeted covering of the sphere coincides with the group ${\Bbb Z}_{2}$. We consider the energy-momentum tensor with representation $T^{(l)}(z)$ on each of these sheets. The above definition of the monodromy properties along the cycles $A_a,~B_a$ implies that the following boundary conditions should be satisfied by the energy-momentum tensor: \begin{equation} \hat{\pi}_{A_{a}}T^{(l)}=T^{(l)} ,\quad \hat{\pi}_{B_{a}}T^{(l)}=T^{(l+1)}~. \end{equation} It is convenient to pass to a basis in which the operators $\hat{\pi}_{A_a}$, $\hat{\pi}_{B_a}$ are diagonal \begin{eqnarray} T=T^{(0)}+T^{(1)}~,&&\quad T^{-}=T^{(0)}-T^{(1)}~,\\ \hat{\pi}_{A_{a}}T=T~,&& \quad \hat{\pi}_{A_{a}}T^{-}=T^{-}~, \label{BC1}\\ \hat{\pi}_{B_{a}}T=T~,&& \quad \hat{\pi}_{B_{a}}T^{-}=-T^{-}~. \label{BC2} \end{eqnarray} The corresponding operator product expansions (OPEs) of the $T,~T^-$ fields can be determined by taking into account the OPEs of $T^{(l)},~T^{(l')}$. On the same sheet, the OPE of the two fields $T^{(l)}(z_{1})T^{(l)}(z_{2})$ is the same as on the sphere, while fields on different sheets do not correlate, i.e. $T^{(l)}(z_{1})T^{(l+1)}(z_{2})\sim {\rm reg}$. Thus, in the diagonal basis the OPEs can be found to be \begin{eqnarray} T(z_{1})T(z_{2})&=&{c\over 2\,z_{12}^4}+ {2\,T(z_2)\over z_{12}^2}+ {T'(z_2)\over z_{12}} + {\rm reg}~, \label{OPE1} \\ T^{-}(z_1)T^{-}(z_{2})&=&{c\over 2\,z_{12}^4}+ {2\,T(z_2)\over z_{12}^2}+ {T'(z_2)\over z_{12}} + {\rm reg}~, \label{OPE2}\\ T(z_1)T^{-}(z_2)&=&{2\over z_{12}^2}\,T^{-}(z_2)+ {T'^{-}(z_2)\over z_{12}}+ {\rm reg}~, \label{OPE3} \end{eqnarray} where $c=2\hat{c}$, and $\hat{c}$ is the central charge in the OPE of $T^{(l)}(z_{1})T^{(l)}(z_{2})$. It is seen from \calle{OPE3} that $T^-$ is a primary field of conformal weight $2$ with respect to $T$.
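As a quick consistency check of the normalization $c=2\hat{c}$ (a one-line computation added here for clarity), the central term in \calle{OPE1} follows directly from summing the sheet-wise OPEs, since cross-sheet correlators are regular: \begin{equation*} T(z_1)\,T(z_2)=\sum_{l=0}^{1}T^{(l)}(z_1)\,T^{(l)}(z_2)+{\rm reg}= \frac{2\cdot\hat{c}/2}{z_{12}^4}+\frac{2\,T(z_2)}{z_{12}^2}+ \frac{T'(z_2)}{z_{12}}+{\rm reg}~, \end{equation*} so that the coefficient of $z_{12}^{-4}$ equals $c/2$ with $c=2\hat{c}$; the same computation applied to $T^-(z_1)T^-(z_2)$ reproduces \calle{OPE2}.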
To write the algebra \calle{OPE1}-\calle{OPE3} in graded form we determine the mode expansions of $T$ and $T^-$: \begin{eqnarray} T(z)V_{(k)}(0)&=&\sum_{n\in {\Bbb Z}}\, z^{n-2}L_{-n}V_{(k)}(0)~,\\ T^-(z)V_{(k)}(0)&=&\sum_{n\in {\Bbb Z}}\, z^{n-2-k/2}L_{-n+k/2}^-V_{(k)}(0)~, \end{eqnarray} where $k$ ranges over the values 0,1 and determines the parity sector in conformity with the boundary conditions \calle{BC1} and \calle{BC2}. Standard calculations lead to the following algebra for the operators $L_{-n}$ and $L_{-n+k/2}^{-}$: \begin{eqnarray} \lbrack L_n,L_m\rbrack &=& (n-m)\,L_{n+m}+\frac{c}{12}\, (n^3-n)\,\delta_{m+n,0}~,\nonumber\\ \lbrack L_{m+k/2}^{-},L_{n+k/2}^{-}\rbrack &=&(m-n)\,L_{n+m+k}+\frac{c}{12}\lbrack (m+k/2)^3- (m+k/2)\rbrack \, \delta_{n+m+k,0}~,~~~~~~ \label{algebra} \\ \lbrack L_m,L_{n+k/2}^- \rbrack &=& \lbrack m-n-k/2\rbrack \, L_{m+n+k/2}^-~. \nonumber \end{eqnarray} The anti-holomorphic operators $\overline{L}_n,~\overline{L}_{m+k/2}^-$ satisfy the same relations and commute with $L_n,~L_{m+k/2}^-$. To describe the representations of the algebra \calle{algebra}, it is necessary to consider separately the non-twisted sector with $k=0$ and the twisted sector with $k=1$. In order to write the $\lbrack V_{(k)}\rbrack$ representation of the algebra \calle{algebra} in a more explicit form, it is convenient to consider the highest weight states. In the $k=0$ sector, the highest weight state $\vline\, \Delta , \Delta^-\rangle$ is determined with the help of a primary field $V_{(0)}$ by means of the formula \begin{equation} \label{state1} \vline \,\Delta , \Delta^-\rangle=V_{(0)}\, \vline\, \emptyset \rangle ~. \end{equation} Using the definition of the vacuum, it is easy to see that \begin{equation} \begin{array}{l} L_0\,\vline\,\Delta, \Delta^-\rangle=\Delta \, \vline\, \Delta ,\Delta^-\rangle~ ,\quad L_0^-\,\vline\, \Delta, \Delta^-\rangle= \Delta^-\,\vline\, \Delta ,\Delta^-\rangle~, \\ \nonumber\\ L_n\,\vline\, \Delta, \Delta^-\rangle = L_n^-\,\vline\, \Delta, \Delta^-\rangle=0 , \quad n \geq 1~. \end{array} \end{equation} In the $k=1$ sector, we define the vector of highest weight $|\Delta\rangle$ of the algebra to be \begin{equation} \label{state2} \vline\, \Delta \rangle=V_{(1)}\,\vline \,\emptyset\rangle~, \end{equation} where $V_{(1)}$ is a primary field with respect to $T$. In analogy with the non-twisted sector we obtain \begin{equation} L_0\,\vline \,\Delta \rangle=\Delta \,\vline \,\Delta \rangle,\quad L_n\,\vline\, \Delta \rangle=L_{n-1/2}^- \,\vline\, \Delta \rangle=0, \quad n \geq 1~. \end{equation} Thus, the Verma module over the algebra \calle{algebra} is obtained by the action of any number of $L_{-n}$ and $L_{-m+k/2}^-$ operators with $n,m>0$ on the states \calle{state1} and \calle{state2}. As was shown in ref. \cite{CSS} by means of the GKO (coset construction) method, the central charge of a reducible unitary representation of the algebra \calle{algebra} has the form \begin{equation} \label{ccharge} c=2-\frac{12}{p(p+1)}=2\hat{c}~ ,\quad p=3,4,\ldots~. \end{equation} Using ref. \cite{FF}, Dotsenko and Fateev \cite{DF} gave the complete solution for the minimal model correlation functions on the sphere.
They were able to write down the integral representation for the conformal blocks of the chiral vertices in terms of the correlation functions of the vertex operators of a free bosonic scalar field $\Phi$ coupled to a background charge $\alpha_0$. This construction has become known as the Coulomb Gas Formalism (CGF). In the present case, this approach is also applicable: one considers a Coulomb gas for each sheet separately, both coupled to the same background charge: \begin{equation} \begin{array}{l} T^{(l)}=-\frac{1}{4}(\partial_z\Phi^{(l)})^{2} + i\alpha_0\partial_z^2 \Phi^{(l)}~,\quad \langle\Phi^{(l)}(z)\Phi^{(l')}(z')\rangle=-\delta^{ll'} \,\ln|z-z'|^2~,\\ \\ \hat{\pi}_{A_a}\partial_z\Phi^{(l)}=\partial_z\Phi^{(l)}~ ,\quad \hat{\pi}_{B_a}\partial_z\Phi^{(l)}=\partial_z\Phi^{(l+1)}~, \nonumber \end{array} \end{equation} where $c=2-24\alpha_0^2$, i.e. $\alpha_0^2=1/\lbrack 2p(p+1)\rbrack$. Passing to the basis which diagonalizes the operators $\hat{\pi}_{A_a}$, $\hat{\pi}_{B_a}$, i.e. \begin{eqnarray} \Phi=\Phi^{(0)} + \Phi^{(1)}~,\quad \Phi^- = \Phi^{(0)} - \Phi^{(1)} ~,\nonumber\\ \hat{\pi}_{A_a}\partial_z\Phi = \partial_z\Phi~ ,\quad \hat{\pi}_{B_a}\partial_z\Phi = \partial_z\Phi~,\\ \hat{\pi}_{A_a}\partial_z\Phi^- = \partial_z\Phi^-~ ,\quad \hat{\pi}_{B_a}\partial_z\Phi^- = -\partial_z\Phi^-~, \nonumber \end{eqnarray} we finally obtain the bosonization rules for the operators $T$, $T^-$ in the diagonal basis \begin{eqnarray} T &=& -\frac{1}{4}(\partial_z\Phi)^2 + i\alpha_0\partial_z^2\Phi - \frac{1}{4}(\partial_z \Phi^-)^2~,\nonumber \\ \\ T^- &=& -\frac{1}{2}\partial_z\Phi\partial_z\Phi^- + i\alpha_0\partial_z^2\Phi^- ~. \nonumber \end{eqnarray} In the conventions of ref. \cite{CSS}, the vertex operator with charges $\alpha$, $\beta$ in the $k=0$ (non-twisted) sector is given by \begin{equation} \label{vertex1} V_{\alpha\beta}(z) = e^{i\alpha\Phi+i\beta\Phi^-}~, \end{equation} with conformal weights $\Delta=\alpha^2-2\alpha_0\alpha +\beta^2$ and $\Delta^-=2\alpha\beta-2\alpha_0\beta$. In the $k=1$ (twisted) sector the situation is slightly different. Here we have an antiperiodic bosonic field $\Phi^-$, i.e. $\Phi^-(e^{2\pi i}z) = -\Phi^-$; this leads to a deformation of the geometry of space-time. If we recall that the circle is parametrized by $\Phi^- \in S^1 \lbrack 0,2\pi R\rbrack$, the condition $\Phi^- \sim -\Phi^-$ means that pairs of points of $S^1$ have been identified. Thus, $\Phi^-$ lives on the orbifold $S^1/{\Bbb Z}_2$; under the identification $\Phi^- \sim -\Phi^-$ the two points $\Phi^-=0$ and $\Phi^-=\frac{1}{2}(2\pi R)$ are fixed points. One can then define the twist fields $\sigma_\epsilon(z),~\epsilon=0,1,$ for the bosonic field $\Phi^-$, with respect to which $\Phi^-$ is antiperiodic. Notice that there is a separate twist field for each fixed point. The OPE of the current $I^-=i\partial_z\Phi^-$ with the field $\sigma_\epsilon$ is then \begin{equation} \begin{array}{l} I^-(z)\sigma_{\epsilon}(0)=\frac{1}{2}z^{-1/2}\hat{\sigma}_{\epsilon}(0) + \ldots~,\\ \nonumber\\ I^-(z)\hat{\sigma}_{\epsilon}(0)=\frac{1}{2}z^{-3/2} \sigma_{\epsilon}(0) + 2z^{-1/2}\sigma'_{\epsilon}(0) + \ldots~. \end{array} \end{equation} The twist fields $\sigma_\epsilon$ and $\hat{\sigma}_\epsilon$ are primary fields with respect to $T_{\rm orb}=-\frac{1}{4}(\partial_z\Phi^-)^{2}$, with dimensions $\Delta_{\epsilon}=1/16$ and $\hat{\Delta}_{\epsilon}= 9/16$ respectively.
So, in the twisted sector the highest weight vectors (or primary fields) can be written as follows \begin{equation} \label{vertex2} V_{\gamma\,\epsilon}^{(t)}=e^{i\gamma\Phi}\sigma_{\epsilon}~ ,\quad \Delta^{(t)}=\gamma^2-2\alpha_0\gamma+{1\over 16}~. \end{equation} In ref. \cite{CSS}, the anomalous dimensions of the primary fields of the minimal models for the algebra \calle{algebra} were obtained both in the non-twisted and twisted sectors, in conformity with the spectrum of the central charge \calle{ccharge}; in particular, it was found that the charges $\alpha,\beta,\gamma$ of the primary fields corresponding to the $k=0$ and $k=1$ sectors have the form: \begin{equation} \begin{array}{l} \alpha_{n'm'}^{nm}={2-n-n'\over 2}\,\alpha_{+}+ {2-m-m'\over 2}\,\alpha_{-}~,\\ \nonumber\\ \beta_{n'm'}^{nm}={n-n'\over 2}\,\alpha_{+}+ {m-m'\over 2}\,\alpha_{-}~,\\ \nonumber\\ \gamma_{nm}={2-n\over 2}\,\alpha_{+}+ {2-m\over 2}\,\alpha_{-}~,\\ \nonumber\\ 1\leq n,n'\leq p~,\quad 1\leq m,m'\leq p-1~, \end{array} \end{equation} where the constants $\alpha_{\pm}$ are expressed in terms of the background charge $\alpha_0$: \begin{equation} \alpha_{\pm}=\alpha_{0}/2 \pm \sqrt{\alpha_{0}^{2}/4+1/2} ~. \end{equation} We denote the corresponding fields by $V^{nm}_{n'm'}$, $V^{(t)}_{nm}$ and their conformal weights by $\Delta^{nm}_{n'm'}$, $\Delta^{(t)}_{nm}$. We can thus represent the CFT on a hyper-elliptic surface as a CFT on the plane with an additional symmetry, exactly as described by the algebra \calle{algebra}. The corresponding highest weight vectors of the algebra are given by \calle{vertex1} and \calle{vertex2}; finally, the central charge is given by \calle{ccharge}. We will confine ourselves to the minimal models on hyper-elliptic surfaces as presented above; keeping this in mind, we pass to the construction of perturbed models of these CFTs. \section{Perturbation by $V_{nm}^{(t)}$ and Integrals of Motion} \label{new-model} \setcounter{equation}{0} Let $S_p$ be the action of the $p$-th conformal minimal model on the hyper-elliptic surface $\Gamma$ \begin{equation} S_p\lbrack\Phi,\Phi^-\rbrack\,\sim \, \int\,d^2z\,( \, \partial_z \Phi \partial_{\overline z}\Phi - i\alpha_0R\Phi) + \int\,d^2z\,\partial_z\Phi^-\partial_{\overline z}\Phi^-~. \end{equation} We now consider the perturbation of this conformal field theory by the degenerate relevant operator $V_{nm}^{(t)}$: \begin{equation} S_\lambda\,=\,S_p\lbrack\Phi,\Phi^-\rbrack +\lambda\,\int\,d^2z\,e^{i\gamma_{nm} \Phi(z,\overline{z})}\,\sigma_{\epsilon}(z,\overline{z})~. \end{equation} The parameter $\lambda$ is a coupling constant with conformal weight $(1-\Delta_{nm}^{(t)}\, , \, 1-\Delta_{nm}^{(t)})$. Obviously, for a generic perturbation the new action $S_\lambda$ does not describe an integrable model. We are going to choose the perturbation in such a way that the corresponding field theory is integrable. To prove the integrability of this massive theory (that the theory is indeed massive is shown at the end of the present section), one must calculate the characters of the modules of the identity $I$ and of $V_{nm}^{(t)}$. The ``basic'' currents $T(z)$ and $T^-(z)$ generate an infinite-dimensional vector subspace $\Lambda$ in the representation space. This subspace can be constructed by successive applications of the generators $L_{-n}$ and $L_{-m}^-$ with $n,m>0$ to the identity operator $I$. $\Lambda$ can be decomposed into a direct sum of eigenspaces of $L_0$, i.e. \begin{equation} \Lambda\,=\,\bigoplus_{s=0}^{\infty} \Lambda_{s}~,\quad L_0\,\Lambda_s = s\,\Lambda_s~.
\end{equation} The space $\Lambda$ contains the subspace $\Lambda'=\partial_z\Lambda$. Therefore, in order to separate a maximal linearly independent set, one must take the factor space $\widehat{\Lambda}=\Lambda/(L_{-1}\Lambda\,\bigoplus\,L_{-1}^{-} \Lambda)$ instead of $\Lambda$. The space $\widehat{\Lambda}$ admits a similar decomposition as a direct sum of eigenspaces of $L_0$. It follows that the character formula for $\widehat{\Lambda}$ takes the form \begin{equation} \chi_0 = (1-q)^2 \prod_{n=1}^{+\infty}\,\frac{1}{(1-q^n)^2}~. \end{equation} The dimensionalities of the subspaces $\widehat{\Lambda}_s$ can be determined from the character formula \begin{equation} \sum_{s=0}^{\infty} \, q^s\, \dim(\widehat{\Lambda}_s) = (1-q)\,\chi_0 + q~. \end{equation} \indent On the other hand, the module $V$ of the primary field $V_{nm}^{(t)}$ can be constructed by successively applying the generators $L_{-k}$ and $L_{1/2-l}^-$ with $k,l>0$ to the primary field $V_{nm}^{(t)}$. This space $V$ and the corresponding factor space $\widehat{V} = V/L_{-1}V$ may also be decomposed into a direct sum of $L_0$ eigenspaces: \begin{equation} V=\bigoplus_{s=0}^{\infty}\,V_s^{(t)}~,\quad L_0\,V_s^{(t)}=s\, V_s^{(t)}~. \end{equation} The dimensionalities of $\widehat{V}_s^{(t)}$ in the factor space associated with the relevant field \begin{equation} V_{(1,1)}^{(t)}=e^{i\frac{\alpha_0}{2}\Phi}\sigma_{\epsilon} \end{equation} are given by the character formula \begin{equation} \sum_{s\in{\Bbb N}/2}\, q^{s+\Delta_{(1,1)}^{(t)}}\, \dim(\widehat{V}_s^{(t)})= \chi_{\Delta_{(1,1)}^{(t)}}\, (1-q)~, \label{char1} \end{equation} where \begin{eqnarray} \label{char2} \chi_{\Delta_{(1,1)}^{(t)}}&=&q^{\Delta_{(1,1)}^{(t)}} \prod_{n=1}^{+\infty}\frac{1}{(1-q^{n})(1-q^{n-1/2})}~,\\ \Delta_{(1,1)}^{(t)}&=&\frac{1}{16}\left(1-{6\over p(p+1)}\right)~. \end{eqnarray} When the dimensionalities of $\widehat{V}_s^{(t)}$ (calculated from \calle{char1}, \calle{char2}) are compared to those of $\widehat{\Lambda}_{s+1}$, we see that for $s=1,3,5,\dots$ the dimension $\dim(\widehat{\Lambda}_{s+1})$ exceeds $\dim(\widehat{V}^{(t)}_s)$ by at least one, i.e. $\dim(\widehat{\Lambda}_{s+1})> \dim(\widehat{V}^{(t)}_s),~s=1,3,5,\dots~.$ This proves that the model \begin{equation} \label{action} S_{\lambda}=S_p + \lambda\,\int\,d^2z\,e^{i\frac{\alpha_0}{2} \Phi(z,\overline{z})}\,\sigma_{\epsilon'}(z,\overline{z}) \end{equation} possesses an infinite set of non-trivial integrals of motion (IMs). We note here that there are no such IMs for perturbations by the operators $V_{nm}^{(t)}$ with $n,m>1$. We now briefly study the renormalization group flow behaviour in the vicinity of the fixed point. Solving the Callan-Symanzik equation \cite{IZ} up to third order, one can obtain the $\beta$-function \begin{equation} \beta=\varepsilon\, g\, \left( 1 + \frac{Y}{6}\, g^2\right) + {\cal O}(g^4) ~. \end{equation} In the above equation, we have denoted \begin{equation} \varepsilon = 1-\Delta_{(1,1)}^{(t)} \end{equation} and \begin{equation} Y = \int d^2 z_1 \int d^2 z_2 \,\langle V_{(1,1)}^{(t)}(z_1,\overline{z}_1) V_{(1,1)}^{(t)}(z_2,\overline{z}_2)V_{(1,1)}^{(t)}(1,1) V_{(1,1)}^{(t)}(0,0) \rangle ~. \end{equation} Since $Y>0$, we conclude that there is no reason to expect the existence of any non-trivial zeros of the $\beta$-function. In the absence of such zeros, the field theory described by the action \calle{action} has a finite correlation length $R_c\sim \lambda^{-1/(2\varepsilon)}$ and the spectrum consists of particles with non-zero mass of order $m\sim R_c^{-1}$.
In this case, the IMs force the scattering of the particles to be factorizable, i.e. there is no particle production, the set of particle momenta is preserved, the $n$-particle $S$-matrix is a product of 2-particle $S$-matrices, etc. \section{Infinite Quantum Group Symmetry} \label{nonlocal-charges} \setcounter{equation}{0} \indent In this section we briefly review the method developed in ref. \cite{BL} and then apply it to our model. We consider a CFT perturbed by a relevant operator with zero Lorentz spin. The Euclidean action is given by \begin{equation} \label{pert-action} S_\lambda=S_{\rm CFT}+\frac{\lambda}{2\pi} \,\int\,d^2z\,V_{\rm pert}(z,\overline{z})~, \end{equation} where the perturbation field can be written as $V_{\rm pert}(z,\overline{z})= V_{\rm pert}(z)\overline{V}_{\rm pert}(\overline{z})$ (or a sum of such terms, but in our case this is irrelevant). Let us assume that for the conformal invariant action $S_{\rm CFT}$ there exist chiral currents $J(z)$, $\overline{J}(\overline{z})$ satisfying the equations $\partial_{\overline z}J(z)=0$, $\partial_z\overline{J}(\overline{z})=0$. Then for the action \calle{pert-action}, the perturbed currents, which are local with respect to the perturbing field, are given up to first order by Zamolodchikov's equations \cite{zam89} \begin{equation} \begin{array}{l} \partial_{\overline z}J(z,\overline{z})=\lambda\oint_z\, {d\omega\over 2\pi i}\, V_{\rm pert} (\omega,\overline{z})J(z)~,\\ \\ \partial_z\overline{J}(z,\overline{z})=\lambda\oint_{\overline{z}}\, {d\overline{\omega}\over 2\pi i}\, V_{\rm pert}(z,\overline{\omega})\overline{J}(\overline{z})~. \end{array} \end{equation} The condition for the conservation of the currents up to first order in perturbation theory is that the residues of the OPEs appearing in the above contour integrals are total derivatives: \begin{equation} \begin{array}{l} {\rm Res}\Big(V_{\rm pert}(\omega)J(z)\Big)=\partial_zh(z)~, \\ \\ {\rm Res}\Big(\overline{V}_{\rm pert}(\overline{\omega}) \overline{J}(\overline{z})\Big) =\partial_{\overline{z}} \overline{h}(\overline{z})~. \end{array} \end{equation} Then Zamolodchikov's equations for the currents are written in the form \begin{equation} \label{continuity-equation} \begin{array}{l} \partial_{\overline{z}}J(z,\overline{z})=\partial_zH(z,\overline{z})~,\\ \\ \partial_z\overline{J}(z,\overline{z})= \partial_{\overline{z}}\overline{H}(z,\overline{z})~, \end{array} \end{equation} where the fields $H$, $\overline{H}$ are \begin{equation} \begin{array}{l} H(z,\overline{z})=\lambda\, \lbrack h(z)\overline{V}_{\rm pert}(\overline{z}) +\dots\rbrack~,\\ \\ \overline{H}(z,\overline{z})=\lambda\,\lbrack V_{\rm pert}(z)\overline{h}(\overline{z})+\dots\rbrack~, \end{array} \end{equation} where the dots represent contributions coming from terms in the OPEs which are more singular than the residue term. The conserved charges following from the conserved currents \calle{continuity-equation} are \begin{equation} \label{charges} \begin{array}{l} Q=\int\,{dz\over 2\pi i}\,J+\int {d\overline{z}\over 2\pi i}\, H~,\\ \\ \overline{Q}=\int\,{d\overline{z}\over 2\pi i}\,\overline{J} +\int\,{dz\over 2\pi i}\,\overline{H}~. \end{array} \end{equation} Using the non-trivial braiding relations between the conserved currents, one can obtain the $q$-deformed affine Lie algebra for the conserved charges \calle{charges}. We are now going to implement the above construction of non-local charges for the theory described by the action \calle{action}.
We will thus derive the $q$-deformed Lie algebra underlying the theory. Using the construction explained above, we can show that the action \calle{action} admits the following non-local conserved quantum currents: \begin{equation} \label{continuity2} \begin{array}{l} \partial_{\overline{z}}J =\partial_zH~,\\ \nonumber\\ \partial_z\overline{J}=\partial_{\overline z} \overline{H}~, \end{array} \end{equation} where \begin{equation} \label{currents} \begin{array}{l} J=\colon e^{ia\varphi(z)}\, e^{ib\varphi^-(z)}\colon\, \sigma(z)~,\\ \\ \overline{J}= \colon e^{ia\overline{\varphi}(\overline{z})}e^{ib \overline{\varphi}^-(\overline{z})}\colon \,\overline{\sigma}(\overline{z})~,\\ \\ H(z,\overline{z})=\lambda\, A \, \colon e^{i(a+\alpha_0/2) \varphi (z)}e^{i(b+k) \varphi^-(z)} \overline{\sigma}(\overline{z}) e^{i\frac{\alpha_{0}}{2} \overline{\varphi}(\overline{z})}\colon~,\\ \\ \overline{H}(z,\overline{z})=\lambda\, A\, \colon e^{i(a+\alpha_0/2) \overline{\varphi}(\overline{z})} e^{i(b+k) \overline{\varphi}^-(\overline{z})} \sigma (z)e^{i\frac{\alpha_0}{2}\varphi(z)} \colon~, \end{array} \end{equation} and \begin{eqnarray} a &=& -(15/8+k^{2})/(\alpha_{0}+4k^{2}/\alpha_{0})~, \nonumber\\ b &=& 2k a/\alpha_0~, \label{constants}\\ A &=& \alpha_0/2(a + \alpha_0/2)~.\nonumber \end{eqnarray} In the derivation of \calle{currents}, we used the OPEs \begin{equation} \begin{array}{l} \sigma(z)\, \sigma(x)=(z-x)^ {k^2-1/8}:e^{ik\varphi^{-}(x)}:+\ldots~,\\ \\ \overline{\sigma}(\overline z)\overline{\sigma}(\overline x)= (\bar z-\bar x)^{\overline{k}^2-1/8} \,:e^{i\overline{k}\overline{\varphi}^-(\overline{x})}:+\ldots~. \end{array} \end{equation} From the continuity equations \calle{continuity2} we define the conserved charges \begin{equation} \begin{array}{l} Q =\int\,\frac{dz}{2\pi i}\,J + \int\,\frac{d\overline{z}}{2\pi i}\,H ~,\\ \nonumber\\ \overline{Q} =\int\,\frac{dz}{2\pi i}\,\overline{H} + \int\frac{d\overline{z}}{2\pi i}\,\overline{J}~. \end{array} \end{equation} To find the commutation relations between the charges $Q$ and $\overline{Q}$, we must first derive the braiding relations of the non-local conserved currents $J$, $\overline{J}$. To this end we will make use of the well-known identity \begin{equation} e^A\,e^B=e^B\,e^A\,e^{\lbrack A,B\rbrack}~, \quad \lbrack A,\lbrack A, B\rbrack\rbrack= \lbrack B,\lbrack A,B\rbrack\rbrack=0~. \end{equation} We then obtain the following braiding relations \begin{equation} \begin{array}{ll} e^{ia\varphi(z)}e^{ib\varphi(z')}= e^{\mp i\pi ab}\,e^{ib\varphi(z')}e^{ia\varphi(z)}~, &\quad z\lessgtr z'~,\\ \\ e^{ia\varphi^{-}(z)}e^{ib\varphi^{-}(z')}= e^{\mp i\pi ab}\,e^{ib\varphi^-(z')}e^{ia\varphi^-(z)} ~, &\quad z\lessgtr z'~,\\ \\ e^{ia\overline{\varphi}(\overline{z})} e^{ib\overline{\varphi}(\overline{z}')}= e^{\pm i\pi ab}\,e^{ib\overline{\varphi}(\overline{z}')} e^{ia\overline{\varphi}(\overline{z})}~, &\quad \overline{z}\lessgtr \overline{z}'~,\\ \\ e^{ia\overline{\varphi}^-(\overline{z})} e^{ib\overline{\varphi}^-(\overline{z}')}= e^{\pm i\pi ab}\,e^{ib\overline{\varphi}^-(\overline{z}')} e^{ia\overline{\varphi}^- (\overline{z})}~, &\quad \overline{z}\lessgtr \overline{z}'~,\\ \\ e^{ia\varphi(z)}e^{ib\overline{\varphi}(\overline{z}')}=e^{i\pi ab} \,e^{ib\overline{\varphi}(\overline{z}')} e^{ia\varphi(z)}~, &\quad \forall z,\overline{z}'~,\\ \\ e^{ia\varphi^-(z)}e^{ib\overline{\varphi}^-(\overline{z}')}= e^{i\pi ab}\,e^{ib\overline{\varphi}^-(\overline{z}')}e^{ia\varphi^-(z)}~, &\quad \forall z,\overline{z}'~.
\end{array} \end{equation} Using the representation of the twist fields $\sigma, \overline{\sigma}$ in terms of scalar bosonic fields which was proposed in ref. \cite{AZ}, we can derive the following braiding relations: \begin{equation} \begin{array}{ll} \sigma(z)\sigma(z')=e^{\mp i\pi/8}\,\sigma(z')\sigma(z)~,&\quad z\lessgtr z'~, \\ \\ \overline{\sigma}(\overline{z})\overline{\sigma} (\overline{z}')=e^{\pm i\pi/8}\,\overline{\sigma}(\overline{z}') \overline{\sigma}(\overline{z})~,&\quad \overline{z}\lessgtr \overline{z}'~, \\ \\ \sigma(z)\overline{\sigma}(\overline{z}')= e^{+i\pi/8}\,\overline{\sigma}(\overline{z}')\sigma(z)~,& \quad \forall z,\overline{z}'~. \nonumber \end{array} \end{equation} Consequently, the non-local conserved currents obey the non-trivial braiding relations \begin{equation} J(x,t)\overline{J}(y,t)= q^{\nu}\,\overline{J}(y,t)J(x,t)~, \end{equation} where \begin{equation} q=e^{-i\pi}~,\quad \nu = 1/8-a^2-b^2~. \end{equation} \indent Using the above braiding relations and the expressions \calle{currents}, one finds that the conserved charges satisfy the relations \begin{eqnarray} Q\overline{Q}-q^{\nu}\,\overline{Q}Q =\frac{\lambda}{2\pi i}\,\int_t\, (dz\partial_z+d\overline{z}\partial_{\overline{z}})\, A\, e^{i(a+\alpha_0/2)\varphi(z)} e^{i(b+k)\varphi^{-}(z)}\times \nonumber\\ \times A\,e^{i(a+\alpha_{0}/2)\overline{\varphi}(\overline{z})} e^{i(b+k)\overline{\varphi}^-(\overline{z})}~. \label{QQ} \end{eqnarray} Now let us recall that the scalar field $\varphi^-$ lives on the orbifold $S^1 / {\Bbb Z}_2$ and hence the momentum $k$ must be quantized. Therefore, the above relations must be transformed to \begin{eqnarray} \widehat Q_{\epsilon}\widehat{\overline{Q}}_{\overline{\epsilon}}- q^{\nu_{\epsilon\overline{\epsilon}}}\, \widehat{\overline{Q}}_{\overline{\epsilon}}\widehat Q_{\epsilon}&=& {\lambda\over 2\pi i}\, \sum\, A_L^{nm}A_R^{nm}\, \int_t \, (dz\,\partial_z+ d\overline{z}\,\partial_{\overline{z}})\times \nonumber\\ &\times & e^{i(a_L^{nm}+\alpha_0/2)\varphi(z)+ i(a_R^{nm}+\alpha_0/2)\overline{\varphi}(\overline{z})}\times\nonumber\\ &\times & e^{i(b_L^{nm}+k_L^{nm})\varphi^-(z)+ i(b_R^{nm}+k_R^{nm})\overline{\varphi}^-(\overline{z})}~, \end{eqnarray} where \begin{equation} \begin{array}{l} \nu_{\epsilon\overline{\epsilon}}= 1/8-a_L^{nm}a_R^{nm}-b_L^{nm}b_R^{nm}~, \\ \\ k_L^{nm}=k_L^{nm}(\epsilon,\epsilon')= {n\over R} + \left( m+{\epsilon+\epsilon'\over 2} \right)\,{R\over 2}~, \\ \\ k_R^{nm}=k_R^{nm}(\overline{\epsilon},\epsilon')= {n\over R}-\left( m+ {\overline{\epsilon}+\epsilon'\over 2}\right) \,{R\over 2}~. \nonumber \end{array} \end{equation} The constants $a_L^{nm}$, $a_R^{nm}$, $b_L^{nm}$, $b_R^{nm}$, $A_L^{nm}$, $A_R^{nm}$ are obtained from the relations \calle{constants}, and $\epsilon,\overline{\epsilon}, \epsilon'\in\{0,1\}$.
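The exponential rearrangement identity $e^A e^B = e^B e^A e^{\lbrack A,B\rbrack}$ invoked above holds whenever the commutator $\lbrack A,B\rbrack$ commutes with both $A$ and $B$. As a quick sanity check, the following sketch verifies it numerically for illustrative Heisenberg-type matrices (chosen purely for the demonstration; they are not tied to the currents of the model):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# [A, B] is central for these nilpotent matrices, so the identity is exact.
A = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
B = np.array([[0., 0., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
C = A @ B - B @ A                    # the commutator [A, B]

lhs = expm(A) @ expm(B)
rhs = expm(B) @ expm(A) @ expm(C)
print(np.allclose(lhs, rhs))         # True
\end{verbatim}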
Finally, the topological charge for the model \calle{action} is defined as follows: \begin{eqnarray} {\cal T}_{\rm top}&=&\int_{-\infty}^{+\infty}\,dx\,\partial_x\Phi(x)+ \int_{-\infty}^{+\infty}\,dx\,\partial_x\Phi^-(x)\nonumber\\ &=&\int_{-\infty}^{+\infty}\,dx\,\partial_x\,(\varphi + \overline{\varphi})+ \int_{-\infty}^{+\infty}\,dx\,\partial_x(\varphi^- + \overline{\varphi}^-) \nonumber\\ &=&T_{\rm top}+\overline{T}_{\rm top}+ T_{\rm top}^-+\overline{T}_{\rm top}^-~, \label{top-charg} \end{eqnarray} where $\Phi$, $\Phi^-$ and the quasi-chiral components $\varphi, \overline{\varphi}, \varphi^-,\overline{\varphi}^-$ are related by the following equations: \begin{equation} \begin{array}{l} \varphi(x,t)=\frac{1}{2}\, \left(\Phi(x,t)+\int_{-\infty}^x\, dy\, \partial_t \Phi(y,t)\right)~,\\ \nonumber\\ \overline{\varphi}(x,t)=\frac{1}{2}\,\left(\Phi(x,t)- \int_{-\infty}^x\, dy\,\partial_t \Phi(y,t)\right)~,\\ \nonumber\\ \varphi^-(x,t)=\frac{1}{2}\,\left(\Phi^-(x,t)+ \int_{-\infty}^x\,dy\,\partial_t\Phi^-(y,t)\right)~,\\ \nonumber\\ \overline{\varphi}^-(x,t)=\frac{1}{2}\,\left(\Phi^-(x,t)- \int_{-\infty}^x\,dy\, \partial_t\Phi^-(y,t)\right)~. \end{array} \end{equation} These equations guarantee that $\Phi=\varphi+\overline{\varphi}$ and $\Phi^-=\varphi^-+ \overline{\varphi}^-$. Taking all this into account, the right hand side of equation \calle{QQ} can be re-expressed in terms of the topological charges in \calle{top-charg}: \begin{eqnarray} \widehat{Q}_\epsilon\widehat{\overline{Q}}_{\overline{\epsilon}} - q^{\nu_{\epsilon\overline{\epsilon}}}\, \widehat{\overline{Q}}_{\overline{\epsilon}}\widehat{Q}_\epsilon = \frac{\lambda}{2\pi i}\, \sum\, A_L^{nm}A_R^{nm}\, \lbrack 1-e^{i(a_L^{nm}+\alpha_0/2)T_{\rm top}+ i(a_R^{nm}+\alpha_0/2)\overline{T}_{\rm top}}\times\nonumber\\ \times e^{i(b_L^{nm}+k_L^{nm})T_{\rm top}^-+ i(b_R^{nm}+k_R^{nm})\overline{T}_{\rm top}^-}\rbrack~.~~~~~ \label{QQ2} \end{eqnarray} Then, one can easily calculate the commutators \begin{equation} \label{TQ} \begin{array}{l} \lbrack T_{\rm top},Q_\epsilon^{nm}\rbrack= a_L^{nm}\, Q_{\epsilon}^{nm}~,\quad \lbrack \overline{T}_{\rm top}, \overline{Q}_{\overline{\epsilon}}^{nm}\rbrack= a_R^{nm}\,\overline{Q}_{\overline{\epsilon}}^{nm}~, \\ \\ \lbrack T_{\rm top}^-,Q_{\epsilon}^{nm}\rbrack= b_{L}^{nm}\, Q_{\epsilon}^{nm}~,\quad \lbrack\overline{T}_{\rm top}^-, \overline{Q}_{\overline{\epsilon}}^{nm}\rbrack= b_R^{nm}\,\overline{Q}_{\overline{\epsilon}}^{nm}~. \end{array} \end{equation} Thus, the commutation relations \calle{TQ} together with the relations \calle{QQ2} constitute the algebra, to the lowest non-trivial order in perturbation theory, which is the symmetry of the $S$-matrix of the theory. Unfortunately, the isomorphism between the algebra \calle{QQ2},\calle{TQ} and a Hopf algebra has not been established yet, and, hence, the universal $R$-matrix of this hidden Hopf algebra has not been studied. We intend to return to these open questions in the near future. \section{Conclusions} To summarize, in the present paper we have introduced a new integrable model in quantum field theory. The novelty of the model resides in the fact that it is built on a hyper-elliptic surface instead of the usual Euclidean plane. The quantum symmetry of the model has been identified in terms of the non-local conserved charges.
This has led to a generalization of the method first introduced by Bernard and LeClair \cite{BL} for the affine Toda field theories where only boson fields are involved. As is understood very well by now, the quantum non-local conserved charges provide a quantum field theoretic basis for understanding quantum groups. Unfortunately, the mapping from the physical algebra satisfied by the non-local charges to the $q$-deformed Lie algebra has not been discovered yet. If this mapping is found, one will be able to study the universal $R$-matrix and consequently uncover the structure of the $S$-matrix. \vspace{.5cm} {\bf Acknowledgements} We would like to thank A. LeClair, F. Smirnov and R. Poghossian for helpful discussions. \vspace{.5cm}
\section{Introduction} Wikipedia defines a galaxy as ``a gravitationally bound system of stars, stellar remnants, interstellar gas, dust, and dark matter.'' Establishing the relative proportions of each is one of the great challenges of extragalactic astronomy. Hundreds of papers have been written on the stellar masses of galaxies, but the accuracy with which they can be measured is very much a matter of debate. \medskip {\narrower\noindent \emph{Nobody ever measures the stellar mass. That is not a measurable thing, it's an inferred quantity. You measure light, OK? You can measure light in many bands but you infer stellar mass. Everybody seems to agree on certain assumptions that are completely unproven.} -- Carlos Frenk, 2017 May 15\footnote {\url{http://online.kitp.edu/galhalo-c17/panel1/rm/jwvideo.html} (44:48)} \par} \medskip The difficulties in measuring stellar masses are magisterially examined in a recent paper by Newman et al (2017) comparing stellar masses for three galaxies computed separately using stellar dynamics, gravitational macro-lensing and stellar population synthesis. Variants of each of the three approaches are considered. The principal source of uncertainty appears to be the contribution of dark matter in the first two methods and the low mass cutoff in the stellar initial mass function in the third. In a paper on a different galaxy by three of the same authors, Conroy et al (2017) say: \medskip {\narrower\noindent \emph{To illustrate the sensitivity of the total mass to the cutoff, for a single power law with $\alpha = 2.7$, the mass-to-light ratio is 70\% higher if the cutoff is $0.05 M_\odot$ compared to $0.08 M_\odot$.} \par} \medskip Frenk is wrong to say that \emph{everyone} agrees to certain assumptions. Schechter and Wambsganss (2004) describe a micro-lensing method for measuring the ratio of stellar to dark surface mass density in galaxies that macro-lens quasars. The technique measures the \emph {graininess} of the gravitational potential, to which faint stars, brown dwarfs and stellar remnants all contribute, invisible though they might be. That approach has been refined (Schechter et al 2014), yielding a stellar surface mass density for a sample of ten lensing galaxies that is a factor of $1.23 \times e^{\pm 0.47}$ greater than that of a Salpeter IMF with a $0.10 M_\odot$ cutoff. The uncertainty is dominated by the small sample size. In what follows we review the micro-lensing technique for measuring stellar masses and then report on efforts to increase the size of the lensing galaxy sample. \section{Stellar Masses from Micro-lensing} \subsection{Flux Ratio Anomalies} Witt et al (1995) argued that gravitational micro-lensing is a ``universal'' phenomenon in gravitationally lensed quasars and that it was responsible for the flux ratio anomaly observed in MG0414+0534, where one of the images was (and still is today) more than a magnitude fainter than expected from the macro-model for the gravitational potential. \vskip2.6truein \special{psfile=pg1115flat.ps hscale=50 vscale=50 angle=270 hoffset=-10 voffset=275} {\narrower\baselineskip=11pt\noindent Figure 1. Probability distribution for the ratio of observed to macro-model flux (expressed as a magnitude difference) at three different stellar mass fractions for the $A2$ image of the quadruple lens PG 1115+080. The different shapes of the distributions permit determination of the stellar mass fraction.
\par} \medskip While one might expect the amplitudes of those flux ratio anomalies to increase with increasing stellar surface mass densities, Schechter and Wambsganss (2002) showed the dependence is not monotonic. This is illustrated in figure 1, where the micro-lensing probability density distributions are shown for three different stellar mass fractions for the A2 image of PG 1115+080, the first quadruply lensed quasar. The micro-lensing is less strong when all of the surface density is in stars than when only 10\% is in stars. One might also expect the flux ratio anomalies to depend upon the masses of the stars involved, but they have been shown to be extremely insensitive to the distribution of stellar masses (Schechter et al 2004), subject to the condition that the emission comes from regions small compared to the Einstein radii of the micro-lensing stars. To an excellent approximation they depend only on the surface mass density. To understand the micro-lensing fluctuations, one must remember that images appear wherever the light travel time from the quasar to the observer has a stationary point. In the absence of a lens, there will only be one image, a minimum of the light travel time. A galaxy with a sufficiently elliptical potential produces two minima, two saddlepoints and a maximum, the last of which is almost always infinitely demagnified. The stars that lie close to each macro-image produce micro-minima and micro-saddlepoints, breaking the macro-images into micro-images (Paczynski 1986). \subsection{Twinkling Quasars} The micro-images are the gravitational analog of the speckles produced by the Earth's atmosphere. The movement of the stars within the galaxy and of the galaxy relative to the quasar causes the speckle pattern to change, and with that the brightness of the speckles. The quasars scintillate, just as stars do. This suggests a straightforward approach to measuring the surface mass density of the A2 image in PG 1115+080. Carry out repeated photometric observations, accumulate a histogram of fluxes, and compare it with the panels in figure 1. Unfortunately, the timescale for gravitational scintillation in lensed quasars is of order ten years. As is often done in astronomy, one can substitute single epoch observations of a large number of similar objects for many observations of one object. \subsection{Estimating Stellar Surface Mass Density} One proceeds from stellar fluxes to stellar masses as follows: \begin{enumerate} \item Measure fluxes from images. \item Measure/estimate stellar surface brightness at the position of each quasar image. \item Make an initial guess of $M/L,$ the stellar mass-to-light ratio. \item Carry out micro-lensing simulations and compute micro-lensing probability distributions based on the adopted $M/L$. \item Assign a figure-of-merit to measure consistency of fluxes and probability distributions. \item Make a new guess of $M/L$ and iterate. \end{enumerate} One can sidestep the problems associated with measuring surface brightness at the position of the quasar images and bringing the surface brightnesses to a common bandpass and epoch by using the stellar mass fundamental plane (Hyde and Bernardi 2009). Instead of $M/L$, the free parameter is a factor ${\cal F}$ by which one multiplies an adopted stellar mass fundamental plane to obtain the best agreement with the observed fluxes.
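As a purely schematic illustration of steps 3--6 above (and not the actual pipeline of Schechter et al 2014), the sketch below scans a grid of candidate stellar mass fractions against a set of single-epoch flux ratio anomalies. The Gaussian stand-in for the micro-lensing probability densities and all numerical values are placeholder assumptions; in practice those densities come from micro-lensing simulations.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy stand-in for step 4: a probability density for the flux ratio
# anomaly delta_m (in magnitudes) at a candidate stellar fraction f.
def toy_density(delta_m, f):
    width = 0.2 + 0.8 * 4.0 * f * (1.0 - f)   # broadest at mid fractions
    return norm.pdf(delta_m, loc=0.0, scale=width)

# Fake single-epoch anomalies for ten images (placeholder data).
observed = rng.normal(0.0, 0.6, size=10)

# Steps 5-6: scan candidate fractions, keep the most consistent one.
fractions = np.linspace(0.05, 0.95, 19)
loglike = [np.log(toy_density(observed, f)).sum() for f in fractions]
best = fractions[int(np.argmax(loglike))]
print(f"best-fit stellar mass fraction (toy): {best:.2f}")
\end{verbatim}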
\section{Wanted: More Quadruply Lensed Quasars} The multiplicative uncertainty in the Schechter et al (2014) result, a factor of 1.6, would be reduced to a factor of roughly 1.3 if the sample size were quadrupled from ten to forty. Given how long it took to assemble the first ten, one might be tempted to skip the remainder of this contribution. But the rate at which new quadruple lenses are being discovered has accelerated over the past three years with the availability of the VST-ATLAS (Shanks et al 2015), DES (Abbott et al 2016), and PanSTARRS (Chambers et al 2016) surveys. In figure 2 we show images of eight of fifteen quadruply lensed quasars known to the author to have been discovered in the past three years. Two of the images are from VST-ATLAS, three are from DES and three are from PanSTARRS. The teams discovering these systems included Lin et al (2016), Agnello et al (2017), Berghea et al (2017), Ostrovski et al (2017), Lucey et al (2017), Anguita et al (private communication) and Schechter et al (to be published). \vskip1.8truein \special{psfile=octet_v2.ps hscale=70 vscale=70 angle=000 hoffset=-25 voffset=-240} {\narrower\baselineskip=11pt\noindent Figure 2. Eight quadruply lensed quasars discovered in the past three years. Images for the first two are taken from the VST-ATLAS survey, the next three from the DES, and the last three from PanSTARRS. The scales for the surveys are $0\farcs21$, $0\farcs26$ and $0\farcs25$ per pixel, respectively. The second image is in Sloan $r$, with all the rest in Sloan $i$. \par} \medskip There is reason to think that the acceleration of the discovery rate will continue. Until now lensed quasars have been found by first looking for quasar-colored objects and then resolving them into multiple images. This works at the brighter apparent magnitudes, where the light from the quasar images dominates that from the galaxy. Lucey et al (2017) argue that at fainter apparent magnitudes, the light from the galaxy will dominate that from the quasar. They report the discovery of two quadruply lensed quasars that, at first, were thought to be galaxies. What singled these objects out was their differential deblending in the 2MASS and PanSTARRS catalogs. This produced astrometric offsets that called for further scrutiny. Lemon et al (2017) use differential deblending in the SDSS and GAIA catalogs, but they start with known quasars. They might equally well have started with galaxies. \section{A Challenge: The Size of Quasar Continuum Emitting Regions} While we may not be making the same ``completely unproven'' assumptions as other investigators measuring stellar masses, we have our own set of assumptions. In particular, we assume that the continuum emitting region producing the flux ratio anomalies is sufficiently small -- much smaller than the Einstein radii of the micro-lenses -- that it can be treated as a point source. In their original paper, Schechter and Wambsganss (2004) analyzed optical fluxes and obtained inconsistent results assuming pointlike emitting regions. They were able to reconcile those discrepancies by adopting a toy model in which 50\% of the flux was pointlike and 50\% of the light was very extended and not subject to micro-lensing. Subsequent work by Pooley et al (2007), Morgan et al (2010) and Blackburne et al (2011) showed that the continuum emitting regions of bright lensed quasars were factors of 3--30 larger than predicted by the venerated Shakura and Sunyaev (1973) model.
Schechter et al (2014) used X-ray flux ratios in their estimate of the factor by which Salpeter mass surface densities needed to be multiplied to allay concerns about the size of the continuum emitting region. Jim{\'e}nez-Vicente et al (2015) took a different tack, carrying out a joint analysis of stellar mass fraction and emitting region size. The two approaches yield consistent results, albeit with large uncertainties. The size of the X-ray sample will continue to grow as long as the Chandra X-ray Observatory continues to operate. Unfortunately, none of the currently planned X-ray missions will be able to make such measurements as they lack Chandra's resolution. As discussed above, newly discovered quadruply lensed quasars are likely to be less luminous than those first discovered. It is reasonable to expect their continuum emitting regions to be correspondingly smaller, mitigating the effect of their partial resolution. \section{Limits on MaCHOs, Including Primordial Black Holes} In our calculations, we implicitly assume that the dark halo component of a lens is smoothly distributed. This translates to halo particles of at most planetary mass, depending upon the poorly known sizes of quasar X-ray emitting regions. \vskip3.0truein \special{psfile=moneymacho.ps hscale=40 vscale=40 angle=270 hoffset=30 voffset=225} {\narrower\baselineskip=11pt\noindent Figure 3. Likelihoods for a range of fractional contributions of MaCHOs to the dark matter surface density in ten lensed quasars. Note the finite likelihood for a negative fraction, which would result if a Salpeter IMF overestimates the surface mass density. \par} We can invert our assumptions, and take the stellar surface mass density to be known (adopting in our case a Salpeter IMF) and instead let the factor ${\cal F}$ represent the fraction of the dark halo in Massive Compact Halo Objects (MaCHOs). The goal is exactly the same as that of the MaCHO Project (Alcock et al 2000), but we use the static micro-lensing of quasars rather than the time-variable micro-lensing of stars. A significant advantage of the present technique is that there is no upper limit to the masses of the compact objects. Mediavilla et al (2009) used static micro-lensing to place explicit limits on the fraction of the halo in the form of primordial black holes, a subject reviewed by Carr et al (2016). Their argument was refined by Mediavilla et al (2017). They use optical rather than X-ray flux ratios and the overlap between the two samples is only 50\%, so one might think it worth the investment of time to re-analyze the Schechter et al (2014) sample. The investment was very small. Exactly one line of code needed to be changed. Results from that effort are shown in Figure 3. The most likely fraction of the dark halo in MaCHOs is something less than 10\%, confirming the results of Mediavilla et al (2009). Carl Sagan famously said ``Extraordinary claims require extraordinary evidence.'' We suspect Sagan would have preferred to explain the small excess granularity in lensing galaxies as the product of a somewhat higher stellar surface mass density. \acknowledgements The author thanks Jeremy and Joan Mould for many years of friendship. Though Jeremy and I collaborated on only one paper, it was a good one, among the most highly cited publications for both of us. By virtue of proximity (we overlapped both in Tucson and in Pasadena) and affinity we came to know each other well. Jeremy is unusual among astronomers in that he has always insisted on thinking things through for himself.
One might have thought this was among the first requirements for a scientist. With Jeremy it is actually the case. In thinking about our past interactions I can hear his exaggerated ``hmmmmmmmm'' in response to some new idea or result. I can also hear him saying ``It seems to me ...'' followed by a careful argument. While I won't get to hear his ``hmmmmmmmm'' when he reads this contribution, I do look forward to an email beginning ``It seems to me ...''.
\section{} Saddle points and the dynamics in their vicinities play crucial roles in chemical reactions. A saddle point on a multi-dimensional potential energy surface is defined as a stationary point at which the Hessian matrix has no zero eigenvalues and at least one negative eigenvalue. Saddle points are classified by the number of negative eigenvalues, and a saddle that has $n$ negative eigenvalues is called an \textit{index-$n$ saddle}. In particular, an index-one saddle on a potential surface has long been considered to form the bottleneck of reactions \cite{Glasstone1941,Steinfeld1989,Bonnet2010}, the sole unstable direction corresponding to the ``reaction coordinate.'' This is because index-one saddles are considered to be the lowest energy stationary points connecting two potential minima, of which one corresponds to the reactant and the other to the product, and the system must traverse the index-one saddle to pass from the reactant to the product \cite{Zhang2006,Skodje2000,Shiu2004,Bartsch2005a}. To estimate reaction rate constants across the saddles, transition state theory was proposed \cite{Glasstone1941,Steinfeld1989,Bonnet2010}, by envisaging the existence of a non-recrossing dividing surface (i.e., a transition state (TS)) in the region of the index-one saddle. Recent studies of nonlinear dynamics in the vicinity of index-one saddles have revealed a firm theoretical ground for the robust existence of the no-return TS in the phase space \cite{KBAr6I,KB01,Wiggins2001,UzerNonlin02,Bartsch2005a,Li2006,Kawai2010a,Hernandez2010,NFLrev,QNFrev,QTDNF,NFrotSK,NFrotUC,Teramoto2011,Koon2000,Jaffe2002,Gabern2005,Gabern2006} (see also books \cite{book_adv05,book_adv11} and references therein). The scope of the dynamical reaction theory based on normal form (NF) theory \cite{LL92}, a classical analog of Van Vleck perturbation theory, is not limited to chemical reactions; it also includes, for example, ionization of a hydrogen atom under electromagnetic fields \cite{Wiggins2001,UzerNonlin02}, isomerization of clusters \cite{KBAr6I,KB01}, orbit designs in solar systems \cite{Koon2000,Jaffe2002,Gabern2005,Gabern2006}, and so forth. Very recently, these approaches have been generalized to dissipative multidimensional Langevin equations \cite{Bartsch2005a,Hernandez2010,NFLrev}, laser-controlled chemical reactions with quantum effects \cite{QNFrev,QTDNF}, and systems with rovibrational couplings \cite{NFrotSK,NFrotUC}, and have shown the robust existence of reaction boundaries even when a no-return TS ceases to exist \cite{Kawai2010a}. For complex molecular systems, the potential energy surface becomes more complicated, and transitions from one potential basin to another involve not only index-one saddles but also higher index saddles \cite{Minyaev2004,Shida2005,Heidrich1986}. For example, it was shown in a computer simulation of an inert gas cluster containing seven atoms that, as the kinetic temperature increases, transitions from a solid-like phase to a liquid-like phase occur mostly through index-two saddles rather than through index-one saddles \cite{Shida2005}. This indicates that the more rugged a system's energy landscape becomes and/or the more the ``temperature'' increases, the more frequently the system encounters higher index saddles. To reveal the fundamental mechanism of the passage through a saddle with index greater than one, the phase space structure was recently studied on the basis of NF theory \cite{Haller2010,Haller2011,Ezra2009a,Collins2011}.
For example, the extension of the dynamical reaction theory to higher index saddles was discussed \cite{Haller2010,Haller2011,Ezra2009a} for a strongly repulsive degree of freedom (DoF) \cite{Haller2010,Haller2011}, and a dividing surface to separate the reactant and the product was proposed for higher index saddles \cite{Collins2011}. While these studies are of importance, the more strongly repulsive DoF does not necessarily serve as the reactive direction, as shown for an index-two saddle in the structural isomerization of aminoborane \cite{Minyaev2004}. In addition, these studies rely on the assumption that NF performed in the region of the saddle can find the reactivity boundaries if the perturbation calculation converges \cite{Haller2010,Haller2011,Ezra2009a,Collins2011}. In studies of chemical reactions, one needs to assign regions of the phase space as ``reactants'' or ``products''. Invariant manifolds in the phase space that separate the origin and the destination of trajectories have provided significant insights into rate calculations and orbit design in non-RRKM systems \cite{Koon2000,Jaffe2002,Gabern2005,Gabern2006,KBAr6I,KB01,Wiggins2001,UzerNonlin02,Bartsch2005a,Li2006,Kawai2010a,Hernandez2010,NFLrev,QNFrev,QTDNF,NFrotSK,NFrotUC,Teramoto2011} (see also books \cite{book_adv05,book_adv11} and references therein). In this Letter we investigate how one can identify the reactivity boundaries that determine the fate of the reaction for higher index saddles. We analyze a two-DoF Hamiltonian system with an index-two saddle by using NF theory and investigate its applicability in determining whether the system undergoes reaction or not. We will emphasize the subtlety in defining the ``reactant'' and ``product'' regions in the phase space, and point out the difference between the regions defined by the NF and those defined by the original coordinates. If the total energy of the system is only slightly above a stationary point, the $n$-DoF Hamiltonian $H$ can be well approximated by the normal mode Hamiltonian $H_0$ \begin{equation} \label{eq:nm} H(\vect{p},\vect{q}) \approx H_{0}(\vect{p},\vect{q}) = \sum_{j=1}^n \frac{1}{2}(p_j^2+k_j q_j^2) \end{equation} with normal mode coordinates \vect{q}=$(q_1,\dots,q_n)$ and their conjugate momenta \vect{p}=$(p_1,\dots,p_n)$, where $k_j \in \mathbb{R}$ is the ``spring constant,'' or the curvature of the potential energy surface, along the $j$th direction. The constants $k_j$ can be positive or negative. If negative, the potential energy is maximal along the $j$th direction. The direction then exhibits an unstable motion corresponding to ``sliding down the barrier,'' and can be regarded as a ``reaction coordinate.'' The index of the saddle corresponds to the number of negative $k_j$. The flow of a DoF with negative $k_j$ is depicted in Fig.~\ref{fig:trj}(a). Here one can introduce the following coordinates \begin{eqnarray} \label{eq:xieta_nm} \eta_j=& (p_j+\lambda_j q_j)/(\lambda_j\sqrt{2}) , ~~ \xi_j =&(p_j-\lambda_j q_j)/\sqrt{2} , \end{eqnarray} where $\lambda_j=\sqrt{-k_j}$. When Eq.\;(\ref{eq:nm}) holds, the action variable defined by $I_j=\xi_j\eta_j$ is an integral of motion, and trajectories run along the hyperbolas given by $I_j=const.$, shown by gray lines in Fig.~\ref{fig:trj}(a). \begin{figure} \includegraphics[width=8.5cm]{trj.eps} \caption{ (color online). Destination/origin dividing sets of trajectories sliced on several sections ($q_2=0,1,3$ with $p_2>0$).
Each curve represents a set of trajectories (gray, orange, blue), and each initial condition of the set of trajectories is given by a contour of the initial value of the action $I_1$ in the asymptotic region: the initial condition of the destination-dividing set of trajectories (blue) is given on the section of $q_2=5$ with $p_2>0$, and that of the origin-dividing set of trajectories (orange) is given on the section of $q_2=-5$ with $p_2>0$ under negative time evolution. Here there is an energetically inaccessible region (dashed lines) because the kinetic energy $\sum_{j=1}^n p_j^2/2$ is positive. } \label{fig:trj} \end{figure} The $\eta_j$- and $\xi_j$-axes run along the asymptotic lines of the hyperbolas in Fig.~\ref{fig:trj}(a). One can tell the destination and origin regions of trajectories from the signs of $\eta_j,\xi_j$ as follows: if $\eta_j>0$, the trajectory goes into $q_j>0$, and if $\eta_j<0$, the trajectory goes into $q_j<0$. Therefore one can determine the destination of trajectories from the sign of $\eta_j$. Similarly, the origin of trajectories can be determined from the sign of $\xi_j$. Hereafter we call the set $\eta_j=0$ the ``destination-dividing set'' and the set $\xi_j=0$ the ``origin-dividing set,'' and these sets together constitute the ``reactivity boundaries.'' The Hamiltonian of Eq.~(\ref{eq:nm}) corresponds to the lowest order (quadratic) part of the Taylor expansion of $H$. As the total energy of the system increases, one needs to consider the higher order terms $H_{\varepsilon}(\vect{p},\vect{q})$: \begin{equation} \label{eq:fullH} H(\vect{p},\vect{q}) = H_{0}(\vect{p},\vect{q}) + H_{\varepsilon}(\vect{p},\vect{q}), \end{equation} where $H_{\varepsilon}$ is a power series consisting of cubic and higher order terms. Note that, in this case, the actions $\vect{I}$ are no longer constants of motion. However, previous studies \cite{KBAr6I,KB01,Wiggins2001,UzerNonlin02,QNFrev,QTDNF,NFrotSK,NFrotUC} showed that a nonlinear canonical transformation $(p_1,\dots,p_n,q_1,\dots,q_n)\rightarrow(\bar p_1,\dots,\bar p_n,\bar q_1,\dots,\bar q_n)$ can provide new action variables as constants of motion, with the associated degrees of freedom decoupled from each other (to a certain order of approximation) in the new coordinates. Here the new actions and coordinates are defined in parallel with Eq.~(\ref{eq:xieta_nm}) by using the newly introduced coordinates $(\bar{\vect{p}},\bar{\vect{q}})$: \begin{eqnarray} \label{eq:Inf} \bar{I}_j = & \bar{\xi}_j \bar{\eta}_j,\cr \bar\eta_j = & (\bar p_j+\lambda_j \bar q_j)/(\lambda_j\sqrt{2}) ,~~ \bar\xi_j = & (\bar p_j-\lambda_j \bar q_j)/(\sqrt{2}) . \end{eqnarray} The newly introduced coordinates $(\bar{\vect{p}},\bar{\vect{q}})$ are called NF coordinates. The new actions $\bar{I}_j$ are constants of motion, and consequently the flow around the stationary point follows the contour lines shown in Fig.~\ref{fig:trj}(a), if the axes are changed to the new coordinates $\bar{q}_1$ and $\bar{p}_1$. Thus one can still determine the destination and the origin of trajectories from the signs of $\bar{\eta}$ and $\bar{\xi}$. Note that NF theory is based on the assumption that linear terms dominate the dynamics around the saddle and are only weakly perturbed by nonlinear terms. Under this assumption, the $\lambda$s of the normal modes around a saddle point dominate the dynamics, and the NF can extract the integrals of motion if the perturbation calculation converges.
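As a simple illustration of the sign rule and of the invariance of the action, the following sketch integrates a single unstable normal-mode DoF of Eq.~(\ref{eq:nm}) numerically (the value of $\lambda$ and the initial condition are arbitrary illustrative choices) and verifies that $I=\xi\eta$ is conserved and that the sign of $\eta$ at $t=0$ predicts the sign of $q$ at late times:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam = 1.0 / np.sqrt(2.0)            # illustrative instability exponent

# One unstable normal-mode DoF: H = p^2/2 - lam^2 q^2 / 2.
rhs = lambda t, y: [y[1], lam**2 * y[0]]

q0, p0 = -0.3, 0.4                   # arbitrary initial condition
sol = solve_ivp(rhs, (0.0, 12.0), [q0, p0], rtol=1e-10, atol=1e-12)
q, p = sol.y

eta = (p + lam * q) / (lam * np.sqrt(2.0))
xi = (p - lam * q) / np.sqrt(2.0)

print(np.ptp(xi * eta))                   # I = xi*eta is constant (~0)
print(np.sign(eta[0]) == np.sign(q[-1]))  # eta's sign gives destination
\end{verbatim}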
In order to understand whether the reactivity boundaries extracted by the NF actually coincide with the true reactivity boundaries that determine the asymptotic behavior of a chemical reaction through an index-two saddle, we scrutinize a two-DoF model system with an index-two saddle whose higher order term in Eq.~(\ref{eq:fullH}) is \begin{equation} \label{eq:nonlinear} H_\varepsilon(\vect{p},\vect{q}) = \varepsilon q_1^2 q_2^2 \exp(2-q_1^2-q_2^2). \end{equation} This nonlinear term is effective locally around $|q_1|=|q_2|=1$, and vanishes both in the asymptotic region ($|q_1|$ or $|q_2|\rightarrow\infty$) and in the vicinity of the saddle ($|q_1|$ and $|q_2|\approx 0$). In what follows, we employ the system parameters $E=10^{-2}$, $\varepsilon=10^{-1}$, $\lambda_1=1/\sqrt{2}$, and $\lambda_1:\lambda_2=1:\gamma$ with $\gamma$ the golden ratio. In order to observe the trajectories and the destination- and origin-dividing sets, we take a set of sections of the phase space at some values of $q_j$ ($j=1$ or $2$). For example, Fig.~\ref{fig:trj}(d) shows the section at $q_2=0$ with $p_2>0$. There the gray curves are contour lines of the initial value of the normal mode action $I_1$. The blue and orange curves are the destination-dividing set and the origin-dividing set, respectively. Numerical extraction of the destination-dividing set is carried out as follows: first we take a set of points on the line $\eta_1=0$ on the section $q_2=5$ with $p_2>0$. This set divides the destinations of the trajectories correctly: the large value of $q_2$ and the positive sign of $p_2$ ensure that the trajectories go into the asymptotic region with positive $q_2$, where the flows of the trajectories are given by the normal mode Hamiltonian [see Eqs.~(\ref{eq:fullH}) and (\ref{eq:nonlinear})], since $H_\varepsilon$ becomes negligible for large $|\vect{q}|$, as shown in Fig.~\ref{fig:trj}(a),(b). The set is then numerically propagated backward in time into the inner region (smaller values of $q_2$) where the nonlinear term is significant, as shown in Fig.~\ref{fig:trj}(c),(d). Similarly, the origin-dividing set is calculated by taking a set of points on $\xi_1=0$ on the section of $q_2=-5$ with $p_2>0$, and propagating them forward in time. Now we compare the numerically calculated destination- and origin-dividing sets with those calculated by NF theory. Fig.~\ref{fig:dfm} shows the destination-dividing set on the section of $q_2=0$ with $p_2>0$. We observe a discrepancy between the numerically calculated set and those of the NF ($\bar{\eta}_1^{(3)}=0$ and $\bar{\eta}_1^{(15)}=0$, where the upper indices denote the polynomial order of the NF). When compared with the normal mode approximation ($\eta_1=0$), it is seen that the NF evaluates the effect of the nonlinearity in the opposite sense relative to the true destination-dividing set. \begin{figure} \includegraphics[width=8.5cm]{dfm.eps} \caption{ (color online). Discrepancy between the numerically extracted destination-dividing set and that of the NF, $\bar{\eta}_1=0$, on the section of $q_2=0$ with $p_2>0$. The square depicted in (a) denotes the region which is magnified in (b). The shaded areas denote the discrepancy region where the two destination-dividing sets differ.
} \label{fig:dfm} \end{figure} The failure of the NF observed here in calculating the destination-dividing set is not due to a lack of convergence of the perturbation expansion used in NF theory because, firstly, the third- and fifteenth-order results compared in Fig.~\ref{fig:dfm} confirm good convergence of the NF, and secondly, we have confirmed that the numerically computed trajectories follow the NF destination-dividing set $\bar{\eta}_1=0$ in the saddle region; that is, the set $\bar{\eta}_1=0$ is truly an invariant set. Thus the NF describes the dynamics of this system correctly, and the sign of $\bar{\eta}_1$ predicts the destination of the trajectory in the $(\bar{q}_1,\bar{p}_1)$-space. However, the ``destination'' predicted from the sign of the NF coordinate $\bar{\eta}_1$ refers to the sign of $\bar{q}_1$ in the future, as can be seen from the discussion of Fig.~\ref{fig:trj}(a). The NF can therefore fail to predict the destination of trajectories when the sign of $\bar{q}_1$ differs from that of the originally used position coordinate $q_1$. Figure~\ref{fig:cnt} presents some contour lines of $\bar{q}_1(\mathbf{p},\mathbf{q}|E)=0$ and $\bar{q}_2(\mathbf{p},\mathbf{q}|E)=0$ on the $q_1$-$q_2$ space and the $q_2$-$q_1$ space, respectively, with some fixed values of $p_1$ and $p_2$. The right (left) hand side region of each contour line in these spaces corresponds to a region of $\bar{q}_j>0$ ($\bar{q}_j<0$) for fixed $p_j$ ($j=1$ in Fig.~\ref{fig:cnt}(a), $j=2$ in Fig.~\ref{fig:cnt}(b)). These plots indicate that there exist regions where the signs of $\bar{q}_j$ and $q_j$ are different, and that the size of the discrepancy regions ($\mathrm{sgn}~ q_j \neq \mathrm{sgn}~ \bar{q}_j$) tends to grow with increasing $|p_j|$ (e.g., see the shaded areas in Fig.~\ref{fig:cnt}). Likewise, such a failure of the NF also occurs for $\bar{\xi}_1=0$ in determining the origin. As Fig.~\ref{fig:cnt} indicates, such a discrepancy can also occur for $\bar{\eta}_2=0$ and $\bar{\xi}_2=0$. \begin{figure} \includegraphics[width=8.5cm]{qh_0.eps} \caption{ (color online). Contour lines of $\bar{q}_1(\mathbf{p},\mathbf{q} |H=E)=0$ and $\bar{q}_2(\mathbf{p},\mathbf{q}| H=E)=0$ on the $q_1$-$q_2$ space (a) and the $q_2$-$q_1$ space (b) with some fixed values of $p_1$ and $p_2$, whose values are indicated in the insets. The gray bold curves denote representative trajectories. For instance, the discrepancy regions of $\mathrm{sgn}~ q_i \ne \mathrm{sgn}~ \bar{q}_i$ with $p_i=0.14~(i=1,2)$ are denoted by the gray colored areas. } \label{fig:cnt} \end{figure} Note, however, that the significance of the discrepancy in the NF reactivity boundary differs depending on the instability of the reactive DoFs. Trajectories, denoted by the gray bold curves in Fig.~\ref{fig:cnt}(a),(b), are more strongly repelled along the $q_2$ direction than along $q_1$ due to the difference of the repulsion ($\lambda_2>\lambda_1$). The discrepancy between the NF reactivity boundaries and the corresponding destination- and origin-dividing sets is more pronounced along $q_1$ than along $q_2$. This is because trajectories more often enter the discrepancy region of $\mathrm{sgn}~ q_1 \neq \mathrm{sgn}~ \bar{q}_1$ than that of $\mathrm{sgn}~ q_2 \neq \mathrm{sgn}~ \bar{q}_2$, due to the difference of the repulsion.
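The brute-force classification underlying the destination-dividing sets can be reproduced in a few lines of code. The sketch below integrates Hamilton's equations for the model of Eqs.~(\ref{eq:fullH}) and (\ref{eq:nonlinear}) with the parameters quoted above, and reads off the destination quadrant from the signs of $(q_1,q_2)$ once the trajectory reaches the asymptotic region; the initial condition is an arbitrary illustrative point on the $E=10^{-2}$ energy shell.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Parameters quoted in the text.
eps = 1.0e-1
lam1 = 1.0 / np.sqrt(2.0)
lam2 = lam1 * (1.0 + np.sqrt(5.0)) / 2.0     # golden-ratio frequency

def rhs(t, y):
    q1, q2, p1, p2 = y
    g = np.exp(2.0 - q1**2 - q2**2)
    # V(q) = -(lam1^2 q1^2 + lam2^2 q2^2)/2 + eps q1^2 q2^2 g
    dVdq1 = -lam1**2 * q1 + 2.0 * eps * q1 * q2**2 * g * (1.0 - q1**2)
    dVdq2 = -lam2**2 * q2 + 2.0 * eps * q2 * q1**2 * g * (1.0 - q2**2)
    return [p1, p2, -dVdq1, -dVdq2]

def destination(y0, q_far=5.0, t_max=50.0):
    # Propagate until both |q1| and |q2| reach the asymptotic region,
    # then classify the destination by the signs of (q1, q2).
    hit = lambda t, y: min(abs(y[0]), abs(y[1])) - q_far
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(rhs, (0.0, t_max), y0, events=hit,
                    rtol=1e-10, atol=1e-12)
    return np.sign(sol.y[0, -1]), np.sign(sol.y[1, -1])

# Illustrative initial condition on the E = 1e-2 shell:
# q1 = q2 = 0, p1 = p2 = 0.1, so H = (p1^2 + p2^2)/2 = 1e-2.
print(destination([0.0, 0.0, 0.1, 0.1]))
\end{verbatim}
Scanning such initial conditions over a section and recording where the destination changes sign reproduces the numerically extracted dividing sets described above.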
Because we interpret this result in terms of the relative magnitudes of the $\lambda$s without referring to any specific properties of our model, similar results are expected to be found generally in the dynamics around index-two saddles in reacting systems whenever linear terms dominate the dynamics around the saddle and are weakly perturbed by nonlinear terms. In conclusion, we have numerically constructed the destination- and origin-dividing sets in a two-DoF system with an index-two saddle, and compared the results of NF theory with them. We have found that the NF fails to identify the reactivity boundaries, especially along the less repulsive DoF, even when the perturbation calculation converges. On the contrary, no significant discrepancy was observed along the strongly repulsive DoF, in agreement with the studies \cite{Haller2010,Haller2011}. Such a discrepancy could also occur for index-one saddles, although the difference between $\bar q$ and $q$ has not been found to be significant for index-one saddles \cite{KBAr6I,KB01,Wiggins2001,UzerNonlin02,Li2006,Kawai2010a,QNFrev,QTDNF,NFrotSK,NFrotUC,Teramoto2011}. This is probably because, in the case of index-one saddles, there is only one repulsive DoF and all the other DoFs are bound, so that trajectories are less likely to enter the discrepancy regions after leaving the region of the saddle. In the context of studying the dynamics of chemical reaction systems, one needs to divide the asymptotic region of the phase space into ``reactants'' and ``products.'' For the case of an index-one saddle, this division has seemed trivial because there is only one reactive direction (say $q_1$) and therefore only two asymptotic regions ($q_1\rightarrow+\infty$ and $q_1\rightarrow-\infty$). For the case of a higher index saddle, however, there is more than one ``reactive'' direction and the division of the phase space is no longer trivial. In this study, to define states we designed a model system that becomes separable in the asymptotic region. In a general case, the asymptotic ``reactant'' and ``product'' regions must be assigned by referring to the chemical nature of each specific system, such as the breaking and formation of chemical bonds. According to the present results, however, such an assignment can differ from that made by the NF, especially for the less repulsive DoF. Such a less repulsive DoF can sometimes serve as the reactive coordinate in molecular systems \cite{Minyaev2004}. This indicates that chemical reactions through higher index saddles can involve much richer structures, which require reconsideration of the concepts of ``reactant'' and ``product'' themselves. Future work, therefore, will need either to modify the NF reaction theory to remedy the discrepancy between $q$ and $\bar{q}$, or to resort to numerical calculations, although the latter is difficult for high-DoF systems. We acknowledge Dr. Yusuke Ohtani and Prof. Mikito Toda for their fruitful discussions. This work has been partially supported by the Japan Society for the Promotion of Science. The computations were partially performed using the Research Center for Computational Science, Okazaki, Japan.
\section{Introduction} The study of animal movement is fundamental to ecology because it is inherently linked to critical processes that scale from individuals to populations and communities to ecosystems \citep{Hooten::AnimalMovementTelemetryBook}. Rapid technological advancements over the past several decades have given rise to a variety of electronic tracking devices that can remotely monitor animals in challenging environments \citep{Hussey2015::aquaticAnimalTelemetry}, as well as to an assortment of statistical methods for analyzing the resulting (big) movement data. Statistical models for animal movement data are most commonly formulated in discrete time \citep{Hooten2017::JABESIntro}, and are increasingly aimed at inferring behavioural ``states'' from observed tracks. In this context, the data (called tracks, or location data) generally consist of a regularly observed time series of locations of an animal. Inferring behavioural states from location data was initially made possible by a proposal in \citet{Morales2004::MovementRandomWalks} to transform the data into a bivariate series of step lengths and deflection angles. In their example, they use characteristics of the step length and deflection angle series to determine when an elk is in an ``encamped'' state and when it is in an ``exploratory'' state. There are many different ways to estimate behavioural states from this type of tracking data. While traditionally achieved using likelihood methods (frequentist or Bayesian), any unsupervised classification method can be used; some examples are mixture models \citep{Morales2004::MovementRandomWalks}, clustering models, and $k$-means clustering \citep{Curry2014::movementClustering}. If researchers have actually observed an animal's behaviour at some points in time (for example through recorded video), then any supervised classification method could also be used. The work of \citet{Morales2004::MovementRandomWalks}, along with that of \citet{Jonsen::2005DCRWAnimalMovement}, established the state-switching model framework as the {\itshape de facto} way of analysing animal movement data in discrete time \citep{McClintock2012::MultistateRandomWalk,Whoriskey::SwitchingHMM,Patterson2017::AnimalMovementOverview}. While not all animal movement models which incorporate state-switching into the movement process have distinct behavioural states, the ones that do generally fall under the hidden Markov model (HMM) framework. These models assume that there are underlying behaviours driving the animal movement process \citep{Michelot::moveHMM}. Hidden Markov models for animal movement have a number of desirable properties: they have an easily computable likelihood which is typically fast to optimize, the model parameters have clear interpretations, and they can fairly easily handle different types of data (including missing data) in the same model \citep{Zucchini::HMMsforTimeSeries}. The baseline formulation of the HMM has a few key assumptions: the underlying state process is assumed to form a Markov chain, and the observed step lengths and deflection angles are conditionally independent given the behavioural state. The effects of various violations of these assumptions are discussed in \citet{Pohle::numberStatesHMM}.
They found that neglecting a semi-Markov state process (which directly models state residency time), a higher order Markov chain for the behavioural process, or violations of conditional independence can introduce bias into parameter estimates and favour models which have more behavioural states than actually exist. Semi-Markov state processes are also considered in \citet{Langrock2012::HMMsExtensions}, while higher order state processes are presented in \citet{Zucchini::HMMsforTimeSeries}. The papers in the recent Journal of Agricultural, Biological, and Environmental Statistics special edition on animal movement also started to address other problems associated with the discrete-time model framework in general, such as telemetry error, irregularly spaced data, and occasional missing data \citep{McClintock::telemetryObservationError}, the temporal scale and resolution of the behaviours involved in the data \citep{Vianey::MultiScaleHMMMovement}, and choosing the number of behavioural states to use \citep{Pohle::numberStatesHMM}. Methods for assessing goodness of fit for animal movement models were discussed in \citet{Potts::MovementModelResiduals}, wherein it was found that none of the 20 highest-cited papers at the time tested goodness of fit to the data. Since then, the moveHMM \citep{Michelot::moveHMM} and momentuHMM \citep{McClintock2018::momentuHMMpackage} \texttt{R} packages have implemented easy-to-use residuals, although the use of residuals in the literature is still uncommon, or at least under-reported. The current paper introduces a conditionally autoregressive hidden Markov model (CarHMM) that does not require the assumption of conditional independence of the movement process given the behavioural process. We do this by introducing an autocorrelation parameter in the step length distribution of the traditional HMM (such as that implemented in \citet{Michelot::moveHMM}). An autocorrelated step length process was also used in a continuous-state model for estimating the effect of environmental covariates on behavioural memory in \citet{Forester2007::StateSpaceBehaviouralMemory}. Throughout, we provide general practice guidelines wherever possible. Since analyses of animal movement typically use offline data, we propose standardizing the observed step lengths by dividing by the mean observed step length. This allows comparison of models across data sources, animals, species, etc. We use a lag-plot of step length to determine whether the conditional autocorrelation is necessary, and note its possible use in choosing the number of behavioural states. In the case of irregularly observed tracks, we also discuss how to choose an appropriate interpolation time step for the model, as well as how to deal with extensive missing data by grouping observations. In Section \ref{sec::modelForm}, we present the formulation of the model, including computation of the likelihood, and give references for the theoretical properties. Section \ref{sec::paramInterp} discusses the biology associated with specific transformations of the model parameters. Section \ref{sec::dataInspectModelCheck} deals with pre-processing the locations by choosing a time step and dealing with missing data, as well as model selection and validation. Both Section \ref{sec::paramInterp} and Section \ref{sec::dataInspectModelCheck} are useful for most types of discrete-time models, including the HMM and the CarHMM. Section \ref{sec::Simulation} presents four short simulation studies.
Section \ref{sec::mgreySeal} demonstrates best practice for using the CarHMM through the analysis of a male grey seal track. \section{CarHMM formulation} \label{sec::modelForm} We assume that the data consist of a set of step lengths $d_{(t,t+1)}$ between locations at times $t$ and $t+1$, and deflection angles $\theta_{t}$ between locations at times $t-1$, $t$, and $t+1$. Locations are assumed to be observed on a discrete and evenly spaced time grid. Here, step lengths measure the distance between consecutive locations, and deflection angles measure the angular change in direction between three locations. We discuss observations which are irregularly spaced in Section \ref{ssec::preProcess}. We introduce a behavioural state process $B_{t}$ which is a Markov chain on a finite set of states $\{1,~...,~k\}$. Thus the distribution of $B_{t}$ is completely determined by the value $b_{t-1}$ of $B_{t-1}$ and the transition probability matrix $\mathbf{A}$. The $(i,j)^{\text{th}}$ entry $a_{i,j}$ of $\mathbf{A}$ gives the probability of transitioning from state $i$ to state $j$. We assume (other choices are possible) that the initial distribution of the behavioural state Markov chain is given by the stationary distribution $\bm{\delta}$, which is the vector such that $\bm{\delta}\mathbf{A} = \bm{\delta}$ and $\sum_{i}\delta_{i}=1$. Given the behavioural state at time $t$, the step length at time $(t,t+1)$ and the deflection angle at time $t$ are assumed to be conditionally independent of all other observations and behavioural states, with the key exception that the step length at time $(t,t+1)$ is allowed to depend on the step length at time $(t-1,t)$. A first-order autoregressive process is assumed for the step lengths $d_{(t,t+1)}$. While any valid distributions can be used, in this presentation we assume a gamma ($\Gamma$) distribution for step lengths and a wrapped Cauchy (WC) distribution for deflection angles $\theta_{t}$. A $\Gamma\left[(1-\phi)\cdot\murl + \phi\cdot d_{(t-1,t)},~\sigma\right]$ distribution has mean $\mu = (1-\phi)\cdot\murl + \phi\cdot d_{(t-1,t)}>0$ (reversion level $\murl>0$, autocorrelation $0<\phi<1$) and standard deviation $\sigma>0$, with the more traditional shape and scale parameters being $(\mu / \sigma) ^ 2$ and $\sigma^{2} / \mu$, respectively. The $\text{WC}\left(c,~\rho\right)$ distribution has center $c\in \left[-\pi,~\pi\right]$ and concentration $\rho\in\left(0,1\right)$ with density function \[ f\left(\theta;~c,~\rho\right) = \frac{1}{2\pi}\cdot \frac{1-\rho^{2}}{1+\rho^{2} - 2\rho\cdot\cos\left(\theta - c\right)}. \] In cases where we have all of the data before analysis (i.e.~we are not streaming the data), we standardize all step lengths by dividing by the observed mean step length. This removes units (for example, kilometres) and standardizes parameter interpretation across data sources, animals, species, etc. Comparison of standardized parameter estimates across data sources depends on many factors, including the temporal resolution of each data source, choices made during the modelling procedure, and the biology/ecology of the animals being compared. Conditional on the mean observed step length, dividing by the mean does not alter parameter inference, since dividing a gamma distribution by a (non-zero) constant results in another gamma distribution. In practice, we store the observed mean step length so we can un-standardize later if desired. For the rest of the paper, we will assume that the symbol $d_{(t,t+1)}$ stands for the standardized step length.
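To make these quantities concrete, the following is a minimal sketch (in Python with NumPy; not from the original analysis, and assuming projected planar coordinates rather than the latitude-longitude pairs recommended below) of computing step lengths and deflection angles from a regularly spaced track, standardizing the step lengths, and recovering the traditional shape/scale parameterization of the gamma distribution from its mean and standard deviation.
\begin{verbatim}
import numpy as np

def steps_and_angles(xy):
    """Step lengths and deflection angles from an (n, 2) array of
    regularly spaced projected planar coordinates."""
    diffs = np.diff(xy, axis=0)                 # displacement vectors
    d = np.hypot(diffs[:, 0], diffs[:, 1])      # step lengths d_{(t,t+1)}
    headings = np.arctan2(diffs[:, 1], diffs[:, 0])
    # deflection angle theta_t: change in heading, wrapped to (-pi, pi]
    theta = np.angle(np.exp(1j * np.diff(headings)))
    return d, theta

def standardize(d):
    """Divide by the observed mean step length, keeping the mean so
    that step lengths can be un-standardized later if desired."""
    mean_d = d.mean()
    return d / mean_d, mean_d

def gamma_shape_scale(mu, sigma):
    """Shape and scale from the mean/sd parameterization in the text."""
    return (mu / sigma) ** 2, sigma ** 2 / mu
\end{verbatim}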
With all of the terms now defined, the CarHMM is formulated as \[ \renewcommand{\arraystretch}{1.7} \begin{array}{rll} \text{Location:} & \multicolumn{2}{l}{\mathbf{x}_{t+1} = \mathbf{x}_{t} + d_{(t,t+1)}\cdot \mathbf{H}\left(\theta_{t}\right)\cdot\left[d^{-1}_{(t-1,t)}\cdot\left(\mathbf{x}_{t}-\mathbf{x}_{t-1}\right)\right]} \\ \text{Action:} & \multicolumn{2}{l}{ \begin{minipage}{20em} \[ \renewcommand{\arraystretch}{0.8} \begin{array}{rl} d_{(t,t+1)}~\vert~B_{t}&=~b \sim \Gamma\left((1-\phi_{b})\cdot\murl[,b] + \phi_{b}\cdot d_{(t-1,t)},~\sigma_{b}\right) \\ \theta_{t}~\vert~B_{t}&=~b \sim\text{WC}\left(c_{b},~\rho_{b}\right) \end{array} \] \end{minipage}} \\ \text{Behaviour:} & \text{Pr}\left[B_{t} = j ~\vert~ B_{t-1} = i\right] = a_{ij}, & i,j\in\left\{1,2,...,k\right\} \\ \text{Initial Conditions:} & \multicolumn{1}{l}{\text{Pr}\left[B_{1} = i\right] = \delta_{i}, } & \begin{minipage}{15em} $d_{(0,1)}$ is fixed from the data as the first observed step length. \end{minipage}\\ \end{array} \] Although the locations $\mathbf{x}_{t}$ themselves do not enter the likelihood for the model directly, we include the ``Location'' equation to show the connection between the locations and the step lengths and deflection angles. In this equation, $\mathbf{H}\left(\theta_{t}\right)$ represents the change in direction at time $t$. If, as we strongly recommend, the coordinates are latitude-longitude pairs, then the equation as given is more of a symbolic representation; the fully written equation is based on spherical geometry. If the coordinates are projected, then $\mathbf{H}$ can be written as a standard $2\times 2$ rotation matrix. The likelihood is computed as the matrix product \[ L = \bm{\delta}\mathbf{L}\left(d_{(1,2)},\theta_{1}\right)\mathbf{A} \mathbf{L}\left(d_{(2,3)},\theta_{2}\right)\mathbf{A} \cdots \mathbf{L}\left(d_{(n,n+1)},\theta_{n}\right)\mathbf{1}_{k\times 1} \] where $\mathbf{L}\left(d_{(t,t+1)},\theta_{t}\right)$ is the diagonal matrix \[ \text{Diag}\left[ f\left( d_{(t,t+1)},~\theta_{t}~\vert~B_{t} = 1\right),~...,~f\left( d_{(t,t+1)},~\theta_{t}~\vert~B_{t} = k\right) \right], \] and $\mathbf{1}_{k\times 1}$ is a vector of ones. Since $d_{(t,t+1)}$ and $\theta_{t}$ are considered conditionally independent given the behavioural state, their joint probability density function is the product of the individual densities. In practice, we compute the log-likelihood using forward recursion with scaling, as presented in Section 3.2 of \citet{Zucchini::HMMsforTimeSeries}. When the autocorrelation $\phi_{b}$ is fixed at 0 for all $b$, the CarHMM reduces to a standard HMM. In addition, when $\phi_{b}$ is fixed at 1 for all $b$, it is possible to show that the CarHMM reduces to a component-wise relative of the hidden Markov movement model of \citet{Whoriskey::SwitchingHMM}, though the details will not be shown here. Further, other generalizations of the standard movement HMM, such as adding a semi-Markov state process, could be applied to the CarHMM. When using the $\Gamma$ distribution, $\murl[,b]$ must be non-negative and $\phi_{b}$ must be within the unit interval. Another useful choice of distribution for step length is the log-normal distribution, where the log of the step length has mean $(1-\phi_{b})\cdot\murl[,b] + \phi_{b} \cdot d_{(t-1,t)}$. In this case the parameters are unrestricted.
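As an illustration of the scaled forward recursion, the following minimal sketch (in Python with NumPy and SciPy; this is not the authors' implementation, which uses Template Model Builder) computes the CarHMM log-likelihood under the gamma and wrapped Cauchy choices above. All parameter arrays are hypothetical length-$k$ vectors, and $d_{(0,1)}$ is taken from the data as in the initial conditions; the elementwise multiplication by \texttt{p} plays the role of the diagonal matrix $\mathbf{L}$.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def wrapped_cauchy_pdf(theta, c, rho):
    # density given in the text, with center c and concentration rho
    return (1 - rho**2) / (
        2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - c)))

def stationary_dist(A):
    # delta solving delta A = delta with sum(delta) = 1
    evals, evecs = np.linalg.eig(A.T)
    delta = np.real(evecs[:, np.argmax(np.real(evals))])
    return delta / delta.sum()

def carhmm_loglik(d, theta, A, mu_rl, phi, sigma, c, rho):
    """d = (d_{(0,1)}, ..., d_{(n,n+1)}); theta = (theta_1, ..., theta_n);
    mu_rl, phi, sigma, c, rho are length-k state parameter vectors."""
    alpha = stationary_dist(A)
    loglik = 0.0
    for t in range(len(theta)):
        mu = (1 - phi) * mu_rl + phi * d[t]   # conditional mean given d_{(t-1,t)}
        p = (gamma.pdf(d[t + 1], a=(mu / sigma) ** 2, scale=sigma**2 / mu)
             * wrapped_cauchy_pdf(theta[t], c, rho))
        alpha = (alpha if t == 0 else alpha @ A) * p
        scale = alpha.sum()                   # rescale to avoid underflow
        loglik += np.log(scale)
        alpha /= scale
    return loglik
\end{verbatim}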
For the log-normal choice, however, to ensure that the step length process is stable it is sufficient (but not necessary) that the estimates satisfy $\left\vert\hat\phi_{b}\right\vert <1$ for all $b$. If this is not the case, it may be a sign of numerical instability in the optimizer. We use the maximum likelihood framework; the parameters to be estimated are $\murl[,b]$, $\phi_{b}$, $\sigma_{b}$, $c_{b}$, $\rho_{b}$ for $b\in 1,~...,~k$, giving $5k$ parameters for the ``Action'' distribution, and the off-diagonal transition probabilities $a_{i,j}$ for $i,j\in 1,~...,~k,~i\neq j$, giving $k\cdot(k-1)$ parameters for the ``Behaviour'' distribution, for a total of $k^{2} + 4k$ parameters. The remaining transition probabilities are not free parameters since the row sums of $\mathbf{A}$ must equal 1. The unobserved behavioural states $B_{t}$ are predicted using the well-known Viterbi algorithm; see e.g.~\citet{Zucchini::HMMsforTimeSeries}. Identifiability of models in the Markov-switching autoregressive class, which includes the CarHMM, is proven in \citet{Douc::AutoregressiveMarkovRegimeNormality}, with consistency and asymptotic normality of the ML estimates when using the log-normal distribution following from the same paper. Consistency and asymptotic normality when using the $\Gamma$ distribution is studied in \citet{Ailliot::gammaMSAR}. One notable condition is that the entries of $\mathbf{A}$ must be strictly positive for parameter estimation to be consistent. If any estimated value is close to zero, this could be a sign of having too many states in the model, unless there is a biologically meaningful reason for including the extra state. \section{Interpretation of CarHMM parameters} \label{sec::paramInterp} Here we discuss the concepts of activity budget, behavioural residency time, and mean reversion level. These are all obtained as transformations of the model parameters and are related to the biology of the animal. Both the activity budget and the behavioural residency time can be obtained from the transition probability matrix $\mathbf{A}$. The stationary distribution $\bm{\delta}$ itself can be interpreted as an activity budget, where the $i^{\text{th}}$ entry of $\bm{\delta}$ gives the expected proportion of time that the animal spends in the $i^{\text{th}}$ behavioural state. For example, the activity budget can give estimates of the proportion of time spent transiting as compared to foraging. Behavioural residency time is the amount of time that an animal will remain in a given behavioural state before switching to a different state. These can be modelled explicitly using semi-Markov state processes, though in our case they follow a geometric distribution \citep{Langrock2012::HMMsExtensions}. For a geometric distribution, the expected number of time steps spent in state $i$ is given by $\bm{E}(\text{state}~i)= 1 ~/~ (1 - a_{i,i})$. When converted to real time units (hours, minutes, etc.) by multiplying by the chosen time step, this value gives an estimate of the time scale of the behaviour being modelled and is important in giving biologically meaningful interpretations to the behavioural states. Since the step length process within a given behavioural state follows a first-order autoregressive process, the parameter $\murl[,b]$ gives the reversion level of the process for that state. The mean reversion level is defined to be the expected value of the step length as the time spent within the behavioural state approaches infinity:
\[ \mu_{RL,b} = \lim_{t\to\infty}\bm{E}\left( D_{(t,t+1)} ~\biggr\vert B_{t'} = b ~\forall ~t' \leq t \right). \] Within each behavioural state this value acts as an attractor in the step length distribution. Thus consecutive step lengths in the same behavioural state will tend to converge towards this value, with the strength of attraction increasing with $1-\phi_{b}$ and with the distance of the previous step length from $\mu_{RL,b}$. One of the attractive features of hidden Markov models for movement data is the connection between the underlying behavioural state of the Markov chain and the behaviours exhibited by the animal. While the connection between the behavioural state in the model and the behaviours exhibited by the animal is sometimes tenuous, it can be useful to label the behavioural states of the model. Two common ``behaviours'' used in the HMM context are ``foraging'' and ``transiting'' (quotations are used to emphasise that these labels may not reflect actual behaviour). In the standard HMM, foraging is typically characterized by short step lengths and diffuse deflection angles. Transiting is typically characterized by long step lengths and deflection angles concentrated at zero degrees. We can update these behaviours in the CarHMM framework. Here foraging is characterized by short step lengths with little autocorrelation along with diffuse deflection angles. Transiting is characterized by longer and highly autocorrelated step lengths along with deflection angles concentrated at zero degrees. \section{Data inspection and model checking} \label{sec::dataInspectModelCheck} \subsection{Pre-processing locations} \label{ssec::preProcess} When dealing with marine animal tagging data, the observed locations are typically not on a regular time grid. In order to use the CarHMM, we linearly interpolate the observed locations to a regular time grid. An alternative is to use the multiple imputation approach proposed in \citet{McClintock::telemetryObservationError}; however, even when using this multiple imputation approach, one must choose a sensible time grid. Interpolation requires a few decisions: how should observations which are very far apart in time (long stretches of missing data) be dealt with, and what is the best time step (the time between points on the temporal grid) to use for the interpolation? We deal with the first by splitting the track into separate groups whenever the time between consecutive observations is greater than some cutoff level, which we call the group cutoff level. After defining the time step, which we require to be the same for all groups, and the group cutoff level, the observed locations are interpolated within their separate groups onto the regular time grid. The interpolated locations are then processed to obtain the deflection angles and step lengths which enter the likelihood. There are two metrics we propose to help in choosing a time step and group cutoff level: the proportional sample size \[ n_{\text{prop}} = \frac{\text{\# interpolated locations}}{\text{\# observed locations}} \] and the adjusted proportional sample size \[ n_{\text{adj}} = \frac{\text{\# interpolated locations} - 2 \cdot \text{\# groups}}{\text{\# observed locations} - 2}. \] The first is designed to preserve the number of locations in the track, while the second is designed to preserve the number of data points which enter the likelihood.
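Both metrics are straightforward to compute for a candidate time step and group cutoff level; the sketch below (in Python with NumPy; the per-group count of interpolated locations is one reasonable convention, not taken from the paper) can be evaluated over a grid of candidate values.
\begin{verbatim}
import numpy as np

def interpolation_metrics(obs_times, time_step, group_cutoff):
    """Proportional and adjusted proportional sample sizes, with all
    arguments in the same time units."""
    gaps = np.diff(obs_times)
    # split the track wherever consecutive observations are too far apart
    groups = np.split(obs_times, np.where(gaps > group_cutoff)[0] + 1)
    # regular-grid locations supported by each group
    n_interp = sum(int((g[-1] - g[0]) // time_step) + 1 for g in groups)
    n_obs = len(obs_times)
    n_prop = n_interp / n_obs
    n_adj = (n_interp - 2 * len(groups)) / (n_obs - 2)
    return n_prop, n_adj
\end{verbatim}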
To choose a (heuristically) best time step and group cutoff level, we recommend: \begin{enumerate} \item{restrict the time step to be somewhere between the median and 3$^{rd}$ quartile of the observed time differences in the original data;} \item{for whichever time step is chosen, restrict the group cutoff level to be no more than twice the time step (and no less than the time step itself);} \item{with the above restrictions, set up a grid of time steps and group cutoff levels and compute $n_{\text{prop}}$ and $n_{\text{adj}}$ for each point in the grid. Choose whichever makes both $n_{\text{prop}}$ and $n_{\text{adj}}$ as close to 1 as possible.} \end{enumerate} We follow these guidelines in choosing the time step for the best practice analysis of Section \ref{sec::mgreySeal}. These guidelines are meant to avoid both over-smoothing the data (and therefore losing information) by choosing too large a time step, and inadvertently replicating the data by choosing too small a time step. Values of $n_{\text{prop}}$ or $n_{\text{adj}}$ less than one are indicators of over-smoothing, while values greater than one are indicators of data replication. A brief experiment suggested that interpolation (either linear interpolation or with a variety of different splines) may introduce a significant amount of autocorrelation in the step lengths. Further investigation is needed to determine the exact effects of interpolation. Whether most of the autocorrelation is inherent to the track or is introduced by interpolating the locations, accounting for it in the model (as the CarHMM does) is necessary. Once the locations are interpolated and grouped with a regular time step, they are processed to obtain deflection angles and step lengths. These should be obtained from unprojected coordinates so that both the deflection angles and step lengths are accurate throughout the spatial extent of the data. The model is fitted to all of the grouped data in a single likelihood, assuming that groups are independent of each other and that the groups share the same true parameters. Within a group, the first step length of that group is taken to be the initial condition for the step length autoregressive process, and the initial distribution of the underlying state is always taken to be the stationary distribution. \subsection{Model selection} \label{ssec::modelSelection} We consider two components to model selection for the CarHMM: deciding whether to use the HMM or the CarHMM (i.e., whether or not to fix $\phi_{b}=0$), and choosing the number of behavioural states. First we introduce the lag-plot, an exploratory graphic useful for understanding the nature of any autocorrelation present in the step length process. The lag-plot at lag $k$ is a kernel-density plot of $d_{(t,t+1)}$ against $d_{(t-k,t+1-k)}$. Examples of these plots are given in Figure \ref{fig::simCarHMMlag} of Section \ref{sec::Simulation}. These lag-plots give a more detailed description of the autocorrelation than the simple autocorrelation function, and have a couple of immediately helpful uses. Most importantly in the current case, by examining a lag-plot at lag 1, it is often possible to determine which of the HMM and CarHMM is more appropriate for the data. These plots show the different types of autocorrelation present within the HMM and the CarHMM, and can be compared with those of a real dataset; a sketch of constructing such a plot is given below.
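A minimal sketch of constructing a lag-plot (in Python with NumPy, SciPy, and Matplotlib; the Gaussian kernel density estimate is one reasonable choice of smoother, and the vector \texttt{d} of standardized step lengths is assumed to come from a single group):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def lag_plot(d, k=1, grid_size=100):
    """Kernel-density plot of d_{(t,t+1)} against d_{(t-k,t+1-k)}."""
    x, y = d[:-k], d[k:]              # lagged vs. current step lengths
    kde = gaussian_kde(np.vstack([x, y]))
    g = np.linspace(0.0, d.max(), grid_size)
    gx, gy = np.meshgrid(g, g)
    z = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    plt.contourf(gx, gy, z, levels=20)
    plt.xlabel(r"$d_{(t-k,t+1-k)}$")
    plt.ylabel(r"$d_{(t,t+1)}$")
    plt.show()
\end{verbatim}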
The HMM plot will have a pattern of distinct circular droplets along the line $y=x$, a result of the autocorrelation in the behavioural states, while the CarHMM plot will have an elongated smear along the line $y=x$, due to the within-state autocorrelation in step lengths. In ideal cases, the lag-plot at lag 1 may also suggest the number of states exhibited in the data. Particularly for data with HMM-like autocorrelation, the number of distinct droplets corresponds to the number of distinct states. This becomes more complicated with more latent states and with more CarHMM-like autocorrelation. Choosing the number of states to use is a notoriously difficult problem, with traditional metrics such as AIC and BIC generally selecting too many states to be biologically meaningful. For in-depth discussion, we refer the reader to \citet{Pohle::numberStatesHMM}. A recommended starting point is to use as few states as necessary to achieve an adequate fit. Other uses of the lag-plot include comparing the autocorrelation characteristics of different time step choices. For example, we could test the intuitively attractive idea that a short time step results in step lengths with high within-state autocorrelation, while a long time step results in step lengths with low within-state autocorrelation. Further, with multi-state models the traditional autocorrelation function can do a poor job of quantifying the autocorrelation; the lag-plot includes more detail, so that the characteristics of each state can be discerned. \subsection{Residuals} \label{ssec::residuals} For model checking we follow the probability scale residual framework of \citet{Shepherd::residuals}, and in particular use one-step-ahead forecast residuals. Here, the forecast distribution for step length would be a mixture of gamma distributions with means $(1-\phi_{b}) \cdot\murl[,b] + \phi_{b} \cdot d_{(t-1,t)}$, standard deviations $\sigma_{b}$, and mixture rates given by the $b_{t-1}^{\text{th}}$ row of $\mathbf{A}$. If we have specified the model structure correctly, then these residuals should be uniformly distributed on $(-1,1)$ and exhibit no autocorrelation. Further, they should have this property within each behavioural state, though small sample size can be a problem here. Departures from a uniform distribution can be detected by looking at a quantile-quantile (Q-Q) plot of the residuals. Residual autocorrelation can be identified in plots of the autocorrelation function of these residuals, or in lag-plots such as those proposed for model selection. \section{Simulation Study} \label{sec::Simulation} To assess the performance and properties of the new CarHMM, we present four brief simulation studies. First, we investigate the effect of track length. Second, we look at the effect of within-state autocorrelation. The third study looks at the effect of the transition probabilities. The fourth and final study compares the regular HMM and the CarHMM. When simulating data we simulate both the underlying behavioural state and observed data from scratch; we do not reuse the behavioural states estimated from original data. The main metric we use to assess these simulations is the interquartile range (first and third quartiles) of the state estimate error. For a particular track, the state estimate error is the percentage of state estimates which do not agree with the true simulated state. The quartiles are then computed over e.g.~50 simulations under the same model.
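The error metric itself is simple to compute; a sketch, assuming integer arrays of simulated and decoded states per track:
\begin{verbatim}
import numpy as np

def state_estimate_error(true_states, decoded_states):
    """Proportion of state estimates disagreeing with the truth."""
    return np.mean(np.asarray(true_states) != np.asarray(decoded_states))

def error_quartiles(errors):
    """First and third quartiles over, e.g., 50 simulated tracks."""
    return np.percentile(errors, [25, 75])
\end{verbatim}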
To account for numerical instability in the maximization of the likelihood, our fitting procedure for these studies attempts to fit the model to each simulated track at most 10 times, with different random starting values each time, until the model converges. If the model does not converge successfully on any of those 10 attempts, we remove that particular track from consideration. We also remove tracks which give unreasonable parameter estimates (in particular, any model fit which gives a stationary distribution with an entry less than 0.01), or estimates which give clear signs of numerical instability (deflection angle concentration less than $10^{-3}$, or a transition probability matrix with any row having all equal entries). When using real data we can tweak exactly how we optimize the likelihood (change various control parameters, pick starting values, etc.), so this practice is not an inherent shortcoming of the model or fitting method. The Viterbi algorithm is currently the most common way to estimate behavioural states in HMM-like models. Briefly, the algorithm takes as input parameter point estimates and the observed data, and outputs the most likely sequence of behavioural states as a point estimate. With the Viterbi algorithm, the accuracy of the state estimates depends on the amount of overlap of the state-dependent distributions: if two states have significant overlap, the Viterbi algorithm will perform much worse than if the two states were distinct. At no point are standard errors of parameter estimates or uncertainty statements about the behavioural states considered. Because of this, any error in the parameter estimates directly translates to a source of error in the state estimates, with no hope of correcting for the uncertainty in the parameter estimates. Because the Viterbi algorithm only gives the most likely sequence of behavioural states with no uncertainty estimates, the estimated behavioural states must be interpreted with care. In addition to the tenuous connection between the behavioural state labels and the biology, there is the further problem that we do not know how likely the most likely behavioural state path is. Simulations, such as the ones below, can help determine how much error to expect. However, since the actual states are unobserved, it is not possible to know the actual error rate. \subsection{Effect of track length} \label{ssec::simTrackLength} \begin{figure} \centering \includegraphics[width = \textwidth]{simLengthCompare.png} \caption{The top panel gives the bias for simulated tracks of different lengths under the same parameters, for both a two- and three-state model. The important feature is that the bias for all parameters converges to zero ($\sim$500 locations), showing that the parameters can be successfully estimated given a long enough track. The bottom two panels give the five-number summary (min, median, max, and quartiles) of the state estimate error. Each track length used 50 simulations. In both the two-state model and the three-state model, the median error rate quickly stabilizes ($\sim$250 observations for the two-state model, $\sim$500 for the three-state model), but does not converge to zero.} \label{fig::simLengthCompare} \end{figure} In practice, one would hope that collecting more data (i.e.~longer animal tracks) would decrease the amount of error in both parameter estimates and state estimates.
Simulations suggest that, while error in parameter estimates will decrease with longer track lengths, there is an inherent amount of error to be expected in behavioural state estimates that cannot be overcome with increased track length. Figure \ref{fig::simLengthCompare} shows the bias and state estimate error for two different models. Here we compute bias as the median difference, across simulations, of parameter estimates from the true parameter. The two-state model is an HMM which takes (slightly modified) parameters estimated from an elk dataset analyzed in the vignette for the R package moveHMM \citep{Michelot::moveHMM}. The three-state model is a CarHMM which takes parameters estimated from a grey seal dataset (different from the one presented in Section \ref{sec::mgreySeal}). The parameters for both models can be found in Table \ref{tab::twoElkPars} and Table \ref{tab::threeSealLengthPars}, respectively, of ESM \ref{sapp::simGraphs}. Figure \ref{fig::simLengthCompare} shows that collecting more and more data for a single track is not effective past a certain point. For the remaining simulation studies we use track lengths of 1,000. We report the first and third quartiles of the state estimate error, which Figure \ref{fig::simLengthCompare} suggests will have stabilized at this track length. \subsection{Effect of autocorrelation} \label{ssec::simPhi} \begin{table} \centering \begin{tabular}{crccc} & & \multicolumn{3}{c}{State 2} \\ & & Low & Med & High \\ & Low & (0.131, 0.148) [4] & (0.111, 0.128) [2] & (0.074, 0.087) [0] \\ State 1 & Med & & (0.139, 0.161) [5] & (0.082, 0.098) [0] \\ & High & & & (0.209, 0.240) [0] \end{tabular} \caption{First and third quartiles of the state estimate error for different combinations of low, medium, and high autocorrelation. The number in square brackets gives the number of simulations which did not converge, out of 100 simulations.} \label{tab::simAutocorrelation} \end{table} The accuracy of the Viterbi algorithm depends heavily on the amount of overlap of the state-dependent distributions. Recall that the mean of the step length distribution is given by \( (1-\phi)\cdot\murl + \phi\cdot d_{(t-1,t)}. \) Consider the autocorrelation $\phi$ as a weight between $\murl$, which will depend on the state at time $t$, and $d_{(t-1,t)}$, which does not. If $\phi$ is close to one in two states with drastically different $\murl$, then the two states will overlap, since $\murl$ is essentially irrelevant in both states. Table \ref{tab::simAutocorrelation} shows the state error rate for a two-state model with different amounts of autocorrelation (each state taking either a low, medium, or high amount). The parameters are modified from the same elk example used in Section \ref{ssec::simTrackLength}. Overall we see that increasing the autocorrelation of both states leads to an increase in the amount of state estimate error, while differentiating the amount of autocorrelation between the two states leads to a decrease in the amount of error.
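Since all of these error rates are produced by the Viterbi decoder discussed at the start of this section, we include a minimal log-space sketch of the algorithm (in Python with NumPy; the emission log-densities \texttt{logp[t, j]} $= \log f(d_{(t,t+1)},\theta_{t}\,\vert\,B_{t}=j)$ would be assembled exactly as in the likelihood recursion above):
\begin{verbatim}
import numpy as np

def viterbi(log_delta, log_A, logp):
    """Most likely state sequence given the log initial distribution,
    log transition matrix, and an (n, k) array of emission log-densities."""
    n, k = logp.shape
    xi = np.zeros((n, k))                 # best log-probability per state
    back = np.zeros((n, k), dtype=int)    # backpointers
    xi[0] = log_delta + logp[0]
    for t in range(1, n):
        scores = xi[t - 1][:, None] + log_A   # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        xi[t] = scores.max(axis=0) + logp[t]
    states = np.empty(n, dtype=int)
    states[-1] = xi[-1].argmax()
    for t in range(n - 2, -1, -1):        # trace back the best path
        states[t] = back[t + 1, states[t + 1]]
    return states
\end{verbatim}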
\subsection{Effect of transition probabilities} \label{ssec::simTransition} \begin{table} \centering \begin{tabular}{crcccc} & & \multicolumn{4}{c}{State 2~~($\phi = 0.892$)} \\ & & 0.5 & 0.6 & 0.7 & 0.9 \\ & 0.5 & (0.214, 0.234) [1] & (0.204, 0.227) [0] & (0.188, 0.207) [1] & (0.125, 0.154) [2] \\ State 1 & 0.6 & (0.226, 0.240) [3] & (0.204, 0.223) [2] & (0.194, 0.210) [0] & (0.127, 0.147) [2] \\ ($\phi = 0.407$) & 0.7 & (0.224, 0.245) [2] & (0.207, 0.223) [2] & (0.196, 0.218) [2] & (0.115, 0.134) [0] \\ & 0.9 & (0.213, 0.242) [6] & (0.179, 0.226) [6] & (0.154, 0.180) [1] & (0.082, 0.104) [2] \\ \end{tabular} \caption{First and third quartiles of the state estimate error. The row and column headings give the probability of staying within the given state from one time to the next, and the number in square brackets gives the number of simulations which did not converge, out of 20 simulations. The amount of error decreases as the probability of remaining in state 2 increases; the error is not significantly affected by the probability of remaining in state 1.} \label{tab::simTransition} \end{table} Unlike in the standard HMM, the observed state-dependent distributions for the CarHMM are indirectly affected by the transition probabilities of the underlying behavioural states. States with low autocorrelation act as anchors in the step length series, while states with high autocorrelation tend to wander. The longer an animal is in a state with high autocorrelation (by having a high probability of remaining in the same state), the more we expect the step length series to wander. Figure \ref{fig::simTransitlag} in ESM \ref{sapp::simGraphs} gives observed step length distributions for a variety of different transition probabilities. Table \ref{tab::simTransition} gives the state estimate error under the same variety of transition probabilities. We see that the probability of remaining in state 2 (with high autocorrelation) affects the state estimate error, as this probability determines how free the second state is to wander. The more the high autocorrelation state drifts away from the mean of the low autocorrelation state (from left to right in the table), the less overlap there is in their distributions, which increases the accuracy of the Viterbi algorithm. The parameters can be found in Table \ref{tab::twoSealPars} of ESM \ref{sapp::simGraphs}. \subsection{Comparison of HMM and CarHMM} \label{ssec::simCompare} \begin{table} \centering \begin{tabular}{crcc} & & \multicolumn{2}{c}{Two State Model} \\ \multicolumn{2}{c}{Simulated Model} & HMM & CarHMM \\ Fitted & HMM & (0.120, 0.138) [8] & (0.434, 0.474) [0] \\ Model & CarHMM & (0.125, 0.145) [7] & (0.072, 0.083) [0] \end{tabular}% \begin{tabular}{cc} \multicolumn{2}{c}{Three State Model} \\ HMM & CarHMM \\ (0.044, 0.058) [15] & (0.373, 0.445) [30] \\ (0.047, 0.060) [3] & (0.157, 0.186) [2] \end{tabular} \caption{First and third quartiles of the state estimate error. The number in square brackets gives the number of simulations which did not converge, out of 100 simulations. When the data are simulated with no within-state autocorrelation, the HMM and the CarHMM have essentially the same error rate; however, when the data are simulated with within-state autocorrelation, the HMM performs very poorly compared to the CarHMM.} \label{tab::simHmmvsCar} \end{table} \begin{figure} \centering \includegraphics[width = 0.9\textwidth]{simLagCompare.png} \caption{Lag-plots for simulated HMM and CarHMM data.
The three states of the HMM data are clearly shown by the three droplet patterns caused by the lack of within-state autocorrelation. The CarHMM does not clearly show the number of states, but shows the characteristic smeared line of the within-state autocorrelation. One can compare these plots to lag-plots of real data to help determine an appropriate model for the data.} \label{fig::simCarHMMlag} \end{figure} To show the importance of accounting for conditional autocorrelation in the data, we simulate data under both the HMM and the CarHMM and fit both the HMM and the CarHMM to each simulation. We use parameters from two different datasets: the ``Low-High'' two-state parameters from the elk track considered in subsection \ref{ssec::simPhi}, and three-state parameters estimated from the grey seal track analyzed in Section \ref{sec::mgreySeal}. The parameters for the models can be found in Table \ref{tab::twoElkPars} of ESM \ref{sapp::simGraphs}, and Table \ref{tab::msealCarHMM3Pars} of Section \ref{sec::mgreySeal}, respectively. Figure \ref{fig::simCarHMMlag} shows example lag-plots under the HMM and the CarHMM for the three-state model. As mentioned earlier, these plots can help in model selection. Table \ref{tab::simHmmvsCar} shows the state estimation error rate for the four different scenarios. The CarHMM is just as effective as the HMM when fitted to HMM data with no conditional autocorrelation. However, the two-state HMM ($\sim 40-45\%$ error) performs only slightly better than random guessing ($50\%$ error) when fitted to CarHMM data with conditional autocorrelation. We expect this amount of error to persist across models that have at least one state with significant autocorrelation. The three-state HMM has the same problem, although it performs much better than the $\sim$66\% error expected from random guessing. These simulations raise interesting questions about the validity of previous research utilizing hidden Markov models with irregularly timed data, especially since we suspect a non-trivial amount of autocorrelation is introduced through interpolating the locations to a regular grid. However, we only mention this point here and leave the discussion for another time. \paragraph{Computation Time and Implementation} All simulations were computed on a laptop running Linux with a quad-core Intel Core i7-7500U CPU and 8GB of RAM. To compare the computation speed of the HMM with the CarHMM, we timed how long it took to fit a three-state HMM and a three-state CarHMM 100 times to the seal data in Section \ref{sec::mgreySeal}. We also timed how long it took to simulate and refit 100 simulations from each model. The HMM averaged 2.43 seconds per fit, and an additional 0.77 seconds per simulation. The CarHMM averaged 2.37 seconds per fit, and an additional 1.23 seconds per simulation. The difference in computation time between the two models is essentially negligible. Our implementation of the CarHMM uses the R package Template Model Builder \citep{Kristensen::2015TMB}, which allows for fast computation through automatic differentiation. It also has the ability to fix parameters at given values, allowing our HMM and CarHMM implementations to be identical. Our implementation and the other functional tools discussed earlier are available as an R package on the first author's GitHub page. This package also includes the data used in Section \ref{sec::mgreySeal}. \section{Best practice analysis of a male grey seal track} \label{sec::mgreySeal} \begin{figure}[h!]
\centering \includegraphics[width = \textwidth]{bestPractice.png} \caption{Map and lag-plot of the grey seal track used in the best practice case study. Grey seals are large marine predators found in the North Atlantic ocean that are commonly observed travelling hundreds of kilometres to forage. This grey seal came from the Sable Island colony of Eastern Canada.} \label{fig::bestPracticeFigure} \end{figure} In this section we demonstrate what we now consider to be basic best practice for analyzing animal movement data and reporting the subsequent results. Plots, including residual plots and state estimate maps, are given in ESM \ref{sapp::msealGraphics}. The data are a subset of a male grey seal track on the Scotian Shelf, analyzed previously in \citet{Whoriskey::SwitchingHMM}. The seal was tracked using GPS with negligible observation error. Due to some data collection issues (the median time differences abruptly change without explanation), we will look at only the final 3,158 locations, with time differences having a mean, median, and 3rd quartile of 100, 64, and 122 minutes, respectively. First, one must choose values for the time step and group cutoff. To do this, set up a grid of values where the time step ranges from 60 minutes to 120 minutes in increments of 3 minutes, and the group cutoff ranges from the time step to twice the time step in increments representing 5\% of the time step. This range of values for the time step is chosen to range approximately from the median to the 3rd quartile of the observed time differences; refer to Section \ref{ssec::preProcess} for more detail. Both metrics for choosing a good time step and group cutoff discussed in Section \ref{ssec::preProcess} selected an optimal time step of 66 minutes and group cutoff of 132 minutes. The resulting interpolated track consists of 3,129 locations in 251 groups with $n_{\text{prop}} = 0.991$ and $n_{\text{adj}} = 0.832$. The mean of the unstandardized step lengths is 2.10 kilometres per time step (1.91 km/hr). The most useful plot of the data is the lag-plot of $d_{(t,t+1)}$ vs.~$d_{(t-1,t)}$, shown as part of Figure \ref{fig::bestPracticeFigure}. This plot shows the smeared texture that is characteristic of the CarHMM. The residuals for a two-state CarHMM have autocorrelation on the border of significance. Neither a three-state nor a four-state CarHMM gives improved residuals (not shown). Neither the data nor any of the residuals showed evidence of long-term seasonality. Given no other reason to choose a specific number of states, we recommend using the smallest number of states which accurately describes the data. We also remind the reader that behavioural state labels such as ``foraging'' and ``transiting'' may not be reflective of the actual biology. \paragraph{Two state model} The parameter estimates are given in Table \ref{tab::twoSealPars} in ESM \ref{sapp::simGraphs}. State 2 is interpretable as a ``transiting'' behaviour. The autocorrelation parameter ($\phi_{2} = 0.89$) and concentration parameter $(\rho_{2} = 0.86)$ are suitably high, and the standard deviation $(\sigma_{2} = 0.244;~\sigma_{2}/\mu_{RL,2} = 0.14)$ is suitably low. A map of the state estimates also indicates a ``transiting'' behaviour. State 1 does not have as clear an interpretation. It may be tempting to label it a ``foraging'' behaviour to complement the ``transiting'' behaviour of State 2; however, the parameter estimates for State 1 do not fully support this view.
The autocorrelation parameter ($\phi_{1}= 0.41$) is not close to 0 and the concentration parameter ($\rho_{1}=0.51$) is higher than expected. Further, a map of the state estimates shows that some of the behaviour picked up by this state does not have traditional ``foraging'' characteristics. This suggests that State 1 may be picking up two distinct behaviours. We believe these behaviours may be a ``foraging'' behaviour and a ``large area search'' behaviour, although many other possibilities may exist. For this reason, we suggest using a third state to further differentiate these behaviours. \begin{table} \centering \begin{tabular}{rcccrcccrccc} \multicolumn{12}{c}{Three State CarHMM Parameter Estimates} \\ $\mathbf{d_{(t,t+1)}}$ & State 1 & State 2 & State 3 & $\bm{\theta_{t}}$ & State 1 & State 2 & State 3 & $\mathbf{A}$ & $\mathbf{p_{\cdot,1}}$& $\mathbf{p_{\cdot,2}}$ & $\mathbf{p_{\cdot,3}}$ \\ \hline $\mathbf{\bm{\mu}_{RL,b}}$ & 0.398 & 1.291 & 2.074 & $\mathbf{c}$ & -0.129 & -0.050 & 0.002 & $\mathbf{p_{1,\cdot}}$ & 0.713 & 0.287 & 0.000 \\ $\mathbf{\bm{\phi}_{b}}$ & 0.277 & 0.781 & 0.961 & $\bm{\rho}$ & ~0.402 & ~0.780 & 0.906 & $\mathbf{p_{2,\cdot}}$ & 0.149 & 0.797 & 0.054 \\ $\bm{\sigma}$ & 0.279 & 0.318 & 0.164 & & & & & $\mathbf{p_{3,\cdot}}$ & 0.000 & 0.120 & 0.880 \\ & & & & & & & & $\bm{\delta}$ & 0.264 & 0.508 & 0.228 \\ \end{tabular} \caption{Parameter estimates for a male grey seal track using the three state CarHMM.} \label{tab::msealCarHMM3Pars} \end{table} \paragraph{Three state model} The parameter estimates are given in Table \ref{tab::msealCarHMM3Pars}. State 1 is closer to a ``foraging'' behaviour than it was in the two state model, and a map of the state estimates places State 1 where we might \emph{a priori} expect ``foraging'' to take place based solely on the locations. State 3 is archetypal ``transiting'' behaviour with both $\phi_{3}$ and $\rho_{3}$ close to 1. Based on the transition probabilities, which do not allow transitions between State 1 and State 3, one would label State 2 a ``transitional'' behaviour. Based on the parameter estimates and a map of the state estimates, there is no reason to believe that a fourth state is needed. The expected residency times are: 3.48 timesteps (3 hr 50 min) for the ``foraging'' behaviour; 4.93 timesteps (5 hr 25 min) for the ``transitional'' behaviour; and 8.33 timesteps (9 hr 10 min) for the ``transiting'' behaviour. The expected activity budget gives 26.4\% of the seal's time spent ``foraging'', 22.8\% of its time spent ``transiting'', and 50.8\% of its time transitioning between the two. A simulation study of 93 convergent simulations out of 100 gave first and third quartiles of the state estimate error as $(20.5\%,~22.9\%)$. \section{Discussion} \label{sec::conclusion} We have introduced the conditionally autoregressive hidden Markov model (CarHMM) for highly accurate tracking data as an alternative to both the HMM originally developed in \citet{Morales2004::MovementRandomWalks} and the HMMM documented in \citet{Whoriskey::SwitchingHMM}. Subjective choices are often involved during data processing and model fitting. When fitting discrete-time movement models, the choice of time step often depends on the discrete behaviour of interest as well as the observation frequency \citep{Breed2012::stateSpaceTrack}.
We propose a statistic to help the user choose a time step that produces roughly the same number of interpolated locations and data points as the original tracking dataset. This could be combined with the multi-scale model of \citet{Vianey::MultiScaleHMMMovement} to study the discrete behaviour of interest. We have additionally proposed a method to deal with long periods of missing data. In some formulations of the HMM, a missing location enters the joint likelihood by including the contribution of the underlying behavioural state Markov chain ($\mathbf{A}$ in our formulation above) while removing the observation contribution for that location ($\mathbf{L}$ in our formulation) \citep{Zucchini::HMMsforTimeSeries}. We instead decided to split the track into multiple groups for compartmentalized model fitting, and offered metrics for choosing how to perform this partition. Long periods of missing data are common in marine environments. The CarHMM draws a new link between HMMs and the DCRWS model of \citet{Jonsen::2005DCRWAnimalMovement}. Within the marine context, the two most commonly sought-after behavioural states are foraging and transiting. These states are typically assumed to follow an area-restricted search pattern, whereby foraging patches are characterized by shorter step lengths occurring in diffuse directions, and are interspersed with periods of directed travel consisting of longer step lengths directed straight ahead (see e.g.~\citet{Whoriskey::SwitchingHMM}). While these states can be directly inferred from the state-dependent distributions of the HMM, the interpretation of the state estimates resulting from the DCRWS is less straightforward. Within the DCRWS, the main parameter influencing the step lengths is an autocorrelation term ($\gamma$). Usually (again see e.g.~\citet{Whoriskey::SwitchingHMM}), high $\gamma$ values are interpreted as highly persistent movement (indicative of transiting) and low $\gamma$ values as highly random movement (representing foraging). As a result, transiting and foraging are not necessarily delineated by longer and shorter step lengths. The CarHMM combines the two approaches such that we now have a clear interpretation of the step lengths but can still account for the fact that some animals will tend to move in a similar (or dissimilar) manner across time. These properties make the CarHMM a useful model for linking movement data to behavioural characteristics. \\ ~\\ {\itshape Acknowledgements} \vspace{4\baselineskip} \bibliographystyle{apalike}
\section{Introduction}\label{sec: introduction} In 2010, Futorny and Ovsienko introduced the notion of a Galois order \cite{FO10}, a class of objects consisting of pairs $(\mathscr{U},\Gamma)$, where $\mathscr{U}$ is an associative (noncommutative) $\mathbb{C}$-algebra and $\Gamma\subset\mathscr{U}$ is an integral domain. These pairs generalize the relationship between $U(\gl_n)$ and its \emph{Gelfand-Tsetlin} subalgebra $\mathbb{C}\left\langle\bigcup_{k=1}^nZ(U(\gl_k))\right\rangle$. Many important objects have been shown to be members of this collection, including: generalized Weyl algebras \cite{BavulaGWA},\cite{rosenberg_1995}, $U(\gl_n)$, shifted Yangians and finite $W$-algebras \cite{FMO10}, $U_q(\gl_n)$ \cite{FH14}, Coulomb branches \cite{Webster19}, and the spherical subalgebra of rational Cherednik algebras of imprimitive complex reflection groups $G(\ell,p,n)$ \cite{lepage2019rational}. The primary motivation for introducing these objects is to unify the study of their Gelfand-Tsetlin modules \cite{early_mazorchuk_vishnyakova_2018}, \cite{futorny_grantcharov_ramirez_2015},\cite{FUTORNY20183182},\cite{SilverthorneWebster20}. Current research in this area has focused on principal Galois orders (Definition \ref{def: principal and co-principal Galois orders}), which act naturally on the subalgebra $\Gamma$ and contain all of the examples of interest. In particular, in 2019 Webster showed that principal Galois orders can be realized as centralizer subalgebras of \emph{principal flag orders} (Definition \ref{def: principal flag order}), which are Galois orders in which the group $G$ is trivial and $\mathscr{M}$ is the semidirect product of the group and monoid from the original data (see Lemma 2.5 in \cite{Webster19}). In particular, the data is almost the same, except that $\Lambda$ is assumed to be Noetherian. These flag orders prove easier to study, as they are no longer subalgebras of group-invariant algebras. One particular class of principal flag orders is given by the standard flag orders (Definition \ref{def: standard flag order}); the standard flag order is the largest principal flag order with a given set of data, as it contains every principal flag order with that data. One of the first directions one takes after defining a new object is to describe morphisms between such objects. This is particularly important in order to study these objects in the realm of category theory. In this paper we endeavor to take those first steps toward constructing morphisms and studying related standard flag orders. We describe a sufficient condition for such maps to exist in Theorem \ref{thm: morphisms sufficient condition}. Also in Section \ref{sec: morphisms}, we prove that certain short exact sequences give rise to embeddings of standard flag orders (Theorem \ref{thm: standard order intersection}). This allows us to prove a property for standard flag orders similar to one enjoyed by differential operators on polynomial rings (Theorem \ref{thm: standard order quotient}). In Section \ref{sec: tensors}, we construct tensor products of standard flag orders and prove the existence of a chain of embeddings (Theorem \ref{thm: tensor of std orders}). We use this result to show that principal flag orders (and principal Galois orders) are closed under tensor products (Corollaries \ref{cor: principal flag orders closed under tensor} and \ref{cor: principal galois orders closed under tensor}, respectively). \subsection{Galois orders}\label{sec: Galois Orders} Galois orders were introduced in \cite{FO10}.
We will be following the set up from \cite{HARTWIG2020106806}. Let $\Lambda$ be an integrally closed domain, $G$ a finite subgroup of $\Aut(\Lambda)$, and $\mathscr{M}$ a submonoid of $\Aut(\Lambda)$, satisfying the following assumptions: \begin{align*} \hypertarget{A1}{\rm (A1)}\quad & (\mathscr{M}\mathscr{M}^{-1})\cap G=1_{\Aut(\Lambda)} & \text{(\emph{separation})}\\ \hypertarget{A2}{\rm (A2)}\quad & \forall g\in G, \forall\mu\in\mathscr{M}\colon {}^g\mu=g\circ\mu\circ g^{-1}\in\mathscr{M} & \text{(\emph{invariance})}\\ \hypertarget{A3}{\rm (A3)}\quad & \Lambda \text{ is Noetherian as a module over } \Lambda^G & \text{(\emph{finiteness})} \end{align*} Let $L=\Frac(\Lambda)$ and $\mathcal{L}=L\#\mathscr{M}$, the skew monoid ring, which is defined as the free left $L$-module on $\mathscr{M}$ with multiplication given by $a_1\mu_1\cdot a_2\mu_2=(a_1\mu_1(a_2))(\mu_1\mu_2)$ for $a_i\in L$ and $\mu_i\in\mathscr{M}$. As $G$ acts on $\Lambda$ by automorphisms, we can easily extend this action to $L$, and by {\rm (A2)}, $G$ acts on $\mathcal{L}$. So we consider the following $G$-invariant subrings: $\Gamma=\Lambda^G$, $K=L^G$, and $\mathcal{K}=\mathcal{L}^G$. A benefit of these assumptions is the following lemma. \begin{lemma}[\cite{HARTWIG2020106806}, Lemma 2.1 (ii), (iv) \& (v)]\label{lem: Hartwig big lemma} \begin{enumerate}[\rm (i)] \item $K=\Frac(\Gamma)$. \item $\Lambda$ is the integral closure of $\Gamma$ in $L$. \item $\Lambda$ is a finitely generated $\Gamma$-module and a Noetherian ring. \end{enumerate} \end{lemma} What follows are some definitions and propositions from \cite{FO10}. \begin{definition}[\cite{FO10}] A finitely generated $\Gamma$-subring $\mathscr{U}\subseteq\mathcal{K}$ is called a \emph{Galois $\Gamma$-ring} (or \emph{Galois ring with respect to $\Gamma$}) if $K\mathscr{U}=\mathscr{U}K=\mathcal{K}$. \end{definition} \begin{definition} Let $u\in\mathcal{L}$ such that $u=\sum_{\mu\in\mathscr{M}}a_\mu \mu$. The \emph{support of $u$ over $\mathscr{M}$} is the following: \[ \supp u=\bigg\{\mu\in\mathscr{M}~\Big\vert~a_\mu\neq0\text{ for }u=\sum_{\mu\in\mathscr{M}}a_\mu \mu\bigg\} \] \end{definition} \begin{proposition}[\cite{FO10}, Proposition 4.1]\label{prop: Gamma Ring Alt Conditions} Assume a $\Gamma$-ring $\mathscr{U}\subseteq\mathcal{K}$ is generated by $u_1,\ldots,u_k\in \mathscr{U}$. \begin{enumerate}[\rm (1)] \item If $\bigcup_{i=1}^k\supp u_i$ generates $\mathscr{M}$ as a monoid, then $\mathscr{U}$ is a Galois ring. \item If $L\mathscr{U}=L\#\mathscr{M}$, then $\mathscr{U}$ is a Galois ring. \end{enumerate} \end{proposition} \begin{theorem}[\cite{FO10}, Theorem 4.1 (4)]\label{thm: Center of a Galois Ring} Let $\mathscr{U}$ be a Galois $\Gamma$-ring. Then the center $Z(\mathscr{U})$ of the algebra $\mathscr{U}$ equals $\mathscr{U}\cap K^{\mathscr{M}}$, where $K^{\mathscr{M}}=\{k\in K\mid \mu(k)=k~\forall\mu\in\mathscr{M}\}$. \end{theorem} \begin{definition}[\cite{FO10}]\label{def: Galois Order defintion} A Galois $\Gamma$-ring $\mathscr{U}$ in $\mathcal{K}$ is a \emph{left} (respectively \emph{right}) \emph{Galois $\Gamma$-order in $\mathcal{K}$} if for any finite-dimensional left (respectively right) $K$-subspace $W\subseteq\mathcal{K}$, $W\cap\mathscr{U}$ is a finitely generated left (respectively right) $\Gamma$-module. A Galois $\Gamma$-ring $\mathscr{U}$ in $\mathcal{K}$ is a \emph{Galois $\Gamma$-order in $\mathcal{K}$} if $\mathscr{U}$ is a left and right Galois $\Gamma$-order in $\mathcal{K}$.
\end{definition} \begin{definition}[\cite{DFO94}]\label{def: Harish-Chandra subalg} Let $\Gamma\subset\mathscr{U}$ be a commutative subalgebra. $\Gamma$ is called a \emph{Harish-Chandra subalgebra} in $\mathscr{U}$ if for any $u\in\mathscr{U}$, $\Gamma u\Gamma$ is finitely generated both as a left and as a right $\Gamma$-module. \end{definition} Let $\mathscr{U}$ be a Galois ring and $e\in\mathscr{M}$ the unit element. We denote $\mathscr{U}_e=\mathscr{U}\cap Le$. \begin{theorem}[\cite{FO10}, Theorem 5.2]\label{thm: Galois Order condition} Assume that $\mathscr{U}$ is a Galois ring, $\Gamma$ is finitely generated and $\mathscr{M}$ is a group. \begin{enumerate}[\rm (1)] \item Let $m\in\mathscr{M}$. Assume $m^{-1}(\Gamma)\subseteq\Lambda$ (respectively $m(\Gamma)\subseteq\Lambda$). Then $\mathscr{U}$ is a right (respectively left) Galois order if and only if $\mathscr{U}_e$ is an integral extension of $\Gamma$. \item Assume that $\Gamma$ is a Harish-Chandra subalgebra in $\mathscr{U}$. Then $\mathscr{U}$ is a Galois order if and only if $\mathscr{U}_e$ is an integral extension of $\Gamma$. \end{enumerate} \end{theorem} The following are some useful results from \cite{HARTWIG2020106806}. \begin{proposition}[\cite{HARTWIG2020106806}, Proposition 2.14]\label{prop: Gamma maxl comm in a Galois Order} $\Gamma$ is maximal commutative in any left or right Galois $\Gamma$-order $\mathscr{U}$ in $\mathcal{K}$. \end{proposition} \begin{lemma}[\cite{HARTWIG2020106806}, Lemma 2.16]\label{lem: Order Containment Implication} Let $\mathscr{U}_1$ and $\mathscr{U}_2$ be two Galois $\Gamma$-rings in $\mathcal{K}$ such that $\mathscr{U}_1\subseteq\mathscr{U}_2$. If $\mathscr{U}_2$ is a Galois $\Gamma$-order, then so too is $\mathscr{U}_1$. \end{lemma} It is common to write elements of $L$ on the right side of elements of $\mathscr{M}$. \begin{definition} For $X=\sum_{\mu\in\mathscr{M}}\mu\alpha_\mu\in\mathcal{L}$ and $a\in L$, define the \emph{evaluation of $X$ at $a$} to be \[ X(a)=\sum_{\mu\in\mathscr{M}}\mu(\alpha_\mu\cdot a)\in L. \] Similarly, the \emph{co-evaluation} is defined by \[ X^\dagger(a)=\sum_{\mu\in\mathscr{M}}\alpha_\mu\cdot(\mu^{-1}(a))\in L. \] \end{definition} The following object was independently defined in \cite{Vishnyakova17}, where it is called the \emph{universal ring}. \begin{definition} The \emph{standard Galois $\Gamma$-order} is as follows: \[ \mathcal{K}_\Gamma:=\{X\in\mathcal{K}\mid X(\gamma)\in\Gamma~\forall\gamma\in\Gamma\}. \] Similarly we define the \emph{co-standard Galois $\Gamma$-order} by \[ {}_\Gamma{\mathcal{K}}:=\{X\in\mathcal{K}\mid X^\dagger(\gamma)\in\Gamma ~\forall\gamma\in\Gamma\}. \] \end{definition} \begin{definition}\label{def: principal and co-principal Galois orders} Let $\mathscr{U}$ be a Galois $\Gamma$-ring in $\mathcal{K}$. If $\mathscr{U}\subseteq\mathcal{K}_\Gamma$ (resp. $\mathscr{U}\subseteq{}_\Gamma{\mathcal{K}}$), then $\mathscr{U}$ is called a \emph{principal} (resp. \emph{co-principal}) \emph{Galois $\Gamma$-order}. \end{definition} In \cite{HARTWIG2020106806} it was shown that any (co-)principal Galois $\Gamma$-order is a Galois order in the sense of Definition \ref{def: Galois Order defintion}. \subsection{Flag orders} In the notation of flag orders the group $G$ is denoted by $W$ instead. Additionally, Hartwig's \hyperlink{A3}{\rm (A3)} is replaced by the assumption that $\Lambda$ is Noetherian (though as Lemma \ref{lem: Hartwig big lemma} shows this follows from Hartwig's setup). 
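Before stating the definitions, we illustrate the evaluation and co-evaluation actions from the previous subsection with a small example (our own illustration, with an arbitrary choice of data): let $\Lambda=\mathbb{C}[x]$ and let $\mu\in\Aut(\Lambda)$ be the shift automorphism $\mu(f)(x)=f(x+1)$. For $X=\mu\alpha$ with $\alpha\in L=\mathbb{C}(x)$ and any $a\in L$, the definitions give \[ X(a)=\mu(\alpha\cdot a)=\alpha(x+1)\,a(x+1), \qquad X^\dagger(a)=\alpha\cdot\mu^{-1}(a)=\alpha(x)\,a(x-1). \] In particular, $X(a)\in\Lambda$ for every $a\in\Lambda$ if and only if $\alpha\in\Lambda$; conditions of exactly this kind define the flag orders considered below. 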
\begin{definition}\label{def: principal flag order} A \emph{principal flag order} with data $(\Lambda,W,\mathscr{M})$ is a subalgebra $F\subset\Frac(\Lambda)\#(W\ltimes\mathscr{M})$ such that: \begin{enumerate}[\rm (i)] \item $\Lambda\#W\subset F$, \item $\Frac(\Lambda)F=\Frac(\Lambda)\#(W\ltimes\mathscr{M})$, \item For every $X\in F$, $X(\Lambda)\subset\Lambda$.\label{item: principal flag order property} \end{enumerate} \end{definition} \begin{definition}\label{def: standard flag order} The \emph{standard flag order} with data $(\Lambda,W,\mathscr{M})$ is the subalgebra of all elements $X\in\Frac(\Lambda)\#(W\ltimes\mathscr{M})$ satisfying (\ref{item: principal flag order property}) and is denoted $\mathcal{F}_\Lambda$. \end{definition} \begin{example} Let $\Lambda=\mathbb{C}[x_1,x_2,\ldots,x_n]$, $W\leq GL(\mathbb{C}^n)$ a complex reflection group (e.g. $W=S_n$), and $\mathscr{M}=\mathbb{Z}^n$. Then $\mathcal{F}_\Lambda$ is the degenerate double affine nilHecke algebra associated to $W$ \cite{KumarBook}. \end{example} Recall the definition of a standard flag order (see Definition \ref{def: standard flag order}). Let $(\Lambda,W,\mathscr{M})$ be our data, $\mathcal{F}=\Frac(\Lambda)\#(W\ltimes\mathscr{M})$, and $\mathcal{F}_\Lambda$ be the corresponding standard flag order. In this section we study morphisms between standard flag orders. One motivation for this is future applications to representation theory, via restriction/induction functors. \begin{notation} For simplicity, $W\ltimes\mathscr{M}$ is written as $\hat{W}$. \end{notation} \begin{example}\label{ex: standard order is nilhecke algebra} If $\Lambda=\mathbb{C}[x_1,x_2,\ldots,x_n]$ and $\hat{W}$ is a finite complex reflection group acting on $\mathbb{C}^n$, then $\mathcal{F}_\Lambda$ is the nilHecke algebra of $\hat{W}$ (see \cite{Webster19}). \end{example} \section{Morphisms}\label{sec: morphisms} \subsection{A sufficient condition} Let $(\Lambda_1,W_1,\mathscr{M}_1)$, $(\Lambda_2,W_2,\mathscr{M}_2)$ be two sets of flag order data, $L_i$ the field of fractions of $\Lambda_i$ for $i=1,2$, and $\mathcal{F}_{\Lambda_i}$ the corresponding standard flag orders. Recall in particular that $\hat{W}_i=W_i\ltimes\mathscr{M}_i$ acts faithfully on $\Lambda_i$. \begin{theorem}\label{thm: morphisms sufficient condition} Let $\varphi:\Lambda_1\to\Lambda_2$ be a ring homomorphism and $\psi:\hat{W}_1\to \hat{W}_2$ be a group homomorphism such that \begin{equation}\label{eq:phipsi-condition} \varphi\big(w(a)\big)=\psi(w)\big(\varphi(a)\big),\qquad \forall a\in\Lambda_1,\forall w\in \hat{W}_1. \end{equation} \begin{enumerate}[{\rm (i)}] \item\label{item: sufficient condition algebra homomorphism} There is an algebra homomorphism \begin{equation} \Phi: L_1\# \hat{W}_1\to L_2\# \hat{W}_2 \end{equation} given by \begin{equation} \Phi(fw)=\varphi(f)\psi(w),\qquad f\in L_1, w\in \hat{W}_1. \end{equation} \item\label{item: sufficient condition map restricts to standard orders} Suppose there is a subspace $U$ of $\Lambda_2$ such that $\Lambda_2\cong \varphi(\Lambda_1)\otimes U$ as $\psi(\hat{W}_1)$-modules, where $\psi(\hat{W}_1)$ acts on $\varphi(a)\otimes u$ by \[ \psi(w)\big(\varphi(a)\otimes u\big) = \psi(w)\big(\varphi(a)\big)\otimes u = \varphi(w(a))\otimes u. 
\] Then $\Phi$ restricts to an algebra homomorphism \begin{equation} \Phi: \mathcal{F}_{\Lambda_1} \to \mathcal{F}_{\Lambda_2}. \end{equation} \end{enumerate} \end{theorem} \begin{proof} \ref{item: sufficient condition algebra homomorphism} $L_i\# \hat{W}_i= L_i\otimes_{\K} \hat{W}_i$ as $(L_i,\hat{W}_i)$-bimodules, so it suffices to show that $\Phi$ preserves the relation $wf=w(f)w$ for all $w\in \hat{W}_1, f\in L_1$. This relation is preserved if and only if $\psi(w)\varphi(a)=\varphi(w(a))\psi(w)$ for all $w\in \hat{W}_1$ and $a\in \Lambda_1$. The left-hand side equals $\psi(w)\big(\varphi(a)\big) \psi(w)$, so the identity is equivalent to \eqref{eq:phipsi-condition}. \ref{item: sufficient condition map restricts to standard orders} Let $X=\sum_{w\in \hat{W}_1}f_w w \in \mathcal{F}_{\Lambda_1}$. By assumption any element of $\Lambda_2$ is a sum of elements of the form $b=\varphi(a)\otimes u$, where $a\in\Lambda_1$ and $u\in U$. We have \[ \Phi(X)(b) = \sum_{w\in \hat{W}_1} \varphi(f_w)\psi(w) \big( \varphi(a)\otimes u\big). \] By the assumption on how $\psi(\hat{W}_1)$ acts on such tensors, this equals \[ \sum_{w\in \hat{W}_1} \varphi(f_w) \big(\varphi(w(a))\otimes u\big)= \varphi\big(\sum_{w\in \hat{W}_1} f_w w(a)\big)\otimes u \in \varphi(\Lambda_1)\otimes U = \Lambda_2. \] Thus $\Phi(X)\in\mathcal{F}_{\Lambda_2}$. \end{proof} \begin{example}\label{ex: Klein 4->S4} Consider two sets of flag order data. The first is $(\mathbb{C}[x,y],V,\mathbbm{1})$, where $V=\langle \tau_x,\tau_y\rangle$ is the Klein four-group acting by $\tau_x\colon f(x,y)\mapsto f(-x,y)$ and $\tau_y\colon f(x,y)\mapsto f(x,-y)$. The second is $(\mathbb{C}[x_1,x_2,x_3,x_4],S_4,\mathbbm{1})$, where $S_4$ is the symmetric group on four elements acting on $\mathbb{C}[x_1,x_2,x_3,x_4]$ by permutation of variables. Our two homomorphisms are: \begin{align*} \varphi&\colon\mathbb{C}[x,y]\rightarrow\mathbb{C}[x_1,x_2,x_3,x_4] & \text{by }& x\mapsto x_2-x_1\text{ and }y\mapsto x_4-x_3,\\ \psi&\colon V\rightarrow S_4 & \text{by } & \tau_x\mapsto (12) \text{ and }\tau_y\mapsto (34). \end{align*} Together they satisfy equation \eqref{eq:phipsi-condition}. The subspace required for part \ref{item: sufficient condition map restricts to standard orders} is $U=\mathbb{C}[x_1+x_2,x_3+x_4]$. Now by Theorem \ref{thm: morphisms sufficient condition} there is an algebra homomorphism \[ \Phi\colon\mathcal{F}_{\mathbb{C}[x,y]}=\mathbb{C}[x,y]\langle\frac{1}{x}(\tau_x-\mathbbm{1}),\frac{1}{y}(\tau_y-\mathbbm{1})\rangle\rightarrow \mathcal{F}_{\mathbb{C}[x_1,x_2,x_3,x_4]}=\mathbb{C}[x_1,x_2,x_3,x_4]\langle\frac{1}{x_{i+1}-x_i}((i,i+1)-\mathbbm{1})\mid i\in\{1,2,3\}\rangle \] that sends \[ \frac{1}{x}(\tau_x-\mathbbm{1})\mapsto\frac{1}{x_2-x_1}((12)-\mathbbm{1})\quad\text{and}\quad\frac{1}{y}(\tau_y-\mathbbm{1})\mapsto\frac{1}{x_4-x_3}((34)-\mathbbm{1}). \] \end{example} \begin{example} Let $\Lambda=\mathbb{C}[x_1,x_2,\ldots,x_n]$, and let $\mathcal{F}_\Lambda$ be the standard flag order corresponding to $(\Lambda,S_n,\mathbbm{1})$ and $\widetilde{\mathcal{F}_\Lambda}$ be the standard flag order corresponding to $(\Lambda,A_n,\mathbbm{1})$, where $S_n$ and $A_n$ are the symmetric group and alternating group respectively. In this situation $\varphi$ is the identity map, and $\psi\colon A_n\rightarrow S_n$ is the natural embedding. This gives us $\Phi\colon\widetilde{\mathcal{F}_\Lambda}\rightarrow\mathcal{F}_\Lambda$. Recall that $\mathcal{F}_\Lambda$ is the nilHecke algebra of $S_n$ (see Example \ref{ex: standard order is nilhecke algebra} above and \cite{Webster19}). 
Thus we define $\widetilde{\mathcal{F}_\Lambda}$ to be the nilHecke algebra of $A_n$. \end{example} \begin{example}\label{ex: U(gln) std flag order maps} Define $\Lambda_n := \mathbb{C}[x_{ji}\mid 1\leq i\leq j\leq n]$, $\mathbb{S}_n=S_1\times S_2\times\cdots\times S_n$ where $S_j$ is the symmetric group on $j$ elements acting by permutation of the variables, and $\mathscr{M}_n:=\mathbb{Z}^{n(n-1)/2}=\langle\delta^{ji}\mid 1\leq i\leq j\leq n-1\rangle$ written multiplicatively with the following action: \[ \delta^{ji}(x_{k\ell})=x_{k\ell}-\delta_{jk}\delta_{i\ell}. \] All of these come from the Galois order realization of $U(\gl_n)$ from \cite{FO10}. Let $\varphi_n\colon\Lambda_n\rightarrow\Lambda_{n+1}$ and $\psi_n\colon\mathbb{S}_n\rightarrow\mathbb{S}_{n+1}$ be the standard embeddings and observe that $\Lambda_{n+1}=\varphi_n(\Lambda_{n})\otimes\mathbb{C}[x_{n+1,i}\mid 1\leq i\leq n+1]$. All of the conditions for Theorem \ref{thm: morphisms sufficient condition} are met, so we have a map $\Phi_n\colon\mathcal{F}_{\Lambda_n}\rightarrow\mathcal{F}_{\Lambda_{n+1}}$, where $\mathcal{F}_{\Lambda_n}$ is the standard flag order with data $(\Lambda_n,\mathbb{S}_n,\mathscr{M}_n)$. Moreover, we obtain a chain of maps \[ \mathcal{F}_{\Lambda_1}\xrightarrow{\Phi_1}\mathcal{F}_{\Lambda_2}\xrightarrow{\Phi_2}\mathcal{F}_{\Lambda_3} \xrightarrow{\Phi_3}\cdots \] By Lemma 2.3 in \cite{Webster19}, the standard Galois order $\mathcal{K}_{\Gamma_n}$ is isomorphic to the centralizer subalgebra $e_n\mathcal{F}_{\Lambda_n}e_n$, where $e_n=\frac{1}{\#\mathbb{S}_n}\sum_{\sigma\in\mathbb{S}_n}\sigma$ is the symmetrizing idempotent. It was shown in \cite{HARTWIG2020106806} that $U(\gl_n)$ is a principal Galois order. Thus we have the following commuting diagram. \[ \begin{tikzpicture} \node(F1) {$\mathcal{F}_{\Lambda_1}$}; \node(F2) [right=0.75cm of F1] {$\mathcal{F}_{\Lambda_2}$}; \node(F3) [right=0.75cm of F2] {$\mathcal{F}_{\Lambda_3}$}; \node(F4) [right=0.75cm of F3] {$\cdots$}; \node(K1) [below=0.75cm of F1] {$\mathcal{K}_{\Gamma_1}$}; \node(K2) [right=0.75cm of K1] {$\mathcal{K}_{\Gamma_2}$}; \node(K3) [right=0.75cm of K2] {$\mathcal{K}_{\Gamma_3}$}; \node(K4) [right=0.75cm of K3] {$\cdots$}; \node(U1) [below=0.75cm of K1] {$U(\gl_1)$}; \node(U2) [below=0.75cm of K2] {$U(\gl_2)$}; \node(U3) [below=0.75cm of K3] {$U(\gl_3)$}; \node(U4) [below=0.95cm of K4] {$\cdots$}; \foreach \i in {1,...,4} { \draw[>=stealth,->,black] (K\i) -- (F\i); \draw[>=stealth,->,black] (U\i) -- (K\i); } \foreach \i/\j in {1/2,2/3,3/4} { \draw[>=stealth,->,black] (F\i) -- (F\j) node[midway,above] (P\i) {$\Phi_\i$}; } \foreach \x in {U,K} { \draw[>=stealth,->,black] (\x1) -- (\x2); \draw[>=stealth,->,black] (\x2) -- (\x3); \draw[>=stealth,->,black] (\x3) -- (\x4); } \end{tikzpicture} \] \end{example} The existence of the maps between the standard Galois orders in the previous example is a special case of the following: \begin{corollary}\label{cor: std Galois order morphism} Assume the setting of Theorem \ref{thm: morphisms sufficient condition}, and additionally that $\hat{W}_2=\psi(\hat{W}_1)\times\hat{H}$ with $(\psi(w),h)\in\hat{W}_2$ acting on $\varphi(a)\otimes u\in\Lambda_2=\varphi(\Lambda_1)\otimes U$ by \begin{equation}\label{eq: direct product action on tensor} (\psi(w),h)(\varphi(a)\otimes u)=\varphi(w(a))\otimes h(u). \end{equation} Then the map $\Phi\colon\mathcal{F}_{\Lambda_1}\rightarrow\mathcal{F}_{\Lambda_2}$ restricts to a map of their centralizer subalgebras $\Phi\colon\mathcal{K}_{\Gamma_1}\rightarrow\mathcal{K}_{\Gamma_2}$. \end{corollary} 
\begin{proof} By assumption, $\hat{W}_2=\psi(\hat{W}_1)\times\hat{H}$. As such, $\#W_2 = \#W_1\cdot\#H$ and \[ e_2=\frac{1}{\#W_2}\sum_{h\in\hat{H}}\sum_{w\in W_1}(\psi(w),h)=\frac{\#W_1}{\#W_2}\sum_{h\in\hat{H}}(\psi(e_1),h) =\frac{1}{\#H}\sum_{h\in\hat{H}}(\psi(e_1),h), \] where \[ (\psi(e_1),h)=\frac{1}{\#W_1}\sum_{w\in W_1}(\psi(w),h)\quad\text{for }h\in\hat{H}. \] By Theorem \ref{thm: morphisms sufficient condition} and Lemma 2.3 from \cite{Webster19}, we have \[ \mathcal{K}_{\Gamma_2}\cong e_2\mathcal{F}_{\Lambda_2}e_2\supseteq e_2\mathcal{F}_{\Lambda_1}e_2. \] We claim that $e_2\mathcal{F}_{\Lambda_1}e_2\cong\mathcal{K}_{\Gamma_1}$. This is clear from the expression $e_2=\frac{1}{\#H}\sum_{h\in\hat{H}}(\psi(e_1),h)$ obtained at the beginning of this proof, together with the action of $(\psi(e_1),h)$ required by (\ref{eq: direct product action on tensor}). \end{proof} \subsection{Split short exact sequences} We show that certain split short exact sequences \[ 0\rightarrow I\rightarrow\Lambda_2\rightarrow\Lambda_1\rightarrow0 \] give rise to embeddings of standard flag orders. \begin{theorem}\label{thm: standard order intersection} Let $(\Lambda_1,W_1,\mathscr{M}_1)$ and $(\Lambda_2,W_2,\mathscr{M}_2)$ be flag order data and $\mathcal{F}_{\Lambda_1},\mathcal{F}_{\Lambda_2}$ be the corresponding standard flag orders such that the following hold: \begin{itemize} \item $\Lambda_2=\Lambda_1\oplus I$, where $I$ is an ideal of $\Lambda_2$, \item there are embeddings $W_1\rightarrow W_2$ and $\mathscr{M}_1\rightarrow\mathscr{M}_2$ inducing an embedding $\hat{W}_1\rightarrow\hat{W}_2$ that satisfies condition \eqref{eq:phipsi-condition} together with the natural embedding $\Lambda_1\rightarrow\Lambda_2$, \item for every $w\in\hat{W}_1$ and $a\in I$, $w(a)=a$. \end{itemize} Then $\mathcal{F}_{\Lambda_2}\cap\mathcal{F}_1=\mathcal{F}_{\Lambda_1}$, where $\mathcal{F}_1=\Frac(\Lambda_1)\#\hat{W}_1$. In particular, $\mathcal{F}_{\Lambda_1}\hookrightarrow\mathcal{F}_{\Lambda_2}$. \end{theorem} \begin{proof} The first two assumptions give an embedding \[ \mathcal{F}_1=\Frac(\Lambda_1)\#\hat{W}_1\rightarrow\Frac(\Lambda_2)\#\hat{W}_2=\mathcal{F}_2, \] so the intersection above makes sense. \noindent$\subset\colon$ Let $X\in\mathcal{F}_{\Lambda_2}\cap\mathcal{F}_1$. First, $X(\Lambda_1)\subset\Lambda_2$ as $\Lambda_1\subset\Lambda_2$ and $X\in\mathcal{F}_{\Lambda_2}$. Second, $X(\Lambda_1)\subset\Frac(\Lambda_1)$. Hence, \[ X(\Lambda_1)\subset\Lambda_2\cap\Frac(\Lambda_1)=(\Lambda_1\oplus I)\cap\Frac(\Lambda_1)=\Lambda_1\oplus(I\cap\Frac(\Lambda_1)). \] We claim that $\Frac(\Lambda_1)\cap I=0$: if $0\neq x\in I\cap\Frac(\Lambda_1)$, write $x=p/q$ with $p,q\in\Lambda_1$ and $q\neq0$; then $p=qx\in I\cap\Lambda_1=0$, since $I$ is an ideal of $\Lambda_2$ and $q\in\Lambda_1\subset\Lambda_2$, a contradiction. Thus $X(\Lambda_1)\subset\Lambda_1$, i.e. $X\in\mathcal{F}_{\Lambda_1}$. \noindent$\supset\colon$ Let $X\in\mathcal{F}_{\Lambda_1}$. It is clear that $X\in\mathcal{F}_1\subset\mathcal{F}_2$. We need to show that $X(\Lambda_2)\subset\Lambda_2$. Recall that $\Lambda_2=\Lambda_1\oplus I$ and that $X(a+b)=X(a)+X(b)$. By assumption, $X(\Lambda_1)\subset\Lambda_1\subset\Lambda_2$, so all that remains is to show $X(I)\subset\Lambda_2$. By the third assumption, for any $a\in I$ we have $X(a)=a\cdot X(1)$: writing $X=\sum_{w}f_w w$, each $w\in\hat{W}_1$ fixes $a$, so $X(a)=\big(\sum_w f_w\big)a=X(1)\,a$. Now $X(1)\in\Lambda_1$, so $X(a)\in a\Lambda_1\subset I\subset\Lambda_2$. Hence, $X\in\mathcal{F}_{\Lambda_2}\cap\mathcal{F}_1$. \end{proof} We now apply the above to prove a result inspired by differential operators on affine varieties. 
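For context, we recall the classical statement that motivates what follows (a standard fact about rings of differential operators on affine varieties, included here only for comparison): if $A$ is a polynomial ring over $\mathbb{C}$ and $I\subset A$ is an ideal, then \[ \mathcal{D}(A/I)\cong\{\theta\in\mathcal{D}(A)\mid\theta(I)\subseteq I\}/\{\theta\in\mathcal{D}(A)\mid\theta(A)\subseteq I\}, \] where $\mathcal{D}(-)$ denotes the ring of differential operators. Theorem \ref{thm: standard order quotient} below yields an embedding in place of this isomorphism, and Example \ref{example: counter example to quotient isomorphism} shows that surjectivity can genuinely fail for standard flag orders. 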
\begin{definition} Given an ideal $I\subset\Lambda$, we define \[ \mathcal{F}_\Lambda[I]=\{X\in\mathcal{F}_\Lambda\mid X(I)\subset I\}, \] the \emph{subring of $\mathcal{F}_\Lambda$ that preserves $I$}. \end{definition} \begin{definition} Given an ideal $I\subset\Lambda$, we define \[ I\mathcal{F}_\Lambda=\{X\in\mathcal{F}_\Lambda\mid X(\Lambda)\subset I\}, \] the \emph{subring of $\mathcal{F}_\Lambda$ sending $\Lambda$ to $I$}. In fact, $I\mathcal{F}_\Lambda$ is an ideal of $\mathcal{F}_\Lambda[I]$. \end{definition} To see that $I\mathcal{F}_\Lambda$ is an ideal, let $X\in\mathcal{F}_\Lambda[I]$ and $Y\in I\mathcal{F}_\Lambda$. Then for any $a\in\Lambda$, \[ XY(a)=X(Y(a))\in X(I)\subset I, \] so $XY\in I\mathcal{F}_\Lambda$. Similarly, $YX\in I\mathcal{F}_\Lambda$, since \[ YX(a)=Y(X(a))\in Y(\Lambda)\subset I. \] \begin{remark} We observe that $I\cdot\mathcal{F}_\Lambda\subset I\mathcal{F}_\Lambda$, since the action of $\Lambda$ on itself is by multiplication; in general the containment is strict. \end{remark} \begin{lemma}\label{lem: quotient injects into End} The map $\mathcal{F}_{\Lambda_2}[I]/I\mathcal{F}_{\Lambda_2}\rightarrow\End(\Lambda_1)$ is injective. \end{lemma} \begin{proof} First observe that there is a map $\mathcal{F}_{\Lambda_2}[I]\rightarrow\End(\Lambda_1)$ sending $X\mapsto(a+I\mapsto X(a)+I)$, where we identify $\Lambda_1\cong\Lambda_2/I$. We claim that the kernel of this map is $K=I\mathcal{F}_{\Lambda_2}$. It is clear that $K\supset I\mathcal{F}_{\Lambda_2}$. Conversely, if $X\in K$ then $X(a+I)=I$, that is, $X(a)\in I$ for all $a\in\Lambda_1$; since $\Lambda_2=\Lambda_1\oplus I$ and $X(I)\subset I$, it follows that $X(\Lambda_2)\subset I$, i.e. $X\in I\mathcal{F}_{\Lambda_2}$. Hence the induced map on the quotient is injective. \end{proof} \begin{theorem}\label{thm: standard order quotient} Following the same assumptions as in Theorem \ref{thm: standard order intersection}, we have an embedding $\eta\colon\mathcal{F}_{\Lambda_1} \hookrightarrow\mathcal{F}_{\Lambda_2}[I]/I\mathcal{F}_{\Lambda_2}$. \end{theorem} \begin{proof} In the proof of Theorem \ref{thm: standard order intersection} it was shown that $\mathcal{F}_{\Lambda_1}\hookrightarrow\mathcal{F}_{\Lambda_2}[I]$, and it is known that $\mathcal{F}_{\Lambda_1}\hookrightarrow\End(\Lambda_1)$. This gives rise to the following diagram: \[ \begin{tikzpicture} \node(E) {$\End(\Lambda_1)$}; \node(T) [above=0.75cm of E] {$\mathcal{F}_{\Lambda_2}[I]$}; \node(Q) [left=1cm of E] {$\mathcal{F}_{\Lambda_2}[I]/I\mathcal{F}_{\Lambda_2}$}; \node(S) [right=1cm of E] {$\mathcal{F}_{\Lambda_1}$}; \draw[>=stealth, right hook->,black] (Q) -- (E); \draw[>=stealth, <-left hook,black] (E) -- (S); \draw[>=stealth, ->,black] (T) -- (E); \draw[>=stealth, <-left hook,black] (T)--(S); \draw[>=stealth, ->>,black] (T) -- (Q); \end{tikzpicture} \] The left triangle arises from Lemma \ref{lem: quotient injects into End} and clearly commutes. The right triangle commutes because for all $a\in\Lambda_1$, $X(a)=X(a+I)$ by definition. Thus the whole diagram commutes, and $\mathcal{F}_{\Lambda_1} \hookrightarrow\mathcal{F}_{\Lambda_2}[I]/I\mathcal{F}_{\Lambda_2}$. \end{proof} \begin{example} We can apply this result to the set-up of Example \ref{ex: Klein 4->S4}: in this setting $\mathbb{C}[x_1,x_2,x_3,x_4]=\mathbb{C}[x_2-x_1,x_4-x_3]\oplus(x_2+x_1,x_4+x_3)$, and our choice of maps in Example \ref{ex: Klein 4->S4} satisfies the other two requirements. Therefore, Theorem \ref{thm: standard order quotient} applies. 
We describe the components of the RHS for the benefit of the reader: \[ \mathcal{F}_{\mathbb{C}[x_1,x_2,x_3,x_4]}[(x_2+x_1,x_4+x_3)] =\mathbb{C}[x_1,x_2,x_3,x_4]\left\langle\frac{1}{x_2-x_1}((12)-\mathbbm{1}),\frac{1}{x_4-x_3}((34)-\mathbbm{1})\right\rangle \] and \[ (x_2+x_1,x_4+x_3)\mathcal{F}_{\mathbb{C}[x_1,x_2,x_3,x_4]}=(x_2+x_1,x_4+x_3)\cdot\mathcal{F}_{\mathbb{C}[x_1,x_2,x_3,x_4]}. \] \end{example} While the map $\eta$ in Theorem \ref{thm: standard order quotient} is surjective in our previous example, this is not true in general, unlike the situation for differential operators on polynomial rings; it can fail even if $\Lambda_2$ is a polynomial ring and $\hat{W}_2$ is a complex reflection group. The following example demonstrates this. \begin{example}\label{example: counter example to quotient isomorphism} Let $\Lambda_2=\mathbb{C}[x_1,x_2,x_3]$, $\Lambda_1=\mathbb{C}[x_1]$, $I=(x_2,x_3)$, $\hat{W}_2=S_3$ acting by permutation of variables, and $\hat{W}_1$ trivial. In this case $\mathcal{F}_{\Lambda_1}\subsetneq\mathcal{F}_{\Lambda_2}[I]/I\mathcal{F}_{\Lambda_2}$, as the class of the permutation $(23)$ lies in the right-hand side but is not in the image of $\eta$, since $\hat{W}_1$ is trivial. \end{example} \section{Tensor Products}\label{sec: tensors} Let $(\Lambda_i,W_i,\mathscr{M}_i)$ for $i=1,2$ be the data for standard flag orders $\mathcal{F}_{\Lambda_i}\subset\mathcal{F}_i=\Frac(\Lambda_i)\#\hat{W}_i$, where $\hat{W}_i=W_i\ltimes\mathscr{M}_i$. Let $\Lambda=\Lambda_1\otimes\Lambda_2$, $\mathscr{M}=\mathscr{M}_1\times\mathscr{M}_2$, $W=W_1\times W_2$, and $\mathcal{F}=\Frac(\Lambda)\#\hat{W}$, where $\hat{W}=W\ltimes\mathscr{M}=\hat{W}_1\times\hat{W}_2$. The following is a generalization of Lemma 2.17 (ii) from \cite{HARTWIG2020106806}. \begin{lemma}\label{lem: nonzero determinant} Given a collection of elements $\{X_i\}_{i=1}^n\subset\mathcal{F}$ that are linearly independent over $\Frac(\Lambda)$, there exist $\{a_i\}_{i=1}^n\subset\Lambda$ such that \[ \det\bigg(\big(X_i(a_j)\big)_{i,j=1}^n\bigg)\neq0. \] \end{lemma} \begin{proof} Identical to the proof of Lemma 2.17 (ii) in \cite{HARTWIG2020106806}. \end{proof} \begin{lemma}\label{lem: aj are simple tensors} When applying Lemma 2.17 (ii) from \cite{HARTWIG2020106806} to $A=\Lambda$, $F=\Frac(\Lambda)$, and $\sigma_1,\ldots,\sigma_n\in W\ltimes\mathscr{M}$, the elements $(a_1,a_2,\ldots,a_n)\in\Lambda^n$ can be chosen such that each $a_j$ is a simple tensor. \end{lemma} \begin{proof} We use induction on $n$. For $n=1$: since $\sigma_1\in\hat{W}$ acts as an automorphism of $\Lambda$, it is nonzero on the simple tensor $1\otimes1$. For $n>1$, assume we have simple tensors $(a_1,a_2,\ldots,a_{n-1})\in\Lambda^{n-1}$ such that $(\sigma_j(a_i))_{i,j=1}^{n-1}$ has nonzero determinant. By part (i) of Lemma 2.17 from \cite{HARTWIG2020106806}, there exists $a_n\in\Lambda$ such that \[ \Big(\sigma_n-\sum_{i=1}^{n-1}x_i\sigma_i\Big)(a_n)\neq0. \] We claim that $a_n$ can be chosen to be a simple tensor. Suppose, for the sake of argument, that $\sigma_n-\sum_{i=1}^{n-1}x_i\sigma_i$ is zero on every simple tensor. Writing $a_n=\sum_{j=1}^k a_j^{(1)}\otimes a_j^{(2)}$ as a sum of simple tensors, with $a_j^{(i)}\in\Lambda_i$, we obtain \[ 0\neq\Big(\sigma_n-\sum_{i=1}^{n-1}x_i\sigma_i\Big)(a_n)=\sum_{j=1}^k\Big(\sigma_n-\sum_{i=1}^{n-1}x_i\sigma_i\Big)(a_j^{(1)}\otimes a_j^{(2)})=0, \] which is a contradiction. 
\end{proof} \begin{notation} Below, if $A$ is an algebra acting on a vector space $V$ and $W\subset V$ is a subspace, then we put $A_W=\{a\in A\mid aW\subset W\}$. \end{notation} Recall that the standard Galois order $\mathcal{K}_\Gamma$ can be regarded as a spherical subalgebra of $\mathcal{F}_\Lambda$, as $\mathcal{K}_\Gamma\cong e\mathcal{F}_\Lambda e$, where $e=\frac{1}{\#W}\sum_{w\in W}w$ \cite{Webster19}. \begin{theorem}\label{thm: tensor of std orders} \text{} \begin{enumerate}[\rm (a)] \item There is a chain of embeddings \[ \mathcal{F}_{\Lambda_1}\otimes\mathcal{F}_{\Lambda_2} \hookrightarrow\mathcal{F}_\Lambda\hookrightarrow (\mathcal{F}_1\otimes\mathcal{F}_2)_\Lambda. \] \item There is a chain of embeddings \[ \mathcal{K}_{\Gamma_1}\otimes\mathcal{K}_{\Gamma_2} \hookrightarrow\mathcal{K}_\Gamma\hookrightarrow (\mathcal{K}_1\otimes\mathcal{K}_2)_\Gamma. \] \end{enumerate} \end{theorem} \begin{proof} (a) First we observe that the map \[ \psi\colon\mathcal{F}_1\otimes\mathcal{F}_2\hookrightarrow\mathcal{F}, \] given by $X_1(w_1,\mu_1)\otimes X_2(w_2,\mu_2)\mapsto X_1((w_1,\mu_1),(1,1))X_2((1,1),(w_2,\mu_2))$ and extended linearly, is an embedding of algebras. Restricting this embedding to $\mathcal{F}_{\Lambda_1}\otimes\mathcal{F}_{\Lambda_2}$ gives an embedding \[ \widetilde{\psi}\colon\mathcal{F}_{\Lambda_1}\otimes\mathcal{F}_{\Lambda_2}\hookrightarrow\mathcal{F}_\Lambda. \] To see this, we just need to show that $\psi(X_1\otimes X_2)\big(\Lambda\big)\subset\Lambda$ for $X_1\otimes X_2\in\mathcal{F}_{\Lambda_1}\otimes\mathcal{F}_{\Lambda_2}$. This holds since $(X_1\otimes X_2)(\lambda_1\otimes\lambda_2)=X_1(\lambda_1)\otimes X_2(\lambda_2)\in\Lambda_1\otimes\Lambda_2$ for all $\lambda_1\in\Lambda_1,\lambda_2\in\Lambda_2$. Next we show the second embedding. We first observe what happens when applying Lemma 2.17 from \cite{HARTWIG2020106806} to an $X\in \mathcal{F}_\Lambda$. Write $X=\sum_{i=1}^kf_i(w_{i1},w_{i2})$, where $f_i\in\Frac(\Lambda)$ and $(w_{i1},w_{i2})\in\hat{W}$. Let $n=\vert\{w\in\hat{W}_1\mid\exists w^\prime\in\hat{W}_2\colon (w,w^\prime)\in\supp_{\hat{W}}{X}\}\vert$ and $m=\vert\{w^\prime\in\hat{W}_2\mid\exists w\in\hat{W}_1\colon(w,w^\prime)\in\supp_{\hat{W}}{X}\}\vert$. Without loss of generality we can assume that $k=n\cdot m$. Let $\{a_{j1}\}$ (resp. $\{a_{j2}\}$) be a set of elements of $\Lambda_1$ (resp. $\Lambda_2$) such that the matrix $A_1:=(w_{i1}(a_{j1}))_{i,j=1}^n$ (resp. $A_2:=(w_{i2}(a_{j2}))_{i,j=1}^m$) has non-zero determinant, denoted $d_1$ (resp. $d_2$). Then the matrix $A=\big((w_{i1},w_{i2})(a_{j1}\otimes a_{j2})\big)$ has non-zero determinant; moreover, it is clear that $A=A_1\otimes A_2$, so the determinant is $d=d_1^m\otimes d_2^n$. As such, if $A^\prime$ is the adjugate of $A$ ($A^\prime\cdot A=d\cdot I_{k}$), then it follows using $A^\prime$ that for each $i=1,\ldots,k$: \[ f_i\in\frac{1}{d}\Lambda=\frac{1}{d_1^m}\Lambda_1\otimes\frac{1}{d_2^n}\Lambda_2. \] This shows that $X\in\psi(\mathcal{F}_1\otimes\mathcal{F}_2)$; moreover, $X\in \psi\big((\mathcal{F}_1\otimes\mathcal{F}_2)_{\Lambda}\big)$. This establishes the second embedding.\\ (b) The symmetrizing idempotent in the group algebra of $W$ can be factored as $e=e_1e_2=e_2e_1$, where $e_i=\frac{1}{\#W_i}\sum_{w\in W_i} w$ for $i=1,2$. Thus $e$ corresponds to $e_1\otimes e_2$ under $\psi$. 
Therefore, by part (a) and using Webster's observation \cite{Webster19} that $e\mathcal{F}_\Lambda e\cong \mathcal{K}_\Gamma$ and $e_i\mathcal{F}_{\Lambda_i} e_i\cong \mathcal{K}_{\Gamma_i}$ for $i=1,2$, the claim follows. \end{proof} \begin{remark} In all examples we know of, the map $\widetilde{\psi} \colon\mathcal{F}_{\Lambda_1}\otimes\mathcal{F}_{\Lambda_2}\hookrightarrow\mathcal{F}_\Lambda$ is surjective, making $\widetilde{\psi}$ an isomorphism. \end{remark} \begin{example} Let $\Lambda=\mathbb{C}[x_1,x_2,\ldots,x_n]$, $W$ trivial, and $\mathscr{M}\cong\mathbb{Z}^n$ acting by integer shifts of the variables; then $\mathcal{F}_\Lambda=\Lambda\#\mathbb{Z}^n\cong A_n(\mathbb{C})$, the $n$-th Weyl algebra. As is well known, $A_n(\mathbb{C})\otimes A_m(\mathbb{C})\cong A_{n+m}(\mathbb{C})$. \end{example} \begin{example} Let $\mathscr{M}$ be trivial. Then $\hat{W}=W$ is finite and $L\#W\cong\End_{L^W}(L)$ \cite{HersteinBook}, hence $(L\# W)_\Lambda=\End_{\Lambda^W}(\Lambda)=\End_\Gamma(\Lambda)$. As such, if $\mathscr{M}_1$ and $\mathscr{M}_2$ are trivial, then \begin{align*} \mathcal{F}_{\Lambda_1}\otimes\mathcal{F}_{\Lambda_2} \cong\End_{\Gamma_1}(\Lambda_1)\otimes\End_{\Gamma_2}(\Lambda_2)&\cong\End_{\Gamma_1\otimes\Gamma_2}(\Lambda_1\otimes\Lambda_2)\cong\mathcal{F}_\Lambda,\\ \intertext{via} \Phi\vert_{\Lambda_1\otimes1}\otimes\Phi\vert_{1\otimes\Lambda_2}&\mapsfrom\Phi\\ \Psi_1\otimes\Psi_2&\mapsto\big((a_1\otimes a_2)\mapsto\Psi_1(a_1)\otimes\Psi_2(a_2)\big). \end{align*} \end{example} Theorem \ref{thm: tensor of std orders} gives us two very useful corollaries. \begin{corollary}\label{cor: principal flag orders closed under tensor} Principal flag orders are closed under tensor products. That is, given two principal flag orders $F_1$ and $F_2$ with data $(\Lambda_1,W_1,\mathscr{M}_1)$ and $(\Lambda_2,W_2,\mathscr{M}_2)$ respectively, with each $\Lambda_i$ a $\K$-algebra for a ground field $\K$, $F_1\otimes_{\K}F_2$ is a principal flag order with data $(\Lambda_1\otimes\Lambda_2,W_1\times W_2,\mathscr{M}_1\times\mathscr{M}_2)$. \end{corollary} \begin{proof} We need to show that $F_1\otimes F_2$ satisfies the three conditions of Definition \ref{def: principal flag order}. From the definition of the standard flag order it is clear that $F_i\subset\mathcal{F}_{\Lambda_i}$ for each $i$. As such, $F_1\otimes F_2\subset \mathcal{F}_{\Lambda_1\otimes\Lambda_2}$ via the embedding from Theorem \ref{thm: tensor of std orders}, which proves the third condition. Also, $\Lambda_i\# W_i\subset F_i$ for each $i$, so the first condition is satisfied, since $(\Lambda_1\otimes\Lambda_2)\#(W_1\times W_2)\cong(\Lambda_1\#W_1)\otimes(\Lambda_2\#W_2)\subset F_1\otimes F_2$. All that remains to prove is the second condition. 
To do this we observe that \[ \Frac(\Lambda_1\otimes\Lambda_2)=\Frac(\Frac(\Lambda_1)\otimes\Frac(\Lambda_2)), \] which implies that \[(\Lambda_1\otimes\Lambda_2)^{-1}(\Frac(\Lambda_1)\otimes\Frac(\Lambda_2)) =\Frac(\Lambda_1\otimes\Lambda_2). \] Therefore, we see that \begin{align} (\Lambda_1\otimes\Lambda_2)^{-1}(F_1\otimes F_2)&=(\Lambda_1\otimes\Lambda_2)^{-1}\big((\Frac(\Lambda_1)\#(W_1\ltimes\mathscr{M}_1))\otimes(\Frac(\Lambda_2)\#(W_2\ltimes\mathscr{M}_2))\big)\label{eq: localize from principal condition}\\ &=(\Lambda_1\otimes\Lambda_2)^{-1}\big((\Frac(\Lambda_1)\otimes\Frac(\Lambda_2))\#((W_1\times W_2)\ltimes(\mathscr{M}_1\times\mathscr{M}_2))\big)\nonumber\\ &=\Frac(\Lambda_1\otimes\Lambda_2)\#((W_1\times W_2)\ltimes(\mathscr{M}_1\times\mathscr{M}_2)),\label{eq: field of fracs the same} \end{align} where \eqref{eq: localize from principal condition} follows from each $F_i$ being a principal flag order and \eqref{eq: field of fracs the same} follows from the observation above. This proves the second condition, and hence $F_1\otimes F_2$ is a principal flag order. \end{proof} \begin{corollary}\label{cor: principal galois orders closed under tensor} Principal Galois orders are closed under tensor products. More explicitly, given a principal Galois $\Gamma_1$-order $\mathscr{U}_1$ and a principal Galois $\Gamma_2$-order $\mathscr{U}_2$, with each $\Lambda_i$ a $\K$-algebra for a ground field $\K$, $\mathscr{U}_1\otimes_{\K}\mathscr{U}_2$ is a principal Galois $\Gamma_1\otimes\Gamma_2$-order. \end{corollary} \begin{proof} Essentially the same as the proof of the previous corollary, with the observation that \[ (\Frac(\Lambda_1)\#\mathscr{M}_1)^{W_1}\otimes(\Frac(\Lambda_2)\#\mathscr{M}_2)^{W_2} =((\Frac(\Lambda_1)\otimes\Frac(\Lambda_2))\#(\mathscr{M}_1\times\mathscr{M}_2))^{W_1\times W_2}, \] which follows from the action of $W_1\times W_2$. \end{proof} \section*{Acknowledgements} The author would like to thank Jonas Hartwig for comments and helpful discussions. Finally, the author would like to thank Iowa State University, where the author resided while some of the results in this paper were obtained. \printbibliography \end{document}
\section{Introduction} Very few near-Earth objects (NEOs) as small as 1991~VG (about 10~m) have given rise to so much controversy and imaginative conjectures. Asteroid 1991~VG was the first NEO ever discovered moving in an orbit that is similar to that of the Earth (Scotti \& Marsden 1991). This element of novelty led to a stimulating debate on how best to interpret the new finding. On the one hand, it could be the first member of a new orbital class of NEOs (e.g. Rabinowitz et al. 1993); on the other, it could be a relic of space exploration (e.g. Scotti \& Marsden 1991; Steel 1995b). In addition to the primary debate on the possible character ---natural versus artificial--- of 1991~VG, a lively discussion resulted in multiple theories about its most plausible origin; the main asteroid belt (e.g. Brasser \& Wiegert 2008), lunar ejecta (e.g. Tancredi 1997), a returning spacecraft (e.g. Steel 1995b) or space debris (e.g. Scotti \& Marsden 1991), and being an extraterrestrial artefact (e.g. Steel 1995a), were all argued in favour and against as possible provenances for this object. After being last observed in 1992 April, 1991~VG has spent over a quarter of a century at small solar elongation angles, out of reach of ground-based telescopes. Now, this peculiar minor body has been recovered (Hainaut, Koschny \& Micheli 2017)\footnote{\url{http://www.minorplanetcenter.net/mpec/K17/K17L02.html}} and the new data may help in confirming or ruling out early speculative theories about its origin. Here, we use the latest data available on 1991~VG to study its past, present and future orbital evolution in an attempt to understand its origin and current dynamical status. This paper is organized as follows. In Section 2, we present historical information, current data, and the various theories proposed to explain the origin of 1991~VG. Details of our numerical model and 1991~VG's orbital evolution are presented and discussed in Section 3. In Section 4, we show that even if 1991~VG is certainly unusual, other known NEOs move in similar orbits and orbital models predict that such objects must exist naturally. Arguments against 1991~VG being a relic of alien or even human space exploration are presented in Section 5. Our results are discussed in Section 6. Section 7 summarizes our conclusions. \section{Asteroid 1991~VG: data and theories} Asteroid 1991~VG was discovered on 1991 November 6 by J.~V. Scotti observing with the Spacewatch 0.91-m telescope at Steward Observatory on Kitt Peak at an apparent visual magnitude of 20.7, nearly 0.022~au from the Earth (Scotti \& Marsden 1991). The rather Earth-like orbit determination initially led to suspect that the object was of artificial origin, i.e. returning space debris, perhaps a Saturn S-IVB third stage. It experienced a close encounter with our planet at nearly 0.0031~au on 1991 December 5 and with the Moon at 0.0025~au on the following day, but it was not detected by radar at NASA's Goldstone Deep Space Network on December 12 (Scotti et al. 1991). After being last imaged in 1992 April, 1991~VG remained unobserved until it was recovered on 2017 May 30 by Hainaut et al. (2017) observing with the Very Large Telescope (8.2-m) from Cerro Paranal at a magnitude of 25. 
The new orbit determination (see Table~\ref{elements}) is based on 70 observations that span a data-arc of 9\,339 d or 25.57 yr and shows that 1991~VG is an Apollo asteroid following a somewhat Earth-like orbit ---semimajor axis, $a$=1.026~au, eccentricity, $e$=0.04975, and inclination, $i$=1\fdg44--- with a minimum orbit intersection distance (MOID) with the Earth of 0.0053~au. \begin{table} \fontsize{8}{11pt}\selectfont \tabcolsep 0.05truecm \caption{Heliocentric Keplerian orbital elements and 1$\sigma$ uncertainties of 1991~VG at epoch JD 2458000.5, which corresponds to 00:00:00.000 TDB, Barycentric Dynamical Time, on 2017 September 4 (J2000.0 ecliptic and equinox. Source: JPL's Small-Body Database.) } \centering \begin{tabular}{lcc} \hline Orbital parameter & & Value$\pm$uncertainty (1$\sigma$) \\ \hline Semimajor axis, $a$ (au) & = & 1.0255840443$\pm$0.0000000006 \\ Eccentricity, $e$ & = & 0.049746422$\pm$0.000000010 \\ Inclination, $i$ (\degr) & = & 1.437055$\pm$0.000004 \\ Longitude of the ascending node, $\Omega$ (\degr) & = & 73.26393$\pm$0.00007 \\ Argument of perihelion, $\omega$ (\degr) & = & 23.96147$\pm$0.00007 \\ Mean anomaly, $M$ (\degr) & = & 246.83514$\pm$0.00002 \\ Perihelion, $q$ (au) & = & 0.974564908$\pm$0.000000010 \\ Aphelion, $Q$ (au) & = & 1.0766031805$\pm$0.0000000006 \\ Absolute magnitude, $H$ (mag) & = & 28.5 \\ \hline \end{tabular} \label{elements} \end{table} As for the possible origin of 1991~VG and based only on its orbital elements, Scotti \& Marsden (1991) suggested immediately that it might be a returning spacecraft. West et al. (1991) pointed out that its light curve might be compatible with that of a rapidly rotating satellite ---probably tumbling--- with highly reflective side panels, further supporting the theory that 1991~VG could be an artificial object. Although an artificial origin was proposed first, T.~Gehrels pointed out that if 1991~VG was natural, it might be a representative of a new orbital class of objects, the Arjunas: an unofficial dynamical group of small NEOs following approximately Earth-like orbits which could be secondary fragments of asteroids that were originally part of the main belt and left their formation region under the effect of Jupiter's gravity (Cowen 1993; Rabinowitz et al. 1993; Gladman, Michel \& Froeschl{\'e} 2000). In his analysis of terrestrial impact probabilities, Steel (1995b) assumed that 1991~VG was a returned spacecraft. An artificial nature for 1991~VG was discussed in detail by Steel (1995a), who concluded that it was a robust candidate alien artefact (inert or under control); this conclusion was contested by Weiler (1996). The controversy on a possible alien origin for 1991~VG was continued by Steel (1998) and Weiler (1998), who concluded that either the detection of 1991~VG was a statistical fluke (unusual NEO or space debris of terrestrial origin) or a very large number of alien probes are following heliocentric orbits. Tancredi (1997) reviewed all the available evidence to conclude that 1991~VG could be a piece of lunar ejecta, the result of a relatively large impact. Tatum (1997) favoured a natural origin for 1991~VG, stating that any asteroid moving in an Earth-like orbit with semimajor axis in the range 0.9943--1.0057~au will inevitably collide with our planet, i.e. observed NEOs in such paths must be relatively recent arrivals. Brasser \& Wiegert (2008) re-examined the topic of the origin of 1991~VG and argued that it had to have its origin in a low-inclination Amor- or Apollo-class object. 
Here and in order to identify the most Earth-like orbits among those of known NEOs, we have used the $D$-criteria of Southworth \& Hawkins (1963), $D_{\rm SH}$, Lindblad \& Southworth (1971), $D_{\rm LS}$ (in the form of equation 1 in Lindblad 1994 or equation 1 in Foglia \& Masi 2004), Drummond (1981), $D_{\rm D}$, and the $D_{\rm R}$ criterion of Valsecchi, Jopek \& Froeschl\'e (1999) to search the known NEOs for objects that could be dynamically similar to our planet, considering the orbital elements of the Earth for the epoch JDTDB 2458000.5 (see below), which is the standard time reference used throughout this research (a minimal implementation sketch of the $D_{\rm SH}$ criterion is provided below). The actual values are: semimajor axis, $a$ = 0.999215960~au, eccentricity, $e$ = 0.017237361, inclination, $i$ = 0\fdg000524177, longitude of the ascending node, $\Omega$ = 230\fdg950190495, and argument of perihelion, $\omega$ = 233\fdg858793714. The list of NEOs has been retrieved from JPL's Solar System Dynamics Group (SSDG) Small-Body Database (SBDB).\footnote{\url{http://ssd.jpl.nasa.gov/sbdb.cgi}} Considering the currently available data on NEOs, the orbit of 1991~VG is neither the (overall) most Earth-like known ---the current record holder is 2006~RH$_{120}$ (Bressi et al. 2008a; Kwiatkowski et al. 2009), which has the lowest values of the $D$-criteria, but it has not been observed since 2007 June 22--- nor the one with the orbital period closest to one Earth year ---which is 2014 OL$_{339}$ with $a$=0.9992~au or an orbital period of 364.83227$\pm$0.00002~d (Vaduvescu et al. 2014, 2015; de la Fuente Marcos \& de la Fuente Marcos 2014; Holmes et al. 2015)--- nor the least eccentric ---which is 2002~AA$_{29}$ with $e$=0.01296 (Connors et al. 2002; Smalley et al. 2002), followed by 2003~YN$_{107}$ with $e$=0.01395 (McNaught et al. 2003; Connors et al. 2004)--- nor the one with the lowest inclination ---which is probably 2009~BD with $i$=0\fdg38 (Buzzi et al. 2009; Micheli, Tholen \& Elliott 2012), followed by 2013~BS$_{45}$ with $i$=0\fdg77 (Bressi, Scotti \& Hug 2013; de la Fuente Marcos \& de la Fuente Marcos 2013). Other NEOs with very Earth-like orbits are 2006~JY$_{26}$ (McGaha et al. 2006; Brasser \& Wiegert 2008; de la Fuente Marcos \& de la Fuente Marcos 2013) and 2008~KT (Gilmore et al. 2008; de la Fuente Marcos \& de la Fuente Marcos 2013). In summary and within the context of the known NEOs, although certainly unusual, the orbital properties of 1991~VG are not as remarkable as initially thought. On a more technical side, the orbit determinations of 2006~RH$_{120}$ and 2009~BD required the inclusion of non-gravitational accelerations (radiation-pressure related) in order to reproduce the available astrometry (see e.g. Micheli et al. 2012). The orbital solution of 1991~VG (see Table~\ref{elements}) was computed without using any non-gravitational forces and reproduces the observations used in the fitting. \section{Integrations and orbital evolution} Understanding the origin and current dynamical status of 1991~VG demands a detailed study of its past, present and future orbital evolution. A careful statistical analysis of the behaviour over time of its orbital elements and other relevant parameters using a sufficiently large set of $N$-body simulations, including the uncertainties associated with the orbit determination in a consistent manner, should produce reasonably robust conclusions. 
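As a brief aside before describing the integrations: the $D$-criteria used in the previous section can be implemented in a few lines. The following minimal Python sketch implements the Southworth \& Hawkins (1963) $D_{\rm SH}$ metric in its commonly used form (the other criteria differ in the adopted metric but are applied analogously); it is an illustration, not the exact code used in this work, and refinements such as quadrant corrections for the secular term exist in the literature. The sample values are the elements of 1991~VG from Table~\ref{elements} and the Earth elements quoted above. \begin{verbatim} import numpy as np def d_sh(q1, e1, i1, node1, peri1, q2, e2, i2, node2, peri2): """Southworth & Hawkins (1963) D-criterion. Perihelion distances q in au, angles in radians.""" # mutual inclination of the two orbital planes cos_im = (np.cos(i1) * np.cos(i2) + np.sin(i1) * np.sin(i2) * np.cos(node1 - node2)) i_m = np.arccos(np.clip(cos_im, -1.0, 1.0)) # difference between the longitudes of perihelia dpi = peri1 - peri2 + 2.0 * np.arcsin( np.clip(np.cos(0.5 * (i1 + i2)) * np.sin(0.5 * (node1 - node2)) / np.cos(0.5 * i_m), -1.0, 1.0)) return np.sqrt((e1 - e2)**2 + (q1 - q2)**2 + (2.0 * np.sin(0.5 * i_m))**2 + (0.5 * (e1 + e2))**2 * (2.0 * np.sin(0.5 * dpi))**2) deg = np.pi / 180.0 # 1991 VG (Table 1) against the Earth (elements given in Section 2) print(d_sh(0.974565, 0.049746, 1.437055 * deg, 73.26393 * deg, 23.96147 * deg, 0.981992, 0.017237, 0.000524 * deg, 230.950190 * deg, 233.858794 * deg)) \end{verbatim} The resulting value is small, consistent with the Earth-like character of the orbit of 1991~VG. 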
In this section, we use a publicly available direct $N$-body code\footnote{\url{http://www.ast.cam.ac.uk/~sverre/web/pages/nbody.htm}} originally written by Aarseth (2003) that implements a fourth order version of the Hermite scheme described by Makino (1991). The suitability of this software for Solar system studies has been successfully and extensively tested (see de la Fuente Marcos \& de la Fuente Marcos 2012). Consistent with the new data on 1991~VG (Hainaut et al. 2017), non-gravitational forces have been excluded from the calculations; the effects of solar radiation pressure have been found to be negligible in the calculation of the orbit in Table~\ref{elements} and for an average value for the Yarkovsky drift of 10$^{-9}$~au~yr$^{-1}$ (see e.g. Nugent et al. 2012), the time-scale to escape the orbital neighbourhood of the Earth is about 12 Myr, which is some orders of magnitude longer than the time intervals discussed in this research. Our calculations make use of the physical model described by de la Fuente Marcos \& de la Fuente Marcos (2012) and of initial conditions ---positions and velocities in the barycentre of the Solar system for the various bodies involved, including 1991~VG--- provided by JPL's \textsc{horizons}\footnote{\url{https://ssd.jpl.nasa.gov/?horizons}} (Giorgini et al. 1996; Standish 1998; Giorgini \& Yeomans 1999; Giorgini, Chodas \& Yeomans 2001; Giorgini 2011, 2015) at epoch JD 2458000.5 (2017-September-04.0 TDB, Barycentric Dynamical Time), which is the $t$ = 0 instant in our figures. \begin{figure} \centering \includegraphics[width=\linewidth]{fcon1x8_1991VG.eps} \caption{Evolution of the values of the orbital elements and other relevant parameters for the nominal orbit of 1991~VG in Table~\ref{elements}. The top panel shows the geocentric distance; encounters at ranges well below the Hill radius of the Earth, 0.0098~au (also shown), are common. The Kozai-Lidov parameter is shown in the second to top panel. The value of the resonant angle is displayed in the third to top panel. The evolution of the orbital elements, semimajor axis, eccentricity, inclination and argument of perihelion is shown in the fourth to top panel and the fourth, third and second to bottom panels, respectively. The bottom panel shows the distance from the Sun to the descending (thick line) and ascending nodes (dotted line); the aphelion and perihelion distances of Venus, the Earth and Mars are indicated as well. } \label{control} \end{figure} Fig.~\ref{control} shows the evolution backwards and forward in time of several orbital elements and other relevant parameters of 1991~VG using initial conditions compatible with the nominal orbit in Table~\ref{elements}. The time interval displayed (10 kyr) is consistent with the analysis of its dynamical lifetime carried out by Brasser \& Wiegert (2008). Fig.~\ref{control}, top panel (geocentric distance), shows that this NEO experiences recurrent close encounters with our planet well within the Hill radius, which is 0.0098~au. The Hill radius of the Earth is the maximum orbital distance of an object (natural or artificial) to remain gravitationally bound to our planet, i.e. to be a satellite. When inside the Hill sphere, the Earth's attraction dominates that of the Sun even for objects with a positive value of the geocentric energy, i.e. unbound passing objects. In order to be captured as a satellite of our planet, the geocentric energy of the object must be negative (Carusi \& Valsecchi 1979). 
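To make this capture criterion concrete, the following minimal Python sketch (an illustration, not the code used for the integrations in this work) evaluates the Keplerian geocentric energy and the Hill radius; the encounter distance of 0.0031~au and the fly-by speed of 0.9~km~s$^{-1}$ quoted elsewhere in this paper are used as illustrative inputs. \begin{verbatim} import numpy as np GM_EARTH = 3.986004418e14 # geocentric grav. parameter (m^3 s^-2) AU = 1.495978707e11 # astronomical unit (m) def geocentric_energy(r, v): """Keplerian geocentric specific energy (J kg^-1) for geocentric distance r (m) and speed v (m s^-1); negative means bound.""" return 0.5 * v**2 - GM_EARTH / r # Hill radius of the Earth, r_H = a (1 - e) (m/(3 M_Sun))^(1/3), using # the Earth/Sun mass ratio; this reproduces the 0.0098 au quoted here. r_hill = 1.0 * (1.0 - 0.017237) * (3.003e-6 / 3.0)**(1.0 / 3.0) # Illustrative inputs: the 1991 December encounter distance (0.0031 au) # and a low relative speed of 0.9 km/s. E = geocentric_energy(0.0031 * AU, 900.0) print(r_hill, E) # E < 0: instantaneously bound, inside the Hill sphere \end{verbatim} 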
This simple criterion does not include any constraint on the duration of the capture event; Rickman \& Malmort (1981) recommended the further restriction that the object completes at least one revolution around our planet while its geocentric energy is still negative (see Section 5 for a more detailed analysis applied to 1991~VG). Given its low eccentricity, 1991~VG cannot undergo close encounters with major bodies other than the Earth--Moon system. As the orbit of 1991~VG is somewhat Earth-like, these are often low-velocity encounters (as low as 0.9~km~s$^{-1}$). Such fly-bys can be very effective in perturbing an orbit even if the close approaches are relatively distant, but in this case we observe very frequent (every few decades) close fly-bys. Under such conditions, one may expect a very chaotic evolution, as confirmed by the other panels in Fig.~\ref{control}. Although very chaotic orbits present great challenges in terms of reconstructing the past dynamical evolution of the affected objects or making reliable predictions about their future behaviour, Wiegert, Innanen \& Mikkola (1998) have shown that it is still possible to arrive at scientifically robust conclusions if a proper analysis is performed. On the other hand, low-velocity encounters well within the Hill radius can lead to temporary capture events (see Section 5). Fig.~\ref{control}, second to top panel, shows the evolution of the value of the so-called Kozai-Lidov parameter $\sqrt{1 - e^2} \cos i$ (Kozai 1962; Lidov 1962) that measures the behaviour of the component of the orbital angular momentum of the minor body perpendicular to the ecliptic. The value of this parameter remains fairly constant over the time interval studied; the dispersion is smaller than the one observed for typical NEOs following Earth-like paths (see e.g. figs 3, 6 and 9, B-panels, in de la Fuente Marcos \& de la Fuente Marcos 2016b). The variation of the relative mean longitude of 1991~VG, i.e. the difference between the mean longitude of the object and that of the Earth, $\lambda_{\rm r}$ (see e.g. Murray \& Dermott 1999), is shown in Fig.~\ref{control}, third to top panel. When $\lambda_{\rm r}$ changes freely in the interval (0, 360)\degr ---i.e. $\lambda_{\rm r}$ circulates--- 1991~VG is not subject to the 1:1 mean-motion resonance with our planet. If the value of $\lambda_{\rm r}$ oscillates or librates ---about 0{\degr} (quasi-satellite), $\pm$60{\degr} (Trojan) or 180{\degr} (horseshoe)--- then the orbital periods of 1991~VG and the Earth are virtually the same and we speak of a co-orbital companion to our planet. Fig.~\ref{control}, third to top panel, shows that 1991~VG has been a recurrent transient co-orbital of the horseshoe type in the past and will return as such in the future; however, it is not a present-day co-orbital companion of the Earth. Fig.~\ref{orbit} shows the most recent co-orbital episode of 1991~VG in further detail; the variation of the relative mean longitude indicates that 1991~VG followed a horseshoe path for about 300~yr. \begin{figure} \centering \includegraphics[width=\linewidth]{forbit1991VG.eps} \caption{Variation of the relative mean longitude, $\lambda_{\rm r}$, over time during the most recent co-orbital episode of 1991~VG (top panel). The path followed by 1991~VG during the time interval ($-$635, $-$350)~yr (bottom panel) describes a horseshoe pattern when seen in a frame of reference centred at the Sun and rotating with the Earth, projected on to the ecliptic plane. 
The figure also shows the orbit of the Earth, its position at (1, 0)~au, and the Sun at (0, 0)~au. } \label{orbit} \end{figure} Fig.~\ref{control}, fourth to top panel, shows the evolution of the value of the semimajor axis of 1991~VG. Earth's co-orbital region goes from $\sim$0.994~au to $\sim$1.006~au, or a range in orbital periods of 362--368~d, and the figure shows that 1991~VG only enters this zone for relatively brief periods of time, although it remains in its neighbourhood during the entire integration. Fig.~\ref{control}, fourth and third to bottom panels, shows how the eccentricity and the inclination, respectively, change over time. Although in both cases the evolution is very irregular, there is some weak coupling between both orbital elements and in some cases, when the eccentricity reaches a local maximum, the value of the inclination reaches a local minimum and vice versa. This explains why the value of the Kozai-Lidov parameter (Fig.~\ref{control}, second to top panel) remains relatively stable throughout the integrations; this is also a sign that the Kozai-Lidov mechanism (Kozai 1962; Lidov 1962) may be at work, at least partially. This interpretation is confirmed in Fig.~\ref{control}, second to bottom panel, as the value of the argument of perihelion, $\omega$, shows signs of libration (it does not circulate), which is a typical side effect of the Kozai-Lidov mechanism. Fig.~\ref{control}, bottom panel, shows the evolution of the nodal distances of 1991~VG; encounters with the Earth--Moon system are only possible in the neighbourhood of the nodes and both nodes tend to drift into the path of our planet in a chaotic manner. The orbital evolution displayed in Fig.~\ref{control} gives a general idea of the dynamical behaviour of 1991~VG, but it does not show the effect of the uncertainties in the orbit determination (see Table~\ref{elements}). In order to account for this critical piece of information, we use the Monte Carlo using the Covariance Matrix (MCCM) method detailed in section 3 of de la Fuente Marcos \& de la Fuente Marcos (2015c) ---the covariance matrix to generate initial positions and velocities has been obtained from JPL's \textsc{horizons}. Fig.~\ref{disper} shows the results of the evolution of 250 control orbits generated using the MCCM method. These simulations confirm that the orbital evolution of 1991~VG is chaotic on time-scales longer than a few decades. \begin{figure} \centering \includegraphics[width=\linewidth]{fvgdisper.eps} \caption{Evolution of the dispersions of the values of the orbital elements of 1991~VG for 250 control orbits: semimajor axis (top panel), eccentricity (second to top panel), inclination (middle panel), longitude of the ascending node (second to bottom panel), and argument of perihelion (bottom panel). Average values are plotted as thick black curves and their ranges (1$\sigma$ uncertainties) as thin red curves. } \label{disper} \end{figure} \section{Unusual but not uncommon} One of the arguments originally applied to reject a natural origin for 1991~VG was that, in accordance with the evidence available at that time, such a NEO was highly improbable from an orbital point of view. Brasser \& Wiegert (2008), using numerical simulations, estimated that the probability of a NEO ever ending up on an Earth-like orbit could be about 1:20\,000. If we use the latest NEO orbital model described by Granvik et al. (2013a,b) and Bottke et al. 
(2014) and implemented in the form of a publicly available survey simulator,\footnote{\url{http://neo.ssa.esa.int/neo-population}} we obtain a probability of finding a NEO moving in an orbit akin to that of 1991~VG of about $10^{-6}$ ---the degree of similarity between two orbits has been estimated as described before, using the $D$-criteria. As the previously mentioned NEO orbit model only strictly applies to NEOs with $H<25$~mag (in fact, the single object predicted by the model has a magnitude of 24.24) and 1991~VG is smaller, it cannot be ruled out that objects other than 1991~VG may be moving along similar paths. In order to confirm or reject this plausible hypothesis, we have used the $D$-criteria mentioned above to search the known NEOs (data from JPL's SSDG SBDB as before) for objects dynamically similar to 1991~VG. We apply these criteria using osculating Keplerian orbital elements, not the customary proper orbital elements (see e.g. Milani 1993, 1995; Milani \& Kne{\v z}evi{\'c} 1994; Kne{\v z}evi{\'c} \& Milani 2000; Milani et al. 2014, 2017), because 1991~VG-like orbits are inherently chaotic on very short time-scales. Table~\ref{aliens} shows the data of three other NEOs ---2001~GP$_{2}$ (McMillan et al. 2001), 2008~UA$_{202}$ (Bressi et al. 2008b) and 2014~WA$_{366}$ (Gibbs et al. 2014)--- which are dynamically similar to 1991~VG (they have $D_{\rm LS}$ and $D_{\rm R} < 0.05$). Integrations analogous to those in Fig.~\ref{control} (not shown) indicate that the evolution of these three NEOs (although their orbit determinations are in need of significant improvement) bears some resemblance to that of 1991~VG. Common features include relatively frequent close encounters with the Earth--Moon system and very chaotic short-term evolution (all of them), small values of their MOIDs, recurrent trapping by 1:1 mean-motion resonances with the Earth (particularly 2008~UA$_{202}$), evolution temporarily affected by the Kozai-Lidov effect, and other consistent properties. Asteroid 2008~UA$_{202}$ is considered an easily retrievable NEO (Garc{\'{\i}}a Y{\'a}rnoz, Sanchez \& McInnes 2013). \begin{table*} \centering \fontsize{8}{11pt}\selectfont \tabcolsep 0.07truecm \caption{Orbital elements, orbital periods ($P$), perihelion ---$q = a \ (1 - e)$--- and aphelion ---$Q = a \ (1 + e)$--- distances, number of observations ($n$), data-arc span, absolute magnitudes ($H$) and MOID with the Earth of NEOs following orbits similar to that of 1991~VG. The values of the various $D$-criteria ($D_{\rm SH}$, $D_{\rm LS}$, $D_{\rm D}$ and $D_{\rm R}$) with respect to 1991~VG are displayed as well. The minor bodies are sorted by ascending $D_{\rm LS}$ and only those with $D_{\rm LS}$ and $D_{\rm R} < 0.05$ are shown. The orbits are referred to the epoch 2017 September 4 as before. 
Source: JPL's Small-Body Database.} \begin{tabular}{lllllllllllllllll} \hline Asteroid & $a$ (au) & $e$ & $i$ (\degr) & $\Omega$ (\degr) & $\omega$ (\degr) & $P$ (yr) & $q$ (au) & $Q$ (au) & $n$ & arc (d) & $H$ (mag) & MOID (au) & $D_{\rm SH}$ & $D_{\rm LS}$ & $D_{\rm D}$ & $D_{\rm R}$ \\ \hline 2014 WA$_{366}$ & 1.03433 & 0.07150 & 1.55915 & 67.10073 & 287.63348 & 1.05 & 0.9604 & 1.1083 & 55 & 49 & 26.9 & 0.00732 & 0.0981 & 0.0261 & 0.1829 & 0.0276 \\ 2001 GP$_{2}$ & 1.03779 & 0.07380 & 1.27825 & 196.80669 & 111.40484 & 1.06 & 0.9612 & 1.1144 & 25 & 27 & 26.9 & 0.00174 & 0.1291 & 0.0277 & 0.2019 & 0.0201 \\ 2008 UA$_{202}$ & 1.03318 & 0.06857 & 0.26357 & 21.08289 & 300.90518 & 1.05 & 0.9623 & 1.1040 & 16 & 6 & 29.4 & 0.00022 & 0.1139 & 0.0304 & 0.1655 & 0.0132 \\ \hline \end{tabular} \label{aliens} \end{table*} NEOs 2001 GP$_{2}$ and 2008 UA$_{202}$ are included in the list of asteroids that may be involved in potential future Earth impact events maintained by JPL's Sentry System (Chamberlin et al. 2001; Chodas 2015).\footnote{\url{https://cneos.jpl.nasa.gov/sentry/}} Asteroid 2001 GP$_{2}$ has a computed impact probability of 0.00021 for a possible impact in 2043--2107; asteroid 2008 UA$_{202}$ is listed with an impact probability of 0.000081 for a possible impact in 2050--2108. Their very similar range of years for a most probable potential impact ---i.e. they reach their closest perigees at similar times even if their synodic periods are long--- suggests a high degree of dynamical coherence for these two objects even if their values of $\Omega$ and $\omega$ are very different. Although they have relatively high values of the impact probability, their estimated diameters are smaller than 20~m. In the improbable event of an impact, its effects would be local, not too different from those of the Chelyabinsk event or some other recent minor impacts (see e.g. Brown et al. 2013; Popova et al. 2013; de la Fuente Marcos \& de la Fuente Marcos 2015b; de la Fuente Marcos, de la Fuente Marcos \& Mialle 2016); however, an object like 2008 UA$_{202}$ probably would break up in the Earth's atmosphere and few fragments, if any, would hit the ground (or the ocean). \section{Natural or artificial?} Some of the orbital properties of 1991~VG have been used to argue for or against a natural or artificial origin for this object. A very unusual dynamical feature that was pointed out by Tancredi (1997) was the fact that 1991~VG experienced a temporary satellite capture by the Earth during its 1991--1992 fly-by; such satellite captures showed a recurrent pattern. The backward evolution of the new orbit determination fully confirms the analysis made by Tancredi (1997). Fig.~\ref{energy}, top panel, shows that the Keplerian geocentric energy of 1991~VG (relative binding energy) became negative during the encounter and also that 1991~VG completed an entire revolution around our planet (bottom panel). Therefore, it might have matched both criteria (see above, Carusi \& Valsecchi 1979; Rickman \& Malmort 1981) to be considered a bona fide satellite or, perhaps more properly, a minimoon of the Earth; using the terminology in Fedorets, Granvik \& Jedicke (2017) we may speak of a temporarily captured orbiter. However, the relative binding energy was not negative for the full length of the loop around our planet pictured in Fig.~\ref{energy}, bottom panel. 
Therefore, it might have matched both criteria (see above, Carusi \& Valsecchi 1979; Rickman \& Malmort 1981) to be considered a bona fide satellite or, perhaps more properly, a minimoon of the Earth; using the terminology in Fedorets, Granvik \& Jedicke (2017) we may speak of a temporarily captured orbiter. However, the relative binding energy was not negative for the full length of the loop around our planet pictured in Fig.~\ref{energy}, bottom panel. As the loop followed by 1991~VG with respect to the Earth was travelled in the clockwise sense, this event may be regarded as the first ever documented retrograde capture of a satellite by our planet, even if it had a duration of about 28 d. Fig.~\ref{energy}, top panel, also shows that this unusual phenomenon is recurrent in the case of 1991~VG. But, if there are other objects moving in 1991~VG-like orbits, how often do they become temporary satellites of the Earth? \begin{figure} \centering \includegraphics[width=\linewidth]{fenergy1991VG.eps} \caption{Keplerian geocentric energy of 1991~VG as a function of time (top panel); ephemeral (lasting about a month) satellite captures happen when the value of the geocentric energy becomes negative. The unit of energy is such that the unit of mass is 1~$M_{\odot}$, the unit of distance is 1~au and the unit of time is one sidereal year divided by 2$\pi$. The path followed by 1991~VG (same frame of reference as in Fig.~\ref{orbit}) during the approximate time interval ($-$30, $-$19)~yr (bottom panel) shows that this minor body went around our planet once during its previous fly-by in 1991--1992. Asteroid 1991~VG moves clockwise (retrograde) in the figure as time goes forward; the temporary satellite capture event happened during 1992 February. } \label{energy} \end{figure} Figs~\ref{dminimoons} and \ref{mapminimoons} show the results of $10^{6}$ numerical experiments in which a virtual object moving in a 1991~VG-like orbit ---orbital elements assumed to be uniformly distributed in the volume of orbital parameter space defined by $a\in(0.95, 1.05)$~au, $e\in(0.0, 0.1)$, $i\in(0, 3)$\degr, $\Omega\in(0, 360)$\degr, $\omega\in(0, 360)$\degr, and the time of perihelion passage $\tau_{q}\in(2458000.5, 2458365.75)$ JD--- undergoes a fly-by with our planet. The region chosen encloses the orbit solutions of 1991~VG, 2001~GP$_{2}$, 2008~UA$_{202}$ and 2014~WA$_{366}$, and those of other NEOs cited earlier in this paper as well. We did not use the NEO orbit model mentioned before to generate the synthetic orbits because we wanted to survey the relevant volume of orbital parameter space in full detail, so that our results could be applied to both natural and artificial objects. The evolution of the virtual objects was followed just for one year of simulated time to minimize the impact of orbital chaos and resonant returns (see e.g. Milani, Chesley \& Valsecchi 1999) on our conclusions. This short time interval is fully justified because our previous analyses show that, after experiencing a close fly-by with the Earth, an object moving in a 1991~VG-like orbit most likely jumps into another 1991~VG-like orbit. These experiments have been carried out with the same software, physical model and relevant initial conditions used in our previous integrations.
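As an illustration of how the initial conditions of these experiments are drawn, the following minimal Python sketch samples the stated volume of orbital parameter space (the use of NumPy and all variable names are our own; the integrations themselves were carried out with the $N$-body software described earlier):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=2017)
N = 10**6  # number of virtual objects, as in the experiments above

# Uniform draws over the stated volume of orbital parameter space
a     = rng.uniform(0.95, 1.05, N)             # semimajor axis (au)
e     = rng.uniform(0.0, 0.1, N)               # eccentricity
inc   = rng.uniform(0.0, 3.0, N)               # inclination (deg)
Omega = rng.uniform(0.0, 360.0, N)             # ascending node (deg)
omega = rng.uniform(0.0, 360.0, N)             # arg. of perihelion (deg)
tau_q = rng.uniform(2458000.5, 2458365.75, N)  # perihelion passage (JD)

# Each row is one virtual 1991 VG-like orbit, to be integrated for
# one year of simulated time and monitored for satellite captures
orbits = np.column_stack((a, e, inc, Omega, omega, tau_q))
\end{verbatim}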
Our calculations show that the probability of becoming a temporary (for any length of time) satellite of our planet for members of this group of objects is 0.0036. The overall capture rate is roughly similar to that found by Granvik et al. (2012). In our case, the probability of a capture for less than one month is 0.0023, for one to two months is 0.0011, and for more than two months is 0.00021 (see Fig.~\ref{dminimoons}). Captures for less than 7 d are less probable than captures for 7 to 14 d. Therefore, most objects temporarily captured as Earth's transient bound companions spend less than one month in this state and they do not complete one full revolution around our planet. These results also indicate that as long as we have NEOs in 1991~VG-like orbits, they naturally tend to become temporary satellites of our planet. However, captures for more than two months are rather unusual and those lasting one year, exceedingly rare. This result is at odds with those in Granvik et al. (2012) and Fedorets et al. (2017), but this is not surprising because our short integrations are biased against producing temporarily captured orbiters due to the comparatively small size of our synthetic sample ---$10^{6}$ experiments versus $10^{10}$ in Granvik et al. (2012)--- and our choice of initial conditions ---e.g. their integrations start at 4--5 Hill's radii from the Earth. Fedorets et al. (2017) show explicitly in their table 1 that 40 per cent of all captures should be temporarily captured orbiters. As objects moving in 1991~VG-like paths are being kicked from one of these orbits into another, it is perfectly normal to observe recurrent but brief capture episodes such as those discussed for 1991~VG. Such temporary captures are also observed during the integrations of 2001~GP$_{2}$, 2008~UA$_{202}$ and 2014~WA$_{366}$, although they tend to be shorter in duration and less frequent. In addition, no virtual object collided with the Earth during the calculations, which strongly suggests that even if impact probabilities are theoretically high for many of them, it is much more probable to be captured as an ephemeral satellite of our planet than to become an actual impactor; this conclusion is consistent with recent results by Clark et al. (2016), but it is again at odds with results obtained by Granvik et al. (2012) and Fedorets et al. (2017), who found that about one per cent of their test particles impacted the Earth. This significant discrepancy arises from the factors pointed out before: the comparatively small size of our synthetic sample and the different initial conditions. \begin{figure} \centering \includegraphics[width=\linewidth]{fdistmm.eps} \caption{Frequency distribution of the duration of episodes of temporary capture as a natural satellite of our planet. The bin size is 7 d. } \label{dminimoons} \end{figure} Fig.~\ref{mapminimoons} shows how the duration of the episode of temporary capture as a natural satellite of our planet depends on the initial values of the semimajor axis, eccentricity and inclination. The colours (or grey scale) in the maps depend on the duration of the episode in days as indicated in the associated colour box. NEOs moving in 1991~VG-like orbits of the Amor- or Apollo-class are more likely to experience longer capture episodes. The probability of getting captured decreases rapidly for objects with $e>0.05$ and/or $i>1\fdg5$, and the duration of the recorded episodes is shorter. It is important to note that one of these virtual objects might leave the assumed initial volume of NEO orbital parameter space (see above) on a time-scale of the order of 1 kyr as Fig.~\ref{disper} shows. In addition, the longest orbital period of a satellite of our planet is about 205~d (if it is at the Hill radius); therefore, most of the temporary captures recorded in our numerical experiment do not qualify as true satellites according to Rickman \& Malmort (1981) because they did not complete at least one revolution when bound to the Earth; following the terminology used by Fedorets et al. (2017), we may speak of temporarily captured fly-bys in these cases.
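As a consistency check (our own back-of-the-envelope estimate, taking the Hill radius of the Earth as $r_{\rm H}\simeq0.0098$~au $\simeq1.47\times10^{9}$~m), Kepler's third law for a circular geocentric orbit at the Hill radius gives
\[
   P = 2\pi \sqrt{\frac{r_{\rm H}^{3}}{G M_{\oplus}}}
     \simeq 2\pi \sqrt{\frac{(1.47\times10^{9}\ {\rm m})^{3}}{3.986\times10^{14}\ {\rm m^{3}\,s^{-2}}}}
     \simeq 1.8\times10^{7}\ {\rm s} \simeq 205\ {\rm d},
\]
in agreement with the value quoted above.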
In fact and strictly speaking, the event in Fig.~\ref{energy} is compatible with a temporarily captured fly-by, not a temporarily captured orbiter; one of the annual epicycles happens to (somewhat accidentally) loop around the Earth lasting several months, but the geocentric energy is negative only for a fraction of the time taken to travel the loop. Indeed, our limited numerical experiment has been optimized to show how frequent temporarily captured fly-bys ---not orbiters--- are; the histogram in Fig.~\ref{dminimoons} matches reasonably well that of temporarily captured fly-bys in fig.~2 in Fedorets et al. (2017). \begin{figure} \centering \includegraphics[width=\linewidth]{fmapMiniMoons.eps} \caption{Duration of episodes of temporary capture as a natural satellite of our planet in days as a function of the initial values of $a$ and $e$ (bottom panel) and $a$ and $i$ (top panel). The colours (or grey scale) in the colour maps are proportional to the duration in days of the episodes in Fig.~\ref{dminimoons}. The results of $10^{6}$ experiments are plotted (see the text for details). } \label{mapminimoons} \end{figure} The topic of the capture of irregular satellites by planets has been studied by e.g. Astakhov et al. (2003), Nesvorn{\'y}, Vokrouhlick{\'y} \& Morbidelli (2007) and Emel'yanenko (2015). Jupiter is a well-documented host for these captures (see e.g. Rickman \& Malmort 1981; Tancredi, Lindgren \& Rickman 1990; Kary \& Dones 1996). With regard to the Earth, the topic has only recently received attention (see e.g. Baoyin, Chen \& Li 2010; Granvik, Vaubaillon \& Jedicke 2012; Bolin et al. 2014; Brelsford et al. 2016; Clark et al. 2016; Jedicke et al. 2016; Fedorets et al. 2017). Fedorets et al. (2017) have predicted that the largest body constantly present on a geocentric orbit could have a diameter of the order of 0.8~m. The recent scientific interest in the subject of transient bound companions of our planet was triggered by the exceptional close encounter between our planet and 2006~RH$_{120}$ (Bressi et al. 2008a; Kwiatkowski et al. 2009). Kwiatkowski et al. (2009) showed that 2006~RH$_{120}$ was temporarily captured into a geocentric orbit from 2006 July to 2007 July. In their work, they confirmed that 2006~RH$_{120}$ is a natural object and that it cannot be lunar ejecta; they favour a scenario in which its capture as a transient satellite was the result of aerobraking in the Earth's atmosphere of a NEO previously moving in a standard Earth-crossing orbit with very low MOID, a low-eccentricity Amor- or Apollo-class minor body. If we interpret the capture of 2006~RH$_{120}$ as a minimoon within the context of our previous numerical experiment, in which the probability of remaining captured for an entire year is about $10^{-6}$, this episode might be a statistical fluke or perhaps indicate that the population of NEOs capable of experiencing such episodes is exceedingly large. In principle, our results strongly favour the interpretation of the 2006~RH$_{120}$ capture episode as a clear outlier. Short-lived satellite capture events consistent with those in Figs~\ref{dminimoons} and \ref{mapminimoons} have been routinely observed in simulations of real NEOs moving in Earth-like orbits (see e.g. the discussion in de la Fuente Marcos \& de la Fuente Marcos 2013, 2014, 2015a).
While our simulations show that the capture episode experienced by 1991~VG is unusual but not uncommon, the one by 2006~RH$_{120}$ seems to be truly uncommon and it is difficult to assume that the same scenario that led 1991~VG to become a minimoon can be applied to 2006~RH$_{120}$ as well. However, as 2006~RH$_{120}$ is a confirmed (by radar) natural object, it is reasonable to assume that 1991~VG is natural too. Within the context of numerical experiments optimized to study temporarily captured orbiters ---not fly-bys--- the case of 2006~RH$_{120}$ is not unusual though. In fact, Granvik et al. (2012) and Fedorets et al. (2017) found a good match between the probability associated with the capture of 2006~RH$_{120}$ and predictions from their models for the temporarily captured orbiter population. The orbital solution in Table~\ref{elements} predicts that in addition to its close approaches to our planet in 2017--2018 (2017 August 7 and 2018 February 11) and 1991--1992 (1991 December 5 and 1992 April 9), 1991~VG experienced similar fly-bys on 1938 August 31 and 1939 March 14, then on 1956 June 14 and 1957 March 26, and more recently on 1974 August 27 and 1975 March 15. The first two dates predate the start of any human space exploration programme, so they can be safely discarded. The date in 1956 follows some suborbital tests, one of them on June 13; the same can be said about 1957, when another suborbital flight took place on March 25. It is highly unlikely that debris from suborbital tests could have been able to escape into a heliocentric orbit to return at a later time. There were no documented launches in or around 1975 March 15, but the spacecraft Soyuz 15 was launched on 1974 August 26 at 19:58:05 UTC (Clark 1988; Newkirk 1990). This manned spacecraft failed to dock with the Salyut 3 space station due to an electronic malfunction and returned to the ground safely two days later. One may argue that 1991~VG might be some part of Soyuz 15 (perhaps some stage of the Proton K/D launch system) as the time match is very good, but Soyuz 15 followed a low-Earth orbit and it is extremely unlikely that any of the stages of the heavy-lift launch vehicle (e.g. the second stage, 8S11K, of 14 m and 11\,715 kg or the third stage of 6.5 m and 4\,185 kg, empty weights) could have escaped the gravitational field of our planet. Although both space debris and active spacecraft have received temporary designations as minor bodies by mistake (see e.g. section 9 of de la Fuente Marcos \& de la Fuente Marcos 2015d), objects with initial conditions coming from artificial paths tend to be removed from Earth's orbital neighbourhood rather quickly (de la Fuente Marcos \& de la Fuente Marcos 2015d). This is to be expected as spacecraft move under mission control commands and trajectories must be corrected periodically. In addition, the orbital solutions of the NEOs mentioned in the previous sections (including that of 1991~VG) did not require the inclusion of any non-gravitational acceleration (e.g. pressure related) to reproduce the available observations, with the exception of two objects, 2006~RH$_{120}$ and 2009~BD, for which the effects of radiation pressure were detected (see e.g. Micheli et al. 2012). Objects of artificial origin are characterized by a low value of their bulk density (the average asteroid density is 2\,600~kg~m$^{-3}$, but 2006~RH$_{120}$ has 400~kg~m$^{-3}$ and 2009~BD may have 640~kg~m$^{-3}$, Micheli et al.
2012) or, conversely, by a high value of their proxy, the Area to Mass Ratio (AMR), which may be $>10^{-3}$~m$^{2}$~kg$^{-1}$ for an artificial object. The lowest values of the bulk density of natural objects are linked to the presence of highly porous rocky materials. The density of a fully loaded 8S11K was about 886 kg m$^{-3}$ and an empty one was significantly less dense at perhaps 62 kg m$^{-3}$ (as a hollow metallic shell); the AMR of an empty 8S11K is close to 5.4$\times$10$^{-3}$~m$^{2}$~kg$^{-1}$. The AMR values of the NEOs cited in this work are compatible with a natural origin for all of them (including 1991~VG). Reproducing the paths followed by objects of artificial origin (e.g. space debris) requires the inclusion of non-gravitational accelerations in order to properly account for the observational data; this is also applicable to inert or active spacecraft and very likely to any putative extraterrestrial artefact. As 1991~VG does not exhibit any of the properties characteristic of artificial objects, it must be natural. \section{Discussion} The data review and analyses carried out in the previous sections show that, although certainly unusual, the orbital properties and the dynamical evolution of 1991~VG are not as remarkable as originally thought. Although initially regarded as a mystery object, it is in fact less of a puzzle and more of a dynamically complex NEO, one of a few dozen that roam temporarily in the neighbourhood of the path of the Earth. Steel (1995a, 1998) hypothesized that 1991~VG could be an alien-made object, some type of self-replicating probe akin to that in von Neumann's concept. Put in perspective, this imaginative conjecture may indeed be compatible with what is observed in the sense that if an inert fleet of this type of alien-made objects is actually moving in Earth-like orbits, they would behave as an equivalent population of natural objects (i.e. NEOs) moving in 1991~VG-like orbits. The presence of a present-day active (i.e. under intelligent control) fleet of alien probes can be readily discarded because the observed objects do not appear to be subjected to any non-gravitational accelerations other than those linked to radiation pressure and perhaps the Yarkovsky effect, and also because of the lack of detection of any kind of alien transmissions. An inert (or in hibernation mode) fleet of extraterrestrial artefacts would be dynamically indistinguishable from a population of NEOs if the values of their AMRs are low enough. However, adopting Occam's razor, when two explanations exist for an observed event, the simpler one must be given precedence. Human space probes, radar, spectroscopic and photometric observations performed over several decades have all shown that, in the orbital neighbourhood of the Earth, it is far more common to detect space rocks than alien artefacts. Scotti \& Marsden (1991) and West et al. (1991) used orbital and photometric data to argue that 1991~VG could be a human-made object, a piece of rocket hardware or an old spacecraft that was launched many decades ago. The orbital evolutions of relics of human space exploration exhibit a number of traits that are distinctive, if not actually unique, and the available observational evidence indicates that none of them are present in the case of 1991~VG and related objects; almost certainly, 1991~VG was never launched from the Earth. The putative fast rotation period and large-amplitude light curve reported in West et al.
(1991) could be compatible with 1991~VG being the result of a relatively recent fragmentation event, where the surface is still fresh, and an elongated boulder is tumbling rapidly; 2014 WA$_{366}$ has an orbit quite similar to that of 1991~VG, so perhaps they are both fragments of a larger object. Tancredi (1997) put forward a novel hypothesis for the origin of 1991~VG: ejecta from a recent lunar impact. Our calculations strongly suggest that objects moving in 1991~VG-like orbits may not be able to remain in this type of orbit for an extended period of time. These integrations indicate that, perhaps, the present-day 1991~VG is less than 10 kyr old in dynamical terms, but impacts on the Moon capable of ejecting objects the size of 1991~VG are nowadays extremely rare. Brasser \& Wiegert (2008) have studied this issue in detail and the last time that a cratering event powerful enough to produce debris consistent with 1991~VG took place on the Moon could be about 1 Myr ago. Unless we assume that 1991~VG was born that way, then left the orbital neighbourhood of the Earth, and recently was reinserted there, the presence of 1991~VG is difficult to reconcile with an origin as lunar ejecta. After discarding an artificial (alien or human) or lunar origin, the option that remains is the most natural one: 1991~VG could be an unusual but not uncommon NEO. We know of dozens of relatively well-studied NEOs that move in Earth-like orbits and our analysis shows that three of them follow 1991~VG-like orbits. All these orbits are characterized by relatively high probabilities of experiencing temporary captures as satellites of the Earth and also of becoming trapped in a 1:1 mean motion resonance with our planet, in both cases in a recurrent manner. We have robust numerical evidence that 1991~VG has been a co-orbital and a satellite of our planet in the past and our calculations predict that these events will repeat in the future. Therefore, the peculiar dynamics of 1991~VG is not so remarkable after all, when studied within the context of other NEOs moving in Earth-like orbits. In addition to the few discussed here, multiple examples of these behaviours can be found in the works by de la Fuente Marcos \& de la Fuente Marcos (2013, 2015a,b, 2016a,b). NEOs moving in 1991~VG-like orbits tend to spend less than one month as satellites (i.e. inside the Hill sphere) of our planet and follow paths of the horseshoe type when moving co-orbital (i.e. outside the Hill radius) to our planet; they appear to avoid the Trojan and quasi-satellite resonant states (see e.g. de la Fuente Marcos \& de la Fuente Marcos 2014, 2015a) perhaps because their semimajor axes differ comparatively widely from that of the Earth. Another unusual dynamical property of 1991~VG and related objects is that of being subjected to the Kozai-Lidov mechanism (see Fig.~\ref{control}, second to bottom panel, libration in $\omega$), at least for brief periods of time. This is also a common behaviour observed for many NEOs moving in Earth-like orbits (see e.g. de la Fuente Marcos \& de la Fuente Marcos 2015d). In a seminal work, Rabinowitz et al. (1993) argued that 1991~VG and other NEOs were signalling the presence of a secondary asteroid belt around the path of our planet. These objects are unofficially termed the Arjunas and form a loosely resonant family of small NEOs which make up the near-Earth asteroid belt.
It is difficult to imagine that all these objects were formed as they are today within the main asteroid belt and eventually found their way into the NEO population unchanged. None the less, the Hungarias have been suggested as a possible direct source for this population (Galiazzo \& Schwarz 2014). NEO orbit models like the one used here show that this is possible, but it is unclear whether the known delivery routes are efficient enough to explain the size of the current population of small NEOs, and in particular the significant number of Arjunas like 1991~VG. Fragments can also be produced {\it in situ} (i.e. in the neighbourhood of the path of the Earth) by multiple mechanisms: subcatastrophic impacts (see e.g. Durda et al. 2007), tidal disruptions after close encounters with planets (see e.g. Schunov{\'a} et al. 2014) or the action of the Yarkovsky--O'Keefe--Radzievskii--Paddack (YORP) mechanism (see e.g. Bottke et al. 2006). These processes can generate dynamically coherent groups of genetically related objects ---although YORP spin-up is considered dominant (see e.g. Jacobson et al. 2016)--- which may randomize their orbits on a relatively short time-scale as they move through an intrinsically chaotic environment (see Fig.~\ref{disper}). In addition, the superposition of mean motion and secular resonances creates dynamical families of physically unrelated objects (see e.g. de la Fuente Marcos \& de la Fuente Marcos 2016c) which intertwine with true families resulting from fragmentation. On a more practical note, all these objects are easily accessible targets for any planned NEO sample-return missions (see e.g. Garc{\'{\i}}a Y{\'a}rnoz et al. 2013) or even commercial mining (see e.g. Lewis 1996; Stacey \& Connors 2009). Bolin et al. (2014) have predicted that, while objects like 1991~VG or 2006~RH$_{120}$ are extremely challenging to discover using the currently available telescope systems, the Large Synoptic Survey Telescope or LSST (see e.g. Chesley et al. 2009) may be able to start discovering them on a monthly basis when in full operation, commencing in January 2022. Our independent assessment of the current dynamical status and short-term orbital evolution of 1991~VG leads us to the same basic conclusion reached by Brasser \& Wiegert (2008): asteroid 1991~VG had to have its origin in a low-inclination Amor- or Apollo-class object. However, given its size, it must be a fragment of a larger object and as such it may have been produced {\it in situ}, i.e. within the orbital neighbourhood of the Earth--Moon system, during the relatively recent past (perhaps a few kyr ago). \section{Conclusions} In this paper, we have studied the dynamical evolution of 1991~VG, an interesting and controversial NEO. This investigation has been carried out using $N$-body simulations and statistical analyses. Our conclusions can be summarized as follows. \begin{enumerate}[(i)] \item Asteroid 1991~VG currently moves in a somewhat Earth-like orbit, but it is not an Earth co-orbital at present. It has been a transient co-orbital of the horseshoe type in the past and it will return as such in the future. \item Extensive $N$-body simulations confirm that the orbit of 1991~VG is chaotic on time-scales longer than a few decades. \item Our calculations confirm that 1991~VG was a natural satellite of our planet for about one month in 1992 and show that this situation may have repeated multiple times in the past and is expected to happen again in the future.
Being a recurrent ephemeral natural satellite of the Earth is certainly unusual, but a few other known NEOs exhibit this behaviour as well. \item A realistic NEO orbit model shows that, although quite improbable, the presence of objects moving in 1991~VG-like orbits is not impossible within the framework defined by our current understanding of how minor bodies are delivered from the main asteroid belt to the NEO population. \item Consistently, we find three other minor bodies ---2001~GP$_{2}$, 2008~UA$_{202}$ and 2014~WA$_{366}$--- that move in orbits similar to that of 1991~VG. \item NEOs moving in 1991~VG-like orbits have a probability close to 0.004 of becoming transient irregular natural satellites of our planet. \item Our results show that, although 1991~VG features unusual orbital properties and dynamics, there is no compelling reason to consider that it could be a relic of human space exploration, and it is definitely not an alien artefact or probe. \end{enumerate} The remarkable object 1991~VG used to be considered mysterious and puzzling, but the new data cast serious doubt on any possible origin for this NEO other than a natural one. We find no evidence whatsoever of an extraterrestrial or intelligent origin for this object. Spectroscopic studies during its next perigee in 2018 February may be able to provide better constraints on its most plausible source, in particular whether it is a recent fragment or not. \section*{Acknowledgements} We thank the referee, M. Granvik, for his constructive, thorough and very helpful reports, S.~J. Aarseth for providing the code used in this research, A.~I. G\'omez de Castro, I. Lizasoain and L. Hern\'andez Y\'a\~nez of the Universidad Complutense de Madrid (UCM) for providing access to computing facilities. This work was partially supported by the Spanish `Ministerio de Econom\'{\i}a y Competitividad' (MINECO) under grant ESP2014-54243-R. Part of the calculations and the data analysis were completed on the EOLO cluster of the UCM, and we thank S. Cano Als\'ua for his help during this stage. EOLO, the HPC of Climate Change of the International Campus of Excellence of Moncloa, is funded by the MECD and MICINN. This is a contribution to the CEI Moncloa. In preparation of this paper, we made use of the NASA Astrophysics Data System, the ASTRO-PH e-print server, and the MPC data server.
\section{Introduction} \label{sec:intro} The sound event classification (SEC) task consists of identifying a set of sound events in an audio recording \cite{fayek2019sound}. Designing signal processing algorithms to assess and extract this information is a key step in several applications such as multimedia indexing based on audio content, context-aware mobile devices, interactive robots, and surveillance systems, among many others \cite{mesaros2018multi}. Consequently, over the past few years, interest in SEC has been increasing and different works have been proposed to handle this task \cite{salamon2017deep, ozer2018noise, fayek2019sound, lu2020deep}. In order to perform the SEC task, the first step is usually to apply an algorithm to extract features from an audio sample. Common approaches are Mel-frequency cepstral coefficients (MFCC), zero-crossing rate (ZCR), and linear predictive coding (LPC) \cite{qawaqneh2017deep, lu2020deep}. Next, the extracted features can be used as inputs for a classifier, such as Support Vector Machines (SVM) \cite{pedersen2007accent}. Recently, different works have proposed to use convolutional neural networks (CNNs) to perform audio classification \cite{lee2009unsupervised, salamon2017deep, lu2020deep}. In most of these works, the authors propose to convert the audio recordings into spectrograms, resulting in a 2D representation of frequency \textit{vs.} time \cite{lu2020deep}. In this context, the Mel-spectrogram has become a popular method to convert an audio signal into a 2D representation that can be used as input for popular CNN architectures. Traditionally, in many CNN models, the final classification is performed based on the maximum a posteriori (MAP) estimation, which does not consider the statistics of the intermediate activations directly. This architecture design can lead to unexpected behaviour when the network's input has a different distribution from the training data. Such variations are even more common in SEC due to the intrinsic properties of audio signals, which drastically suffer from additive noise. In contrast to previous approaches, in this work we propose a methodology that directly takes into account the activations of all the network layers to produce a metric, which is used to perform classification. This metric relies on feature-wise correlations computed using Gram Matrices across the intermediate CNN features. The idea of extracting feature-wise correlations using Gram Matrices was previously proposed by Gatys \textit{et al.}~\cite{gatys2016image}, resulting in a breakthrough algorithm to perform image style transfer. Recently, Sastry and Oore \cite{sastry2020detecting} proposed the Gram-OOD, an algorithm that uses Gram Matrices to compute CNN feature-wise correlations in order to tackle the out-of-distribution (OOD) detection problem. Later, Pacheco \textit{et al.}~\cite{pacheco2020out} proposed the Gram-OOD*, a lighter version of the original algorithm that introduces a normalization step and assesses fewer layers than the original method. Both algorithms are agnostic to the model and can be applied to any neural network architecture. In this work, we propose the Gram-Classifier, an adaptation of the Gram-OOD* method that uses the Gram Matrices of the feature maps of a CNN to predict a class label for a given sample. The proposed method benefits from a statistical analysis of intermediate CNN activations, resulting in better robustness against variations.
We evaluated the proposed method for sound event classification on two datasets using four well-known CNN architectures. The obtained results show that our method performs better than the baselines using fully-connected layers and MAP classification, resulting in an average improvement of 3\% in terms of classification accuracy. This improvement indicates that our method is able to better exploit the potential of the feature maps generated by CNN models. The remainder of this paper is organized as follows. In Section 2, we describe the Gram Matrix, how to compute feature-wise correlations, and the proposed methodology. In Section 3, we present the experimental evaluation and, in Section 4, we draw our conclusions. \section{Background and Proposed Method} \label{sec:methods} As previously mentioned, our method is inspired by \cite{sastry2020detecting} and \cite{pacheco2020out}. Therefore, in this section, we revisit the idea of feature correlation using the Gram Matrix and then describe the proposed Gram-Classifier for sound event classification. \subsection{Features correlation with Gram Matrix} Let us consider a set of vectors $V = \{ \mathbf{v}_1, \cdots, \mathbf{v}_k \}$ in which $\mathbf{v}_k \in \mathbb{R}^z$. Essentially, $V$ is defined by the following matrix: \begin{equation} \label{eq:matrix_of_vs} V = \begin{bmatrix} \mathbf{v}_1\\ \vdots\\ \mathbf{v}_k \end{bmatrix} = \begin{bmatrix} v_{11} & \cdots & v_{1z}\\ \vdots & \ddots & \vdots \\ v_{k1} & \cdots & v_{kz} \end{bmatrix}. \end{equation} \noindent The matrix composed of the pairwise scalar products $\left \langle \mathbf{v}_i, \mathbf{v}_j \right \rangle$ is called the Gram Matrix of $V$ \cite{boyd2018introduction}: \begin{equation} \label{eq:G} G = \begin{bmatrix} \left \langle \mathbf{v}_1, \mathbf{v}_1 \right \rangle & \cdots & \left \langle \mathbf{v}_1, \mathbf{v}_k \right \rangle \\ \vdots & \ddots & \vdots \\ \left \langle \mathbf{v}_k, \mathbf{v}_1 \right \rangle & \cdots & \left \langle \mathbf{v}_k, \mathbf{v}_k \right \rangle \end{bmatrix}. \end{equation} \noindent Since the vectors $\mathbf{v}_i$ are the rows of $V$, this operation can be rewritten in matrix form as: \begin{equation} \label{eq:original_gram_mat} G = V V^T \end{equation} \noindent The scalar product between two vectors may be interpreted as a similarity measure. In other words, it expresses the vectors' correlation. In this sense, the Gram Matrix measures the pairwise correlations of the set of vectors in $V$. \subsection{Using Gram Matrices to extract features deviation of CNN layers} Let us consider a trained CNN composed of $L$ activation layers in which the representation at the $l^{th}$ layer consists of $K$ feature maps, each of size $m \times n$. Considering the $l^{th}$ layer, we interpret each feature map as a vector $\mathbf{v}_k \in \mathbb{R}^{m*n}$ and stack them as a two-dimensional matrix $F_l$ similar to the one presented in Eq.~\ref{eq:matrix_of_vs}. Essentially, this matrix stores all feature maps extracted by a CNN for a given layer. Finally, as we are dealing with a classification problem, let us also consider a dataset composed of $c \in \{1, \cdots, C\}$ classes and training ($\textrm{Tr}$), validation ($\textrm{Va}$), and testing ($\textrm{Te}$) partitions. \subsubsection{Gram matrix of feature maps} \label{sec:gram_mat_feat} The first step of the method is to compute the Gram Matrix of each $F_l$ according to Eq.
\ref{eq:original_gram_mat} \cite{sastry2020detecting, pacheco2020out}: \begin{equation} \label{eq:gram_mat} G_l = F_l{F_l}^T \end{equation} \noindent As described in Eq. \ref{eq:G}, the $k^{th}$ row of the $G_l$ matrix represents the pairwise correlations of feature map $k$ with all others. Assessing every single correlation is redundant and impracticable. Therefore, we aggregate the rows of $G_l$ to obtain the accumulated pairwise correlation for each feature map: \begin{equation} \label{eq:accum_corr} \hat{g}_{lk} = \sum_{i=1}^{K} \left \langle \mathbf{v}_k, \mathbf{v}_i \right \rangle \;\; \Rightarrow \;\; \hat{G}_{l} = \begin{bmatrix} \hat{g}_{l1} \\ \vdots \\ \hat{g}_{lK} \end{bmatrix} \end{equation} \noindent Lastly, in order to ensure that all $\hat{G}_l$ matrices have the same scale, we normalize their values according to \cite{pacheco2020out}: \begin{equation} \label{eq:norm_G} \tilde{G}_{l} = \frac{ \hat{G}_{l} - \min(\hat{G}_{l}) }{\max(\hat{G}_{l})-\min(\hat{G}_{l} )}. \end{equation} \subsection{Deviation from features correlation} Let us suppose we have a test sample $\breve{\mathbf{x}}$ and we want to compute a metric of how much the feature maps extracted from $\breve{\mathbf{x}}$ deviate from the training samples. For this, we use the matrix of accumulated correlations $\tilde{G}_{l}$, which represents a global descriptor for each CNN layer. This process is detailed as follows. First, considering the training partition $\textrm{Tr}$, we compute the element-wise (i.e., per feature map) minimum ($\lambda$) and maximum ($\Lambda$) values of $\tilde{G}_l$ over all training samples with respect to the layer $l$: \begin{equation} \label{eq:mins} \lambda_{l} = \min\left[\tilde{G}_l(X)\right] \end{equation} \begin{equation} \label{eq:maxs} \Lambda_{l} = \max\left[\tilde{G}_l(X)\right] \end{equation} \noindent where $X$ represents all samples in $\textrm{Tr}$. Essentially, the method assumes that $\tilde{G}_l$ may be approximated by a uniform distribution and $\{\lambda_{l}, \Lambda_{l}\}$ define the limits of this distribution for each layer. It is important to note that this step is performed offline, i.e., we compute $\{\lambda_{l}, \Lambda_{l}\}$ and store them to be used during inference. The next step is to generate the deviation metric, which is computed based on the following equation \cite{sastry2020detecting}: \begin{equation} \delta(\lambda, \Lambda, g) = \begin{cases} 0 & \textrm{if} \; \lambda \leq g \leq \Lambda \\ \frac{\lambda - g}{\mid \lambda \mid} & \textrm{if} \; g < \lambda \\ \frac{g - \Lambda}{\mid \Lambda \mid} & \textrm{if} \; g > \Lambda, \end{cases} \end{equation} \noindent where $g$ is a single value of $\tilde{G}_l$ and $\lambda$ and $\Lambda$ are the corresponding minimum and maximum values extracted from the training data. Therefore, for the testing sample $\breve{\mathbf{x}}$, we compute $\delta$ for each layer $l$: \begin{equation} \delta_l(\breve{\mathbf{x}}) = \sum_{k=1}^{K} \delta(\lambda_{l}[k],\Lambda_{l}[k],\tilde{G}_{l}(\breve{\mathbf{x}})[k]). \end{equation} \noindent Finally, we aggregate the deviation for all layers to produce the total deviation: \begin{equation} \label{eq:Delta} \Delta(\breve{\mathbf{x}}) = \sum_{l=1}^L \frac{\delta_l(\breve{\mathbf{x}})}{ \mathbb{E}_{\textrm{Va}} \left[ \delta_l \right ] } \end{equation} \noindent where $\mathbb{E}_{\textrm{Va}} \left[ \delta_l \right ]$ is the expected deviation at layer $l$ computed using the validation partition $\textrm{Va}$. Computing the normalized sum of layer-wise deviations helps to account for variations in the scale of the layer-wise deviations ($\delta_l$), which depends on the number of channels in the layer ($K$), the number of pixels per channel ($m\times n$), and the semantic information contained in the layer \cite{pacheco2020out}.
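For concreteness, the statistics of Eqs.~(\ref{eq:gram_mat})--(\ref{eq:Delta}) can be sketched in NumPy as follows (a minimal illustration with variable names of our own choosing; small constants are added to avoid division by zero, and the publicly available implementation referenced in Section~\ref{sec:experiments} remains the reference version):
\begin{verbatim}
import numpy as np

def layer_correlations(F):
    """Normalized accumulated Gram correlations, Eqs. (4)-(6).
    F: array (K, m*n) with the K flattened feature maps as rows."""
    G = F @ F.T                        # Gram matrix, Eq. (4)
    g_hat = G.sum(axis=1)              # accumulated correlations, Eq. (5)
    span = g_hat.max() - g_hat.min()
    return (g_hat - g_hat.min()) / (span + 1e-12)  # Eq. (6)

def fit_limits(train_feats):
    """Per-layer, per-feature-map limits on Tr, Eqs. (7)-(8).
    train_feats: list over samples; each sample is a list of F_l."""
    lam, Lam = [], []
    for l in range(len(train_feats[0])):
        corr = np.stack([layer_correlations(s[l]) for s in train_feats])
        lam.append(corr.min(axis=0))   # lambda_l, one entry per map
        Lam.append(corr.max(axis=0))   # Lambda_l
    return lam, Lam

def total_deviation(sample_feats, lam, Lam, expected_dev):
    """Total deviation Delta of one sample, Eqs. (9)-(11);
    expected_dev[l] is E_Va[delta_l] from the validation partition."""
    total = 0.0
    for l, F in enumerate(sample_feats):
        g = layer_correlations(F)
        below = (lam[l] - g) / (np.abs(lam[l]) + 1e-12)
        above = (g - Lam[l]) / (np.abs(Lam[l]) + 1e-12)
        delta_l = np.where(g < lam[l], below,
                           np.where(g > Lam[l], above, 0.0)).sum()
        total += delta_l / expected_dev[l]
    return total
\end{verbatim}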
\subsection{Gram-Classifier: a method to use feature-wise correlations to perform classification} Now that we have presented how to compute the total deviation from feature map correlations, let us introduce our method to perform classification. We start by selecting the CNN layers to be considered in the total deviation $\Delta$. Essentially, taking into account all layer deviations $\delta$ is not efficient and may not contribute to classifying a given sample. Therefore, we propose a way to select the layers of interest. As we are handling a classification problem composed of $c \in C$ classes, let us consider $\text{Tr}_c^+$ as the training samples of class $c$, and $\text{Tr}_c^-$ as the training samples of the other classes. We define $D^{c^+}_{\delta_l}$ and $D^{c^-}_{\delta_l}$ as the distributions of the deviation $\delta$ at layer $l$ for each group, i.e., the values of $\delta$ stratified per layer and class. Next, we compute the distance between the distributions $D^{c^+}_{\delta_l}$ and $D^{c^-}_{\delta_l}$ using the Wasserstein distance ($W_d$) \cite{villani2009wasserstein} to generate a score $I_l$ for each layer $l$: \begin{equation} \label{eq:wasserstain} I_l = W_d ( D^{c^+}_{\delta_l},D^{c^-}_{\delta_l} ) \end{equation} \noindent Finally, we select as layers of interest those with the highest values of $I_l$. In order to perform the classification for a new test sample $\breve{\mathbf{x}}$, we compute the total deviation $\Delta(\breve{\mathbf{x}})$ -- with respect to each class and using the layers selected in the previous step -- as follows: \begin{equation} \boldsymbol{\Delta}(\breve{\mathbf{x}}) = \left [ \Delta_1 (\breve{\mathbf{x}}), \Delta_2 (\breve{\mathbf{x}}), \cdots, \Delta_C(\breve{\mathbf{x}}) \right ] \end{equation} \noindent Each $\Delta_c$ represents how much the test sample $\breve{\mathbf{x}}$ deviates from the training feature maps with respect to the class $c$. In other words, the lower the deviation, the closer the sample is to the class $c$. Therefore, the predicted class for $\breve{\mathbf{x}}$ is determined as follows: \begin{equation} \text{pred}(\breve{\mathbf{x}}) = \text{argmin} \left [ \boldsymbol{\Delta}(\breve{\mathbf{x}}) \right ] \end{equation} \noindent Essentially, as the total deviation over the classes relies on the correlations extracted from the Gram Matrix, we choose the class that is most similar to the test sample. \section{Experimental evaluation} \label{sec:experiments} In this section, we carry out experiments\footnote{Code is available at https://github.com/a-joia/Gram-Classifier} in order to evaluate the performance of the proposed method on a set of trained classifiers. We use four different CNN architectures trained on two sound event classification datasets. In the following, we describe the experimental setup and present the obtained results. \subsection{Datasets and metrics} We evaluate our method on two datasets: \begin{itemize} \item DCASE 2020 Task 1B\footnote{http://dcase.community/challenge2020/task-acoustic-scene-classification\#subtask-b}: an acoustic scene classification dataset composed of 40 hours of data and three major classes: indoor, outdoor, and transportation.
\item DCASE 2019 Task 1A\footnote{http://dcase.community/challenge2019/task-acoustic-scene-classification-results-a}: an acoustic scene classification dataset composed of 40 hours of data and 10 different classes. The data is split into segments of 10 seconds each. \end{itemize} Both the DCASE 2020 and DCASE 2019 datasets have standard training, validation, and testing partitions; however, the ground truth for the testing partition is not publicly available. Therefore, we evaluated the performance of our approach using the public evaluation set. Particularly for DCASE 2019, we submitted our results on the leaderboard partition to be evaluated on Kaggle\footnote{https://www.kaggle.com/c/dcase2019-task1a-leaderboard/}. As evaluation metrics, we report the average classification accuracy (ACC) and the balanced accuracy (BA). \subsection{Experimental setup} In order to assess the performance of the proposed method, we train four well-known CNN architectures with random weight initialization. The baseline architectures are ResNet50 \cite{he2016deep}, DenseNet-121 \cite{huang2017densely}, VGGNet-16 \cite{simonyan2014very}, and MobileNet-v2 \cite{sandler2018mobilenetv2}. We trained all networks for 100 epochs without data augmentation and used the RAdam optimizer~\cite{liu2019radam} with a learning rate of 0.001 and a batch size of 32. We select the model weights based on the best validation score. For both datasets, we resampled all the segments to a rate of 16 kHz and used 128-dimensional Mel-spectrograms as input features for each CNN. The spectrograms were extracted using a 1024-bin fast Fourier transform applied to temporal windows of 40 milliseconds with 20 milliseconds of overlap. For each CNN model, we compared the results of the original architecture using the MAP estimation with the proposed Gram-Classifier. We also compared all the results with the available reference baseline scores for each benchmark previously described. \subsection{Results} In this section, we present the results obtained for the previously described datasets. Table \ref{tab:dcase2020_results} presents the results for the DCASE 2020 dataset in terms of accuracy and balanced accuracy. \begin{table} [!htp] \tiny \resizebox{\columnwidth}{!}{% \begin{tabular}{c|cc|cc} \hline \multirow{3}{*}{\textbf{Model}} & \multicolumn{4}{c}{\textbf{DCASE 2020 Task 1b}} \\ \cline{2-5} &\multicolumn{2}{c|}{Baseline} & \multicolumn{2}{c}{Gram-Classifier} \\ \cline{2-5} & ACC & BA & ACC & BA \\ \hline ResNet50 & 91.54 & 91.54 & \textbf{93.66} & \textbf{93.62} \\ DenseNet-121 & 93.26 & 93.35 & \textbf{93.50} & \textbf{93.45} \\ VGGNet-16 & 90.36 & 90.62 & \textbf{92.38} & \textbf{92.41} \\ MobileNet-v2 & 92.68 & 92.72 & \textbf{93.70} & \textbf{93.61} \\ \hline \hline AVG & 91.96 & 92.05 & 93.31 & 93.27 \\ \hline \hline {DCASE Baseline} & 88.00 & NA & NA & NA \\\hline \end{tabular} } \caption{Experimental results of the proposed method compared with the baseline models and the competition baseline for the DCASE 2020 dataset. AVG is the average performance considering all models.} \label{tab:dcase2020_results} \end{table} As we can see, the proposed method generally performs better than the CNN models using maximum a posteriori estimation and the reference baseline. Quantitatively, it provides an average improvement of around 1.35\% in classification accuracy across all models.
Compared to the competition baseline, the CNN models with MAP improve the classification accuracy by around 4\% and the Gram-Classifier by around 5\%. In Table \ref{tab:dcase2019_results}, we present the results for the DCASE 2019 dataset. For this dataset, beyond comparing the CNN models' performance on the public test partition, we also include the private results (the P.ACC metric in the table). \begin{table}[!htp] \resizebox{\columnwidth}{!}{% \begin{tabular}{c|cc|c|cc|c} \hline \multirow{3}{*}{\textbf{Model}} & \multicolumn{6}{c}{\textbf{DCASE 2019 Task 1A}} \\ \cline{2-7} &\multicolumn{3}{c|}{Baseline} & \multicolumn{3}{c}{Gram-Classifier} \\ \cline{2-7} & ACC & BA & P. ACC & ACC & BA & P. ACC \\ \hline ResNet50 & 62.16 & 62.34 & 64.66 & \textbf{63.15} & \textbf{63.56} & \textbf{64.83} \\ DenseNet-121 & 63.48 & 63.61 & 64.16 & \textbf{65.22} & \textbf{65.35} & \textbf{66.83} \\ VGGNet-16 & 60.02 & 60.25 & 63.17 & \textbf{61.18} & \textbf{61.42} & \textbf{66.83} \\ MobileNet-v2 & 60.31 & 60.41 & 61.83 & \textbf{62.78} & \textbf{62.96} & \textbf{63.66} \\ \hline \hline AVG & 61.45 & 61.615 & 63.66 & \textbf{64.47} & \textbf{63.72} & \textbf{65.79} \\ \hline \hline DCASE Baseline & 62.5 & NA & 63.00 & NA & NA & NA \\\hline \end{tabular} } \caption{Experimental results of the proposed method compared to the baseline models and the competition baseline for the DCASE 2019 dataset. AVG is the average performance considering all models.} \label{tab:dcase2019_results} \end{table} Observing Table \ref{tab:dcase2019_results}, we notice that the proposed classifier provides better performance for all CNN models. It improves the average accuracy by around 3\% for the public test partition and by around 2\% for the private one. Considering the reference baseline, the Gram-Classifier improves the accuracy by over 2\% for both partitions. \subsection{Discussion} The experimental results achieved in this section indicate that using a classifier that takes into account more layers within the CNN may improve the classification performance. Intuitively, lower-level feature maps store representations that may be exploited by a classifier. The main contribution of the proposed method is to provide a way to compute a metric that considers all layers across the network. However, it is worth noting that we are using a pre-trained model, and the proposed method does not contribute during the training phase of the network. The results achieved for sound event classification suggest that this method is particularly appropriate for this task. The feature maps generated by convolutional networks trained on spectrograms of acoustic sound events carry relevant spatial information that may be useful to improve classification. Essentially, the feature-wise correlation computed using the Gram Matrix seems to be an appropriate approach to correlating this spatial information among the sound samples within the same class. \section{Conclusions} \label{sec:conclusions} In this paper, we propose a new approach to perform sound event classification using Convolutional Neural Networks (CNNs) and Gram Matrices. As described, our method computes the Gram Matrices of the feature maps within the CNN and uses them to determine a deviation metric, which is used to perform the classification. An advantage of this method is that it is agnostic to the CNN model, i.e., it can be easily applied to any type of model. We performed experiments using four well-known CNN architectures trained on two benchmarks.
The obtained results show that our method improved the classification performance for all CNN models on both benchmarks. Quantitatively, it provides an average improvement of around 1\% to 3\% in classification accuracy. Despite the promising results, there is still room for improvement. In the near future, we intend to investigate methods to improve the representation of the features, aiming at better spatial separation. \bibliographystyle{IEEEbib}
\section{The NO$\mathbf{\nu}$A Experiment} The NuMI\footnote{Neutrinos at the Main Injector} Off-Axis $\nu_e$ Appearance (NO$\nu$A) experiment is a long baseline neutrino oscillation experiment designed to measure the oscillation parameter $\theta_{13}$ through the observation of muon neutrinos oscillating to electron neutrinos. Depending on how large $\theta_{13}$ is, NO$\nu$A will also be able to address the neutrino mass hierarchy and charge-parity violation. In addition to these measurements, NO$\nu$A will make precision measurements of the oscillation parameters $\theta_{23}$ and $\Delta m_{23}^2$ as seen in Fig. \ref{sensitivity}. To make these measurements NO$\nu$A will use two detectors to measure the NuMI beam created at Fermilab in Batavia, IL. The NuMI beam provides a source of neutrinos by colliding 120 GeV protons with a graphite target \cite{numi}. The collisions produce primarily pions and kaons, with one sign of these charged particles focused into a beam using magnetic horns. The pions and kaons decay producing muon neutrinos a majority of the time. Both NO$\nu$A detectors sit 14 milliradians off-axis to the NuMI beam and are functionally equivalent to each other, with the only difference being the overall mass of each detector. The first of these detectors, the near detector, is 220 tons and is located 1 km downstream from the target, 105 m underground, and measures the initial composition of the NuMI beam. The second of the detectors, the far detector, is 14 ktons in mass, is located in Ash River, MN near the Canadian border 810 km from the near detector, and measures the oscillated composition of the NuMI beam. The off-axis location of the detectors results in a narrow energy spectrum beam of muon neutrinos around 2 GeV, which is close to the oscillation maximum for an 810 km baseline. Currently a prototype of the NO$\nu$A detectors has been constructed on the surface at Fermilab and is taking data, while the far and near detectors are scheduled to start construction this winter. A full description of the NO$\nu$A experiment is documented in the NO$\nu$A Technical Design Report \cite{TDR}. \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure1.pdf} \caption{Confidence limits on precision measurements of $\Delta m_{23}^2$ and $\sin^2(2\theta_{23})$ after 6 years of running assuming a value of $2.35\times10^{-3}$ eV$^2$ for $\Delta m_{23}^2$ and several best fit values of $\sin^2(2\theta_{23})$.} \label{sensitivity} \end{figure*} \section{The NO$\mathbf{\nu}$A Detectors} The NO$\nu$A detectors are designed to measure the oscillation of muon neutrinos to electron neutrinos. The primary goal of the NO$\nu$A detectors is to resolve the event topologies of electron neutrino charged current interactions, which result in an electromagnetic shower from the electron produced in this event. Additionally, the detectors must be able to reconstruct long muon tracks coming from muon neutrino charged current interactions. In order to be sensitive to neutrino interaction topologies the NO$\nu$A detectors are constructed in a cellular structure. Each cell is made out of reflective PVC and filled with liquid scintillator (mineral oil doped with $\sim$5\% pseudocumene), resulting in 2 GeV muons having a mean path length of 10 m.
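As a rough cross-check (our own estimate, assuming a minimum-ionizing energy loss of about 2~MeV\,cm$^{2}$\,g$^{-1}$ and a bulk density close to that of mineral oil, $\approx$0.86~g\,cm$^{-3}$), the range of a 2 GeV muon is
\[
   R \approx \frac{2000\ {\rm MeV}}{(2\ {\rm MeV\,cm^{2}\,g^{-1}})\,(0.86\ {\rm g\,cm^{-3}})} \approx 1.2\times10^{3}\ {\rm cm} \approx 12\ {\rm m},
\]
broadly consistent with the quoted mean path length of 10 m once the denser PVC cell walls are taken into account.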
The cell dimensions for the far detector are 4 cm by 6 cm by 15 m, with each cell being 0.15 radiation lengths wide. The cells of the near detector have the same dimensions except for the length. Each cell contains a loop of wavelength shifting fiber with both ends connected to a single pixel of an avalanche photodiode (APD). 32 cells make up a planar detector module, with each module connected to a single APD. Each APD is connected to a front end board which amplifies and shapes the APD signals, creating a digitized record of measurements that is sent to the rest of the data acquisition system. Modules are glued together to form individual planes of the detector, with 12 modules per plane in the far detector and 2 or 3 modules per plane in the near detector. The detectors are constructed by gluing planes together with each detector plane rotated orthogonally to the previous plane. The detectors are placed such that the planes are oriented perpendicular to the neutrino beam. The alternating orientation of the detector planes gives two independent detector views which can be reconstructed into full three dimensional events. The PVC cellular structure forms a ``fully active" liquid scintillator tracking calorimeter. With this design, the near detector will have a cosmic rate of 50 Hz and will see 30 neutrino events per beam spill with a 10 $\mu$sec beam spill every 1.33 s, while the far detector will have a cosmic rate of 200 kHz and will see 3-4 neutrino events per day. In addition to the near and far detectors, a prototype detector has been constructed and is currently taking data. The prototype detector utilizes the same detector technology as the near and far detectors and is of equivalent size to the near detector. The prototype detector is located above ground approximately 1 km from the target, 110 milliradians off-axis to the beam. Currently the prototype detector is partially instrumented and taking data with a cosmic rate of 2-3 kHz; it would see approximately 19 neutrino events per day if fully instrumented. \section{Track Reconstruction and Application} \subsection{Track Reconstruction Application} Track reconstruction provides a general utility to help accomplish a wide range of goals in the NO$\nu$A experiment, from physics analysis to monitoring detector performance. Accurate track reconstruction forms the basis for determining the oscillation parameters that the NO$\nu$A experiment aims to measure. Specifically, reconstruction of muon tracks is necessary to understand the muon neutrino beam composition. In addition to its application to specific physics analyses, another example of the utility of track reconstruction is in detector calibration. Since the detectors will see a relatively large cosmic flux, reconstruction of cosmic muon tracks will be used to perform several levels of detector calibration. One type of calibration corrects for the differences in signal pulse heights, caused by fiber attenuation, from tracks going through cells at different distances from the APD. Determining this correction relies on accurate track reconstruction to determine the distance from the APD readout at which a track passes through the cell. This effect is shown in Fig. \ref{adc} and its correction in Fig. \ref{calibration}. Another type of calibration corrects for any cell-to-cell differences in measured pulse heights of reconstructed cosmic ray tracks. Finally, an absolute energy calibration will be performed based on Michel electrons and the stopping power of stopped muons. \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure2.pdf} \caption{Path length-corrected muon response for different distances from fiber end for a single example cell.
W is the position of the track in the cell's long dimension, with 175 cm corresponding to the end closest to the APD and -175 cm corresponding to the looped fiber end. The small peak close to an ADC value of 75 is due to cell edge effects.} \label{adc} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure3.pdf} \caption{Muon response after attenuation calibration obtained from Fig. \ref{adc}.} \label{calibration} \end{figure*} \subsection{Reconstruction Method} Several track reconstruction methods for the NO$\nu$A detectors have been developed to address the many tracking applications, some of which were given above. One of these methods, based on Kalman filters, will be presented. The Kalman filter approach to track reconstruction was chosen because the formalism allows for both the finding and fitting of tracks in one process. Also, it can find multiple tracks within a group of time-correlated hits, which is necessary to separate particles coming from the same vertex. Additionally, the Kalman filter routine can be extended to allow for the proper handling of multiple scattering \cite{billoir,fruhwirth}. Currently the reconstruction has been developed for straight tracks as an approximation to the true particle tracks, which show nonlinear effects due to multiple scattering. The reconstruction takes place in three steps. The first step applies a base-level calibration of the hits recorded in the detector, correcting for cell-to-cell differences in the detectors. An event display showing the calibrated hits in a full trigger window from cosmic data taken with the prototype detector is shown in Fig. \ref{evd}. The second step takes all the hits recorded in the full trigger window and clusters them into groups associated together in time by looking for a minimum level of activity in the detector without large time gaps between hits. For reference, the prototype detector's trigger window is 500 $\mu$s, with groups of time-clustered hits averaging a $\sim$900 ns duration. Figure \ref{slicer} shows the time grouping of hits from the data shown in Fig. \ref{evd}. The color indicates hits that have been grouped together. The final step of the reconstruction takes all the individual time groups of hits and applies a geometric pattern recognition routine to find tracks. The pattern recognition routine is made up of three major subroutines. The first subroutine forms track seeds by assuming that adjacent hits in each time-grouped cluster belong to the same track in each independent detector view. The second subroutine then uses a Kalman filter to propagate the track seeds plane by plane through the detector, adding hits to the track that are consistent with the track. The consistency of a hit is determined based on the change in $\chi^2$ of the track fit from the inclusion of the hit. The track fit is updated after the addition of any hit using the Kalman filter to perform a weighted-average fit of the track to the hits. The third subroutine then takes all the tracks found in each independent view and matches them together, forming a three dimensional reconstructed track. The reconstruction method requires that each track passes through at least 3 planes and has at least 4 hits in each view. Figure \ref{tracks} shows the fully reconstructed tracks found in the data shown in Fig. \ref{evd}. The colored hits now indicate hits that belong to the same track with the line showing the track fit.
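As an illustration of the second subroutine, the plane-by-plane predict-and-update cycle for a straight track in a single detector view can be sketched as follows (a simplified, self-contained sketch; the variable names, the assumed cell resolution and the neglect of multiple scattering are our own simplifications, not the actual NO$\nu$A implementation):
\begin{verbatim}
import numpy as np

def kalman_track_1d(z_planes, x_hits, sigma=0.02):
    """Fit a straight track x(z) = x0 + s*z with a Kalman filter.
    z_planes: plane positions along the beam direction (m)
    x_hits:   one measured transverse hit position per plane (m)
    sigma:    assumed single-cell position resolution (m)
    Returns the filtered [position, slope] state, its covariance
    and the accumulated chi2 at the last plane."""
    x = np.array([x_hits[0], 0.0])    # seed: first hit, zero slope
    P = np.diag([sigma**2, 1.0])      # loose initial slope uncertainty
    chi2 = 0.0
    for k in range(1, len(z_planes)):
        dz = z_planes[k] - z_planes[k - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])  # straight-line model
        x = F @ x                     # predict state at the next plane
        P = F @ P @ F.T               # predict covariance (no process
                                      # noise: multiple scattering is
                                      # neglected in this sketch)
        resid = x_hits[k] - x[0]      # innovation (position measured)
        S = P[0, 0] + sigma**2        # innovation variance
        chi2 += resid**2 / S          # chi2 increment used for gating
        K = P[:, 0] / S               # Kalman gain
        x = x + K * resid             # weighted-average state update
        P = P - np.outer(K, P[0, :])  # covariance update
    return x, P, chi2
\end{verbatim}
In the full pattern recognition, a hit whose $\chi^2$ increment is too large would simply not be added to the track, mirroring the consistency test described above.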
\begin{figure*}[t] \centering \includegraphics[width=135mm]{figure4.pdf} \caption{Event Display showing cosmic data taken with the NO$\nu$A prototype detector. The Event Display shows the top and side views of the detector. The muon catcher is constructed from alternating PVC planes with planes of steel and is located on the right side of the Event Display. Each instrumented cell is shown as a light gray box.} \label{evd} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure5.png} \caption{Event Display showing time clustered hits. The color corresponds to hits belonging to the same time grouping.} \label{slicer} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure6.png} \caption{Event Display showing fully reconstructed tracks. The color corresponds to hits belonging to the same track with the line indicating the fit of the track.} \label{tracks} \end{figure*} \subsection{Preliminary Results} The reconstruction method described above has been applied to Monte Carlo simulation and cosmic data from the prototype detector for validation. A preliminary evaluation of the reconstruction efficiency as a function of zenith angle based on Monte Carlo cosmic ray simulation has been performed and is shown in Fig. \ref{coseff}. The efficiency is defined as the fraction of tracks that were reconstructed out of the total number of tracks that pass the reconstruction requirements. At high zenith angles, limited statistics dominate the uncertainty in the efficiency calculation. To ensure that the algorithm efficiently reconstructs tracks at these angles, the reconstruction efficiency of 2 GeV uniformly distributed single particle muon Monte Carlo was calculated. The result is shown in Fig. \ref{singlepart}, confirming that the technique is fully efficient for the full angular range of 2 GeV muon tracks. \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure7.pdf} \caption{Preliminary reconstruction efficiency of simulated cosmic ray tracks as a function of the zenith angle of the true tracks.} \label{coseff} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure8.pdf} \caption{Preliminary reconstruction efficiency of simulated uniformly distributed single particle 2 GeV muon tracks as a function of the zenith angle of the true tracks.} \label{singlepart} \end{figure*} Additionally, a preliminary comparison of the angular distributions of the cosmic ray Monte Carlo and of cosmic data from the prototype detector, shown in Fig. \ref{ang}, shows overall agreement. Some differences can be noted in comparing the prototype data to simulation, as the detector is not fully instrumented or aligned, whereas the Monte Carlo assumes a fully instrumented, aligned detector. \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure9.pdf} \caption{Angular distribution of tracks from cosmic data from the NO$\nu$A prototype detector, reconstructed cosmic ray Monte Carlo tracks, and true cosmic ray Monte Carlo tracks. The fall-off at low zenith angle results from the requirement that tracks pass through a minimum of 3 planes.} \label{ang} \end{figure*} Finally, the reconstruction of tracks has been applied to candidate neutrino events that have been identified in the NO$\nu$A prototype detector data. Figure \ref{candidate} shows the reconstructed tracks from a two prong event identified as a potential neutrino event. The reconstruction has separated the hits into the two separate tracks which are identifiable by eye in the raw data.
The reconstruction of the background cosmic rays in the data has been suppressed in Fig. \ref{candidate}; however, the reconstruction method does identify them, allowing for separation between neutrino and background events. \begin{figure*}[t] \centering \includegraphics[width=135mm]{figure10.png} \caption{Reconstructed candidate neutrino event from the NO$\nu$A prototype detector. Reconstruction of cosmic rays has been suppressed for clarity.} \label{candidate} \end{figure*} \section{Summary} The NO$\nu$A experiment requires accurate track reconstruction to accomplish both the short term goal of detector calibration as well as the long term neutrino oscillation analysis goals. Currently a track reconstruction method is in place to find and fit straight tracks in the NO$\nu$A detectors. The method is being applied to data from the NO$\nu$A prototype detector and is actively being developed to improve its efficiency as well as to encompass multiple scattering effects. \bigskip
\section{Introduction} There is a widespread belief that the continuum description of spacetime as provided by general relativity must necessarily break down at very short length scales and/or very high curvatures. A number of very different approaches to an eventual theory of quantum gravity have been presented in the literature; these candidate theories are too varied and too extensive to summarise here. Suffice it to say, though, that whatever the {\it atoms of spacetime}\/ may turn out to be, at the moment there exists a large body of well--established knowledge concerning the {\it thermodynamics of spacetime}\/. For recent advances in this direction, as well as more detailed bibliography, we refer the reader to the original articles \cite{PADDY1, PADDY2, PADDY3, PADDY4} as well as the review papers \cite{PHILO, MOUSTOS}. On the whole, the picture that emerges is that of a continuum description after some appropriate coarse graining of some underlying degrees of freedom (the atoms of spacetime mentioned above). Even if the precise nature of the latter is unknown yet, one can still make progress following a thermodynamical approach: one ignores large amounts of detailed knowledge (say, the precise motions followed by the atoms of a gas) while concentrating only on a few coarse--grained averages (say, the overall pressure exerted by the atoms of a gas on the container walls). This way of approaching the problem has come to be called {\it the emergent approach}\/. In the emergent approach to spacetime presented in ref. \cite{VERLINDE}, gravity qualifies as an entropic force. Roughly speaking, this is the statement that we do not know the fundamental degrees of freedom underlying gravity, but their overall macroscopic effect is that of driving the system under consideration in the direction of increasing entropy. If gravitational forces are entropy gradients, then gravitational equipotential surfaces can be identified with isoentropic surfaces. This insight justifies identifying the gravitational potential and the entropy function (up to dimensional factors). Recalling the arguments of ref. \cite{VERLINDE}, a classical point particle approaching a holographic screen causes the entropy of the latter to increase by one quantum $k_B$. We will replace the classical particle of ref. \cite{VERLINDE} with a density of particles representing the (baryonic and dark) matter contents of a hypothetical Newtonian Universe. This volume density will be identified with the squared modulus of a nonrelativistic wavefunction $\psi$ satisfying the Schroedinger equation. Let $U$ denote the gravitational potential. Once dimensions are corrected (using $\hbar$ and $k_B$), the expectation value $\langle\psi\vert U\vert\psi\rangle$ becomes the quantum--mechanical analogue of the entropy increase caused by a classical particle approaching a holographic screen in ref. \cite{VERLINDE}. Therefore {\it the expectation value $\langle\psi\vert U\vert\psi\rangle$ becomes a measure of the gravitational entropy of the Universe when the matter of the Universe is described by the wavefunction $\psi$}\/. The next question is to determine the Newtonian potential $U$ governing the Universe as a whole. Of course, even within the Newtonian approximation, $U$ necessarily appears as a very rough average. We can however find guidance in the Hubble expansion of the Universe \cite{HUBBLE, PERLMUTTER, RIESS}, which holds reasonably well over cosmological distances. 
This receding behaviour of the galaxies can be easily modelled by a phenomenological potential, namely, an isotropic harmonic potential carrying a negative sign: \begin{equation} U_{\rm Hubble}({\bf r})=-\frac{H_0^2}{2}{\bf r}^2. \label{potenzi} \end{equation} As the angular frequency we take the current value of Hubble's constant $H_0$. (Thus $U_{\rm Hubble}$ has the dimensions of energy per unit mass, or velocity squared). The potential $U_{\rm Hubble}$ encodes the combined effect of the gravitational attraction, and of the repulsion caused by the dark energy on the matter content of the Universe (baryonic and dark matter). We can therefore identify the Hubble potential $U_{\rm Hubble}$ of Eq. (\ref{potenzi}) with the gravitational potential $U$ in the previous paragraph. {}Following ref. \cite{BONDI}, let us briefly recall why $U_{\rm Hubble}$ in fact combines a Newtonian gravitational attraction, plus a harmonic repulsion.\footnote{See Eq. (9.14 b) of ref. \cite{BONDI}, the right--hand side of which is the force that one would obtain by differentiation of our Eq. (\ref{potenzi}). The fact that ref. \cite{BONDI} defended the Steady State theory, the rival to the currently accepted Big Bang theory, has no bearing on this discussion, as the Newtonian limit is the same.} In the Newtonian limit considered throughout in this paper, the gravitational attraction is computed by applying Gauss' law to a sphere filled with a homogeneous, isotropic density of matter. Then the gravitational field {\it within the sphere}\/ turns out to be proportional to the position vector, so the corresponding potential becomes a quadratic function of the position. Altogether, the total potential at any point within the cosmological fluid is the sum of two harmonic potentials; Hubble's constant $H_0$ is the frequency of this total harmonic potential. In this way the Newtonian space $\mathbb{R}^3$ is foliated by a continuous succession of concentric spheres with growing radii. Each one of these spheres qualifies as a gravitational equipotential surface. By what was said above, these surfaces are also isoentropic surfaces, the gradients thereto pointing in the direction of the gravitational force. The negative sign in Eq. (\ref{potenzi}) expresses the essential fact that this net force is repulsive instead of attractive. Already at the classical level, this potential possesses no state of least energy; a problem that resurfaces at the quantum level, as the non-existence of a stable vacuum state \cite{BROADBRIDGE}. What saves the day is the crucial observation that, in fact, {\it our observable Universe is finite in size}\/, instead of extending over all of $\mathbb{R}^3$. The current value $R_0$ of the radius of the observable Universe provides us with a natural cutoff. In this way a stable vacuum state is guaranteed to exist. \section{Newtonian cosmology as a quantum mechanics} The Poisson equation satisfied by the nonrelativistic gravitational potential $U$ created by a mass density $\rho$, \begin{equation} \nabla^2U=4\pi G\rho, \label{tretre} \end{equation} arises naturally in the weak--field limit of Einstein's field equations.
In this limit, also called the {\it Newtonian approximation}\/, the (baryonic and dark) matter contents of the Universe are modelled as an ideal fluid (see, {\it e.g.}, the textbook \cite{WEINBERG}) satisfying the Poisson equation (\ref{tretre}) as well as the continuity equation \begin{equation} \frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho{\bf v}\right)=0 \label{knoott} \end{equation} and the Euler equation \begin{equation} \frac{\partial{\bf v}}{\partial t}+\left({\bf v}\cdot\nabla\right){\bf v}+\frac{1}{\rho}\nabla p-{\bf F}=0. \label{stella} \end{equation} In Eqs. (\ref{knoott}) and (\ref{stella}), $\rho$ is the volume density of fluid mass, $p$ is the pressure, ${\bf v}$ is the velocity field within the cosmological fluid, and ${\bf F}$ the force per unit mass acting on the fluid. The cosmological principle requires that the velocity ${\bf v}$ be everywhere proportional to the position vector ${\bf r}$. This latter statement is nothing but Hubble's law, which one can mimic by means of the phenomenological potential (\ref{potenzi}). Indeed the latter satisfies the Poisson equation (\ref{tretre}), \begin{equation} \nabla^2U_{\rm Hubble}=-3H_0^2, \label{quattro} \end{equation} the right--hand side corresponding to a {\it negative}\/ mass density $\rho=-3H_0^2/(4\pi G)$. In ref. \cite{CABRERA} we have pointed out the existence of a remarkable {\it duality between nonrelativistic quantum mechanics on the one hand, and Newtonian cosmology on the other}\/ \cite{WIDROW}. Specifically, nonrelativistic quantum mechanics has a quantum probability fluid that exactly mimics the behaviour of the cosmological fluid, the latter considered in the Newtonian approximation. One proves that Eqs. (\ref{knoott}) and (\ref{stella}), which govern the cosmological fluid, become the very equations that govern the quantum probability fluid after applying the Madelung transformation. The inclusion of the Hubble potential as an external force acting on the quantum system then yields Eq. (\ref{tretre}). The duality just mentioned can be used to {\it compute thermodynamical quantities of the Universe using standard quantum mechanics}\/. In the introduction we have argued that the operator ${\bf R}^2=X^2+Y^2+Z^2$, which is proportional to the Hubble potential (\ref{potenzi}), is a measure of the amount of gravitational entropy enclosed by the Universe. Correcting dimensions by means of the appropriate physical constants, the operator \begin{equation} {\cal S}:={\cal N}\frac{k_B m H_0}{\hbar}{\bf R}^2 \label{nachhause} \end{equation} qualifies as a Boltzmann entropy. Above $m$ is the total mass (baryonic and dark) of the observable Universe. A {\it dimensionless}\/ factor ${\cal N}$ is left undetermined by these simple arguments; on general grounds we expect ${\cal N}$ to be of order unity. We call ${\cal S}$ the gravitational entropy operator. The present paper is a continuation of, and an improvement on, our previous article \cite{CABRERA}. Let us examine this point in more detail. Within the scope of the approximations considered here, the effective Hamiltonian operator $H_{\rm eff}$ acting on the wavefunction $\psi({\bf r})$ that models the cosmological fluid is \begin{equation} H_{\rm eff}=-\frac{\hbar^2}{2m}\nabla^2-\frac{k_{\rm eff}}{2}{\bf r}^2, \qquad k_{\rm eff}=mH_0^2. \label{jamilto} \end{equation} Above we have defined the effective elastic constant $k_{\rm eff}$ corresponding to the Hubble potential (\ref{potenzi}).
The amount of mass $m_V$ contained within a volume $V$ equals $m_V=m\int_V{\rm d}^3x\vert\psi\vert^2$; the whole observable Universe is a sphere of radius $R_0$ (we collect our cosmological data $m$, $H_0$, $R_0$ from ref. \cite{PLANCK}). Considering the Universe as a sphere with finite radius has the advantage that the instabilities \cite{BROADBRIDGE} due to the negative sign of the potential are avoided naturally. Although the Hamiltonian (\ref{jamilto}) can be diagonalised and its exact eigenfunctions can be obtained explicitly \cite{CABRERA, FINSTER}, the latter are extremely cumbersome for explicit computations. As a first step, for the sake of simplicity, in ref. \cite{CABRERA} we obtained the expectation value $\langle {\cal S}\rangle$ using a set of eigenfunctions of the {\it free}\/ Hamiltonian $-\hbar^2\nabla^2/(2m)$. The analysis performed in this paper uses the exact eigenfunctions of the effective Hamiltonian (\ref{jamilto}); this improves on the results of our calculation of ref. \cite{CABRERA}. The values thus obtained will be closer to actual (empirical) estimates for the entropy of the Universe \cite{ASTROPH}, so the upper bound ${\cal S}_{\rm max}\sim 10^{123}k_B$ set by the holographic principle will no longer be saturated. Specifically, we will refine the results of our previous ref. \cite{CABRERA} by 3 orders of magnitude, see Eqs. (\ref{ergo}) and (\ref{kraft}) below. Further work is required in order to extend our results beyond the Newtonian limit \cite{UPCOMING}; this extension will hopefully yield values in even better agreement with existing estimates. \section{Estimate of the entropy}\label{npetrp} Let us separate variables in the effective Hamiltonian (\ref{jamilto}) using spherical coordinates. The standard factorisation $\psi({\bf r})=R(r)Y_{lm}(\theta,\varphi)$ leads to a radial wave equation \begin{equation} \frac{1}{r^2}\frac{{\rm d}}{{\rm d}r}\left(r^2\frac{{\rm d}R}{{\rm d}r}\right)-\frac{l(l+1)}{r^2}R+\frac{2m}{\hbar^2}\left(E+\frac{k_{\rm eff}}{2}r^2\right)R=0. \label{radas} \end{equation} The choice $l=0$ imposed by the cosmological principle leads to \begin{equation} r^2\frac{{\rm d}^2R}{{\rm d}r^2}+2r\frac{{\rm d}R}{{\rm d}r}+\frac{2m}{\hbar^2}\left(Er^2+\frac{mH_0^2}{2}r^4\right)R=0. \label{vesel} \end{equation} As shown in refs. \cite{CABRERA, FINSTER}, two linearly independent solutions of (\ref{vesel}) turn out to be \begin{equation} R_{\alpha}^{(1)}(r)=\exp\left(\frac{{\rm i}\beta^2r^2}{2}\right) {}_1F_1\left(\frac{3}{4}-\frac{{\rm i}\alpha}{4},\frac{3}{2}; -{\rm i}\beta^2r^2\right) \label{herri} \end{equation} and \begin{equation} R_{\alpha}^{(2)}(r)=\frac{1}{r}\exp\left(\frac{{\rm i}\beta^2r^2}{2}\right){}_1F_1\left(\frac{1}{4}-\frac{{\rm i}\alpha}{4},\frac{1}{2}; -{\rm i}\beta^2r^2\right), \label{tabernae} \end{equation} where ${}_1F_1(a,b;z)$ is the confluent hypergeometric function \cite{LEBEDEV}, and the parameters $\alpha$, $\beta$ take on the values \begin{equation} \alpha:=\frac{2E}{\hbar H_0},\qquad \beta^4:=\frac{m^2H_0^2}{\hbar^2}. \label{morcilla} \end{equation} To begin with, the complete wavefunction corresponding to the radial wavefunction (\ref{herri}) reads \begin{equation} \psi_{\alpha}^{(1)}(r,\theta,\varphi)=\frac{N_{\alpha}^{(1)}}{\sqrt{4\pi}}\exp\left(\frac{{\rm i}\beta^2r^2}{2}\right) {}_1F_1\left(\frac{3}{4}-\frac{{\rm i}\alpha}{4},\frac{3}{2}; -{\rm i}\beta^2r^2\right); \label{karkon} \end{equation} the radial normalisation factor $N_{\alpha}^{(1)}$ will be determined presently. 
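Before turning to the asymptotics it is useful to fix orders of magnitude. The short Python sketch below is our own cross-check, not part of the original analysis; the rounded inputs $m\simeq 6\times 10^{53}$ kg, $H_0\simeq 2.2\times 10^{-18}\,{\rm s}^{-1}$ and $R_0\simeq 4.4\times 10^{26}$ m are assumptions standing in for the Planck data \cite{PLANCK}, and the last two lines anticipate Eqs. (\ref{incontro}) and (\ref{zteddy}) derived below.

\begin{verbatim}
import math

hbar = 1.055e-34   # J s
m    = 6e53        # kg, assumed total (baryonic + dark) mass
H0   = 2.2e-18     # 1/s, Hubble constant
R0   = 4.4e26      # m, radius of the observable Universe

beta   = math.sqrt(m * H0 / hbar)            # from beta^4 = m^2 H0^2 / hbar^2
alphaE = 2.0 / (hbar * H0)                   # alpha = 2E/(hbar H0), per joule
alpha0 = -m * H0 * R0**2 / hbar              # classical vacuum eigenvalue
R2     = R0**2 / (2 * math.log(beta * R0))   # <R^2> (anticipated result)
S_NkB  = (m * H0 / hbar) * R2                # entropy / (N k_B)

print(f"beta ~ {beta:.1e} 1/m")              # ~ 1.1e35, as quoted below
print(f"alpha ~ {alphaE:.0e} per joule")     # ~ 1e52 E
print(f"alpha0 ~ {alpha0:.1e}")              # ~ -2.4e123
print(f"S/(N kB) ~ {S_NkB:.1e}")             # ~ 8e120, of order 10^120
\end{verbatim}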
The eigenfunction $\psi_{\alpha}^{(1)}$ is singularity free over the entire interval $[0,R_0]$. A numerical estimate yields $\beta\simeq 1.1\times 10^{35}$ metres${}^{-1}$. Given that $R_0\simeq 4.4\times 10^{26}$ metres, the dimensionless product $(\beta r)^2$ in Eq. (\ref{karkon}) quickly drives the function ${}_1F_1$ into its asymptotic regime, where it can be approximated as \cite{LEBEDEV} \begin{equation} {}_1F_1(a,b;z)\simeq\frac{\Gamma(b)}{\Gamma(b-a)}{\rm e}^{-{\rm i}\pi a}z^{-a} +\frac{\Gamma(b)}{\Gamma(a)}{\rm e}^z\,z^{a-b},\quad \vert z\vert\to\infty, \label{toninfest} \end{equation} whenever $\vert{\rm arg}(z)\vert<\pi$ and $b\neq 0,-1,-2,\ldots$ We will also need Stirling's formula \begin{equation} \Gamma(t)\simeq\exp\left[\left(t-\frac{1}{2}\right)\ln t -t+\frac{1}{2}\ln 2\pi\right], \label{gamagrande} \end{equation} valid for $\vert t\vert\to\infty$ whenever $\vert{\rm arg}(t)\vert<\pi$. When applying Stirling's approximation we will select the main branch of the complex logarithm. Another order--of--magnitude estimate yields $\alpha\simeq 10^{52} E$, with the energy $E$ expressed in joules; this fact allows us to drop the first summand in (\ref{toninfest}) in favour of the second. Then a lengthy calculation based on Eqs. (\ref{toninfest}) and (\ref{gamagrande}) yields the desired asymptotic expression of the confluent hypergeometric function in (\ref{karkon}): $$ {}_1F_1\left(\frac{3}{4}-\frac{{\rm i}\alpha}{4},\frac{3}{2}; -{\rm i}\beta^2r^2\right)\simeq\frac{1}{2\sqrt{2}}\exp\left(\frac{3}{4}-{\rm i}\pi\right) \exp\left(\frac{\pi\alpha}{2}\right)\exp\left(\frac{{\rm i}\alpha}{4}\ln\frac{\alpha}{4}\right) $$ \begin{equation} \times\exp\left\{-{\rm i}\left[\beta^2 r^2+\frac{\alpha}{2}\ln (\beta r) \right]\right\}\exp\left(-\frac{3}{2}\ln \beta r\right),\qquad r\to\infty. \label{tertia} \end{equation} {}Finally substituting Eq. (\ref{tertia}) into Eq. (\ref{karkon}), and absorbing an irrelevant constant within the normalisation factor $N_{\alpha}^{(1)}$, we obtain the following asymptotic wavefunction: $$ \psi_{\alpha}^{(1)}(r,\theta,\varphi)\simeq\frac{N_{\alpha}^{(1)}}{\sqrt{4\pi}} \exp\left(\frac{\pi\alpha}{2}\right) \exp\left(\frac{{\rm i}\alpha}{4}\ln\frac{\alpha}{4}\right) $$ \begin{equation} \times \exp\left\{-\frac{{\rm i}}{2}\left[\alpha\ln(\beta r) +\beta^2r^2\right]\right\}(\beta r)^{-3/2},\qquad r\to\infty. \label{quarta} \end{equation} We observe that the asymptotic expression (\ref{quarta}) is singular at $r=0$ while the original wavefunction (\ref{karkon}) was not. This is just a consequence of having replaced the exact wavefunction with its asymptotic approximation for large $r$. Therefore Eq. (\ref{quarta}) applies at most over the interval $[\epsilon, R_0]$, where $\epsilon>0$ is small but nonvanishing. We need to determine a suitable $\epsilon$ and the wavefunction $\psi_{\alpha}^{(1)}$ over $[0,\epsilon]$. A natural choice is $\epsilon=\beta^{-1}$. This is sufficiently small while, at the same time, values of $r>\beta^{-1}$ fall well within the asymptotic regime (\ref{toninfest}) of the confluent hypergeometric function. Over the interval $[0,\beta^{-1}]$ we will approximate ${}_1F_1$ by its Taylor expansion ${}_1F_1(a,b;z)\simeq 1+az/b$ \cite{LEBEDEV}.
Altogether the normalised, approximate wavefunction for the matter contents of the Universe $$ \psi_{\alpha}^{(1)}(r,\theta,\varphi)=\sqrt{\frac{\beta^3}{4\pi\ln \left(\beta R_0\right)}} \exp\left(\frac{{\rm i}\alpha}{4}\ln\frac{\alpha}{4}\right) $$ \begin{equation} \times\left\{\begin{array}{ll} \exp\left(-{\rm i}/{2}\right),\quad\qquad\quad\quad\qquad\qquad\qquad\qquad\;\; r\in[0,\beta^{-1}]\\ \exp\left\{-\frac{{\rm i}}{2}\left[\alpha\ln (\beta r) +\beta^2r^2\right]\right\} \left(\beta r\right)^{-3/2},\qquad\; r\in[\beta^{-1},R_0] \end{array}\right. \label{octavia} \end{equation} is regular over the entire interval $[0,R_0]$. With the wavefunction (\ref{octavia}) we obtain \begin{equation} \langle\psi_{\alpha}^{(1)}\vert {\bf R}^2\vert\psi_{\alpha}^{(1)}\rangle=\frac{R_0^2}{2\ln \left(\beta R_0\right)}, \label{incontro} \end{equation} after dropping subleading terms in $\beta$. Substituted back into Eq. (\ref{nachhause}), this produces a value of the entropy \begin{equation} \langle\psi_{\alpha}^{(1)}\vert {\cal S}\vert\psi_{\alpha}^{(1)}\rangle=6{\cal N}\times10^{120} k_B \label{ergo} \end{equation} which, taking ${\cal N}=1/6$, is three orders of magnitude below the upper bound ${\cal S}_{\rm max}\sim10^{123}k_B$ set by the holographic principle. This is a considerable improvement on the results of ref. \cite{CABRERA}, where the holographic bound was saturated. In the case of the second, linearly independent radial wavefunction (\ref{tabernae}) we have the complete eigenfunction \begin{equation} \psi_{\alpha}^{(2)}(r,\theta,\varphi)=\frac{N_{\alpha}^{(2)}}{\sqrt{4\pi}}\frac{1}{r}\exp\left(\frac{{\rm i}\beta^2r^2}{2}\right){}_1F_1\left(\frac{1}{4}-\frac{{\rm i}\alpha}{4},\frac{1}{2}; -{\rm i}\beta^2r^2\right). \label{oeko} \end{equation} As opposed to $\psi_{\alpha}^{(1)}$, the wavefunction $\psi_{\alpha}^{(2)}$ is singular at $r=0$. Again applying Eqs. (\ref{toninfest}) and (\ref{gamagrande}) one finds the asymptotics \begin{equation} {}_1F_1\left(\frac{1}{4}-\frac{{\rm i}\alpha}{4},\frac{1}{2};-{\rm i}\beta^2r^2\right)\simeq\frac{1}{\sqrt{2}}\exp\left(\frac{1}{4}-\frac{{\rm i}\pi}{2}\right)\exp\left(\frac{\pi\alpha}{2}+\frac{{\rm i}\alpha}{4}\ln\frac{\alpha}{4}\right) \label{dosefe} \end{equation} $$ \times\exp\left[-{\rm i}\left(\frac{\alpha}{2}\ln\beta r + \beta^2 r^2\right)\right]\exp\left(-\frac{1}{2}\ln \beta r\right),\quad r\to\infty. $$ Next substituting (\ref{dosefe}) into (\ref{oeko}) produces, after absorbing an irrelevant constant within the normalisation factor, \begin{equation} \psi_{\alpha}^{(2)}(r,\theta,\varphi)\simeq\frac{N_{\alpha}^{(2)}}{\sqrt{4\pi}}\frac{1}{r} \exp\left(\frac{\pi\alpha}{2}+\frac{{\rm i}\alpha}{4}\ln\frac{\alpha}{4}\right) \label{hellas} \end{equation} $$ \times\exp\left[-\frac{{\rm i}}{2}\left(\alpha\ln\beta r+\beta^2r^2\right)\right] (\beta r)^{-1/2},\quad r\to\infty. $$ {}Finally, arguments similar to those leading up to Eq. (\ref{octavia}) produce the following normalised, approximate wavefunction over the complete interval $[0, R_0]$: $$ \psi_{\alpha}^{(2)}(r,\theta,\varphi)=\sqrt{\frac{\beta}{4\pi\ln(\beta R_0)}}\exp\left(\frac{{\rm i}\alpha}{4}\ln\frac{\alpha}{4}\right) $$ \begin{equation} \times\left\{\begin{array}{ll} \frac{1}{r}\exp\left(-{\rm i}/{2}\right),\quad\quad\quad\qquad\qquad\qquad\qquad\qquad\;\; r\in[0,\beta^{-1}]\\ \frac{1}{r}\exp\left\{-\frac{{\rm i}}{2}\left[\alpha\ln(\beta r) +\beta^2r^2\right]\right\} \left(\beta r\right)^{-1/2},\qquad\; r\in[\beta^{-1},R_0]. \end{array}\right. 
\label{tredicesima} \end{equation} We observe that the approximate wavefunction (\ref{tredicesima}) remains singular at $r=0$, as imposed by the exact wavefunction (\ref{oeko}). With the above one computes \begin{equation} \langle\psi_{\alpha}^{(2)}\vert{\bf R}^2\vert\psi_{\alpha}^{(2)}\rangle=\frac{R_0^2}{2\ln(\beta R_0)}, \label{lola} \end{equation} coincident with the corresponding result (\ref{incontro}) for the regular wavefunction. Therefore \begin{equation} \langle\psi_{\alpha}^{(2)}\vert {\cal S}\vert\psi_{\alpha}^{(2)}\rangle=6{\cal N}\times10^{120} k_B, \label{kraft} \end{equation} in complete agreement with the entropy already found in (\ref{ergo}) for the regular wavefunction. \section{Discussion} The holographic principle sets an upper bound of approximately $10^{123}k_B$ on the entropy content of the Universe. Some phenomenological estimates \cite{ASTROPH} place the actual value at around $10^{104}k_B$, with gravitational entropy (and, in particular, black holes) representing the largest single contribution to the entropy budget of the Universe. Although Newtonian cosmology does allow for black holes \cite{MEXICO}, the many simplifications made by our elementary model necessarily leave out some essential physics of the Universe. Nevertheless, our toy model succeeds in capturing some key elements of reality. For example, the upper bound set by the holographic principle is always respected, even by such a crude approximation as the free waves \cite{CABRERA}. The Hubble waves (\ref{octavia}) and (\ref{tredicesima}) represent a considerable improvement on the free waves, as they reduce the expectation value of the entropy by three orders of magnitude. We hope that a fully general--relativistic treatment \cite{UPCOMING} will yield results in even better agreement with existing empirical estimates. Admittedly, solutions (\ref{herri}) and (\ref{tabernae}) violate the cosmological principle. In fact any solution to the (interacting) Schroedinger equation will violate the cosmological principle; only free wave solutions ({\it i.e.}\/, solutions of the wave equation with zero potential) satisfy it. However, the free wavefunctions of our previous ref. \cite{CABRERA} saturate the holographic principle, while our improved Hubble wavefunctions (\ref{herri}) and (\ref{tabernae}) no longer saturate it. This is essential for the very existence of life in the Universe. Given that the cosmological principle itself is an idealisation, we believe the improved entropy results obtained using Hubble wavefunctions outweigh the violation of the cosmological principle. Since $\alpha$ in Eq. (\ref{morcilla}) is the (dimensionless) energy eigenvalue in $H_{\rm eff}\psi=E\psi$, the parameter $\alpha$ plays the same role that the quantum number $n\in\mathbb{N}$ plays in the standard harmonic oscillator, where the potential energy is positive definite. Our negative definite harmonic potential does not have quantised energy levels, but continuous energy levels $\alpha$ instead. However the range of values covered by $\alpha$, while unbounded above, is bounded below by the finite radius of the Universe: a classical particle at rest at $r=R_0$ would carry an energy \begin{equation} E_0=-\frac{1}{2}mH_0^2R_0^2. \label{zpetite} \end{equation} This configuration can be regarded as the classical vacuum state. In terms of the dimensionless eigenvalue $\alpha$, this energy equals \begin{equation} \alpha_0=-\frac{mH_0R_0^2}{\hbar}=-2.6\times 10^{123}.
\label{zteddy} \end{equation} The vacuum energy (\ref{zteddy}) has been determined by a classical argument; although the uncertainty principle will shift the minimum energy (\ref{zteddy}) by a positive amount, this correction can be discarded for our purposes, as it will be negligible compared to (\ref{zteddy}) itself. The negative sign in (\ref{zteddy}) is due to the Hubble potential (\ref{potenzi}), while the dimensionless factor $2.6$ is of order unity. Thus the vacuum energy (\ref{zteddy}) yields the approximate equality \begin{equation} \vert\alpha_0\vert\simeq\frac{{\cal S}_{\rm max}}{k_B}\simeq10^{123}. \label{buzle} \end{equation} The above numerical coincidence is in fact a consistency check on all our previous arguments. It confirms once again that the holographic bound never gets exceeded, since both the energy and the entropy grow quadratically with the distance. We have seen in section \ref{npetrp} that the linearly independent wavefunctions $\psi_{\alpha}^{(1)}$ and $\psi_{\alpha}^{(2)}$ coalesce asymptotically in $r$. This occurs despite the fact that $\psi_{\alpha}^{(1)}$ is regular at $r=0$ while $\psi_{\alpha}^{(2)}$ is singular. In turn, this implies that issues of regularity of the wavefunction at $r=0$ are irrelevant for our purposes. Our estimate of the entropy remains valid regardless of the precise wavefunction used in a neighbourhood of $r=0$; this neighbourhood is $[0,\beta^{-1}]$. The constant $\beta$ arises naturally when diagonalising the effective Hubble Hamiltonian (\ref{jamilto}), see Eq. (\ref{morcilla}). It turns out that $\beta^{-1}\simeq10^{-35}$ metres, which is close to the value of the Planck length $L_P$, \begin{equation} \beta^{-1}=\sqrt{\frac{\hbar}{mH_0}}\simeq L_P=\sqrt{\frac{\hbar G}{c^3}}. \label{apross} \end{equation} Our toy model of the Universe thus possesses an intrinsic length scale, $\beta^{-1}$, which numerically equals the Planck length. This approximate equality is no coincidence: the value of $m$ is that of the mass enclosed by the Hubble horizon for a critical Universe, $m\simeq 1/(H_0G)$ (in natural units $\hbar=c=1$), hence $\beta\simeq 1/\sqrt{G}=1/L_P$. Our analysis is rooted in previous studies \cite{ELZE,GALLEGO} on the emergent nature of quantum mechanics. According to the hypothesis of emergence, quantum mechanics as we know it should be the effective theory of some underlying mechanics, the coarse graining of which would yield our current quantum models. Important recent work in general relativity \cite{PADDY1,PADDY2,PADDY3,PADDY4} also points in the same direction: gravity appears to be the {\it thermodynamics}\/ of some underlying degrees of freedom, a continuous spacetime emerging only as their low--energy limit. That seemingly unrelated fields such as quantum theory and general relativity might share fundamental common features \cite{MATONEGRAVITY} is an intriguing possibility worthy of future study. \vskip0.5cm \noindent {\bf Acknowledgements} This research was supported by grant no. ENE2015-71333-R (Spain).
\section{Introduction} Since some time, it has been realized \cite{Turok} that defects (textures) associated with the non-trivial winding of massless scalar fields may be of interest even though they are intrinsically unstable if the winding number is large enough \cite{Ryden,Borrill,Stefan}. Indeed, it is the fact that textures continually enter the horizon during the evolution of the universe that makes the spectrum of density fluctuations near scale-invariant (although not Gaussian) and makes textures promising candidates for large scale structure formation even in light of the COBE results \cite{COBE,Turok_Stock}. A simple theory that admits global textures is given by the lagrangian \begin{eqtn}{lagr} {\cal L}= \frac{1}{2} \partial_\mu {\bf \Phi} \cdot \partial^\mu {\bf \Phi} - \frac{\lambda}{4}( {\bf \Phi}^2 - \phi_0^2 )^2 \end{eqtn} where ${\bf \Phi}$ is a $4$ component real scalar field, Here $\phi_0$ is the symmetry breaking scale and $\lambda$ is a dimensionless coupling constant. For convenience we do the rescaling ${\bf \Phi} \rightarrow \phi_0 {\bf \Phi}$, so the action for this theory will be \begin{eqtn}{action} S[{\bf \Phi}] = \phi_0^2 \int d^4x \sqrt{-g} \left( \mbox{$1\over2$} \partial_\mu {\bf \Phi} \cdot \partial^\mu {\bf \Phi} - \mbox{$w\over4$}( {\bf \Phi}^2 - 1 )^2 \right), \end{eqtn} were $w=\lambda \phi_0^2$ and $g$ is the determinant of the space-time metric. Upon quantization we have in this theory one massive Higgs particle with mass $m_H=\sqrt{w/2}$ and $N-1$ massless (Goldstone) bosons. We will however not be interested in the particle spectrum, instead we will only consider the classical equations of motion, for the field ${\bf \Phi}$, \begin{eqtn}{eom1} \partial_\mu( \sqrt{-g} \partial^\mu {\bf \Phi} ) = - \sqrt{-g} w ( {\bf \Phi}^2 - 1 ) {\bf \Phi}. \end{eqtn} For cosmological applications, it is important that the Goldstone modes remain massless, creating long-range correlations and field dynamics over cosmologically relevant length scales. Arguments, based on quantum gravity effects, have been given \cite{Kamion} which seem to make the survival of exact global symmetries questionable. This statement is, however, based on unknown physics at the Planck scale. More specifically, mechanisms present, e.g. in string models may well protect the potential of the Goldstone modes of the texture scalar fields. (For a recent discussion of such mechanisms, see \cite{Kallosh}.) Here we will not enter into this discussion but simply assume that textures can exist and study the properties of their evolution when the dynamics is given by the action (\ref{action}). We will parameterize the field using hyper-spherical coordinates $\rho$, $\chi$, $\tilde{\theta}$ and $\tilde{\varphi}$, in the following way, \begin{eqtn}{field} { \bf \phi} ({\bf r},t) = \rho ( \cos \chi , \sin \chi \cos \tilde{\theta} , \sin \chi \sin \tilde{\theta} \cos \tilde{\varphi} , \sin \chi \sin \tilde{\theta} \sin \tilde{\varphi} ). \end{eqtn} We will look at the ``spherically symmetric'' (or hedgehog) ansatz, were we let the coordinate functions depend on time $t$ and the spatial spherical coordinates $r$, $\theta$ and $\varphi$ as $\rho=\rho(r,t)$, $\chi=\chi(r,t)$, $\tilde{\theta}=\theta$ and $\tilde{\varphi}=\varphi$. It is common to consider the field as a stiff source, which means that one is assuming that the self-coupling of the field is much stronger than the self-gravitational coupling. Thus only the background metric is required in the equation of motion for the field. 
The perturbation of the background metric can then be calculated from Einstein's equations with the stress-energy tensor of the texture field added. The applicability of the stiff approximation in the self-similar case is discussed in \cite{selfgrav}. When one studies the formation of large scale structure in the early universe, the background metric is taken to be Friedmann-Robertson-Walker (FRW). We will discuss how one can find solutions valid at medium large scales using the equation for the Minkowski background. \section{Minkowski background} For a Minkowski background the equations of motion~(\ref{eom1}) in terms of the hyper-spherical coordinates become \begin{eqtn}{eomM1} \begin{array}{l} r^2(\rho^2 \dot{\chi})\dot{ } = (\rho^2 r^2 \chi')' - \rho^2 \sin 2\chi \\ r^2 w ( \rho^2 - 1 ) = r^2( \dot{\chi}^2 - \chi'^2) - 2 \sin^2 \chi + (( r^2 \rho')' - r^2 \ddot{\rho})/\rho. \end{array} \end{eqtn} For length scales larger than the inverse mass of the radial ``Higgs'' mode $m_r^{-1}=(\lambda \phi_0^2)^{-1/2}$ the dynamics of the field can be described by a nonlinear $\sigma$ model (NLSM) \cite{Turok}. The NLSM is characterized by the field lying exactly on the vacuum manifold everywhere, thus $\rho=1$. In this case the first equation of (\ref{eomM1}) admits a self-similar ansatz $y=r/t$ and becomes \begin{eqtn}{semiy} (1-y^2)(y^2\chi_{yy}+2y\chi_y)=\sin 2\chi(y). \end{eqtn} This equation has a singular behavior at $y=0$ and $y=\pm 1$; the conditions for regular solutions are $\sin 2\chi(0)=0$ and $\sin 2\chi(\pm 1)=0$. This equation has the well-known solutions found by Turok and Spergel \citen{Spergel}. \begin{eqtn}{atan} \chi(y) = m \pi \pm 2 \:\arctan (\pm y), \end{eqtn} where $m$ is an integer. These solutions are indeed very special, coming from the spherically symmetric self-similar ansatz to the non-linear sigma model approximation. An important feature is, however, that these symmetric scaling solutions appear in the numerical simulations as attractors \cite{Ryden}. In fact, many of the calculations of the effects of textures, e.g. on the microwave background radiation, rely on the use of these simple analytic solutions \cite{Spergel,Durrer}. In order to see when the NLSM approximation is applicable we go a step beyond it. Instead of insisting on $\rho\equiv 1$, we will assume only that the derivatives of $\rho$ are negligible in the equations of motion. We will then recover the NLSM equation for $\chi$, so for $\chi$ we will use the selfsimilar NLSM solution~(\ref{atan}). The second of the eqs. (\ref{eomM1}) then gives for $\rho$ \begin{eqtn}{rho1} \rho^2 = 1 + ( \dot{\chi}^2 - \chi'^2 - \frac{2}{r^2} \sin^2 \chi )/w, \end{eqtn} which gives \begin{eqtn}{rho2} \rho^2 = 1 - \frac{12 t^2 - 4 r^2}{w(r^2 + t^2)^2}, \end{eqtn} upon insertion of our self-similar NLSM solution for $\chi$. It can be checked that $\rho'$ and $\dot{\rho}$ can be neglected in the first of the eqs. (\ref{eomM1}) if $r^2 + t^2 >> 1/w$. Thus we conclude that there exists a $r_0>>1/\sqrt{w}$ such that the selfsimilar solution~(\ref{atan}) is valid for all $r^2 + t^2 > r_0^2$. If we have a field that initially for $t<0$, $t^2>>1/w$ is described by $ \chi(r,t) = 2 \:\arctan (-r/t)$ we will have an unwinding event for $r^2 + t^2 < r_0^2$ where the field is forced to leave the vacuum manifold. The solution~(\ref{atan}) is valid right to the time $t=r-r_0$, when the information from the unwinding event reaches $r$.
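As an elementary check, added here for completeness, one may verify by direct differentiation that the simplest member of (\ref{atan}) solves Eq. (\ref{semiy}): for $\chi(y)=2\:\arctan y$ one has $$ \chi_y=\frac{2}{1+y^2},\qquad \chi_{yy}=-\frac{4y}{(1+y^2)^2}, $$ so that $$ (1-y^2)\left(y^2\chi_{yy}+2y\chi_y\right)=\frac{4y(1-y^2)}{(1+y^2)^2} =2\,\frac{2y}{1+y^2}\,\frac{1-y^2}{1+y^2}=\sin 2\chi , $$ while the regularity conditions $\sin 2\chi(0)=\sin 2\chi(1)=0$ are manifestly satisfied.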
We thus can match the solution at $t=0$, $\chi(r,0)=\pi$, $\dot{\chi}(r,0) = 2/r$, with $\chi(r,t) = 2 \pi - 2 \:\arctan ( r/t)$, valid for $0<t<r-r_0$. How the field behaves for $t>r-r_0$ depends on the details of the unwinding event and must be decided by making a numerical simulation of the full field equations \cite{Barriola}. One thus finds that the field after the unwinding goes asymptotically to the NLSM solution $ \chi(r,t) = \pi + 2 \:\arctan ( r/t)$ for $t>r$. This solution describes an expanding shell of Goldstone bosons; the winding number is zero. Before we continue discussing Minkowskian self-similar solutions, we study the equations for the FRW background metric. \section{FRW background} We will discuss the evolution of spherical textures in a flat FRW background \begin{equation} ds^2=a^2(\eta)(d\eta^2-dr^2-r^2(d\theta^2+sin^2\theta d\varphi^2)). \end{equation} Here $\eta$ is the conformal time. The time dependence for $a$ is $a(\eta)\propto \eta^\alpha$, where $\alpha=1$ corresponds to a radiation dominated universe and $\alpha=2$ corresponds to a matter dominated universe. The equation of motion for the NLSM is now \begin{equation}\label{whole} (r^2\chi_{r})_r - r^2( \chi_{\eta \eta}+2\frac{\alpha}{\eta}\chi_\eta ) = \sin 2\chi \end{equation} We are interested in solutions where unwinding can occur so we try the selfsimilar ansatz $\chi(r,t)=\chi(r/t)$, where $t=\eta-\eta_*$ and $\eta_*$ is the time for the unwinding. The equations with $\alpha\neq0$ admit this ansatz only if $\eta_*=0$, which coincides with the time of the big bang singularity. One would nevertheless hope to have some use of this ansatz if we are interested in expanding textures that unwound very early. The self-similar ansatz $y = r/\eta$ gives \begin{equation}\label{sege} y^2(1-y^2)\chi_{yy}+2y(1+y^2(\alpha-1))\chi_y=\sin 2\chi(y). \end{equation} However, we will show that with $\alpha = 1\ {\rm or}\ 2$ there do not exist any non-trivial solutions to (\ref{sege}) passing through $y=1$ with finite derivative. To show this we use the regularity conditions that we get by letting $y \rightarrow 1$ in (\ref{sege}) and the first and second derivative of that equation. We thus have at $y=1$ \begin{eqtn}{regRW} \begin{array}{l} 2 \alpha \chi_y = \sin 2 \chi, \\ (\alpha - 1) \chi_{yy} + (3 \alpha - 2 - \cos 2 \chi) \chi_y = 0, \\ (\alpha - 2) \chi_{yyy} + (6 \alpha - 9 - \cos 2 \chi) \chi_{yy} + (6 (\alpha-1) +2 \cos 2 \chi) \chi_y = 0. \\ \end{array} \end{eqtn} \vspace{.4cm} For $\alpha = 1$ we find $(\cos 2\chi(1) - 1)\chi_y(1) = 0$. This gives $\chi_y(1)=0$, since $\cos 2\chi(1)=1$ implies $\sin 2\chi(1)=0$ and hence, by the first of the conditions (\ref{regRW}), $\chi_y(1)=0$. For $\alpha = 2$ we find $$ (\cos^2 2\chi(1) - {14\over 3}\cos 2\chi(1) + {11\over 3})\chi_y(1) = 0, $$ the two roots being $\cos 2 \chi(1) = {7\pm 4 \over 3}$, i.e. $11/3$ (impossible) or $1$, which by the first of the conditions (\ref{regRW}) again forces $\chi_y(1)=0$. We see that in both cases for regular solutions we must have $\chi_y(1)=0$. Since $\chi(y) = n\pi/2$, integer $n$ are solutions to (\ref{sege}) with $\chi_y(1) = 0$ and $\sin 2\chi(1)=0$ we conclude that there do not exist any non-trivial regular solutions. Instead we go over to discuss the validity of using the equation for the Minkowski background as the limiting case when we look at small scales. If we substitute $\eta = \eta_* + t$ in (\ref{whole}) and assume that $t<<\eta_*$ we get \begin{equation}\label{small} (r^2\chi_{r})_r - r^2( \chi_{tt}+2\frac{\alpha}{\eta_*}\chi_t ) = \sin 2\chi.
\end{equation} Now if we can neglect the term linear in $\chi_t$ compared to $\chi_{tt}$ we recover the Minkowski equation. Inserting the solution $\chi(r/t)=2 \arctan (r/t)$ we see that this approximation is consistent only when $|t| \eta_* >> \alpha (r^2+t^2)$. Thus we have to try a different approximation in order to get something valid for $t\approx 0$. We have found a way to get rid of the term linear in the time-derivative by a certain transformation. In the case $\alpha=1$ when the equation of motion is \begin{equation} (r^2\chi_{r})_r - r^2( \chi_{\eta \eta}+\frac{2}{\eta}\chi_\eta ) = \sin 2\chi \end{equation} we can make the substitution $\chi(r,\eta) = {\eta_* \over \eta}\psi(r,\eta)$ and get \begin{equation}\label{rad1} (r^2\psi_{r})_r - r^2 \psi_{\eta \eta} = {\eta \over \eta_*} \sin 2{\eta_* \over \eta}\psi, \end{equation} which becomes the Minkowskian equation after substituting $\eta=\eta_* + t$ and neglecting $t$ compared with $\eta_*$. Thus we find for $\alpha=1$ the solutions \begin{eqtn}{solrm} \chi(r,\eta) = {\eta_* \over \eta_*+t} \psi_M(r/t) \end{eqtn} valid for $t<<\eta_*$, where $\psi_M(r/t)$ is any solution to the Minkowskian equation. The same trick can be done in the case $\alpha=2$, but first we have to change to the coordinates $u=3\eta_*^2 r$ and $\tau=\eta^3$ in eq. (\ref{whole}) giving \begin{equation}\label{mat1} (u^2\chi_{u})_u - (\tau/\tau_*)^{4/3} u^2( \chi_{\tau \tau} + \frac{2}{\tau}\chi_\tau ) = \sin 2\chi, \end{equation} which has the solutions \begin{eqtn}{solmm} \chi(u,\tau) = {\tau_* \over \tau_*+t} \psi_M(u/t) \end{eqtn} valid for $t<<\tau_*$. In the linear approximation of Einstein's eqs. one can calculate the Newtonian gravitational acceleration from the unwinding texture solution (\ref{atan}); the result is \begin{eqtn}{force0} \vec{g}= - \varepsilon {r \over r^2+t^2}\hat{r}, \end{eqtn} where $\varepsilon=8\pi G\phi_0^2$, $G$ is the gravitational constant and $\hat{r}$ is the radial unit vector. This acceleration gives rise to an inward velocity kick of the surrounding homogeneous dust of magnitude $\pi\varepsilon$ \cite{Spergel}. For the solution (\ref{solmm}) the same calculation gives, to first order in $t/\tau_*$: \begin{eqtn}{force1} \vec{g}= - \varepsilon {r \over r^2+t^2}(1-t/\tau_*)\hat{r}. \end{eqtn} We notice that the acceleration (\ref{force1}) is enhanced at $t<0$ compared with the ordinary (\ref{force0}), and vice versa for $t>0$, which can be of importance for the form of the resulting matter perturbations. The velocity kick of the dust over the time interval $\{-t_0,t_0\}$, $t_0<<\tau_*$ is $2\varepsilon\arctan t_0/r$, so at $r<<t_0$ it is still $\pi\varepsilon$. \section{New solutions} We now examine the self-similar solutions of the Minkowskian equation of motion in greater detail. The solutions (\ref{atan}) have a winding charge $Q=\pm1$. One would perhaps believe that there exist solutions with higher $|Q|$ than $1$ as has been claimed in \cite{Borrill}, but this is not the case. For a self-similar spherical texture we have $|Q|\leq 1$. This is implied by the theorem we prove in the appendix that if $\chi(y)$ is a regular solution to (\ref{semiy}) with $\chi(0)=0$ then $0<|\chi(y)|<\pi$, for all finite $y$, and we have $0<|\chi(\infty)|\leq\pi$. The argumentation of reference \cite{Borrill} concerning solutions with $|Q|>1$ is based on the erroneous assumption that there exist solutions satisfying the boundary conditions $\chi(0)=0$, $\chi(1)=n\pi/2$, with $n>1$ (see the corollary of Lemma~1).
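For completeness we record the elementary computation, added here as a check, behind the substitution $\chi=(\eta_*/\eta)\psi$ used above for $\alpha=1$. Differentiating twice, $$ \chi_{\eta\eta}+\frac{2}{\eta}\chi_\eta =\frac{\eta_*}{\eta}\psi_{\eta\eta}-\frac{2\eta_*}{\eta^2}\psi_\eta+\frac{2\eta_*}{\eta^3}\psi +\frac{2}{\eta}\left(\frac{\eta_*}{\eta}\psi_\eta-\frac{\eta_*}{\eta^2}\psi\right) =\frac{\eta_*}{\eta}\,\psi_{\eta\eta}, $$ so the friction term drops out, and multiplying the equation of motion by $\eta/\eta_*$ indeed yields Eq. (\ref{rad1}).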
We also want to emphasize that a boundary value problem such as (\ref{semiy}) with $\chi(0)=0$ and $\chi(1)=\pi/2$, does not necessarily possess a unique solution. Actually we have by numerical means been able to demonstrate the existence of what seems to be a countably infinite set of additional solutions with total winding charge $Q$ less than unity. These solutions are characterized by the number of oscillations around the value $\chi =\pi/2$, and have rapidly increasing derivatives at the origin. We want to demonstrate the existence of these new solutions by accurate numerical techniques \cite{num} with some modifications necessary to handle the singular points of the equation. We want to solve the boundary value problem using the shooting technique. The most straightforward strategy would then be to consider the initial value problem $\chi(0)=0,\; \chi_y(0)=\beta$, and numerically integrate this to the point $y=1$; we denote the solutions by $\chi(y,\beta)$. The equation $\chi(1,\beta)=\pi/2$ may now be solved for $\beta$ by trial. However, because of the singularities it becomes numerically impossible to start the integration from $y=0$, so we have to modify our method. We must start the integration at a small distance away from $y=0$ and use a series expansion in order to get an appropriate initial condition. By making a series expansion of $\chi(y)$ close to the origin of the form: \begin{equation} \chi(y)=\sum_{k=0}^{\infty}a_{2k+1}y^{2k+1} \end{equation} and inserting this into the differential equation (\ref{semiy}), one can find the coefficients $a_3$, $a_5$,... in terms of $a_1=\beta$. One finds, e.g., $$ a_3={2\beta-4\beta^3/3\over 10} $$ and $$ a_5={3\beta-3\beta^3+\beta^5\over 35}. $$ Let us now consider the initial value problem, \begin{equation}\label{init} \chi(\varepsilon)=\beta \varepsilon+a_3\varepsilon^3+a_5\varepsilon^5,\; \chi_y(\varepsilon)=\beta, \end{equation} and integrate only up to $y=1-\varepsilon$. For each $\varepsilon$ we may find a $\beta$ such that \begin{equation} \chi(1-\varepsilon,\beta)+\varepsilon\chi_y(1-\varepsilon,\beta) = \pi/2, \end{equation} we then must check that the value of $\beta$ converges when we choose $\varepsilon$ smaller and smaller. It is also possible to use a more accurate extrapolation formula near $y=1$: one may again use the form of the original differential equation to write $$ \chi(1-z)={\pi\over 2}-\gamma z-{\gamma\over2}z^2-{\gamma\over 6}z^3+ {\gamma (1-\gamma^2)\over 18} z^4+... $$ where $\gamma = \chi_y(1)$. This expression can also be used to continue the solution past the singular point $y=1$. With this careful treatment of the singular points of the differential equation, its solution is otherwise straightforward. For the numerical solution we used a Runge-Kutta method with adaptive step-size control. We have employed this technique and thus discovered a set of such different $\beta$'s. In Fig.~1 the first four of the solutions corresponding to these $\beta$'s are displayed; we number them with the mode number $n$, starting with $n=0$ for the analytical solution. These solutions appear to be very robust according to various stability checks we have made of our numerical algorithm, so we are confident in the belief that the presence of the solutions is not a numerical artefact. We have also checked that when we vary $\alpha$ in $(\ref{sege})$ around $0$ we still find solutions which approach our solutions in a continuous way when $\alpha\rightarrow 0$.
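The following minimal Python sketch illustrates the shooting procedure just described; it is our own illustration, not the code used for the paper, and the value of $\varepsilon$, the tolerances and the scanning range for $\beta$ are assumptions.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(y, u):
    # First-order form of (1 - y^2)(y^2 chi'' + 2 y chi') = sin(2 chi)
    chi, dchi = u
    return [dchi, (np.sin(2*chi)/(1 - y**2) - 2*y*dchi) / y**2]

def mismatch(beta, eps=1e-4):
    # Series-expansion initial data at y = eps, as in Eq. (init)
    a3 = (2*beta - 4*beta**3/3) / 10
    a5 = (3*beta - 3*beta**3 + beta**5) / 35
    chi0 = beta*eps + a3*eps**3 + a5*eps**5
    sol = solve_ivp(rhs, (eps, 1 - eps), [chi0, beta],
                    rtol=1e-10, atol=1e-12)
    chi1, dchi1 = sol.y[0, -1], sol.y[1, -1]
    # Extrapolate to the singular point y = 1 and compare with pi/2
    return chi1 + eps*dchi1 - np.pi/2

# Scan beta and refine every sign change of the mismatch.
betas = np.linspace(0.5, 50.0, 500)
vals = [mismatch(b) for b in betas]
roots = [brentq(mismatch, b1, b2)
         for b1, b2, v1, v2 in zip(betas, betas[1:], vals, vals[1:])
         if v1*v2 < 0]
print(roots)
\end{verbatim}

With this setup the root at $\beta=2$ reproduces the arctangent solution, for which $\chi_y(0)=2$; locating the higher modes requires scanning to much larger $\beta$ and tightening the tolerances, consistent with the rapidly increasing derivatives at the origin noted above.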
From the conspicuously regular pattern of the first solutions shown in Fig.~1, we conjecture that the number of solutions is countably infinite. That self-similar solutions with winding number less than unity exist is potentially of great importance, since as shown in numerical simulations \cite{Ryden} and backed by analytical arguments \cite{Stefan}, configurations with $Q > 1/2$ collapse and contribute to structure formation. We expect that well inside the horizon where spacetime is approximately Minkowski our new scaling solutions could play a dynamical role in structure formation. These solutions need a very high resolution numerical code to appear in the simulations since the derivatives at $y=0$ are very high. We plan to investigate these questions as well as the attractor nature of the solutions in future work. The applicability of the NLSM for the solutions with winding number less than one can be examined in the same way as for the analytical solution. The NLSM is valid for $r^2+t^2>r_0^2$ and we find that we get a factor of around a hundred extra in $r_0$ for each mode; the condition reads $r_0>>100^n/\sqrt{w}$, where $n$ is the mode number. We can use the solutions for times $t<r-r_0$ as for the analytical solution, but they cannot be matched at $t=r$ with any selfsimilar solution; thus for $t>r$ the selfsimilarity will necessarily be lost. To conclude, we have investigated in quite some detail the nature and validity of the self-similar ansatz to the texture equations of motion. We have analyzed possible modifications when one goes beyond the non-linear sigma model approximation, the Minkowski background approximation, and the ``ground state'' arctangent solution. In future work, the effects caused by including the self-gravitational coupling will be investigated. We are grateful to P. Ernstr\"om for useful discussions. The work of L.B. was supported by the Swedish Natural Science Research Council (NFR) and EEC-SCIENCE contract no. SC1*-CT91-0650. \newpage {\Large Appendix} \vskip .5cm In this appendix, we prove the following theorem: {\bf Theorem } If $\chi(y)$ is a regular solution to (\ref{semiy}) with $\chi(0)=0$ then $0<|\chi(y)|<\pi$, for all finite $y$, and we have $0<|\chi(\infty)|\leq\pi$. For the proof we need some lemmas: {\bf Lemma 1} If $\chi(y)$ is a regular solution to (\ref{semiy}) with $\chi(0)=0$ then $0<|\chi(y)|<\pi$ for $0<y\leq1$. The proof is similar to one used in \cite{stat} concerning the boundary conditions for the static ansatz $\chi(r,t)=f(r)$: We make the variable substitution $x=1/y$ in (\ref{semiy}) and get, \begin{equation}\label{semix} (x^2-1)\chi_{xx}=\sin 2\chi(x),\; \chi(\infty)=0. \end{equation} We multiply this with $\chi_x$ and integrate from $x$ to $\infty$, \begin{equation}\label{partint} \left[(x^2-1)\chi_{x}^2\right]_x^\infty-2\int_x^\infty dx\:x\chi_x^2 =-\left[\cos 2\chi(x)\right]_x^\infty. \end{equation} Since $\chi_x(x)=-\frac{1}{x^2}(\chi_y(0)+{\cal O}(1/x))$ for large $x$, (\ref{partint}) reduces to \begin{equation} -(x^2-1)\chi_{x}^2(x) - 2\int_x^\infty dx\:x\chi_x^2 =\cos 2\chi(x)-1. \end{equation} The left-hand side is always negative for $1\leq x<\infty$ so we must have $0<|\chi(x)|<\pi$ for $1\leq x<\infty$. {\bf Corollary} If $\chi(y)$ is a regular solution to (\ref{semiy}) with $\chi(0)=0$ and $\chi_y(0)>0$ then $\chi(1)=\pi/2$, if $\chi_y(0)<0$ then $\chi(1)=-\pi/2$. This follows immediately from Lemma 1 and the regularity condition $\chi(1)=n \pi/2$, integer $n$.
{\bf Lemma 2} If $\chi(y)$ is a solution to (\ref{semiy}) with $\chi(1)=\pi/2$ and $\chi_y(1)>1$ then there exists a $0<y_0<1$ such that $\chi(y_0)=0$, if $\chi_y(1)<-1$ then there exists a $0<y_0<1$ such that $\chi(y_0)=\pi$. A brief outline of the proof: We make the substitution $y=\tan \theta$ which gives the equation \begin{equation} \cos 2\theta(\sin^2\theta \chi_{\theta\theta} +\sin 2\theta \chi_\theta) =\sin 2 \chi(\theta). \end{equation} After differentiation of this equation one can get some inequalities on the third derivative of $\chi$; these can then be used in order to show that if $\chi_\theta(\pi/4)>2$ then $\chi_\theta(\theta)>\chi_\theta(\pi/4)$, $0\leq\theta<\pi/4$. (Note that $\chi_\theta(\pi/4)=2\chi_y(1)$.) This leads to the existence of a $0<\theta_0<\pi/4$ such that $\chi(\theta_0)=0$. A similar argument shows that if $\chi_\theta(\pi/4)<-2$ then there exists a $0<\theta_0<\pi/4$ such that $\chi(\theta_0)=\pi$. {\bf Lemma 3} If $\chi(y)$ is a solution to (\ref{semiy}) with $\chi(1)=\pi/2$ and $|\chi_y(1)|<1$ then $0<\chi(y)<\pi$, $y\geq 1$. An outline of the proof: We denote the known solutions with $\chi(1)=\pi/2$ by $\chi^a(x)=\pi/2 \pm (2\: \arctan x-\pi/2)$. Using the relation obtained by multiplying (\ref{semix}) by $\chi_x$ and integrating from $x$ to $1$, we can show that if $|\chi_x(1)|<1$ then $|\chi_x(x)|<|\chi^a_x(x)|$ for $0\leq x\leq 1$. Since $0\leq\chi^a(x)\leq\pi$ we thus have $0<\chi(x)<\pi$, $0\leq x\leq1$. \vspace{.4cm} Proof of the theorem: If $\chi(y)$ is a regular solution to (\ref{semiy}) with $\chi(0)=0$ and $\chi_y(0)>0$ then the corollary tells us that $\chi(1)=\pi/2$. From Lemma 1 together with Lemma 2 it follows that we cannot have $|\chi_y(1)|>1$. Together with the existence of the solution $ \chi(y)=2\:\arctan y$ (which has $\chi_y(1)=1$ and $\chi(\infty)=\pi$) we thus conclude that $-1<\chi_y(1)\leq1$. {}From Lemma 3 it now follows that $0<\chi(y)<\pi$, $y\geq1$; this together with Lemma 1 thus tells us that $0<\chi(y)<\pi$, for all finite $y$. A similar reasoning gives $-\pi<\chi(y)<0$ if $\chi_y(0)<0$, which completes the proof. \vspace{.4cm}
\section*{Introduction} \normalsize The Galois Field of two elements, denoted GF(2), is the field containing 0 (zero) and 1 (one). The operations of addition and multiplication are defined as follows: \par \begin{minipage}[b]{0.4\textwidth} \centering \begin{table}[H] \centering \begin{tabular}{c|cc} +&0&1\\ \hline 0&0&1\\ 1&1&0 \end{tabular} \caption{Addition in GF(2).} \label{tab:Addition_In_F2} \end{table} \end{minipage} \hfill \begin{minipage}[b]{0.4\textwidth} \centering \begin{table}[H] \centering \begin{tabular}{c|cc} $\cdot$&0&1\\ \hline 0&0&0\\ 1&0&1 \end{tabular} \caption{Multiplication in GF(2).} \label{tab:Multiplication_In_F2} \end{table} \end{minipage} \par Since GF(2) satisfies the axioms required to be a field, we may consider vector spaces over GF(2), which may be endowed with a norm. In order to meaningfully define a norm on a vector space over GF(2), we define a function $|\cdot|:\mathrm{GF(2)}\rightarrow\mathbb{R}$ which acts as an absolute value. \begin{equation} |0|=0,\quad|1|=1 \end{equation} \par This definition of absolute value trivially satisfies non-negativity, positive-definiteness, multiplicativity, as well as the triangle inequality. So it is indeed sensible to define the absolute value for elements of GF(2) in this way. \begin{theorem*} There exists an infinite dimensional Banach space $S$ over GF(2) such that each bounded linear operator on $S$ attains its norm. \end{theorem*} \begin{proof} Define an infinite dimensional Banach space $S$ over GF(2) as follows: \begin{equation} S=\{\;(s_{1},s_{2},\dots )\;\;|\;\;s_{i}\neq0 \text{ for finitely many }i\in\mathbb{N}\;\} \end{equation} Vector addition and scalar multiplication are defined entry-wise. \begin{align} \mathbf{x}+\mathbf{y}=(x_{1}+y_{1},x_{2}+y_{2},\dots) &&\alpha\mathbf{x}=(\alpha{x}_{1},\alpha{x}_{2},\dots) \end{align} Note here that the operations $x_i+y_i$ and $\alpha x_i$ occur in GF(2). The space $S$ will be given the norm $\|\,\|_{S}:S\rightarrow\mathbb{R}$ defined by: \begin{align} \norm{\mathbf{x}}_{S}= \begin{cases} 0,&\mathbf{x}=\mathbf{0}\\ 1,&\mathbf{x}\neq\mathbf{0} \end{cases} \end{align} Here, the zero vector is taken to be the sequence of all zeros. This space has the canonical basis, where $\mathbf{e}_{n}$ has a 1 in the $n^{th}$ spot and 0 in the rest. \begin{equation} \mathbf{e}_n=(0,\dots,0,1,0,\dots ) \end{equation} First we must verify that $S$ is a vector space. Since addition is performed entry-wise, associativity and commutativity are inherited properties of addition in the field. The identity element is the sequence of all zeros. Furthermore, since $1+1=0$ in GF(2), every element of $S$ is its own inverse with respect to addition. Now let $\alpha$ and $\beta$ be elements of GF(2), and $\mathbf{x}$ be in $S$. Then: \begin{equation} \alpha(\beta\mathbf{x})= \begin{cases} \mathbf{0},&\alpha=0\textrm{ or }\beta=0\\ \mathbf{x},&\alpha=1\textrm{ and }\beta=1 \end{cases} \end{equation} Similarly for $(\alpha\beta)\mathbf{x}$, and thus scalar multiplication is compatible with field multiplication. The identity element of scalar multiplication is $1\in \text{GF(2)}$. Finally, scalar multiplication trivially distributes over vector addition as well as field addition. Thus $S$ is a vector space. \par\hfill\par Now we verify that $S$ is a normed space. By the definition of $\| \,\|_S$, only the zero vector has norm zero, and all others have norm one. Thus positive-definiteness of the norm is satisfied. Now let $\alpha$ be an element of GF(2), and let $\mathbf{x}$ be a vector in $S$.
If $\alpha$ is one then we observe: \begin{equation} \norm{\alpha\mathbf{x}}_{S}=\norm{\mathbf{x}}_{S} =1\norm{\mathbf{x}}_{S} =|\alpha|\norm{\mathbf{x}}_{S} \end{equation} If $\alpha$ is zero then instead we have: \begin{align*} \norm{\alpha\mathbf{x}}_{S}=\norm{\mathbf{0}}_{S} =0=0\norm{\mathbf{x}}_{S} =|\alpha|\norm{\mathbf{x}}_{S} \end{align*} In either case the result is that $\|\,\|_S$ is absolutely homogeneous. Moving forward, if $\mathbf{x}$ and $\mathbf{y}$ are distinct non-zero vectors in $S$, the norm of their sum will equal one. However, the sum of their norms will be two, which is greater. For the remaining cases (one of them is the zero vector, both of them are the zero vector, or they are equal and hence additive inverses of each other), the triangle inequality trivially holds. Thus $\|\,\|_S$ is a norm on the vector space $S$. \par Lastly, in order for $S$ to be a Banach space, it must be complete with respect to the metric induced by the norm. If $\mathbf{x}$ and $\mathbf{y}$ are vectors in $S$, then the distance between them is given as: \begin{align*} \norm{\mathbf{x}-\mathbf{y}}_{S} =\norm{\mathbf{x}+(-\mathbf{y})}_{S} =\norm{\mathbf{x}+\mathbf{y}}_{S} =\begin{cases} 1,&\mathbf{x}\neq\mathbf{y}\\ 0,&\mathbf{x}=\mathbf{y} \end{cases} \end{align*} We see that the metric induced by the norm is the discrete metric, so for a sequence in $S$ to be Cauchy, it must eventually be constant. Thus every Cauchy sequence in $S$ converges. Now that we have verified that $S$ is a Banach space, we must show that every operator $T$ in the space of bounded linear operators acting on $S$, denoted $L(S)$, attains its norm. Let $T\in L(S)$ be a non-zero operator. The norm of $T$, $\|T\|$, is defined as: \begin{equation} \norm{T}=\sup\{\,\,\norm{T\mathbf{x}}_{S}\,\,\;|\,\,\; \mathbf{x}\in S,\,\,\;\norm{\mathbf{x}}_{S}=1\,\,\} \end{equation} For any vector in $S$, the norm of its image under $T$ can only be either zero or one. Since $T$ was assumed not to be the zero operator, we obtain: \begin{equation} \norm{T}=1 \end{equation} Furthermore, there must exist some $\tilde{\mathbf{x}}\in S$ such that $\norm{T\tilde{\mathbf{x}}}_{S}=1$, otherwise $T$ would have to be the zero operator. It should also be noted that the zero operator attains its norm via any point in $S$. Therefore, every operator in $L(S)$ attains its norm. This concludes the proof. \end{proof} \section*{Remarks} It is worth noting that, by definition, $S$ cannot be a Hilbert space: given some distinct and non-zero $\mathbf{x},\mathbf{y}\in S$, the parallelogram identity \begin{equation} \norm{\mathbf{x}+\mathbf{y}}_{S}^{2}+\norm{\mathbf{x}-\mathbf{y}}_{S}^{2} =2\norm{\mathbf{x}}_{S}^{2}+2\norm{\mathbf{y}}_{S}^{2} \end{equation} fails, since the left-hand side equals $1+1=2$ while the right-hand side equals $2+2=4$. Now consider the subset $S_m^p$ of sequences having exactly $m$ ones, all of which occur at or before the $p^{th}$ position in the sequence. There are $\binom{p}{m}$ such sequences. \begin{align} \bigcup\limits_{m=0}^p S_m^p \end{align} The union of these sets, as shown above, consists of all sequences with zeros after the $p^{th}$ entry. This union is a finite union of finite sets, and thus finite. The infinite union over all $p$ will be exactly our space $S$. \begin{align} S = \bigcup\limits_{p=0}^\infty \bigcup\limits_{m=0}^p S_m^p . \end{align} This is a countable union of finite sets, and thus $S$ is countable. Furthermore, because $S$ has the discrete topology, the only dense subset of $S$ is $S$ itself. Thus $S$ is a countable dense subset of itself, and $S$ is trivially separable.
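\par As a quick sanity check (this verification is ours, using only the definitions above), take $p=2$: then $S_0^2=\{\mathbf{0}\}$, $S_1^2=\{\mathbf{e}_1,\mathbf{e}_2\}$ and $S_2^2=\{\mathbf{e}_1+\mathbf{e}_2\}$, so that \begin{equation*} \Big|\bigcup\nolimits_{m=0}^2 S_m^2\Big| = \binom{2}{0}+\binom{2}{1}+\binom{2}{2} = 4 = 2^2, \end{equation*} in agreement with the fact that the sequences with zeros after the $p^{th}$ entry are in bijection with the $2^p$ binary strings of length $p$.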
\section{Introduction} In this paper we suppose that $E$ is a nonzero real Banach space with dual $E^*$. In \cite{PARTTWO}, we defined the {\em quasidensity} of a subset of $E \times E^*$. This was actually a special case of the concept of the {\em $r_L$--density} of a subset of a {\em Banach SN space} that had been previously defined in \cite{PARTONE}, and the analysis in \cite{PARTTWO} was heavily dependent on \cite{PARTONE}. The purpose of this paper is to give a development of the properties of quasidensity that is independent of \cite{PARTONE}. This paper also contains many results that did not appear in \cite{PARTTWO}. \par In Section~\ref{FENCHELsec}, we discuss proper convex functions on a Banach space and their Fenchel conjugates and biconjugates. We also introduce the (well known) canonical map from a Banach space into its {\em bidual}, which we denote by $\widehat{\ }$. In Theorems~\ref{HKFthm} and \ref{HKF2thm} and Lemma~\ref{RSlem}, we discuss some subtler properties of proper convex functions that are not necessarily lower semicontinuous. These subtler properties will be used in Theorem~\ref{THREEthm}. \par In Section~\ref{EEsec}, we discuss Banach spaces of the form $E \times E^*$. For this kind of Banach space, there is a (not so well known) canonical map from the space into its {\em dual}, which we denote by $L$ \big(see \eqref{Ldef}\big). We define the {\em quasidensity} of a subset of $E \times E^*$ (or, equivalently, of a multifunction from $E$ into $E^*$) in\break Definition~\ref{QDdef}. The definition of quasidensity does not require monotonicity, though there is a rich theory of the interaction of quasidensity and monotonicity which we will discuss in Sections~\ref{MONsec}--\ref{SPECsec} --- the definition of {\em monotonicity} does not actually appear until Section~\ref{MONsec}. Lemma~\ref{SLlem}, Theorem~\ref{NIthm} and Corollaries~\ref{NIcor} and \ref{THAcor} contain useful results on quasidensity without a monotonicity assumption. In particular, Theorem~\ref{NIthm} says that $L$ ``preserves quasidensity'', and we establish in Corollary~\ref{NIcor} that every quasidense set is of {\em type (NI)}, a concept that has been extensively studied over the past two decades. We will return to this issue below. We mention in Example~\ref{SUBex} that the subdifferential of a proper, convex, lower semicontinuous function on $E$ is quasidense. This result is generalized in \cite{SW} to certain more general subdifferentials of nonconvex functions. \par In Section~\ref{RLsec}, we initiate the theory of the {\em coincidence sets} of certain convex functions. The basic idea is that we consider a proper convex function, $f$, on $E \times E^*$ that dominates the canonical bilinear form, $q_L$, and the corresponding coincidence set is the set on which $f$ and $q_L$ coincide. (The ``$q$'' in this notation stands for ``quadratic''.) The main results in this section (and the pivotal results of this paper) are Theorem~\ref{FCthm} (the primal condition for quasidensity),\break Theorem~\ref{FSTARthm} (the dual condition for quasidensity) and Theorem~\ref{THREEthm} (the theorem of the three functions). As we observed above, the definition of monotonicity is not used explicitly before Section~\ref{MONsec}, but monotonicity is hiding below the surface because, as we shall see in Lemma~\ref{CONTlem}, coincidence sets are monotone. \par In Section~\ref{EPIsec}, we investigate the coincidence sets of the partial episums of a pair of convex functions.
This analysis will lead to the two sum theorems for quasidense maximally monotone multifunctions that we will establish in Theorem~\ref{STDthm} and Theorem~\ref{STRthm}. \par We start our explicit discussion of monotonicity in Section~\ref{MONsec}. We prove in Theorem~\ref{RLMAXthm} that every closed, monotone quasidense multifunction is maximally monotone. On the other hand, we give examples of varying degrees of abstraction in Example~\ref{TAILex} and Theorems~\ref{SMAXthm}, \ref{SFTthm}(b) and \ref{SPECTthm} of maximally monotone linear operators that are not quasidense. The link between Section~\ref{RLsec} and Section~\ref{MONsec} is provided by Lemma~\ref{CONTlem}, in which we give a short proof of the result first established by Burachik--Svaiter and Penot that coincidence sets are monotone. So suppose that $S\colon\ E \rightrightarrows E^*$ is monotone and $G(S) \ne \emptyset$. In Definition~\ref{THdef}, we define the function, $\theta_S\colon E^* \times E^{**} \to \,]{-}\infty,\infty]$, by adapting Definition~\ref{THAdef}. The well known {\em Fitzpatrick function}, $\varphi_S$, is defined in Definition~\ref{PHdef} by $\varphi_S = \theta_S \circ L$. There is a short history of the Fitzpatrick function in Remark~\ref{FDEFrem}. Now let $S$ be maximally monotone. Then we prove in Theorem~\ref{PHthm} that $S$ is quasidense if, and only if, ${\varphi_S}^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$, and we prove in Theorem~\ref{THthm} that $S$ is quasidense if, and only if, $\theta_S \ge q_{\widetilde L}$ on $E^* \times E^{**}$. These two results enable us to give two partial converses to Theorem~\ref{RLMAXthm} in Corollaries~\ref{SURJcor} and \ref{CONVcor}, namely that {\em if $S$ is maximally monotone and surjective then $S$ is quasidense} and that {\em if $E$ is reflexive and $S$ is maximally monotone then $S$ is quasidense}. Theorem~\ref{THthm} is particularly significant because it shows that a maximally monotone multifunction $S$ is quasidense exactly when it is of type (NI). \par In Section~\ref{DSUMSsec}, we prove the {\em Sum theorem with domain constraints} that was established in \cite{PARTONE}. It is important to realize that we do not merely give sufficient conditions for a sum theorem for a pair of maximally monotone multifunctions to hold. In fact, we prove that, under the given conditions, the sum of a pair of {\em closed, monotone and quasidense} multifunctions is again {\em closed, monotone and quasidense}. \par In Section~\ref{FITZEXTsec}, we discuss the {\em Fitzpatrick extension} of a closed, monotone and quasidense multifunction. This will be needed for our analysis of the {\em Sum theorem with range constraints} that will be the topic of Section~\ref{RSUMSsec}. If $S\colon\ E \rightrightarrows E^*$ is closed, monotone and quasidense then the Fitzpatrick extension, $S^{\mathbb F}\colon\ E^* \rightrightarrows E^{**}$, of $S$ is defined formally in terms of ${\varphi_S}^*$ in \eqref{PHSTCRIT}, and we give two other characterizations of $S^{\mathbb F}$ in \eqref{THCRIT}. We prove in Theorem~\ref{AFMAXthm} that $S^{\mathbb F}$ is maximally monotone, but we will see in Example~\ref{TAILex}, Theorems~\ref{SFTthm}(b) and \ref{SPECTthm} that it may fail to be quasidense. \big(It is observed in Remark~\ref{GOSSrem} that $(y^*,y^{**}) \in G(S^{\mathbb F})$ exactly when $(y^{**},y^*)$ is in the {\em Gossez extension of $G(S)$}.\big)
$S^{\mathbb F}$ is defined in rather an abstract fashion, but we give a situation in Theorem~\ref{TSthm} in which we can give a more explicit description of $S^{\mathbb F}$. Theorem~\ref{TSthm} was obtained by analyzing some results of Bueno and Svaiter on {\em linear} multifunctions, which we will discuss in greater detail in Section~\ref{NONQDEXTsec}. Theorem~\ref{TSthm} does {\em not} have any linearity assumptions, but Theorem~\ref{LINVWthm} is an application to linear maps. \par In Section~\ref{RSUMSsec}, we prove the {\em Sum theorem with range constraints} that was first established in \cite{PARTONE}. \par In Section~\ref{ANOTHERsec}, we discuss a slight modification of an example due to Bueno and Svaiter of a non-quasidense maximally monotone skew linear operator from a subspace of $c_0$ into $\ell_1$. In Section~\ref{NONQDEXTsec} we discuss a procedure due to Bueno and Svaiter for constructing quasidense linear maps from a Banach space into its dual with a non-quasidense Fitzpatrick extension. In Section~\ref{SPECsec}, we give a specific example of the construction of Section~\ref{NONQDEXTsec}, a map from $c_0$ into $\ell_1$. \par Given a maximally monotone multifunction, there are a number of conditions that are equivalent to its quasidensity. Broadly speaking, they separate into two classes, depending on whether or not they use the bidual in their definition. \par Conditions that do not use the bidual include the {\em negative alignment condition} (see \cite[Theorem 11.6, p.\ 1045]{PARTONE}), two ``fuzzy'' criteria for quasidensity \big(in which an element of $E^*$ is replaced by a nonempty $w(E^*,E)$--compact convex subset of $E^*$, or an element of $E$ is replaced by a nonempty $w(E,E^*)$--compact convex subset of $E$ --- see \cite[Section 8, pp. 14--17]{PARTTWO}\big) and the {\em type (FP)} condition \big(see \cite[Section 10, pp. 20--22]{PARTTWO}\big). \par There are many classes of maximally monotone multifunctions in the literature that coincide with those of type (FP) and that {\em do} require the bidual in their\break definitions. We mention {\em type (D)}, {\em dense type}, {\em type (ED)} and {\em type (NI)}. These equivalences have been known for some time. See \cite[Introduction, pp.\ 6--7]{PARTTWO} for a discussion of these with references to the sources of these results. \par The bidual is not mentioned explicitly in the {\em statements} of Theorem~\ref{Dthm}, Corollary~\ref{SURJcor} or Theorem~\ref{STDthm}, but our {\em proofs} of all of these results ultimately depend on the bidual at one point or another. This raises the fascinating question of whether there are proofs of any of these results that do not depend on the bidual. This seems to be quite a challenge. Another similar challenge is to find a proof that does not depend on the bidual of the fact that a maximally monotone multifunction is quasidense if, and only if, it is of type (FP). Of course, such a proof could not go through the equivalence of both of these classes of multifunctions with those of type (NI). \par It was proved in \cite[Theorem 11.9, pp.\ 1045--1046]{PARTONE} that every closed,\break monotone quasidense multifunction is of {\em type (ANA)}. It was also proved in \cite[Theorem 7.2, p.\ 14 and Theorem 8.5, pp.\ 16--17]{PARTTWO} that every closed, monotone quasidense multifunction is of {\em type (FPV)}, and {\em strongly maximal}. These observations lead to the three interesting problems of finding maximally monotone multifunctions that fail to be in any of these three classes.
\par The author would like to thank Orestes Bueno for a very interesting\break discussion, which led to the analysis that we present in Theorem~\ref{TSthm} and\break Sections~\ref{NONQDEXTsec} and \ref{SPECsec}. This discussion took place during the author's stay in the Erwin Schr\"odinger International Institute for Mathematics and Physics of the University of Vienna in January--February 2019. The author would like to express his sincere appreciation to the Erwin Schr\"odinger Institute for their support. \par All vector spaces in this paper are {\em real}. \section{Fenchel conjugates}\label{FENCHELsec} We start off by introducing some Banach space notation. If $X$ is a nonzero Banach space and $f\colon\ X \to \,]{-}\infty,\infty]$, we write $\hbox{\rm dom}\,f$ for $\big\{x \in X\colon\ f(x) \in \mathbb R\big\}$. $\hbox{\rm dom}\,f$ is the {\em effective domain} of $f$. We say that $f$ is {\em proper} if $\hbox{\rm dom}\,f \ne \emptyset$. We write ${\cal PC}(X)$ for the set of all proper convex functions from $X$ into $\,]{-}\infty,\infty]$ and ${\cal PCLSC}(X)$ for the set of all proper convex lower semicontinuous functions from $X$ into $\,]{-}\infty,\infty]$. We write $X^*$ for the dual space of $X$ \big(with the pairing $\bra\cdot\cdot\colon X \times X^* \to \mathbb R$\big). If $f \in {\cal PC}(X)$ then, as usual, we define the {\em Fenchel conjugate}, $f^*$, of $f$ to be the function on $X^*$ given by \begin{equation}\label{FSTAR} f^*(x^*) := \sup\nolimits_{X}\big[{x^*} - f\big] = \sup\nolimits_{x \in X}\big[\bra{x}{x^*} - f(x)\big]. \end{equation} \par $X^{**}$ stands for the bidual of $X$ \big(with the pairing $\bra\cdot\cdot\colon X^* \times X^{**} \to \mathbb R$\big). If $g \in {\cal PCLSC}(X^*)$ then, by analogy with \eqref{FSTAR}, we define the Fenchel conjugate, $g^*$, of $g$ to be the function on $X^{**}$ given by \begin{equation*} g^*(x^{**}) := \sup\nolimits_{X^*}\big[{x^{**}} - g\big] = \sup\nolimits_{x^* \in X^*}\big[\bra{x^*}{x^{**}} - g(x^*)\big]. \end{equation*} So, if $f \in {\cal PCLSC}(X)$ and we interpret $f^{**}$ to mean $(f^*)^*$ then $f^{**}$ is the function on $X^{**}$ given by \begin{equation} f^{**}(x^{**}) := \sup\nolimits_{x^* \in X^*}\big[\bra{x^*}{x^{**}} - f^*(x^*)\big]. \end{equation} If $x \in X$, we write $\widehat x$ for the canonical image of $x$ in $X^{**}$, that is to say \begin{equation*} (x,x^*) \in X \times X^* \Longrightarrow \bra{x^*}{\widehat x} = \bra{x}{x^*}. \end{equation*} If $g\colon\ X \to \,]{-}\infty,\infty]$, we write $\hbox{\rm epi}\,g$ for the {\em epigraph} of $g$, \begin{equation*} \{(x,\lambda) \in X \times \mathbb R\colon\ g(x) \le \lambda\}. \end{equation*} \par If $h \in {\cal PC}(X)$, the {\em lower semicontinuous envelope} of $h$, ${\overline h}$, is defined by $\hbox{\rm epi}\,{\overline h} = \overline{\hbox{\rm epi}\,h}$. See \cite[p.\ 62]{ZBOOK}. Of course, to make this definition legitimate, some effort has to be made to show that $\overline{\hbox{\rm epi}\,h}$ is the epigraph of a function. Since $\hbox{\rm epi}\,{\overline h}$ is closed, ${\overline h}$ is lower semicontinuous. It is worth pointing out that if $h$ is a discontinuous linear functional then ${\overline h} = -\infty$ on $X$. \begin{theorem}\label{HKFthm} Let $h \in {\cal PC}(X)$. Let $k\colon\ X \to \,]{-}\infty,\infty]$ be lower semicontinuous and $k \le h$ on $X$. Then $k \le {\overline h} \le h$ on $X$ and ${\overline h}^* \le h^*$ on $X^*$. It follows from this that ${\overline h} \in {\cal PCLSC}(X)$ and ${\overline h}^* = h^*$ on $X^*$.
\end{theorem} \begin{proof} We know from \cite[Theorem 2.2.6(i), p.\ 62]{ZBOOK} that ${\overline h}$ is convex. It follows from the hypotheses that $\hbox{\rm epi}\,h \subset \hbox{\rm epi}\,k$ and $\hbox{\rm epi}\,k$ is closed in $X \times \mathbb R$. Consequently, $\hbox{\rm epi}\,h \subset \hbox{\rm epi}\,{\overline h} = \overline{\hbox{\rm epi}\,h} \subset \hbox{\rm epi}\,k$, from which $k \le {\overline h} \le h$ on $X$, as required. \par If $x^* \in X^*$ and $h^*(x^*) \in \mathbb R$ then the Fenchel--Young inequality implies that $x^* - h^*(x^*) \le h$ on $X$, so $\hbox{\rm epi}\,h \subset \hbox{\rm epi}\,\big(x^* - h^*(x^*)\big)$. Since $x^* - h^*(x^*)$ is continuous, $\hbox{\rm epi}\,\big(x^* - h^*(x^*)\big)$ is closed, thus $\hbox{\rm epi}\,{\overline h} = \overline{\hbox{\rm epi}\,h} \subset \hbox{\rm epi}(x^* - h^*(x^*))$, from which ${\overline h} \ge x^* - h^*(x^*)$ on $X$. It follows easily that ${\overline h}^*(x^*) \le h^*(x^*)$. Of course, this inequality persists even if $h^*(x^*) = \infty$, and so we have proved that ${\overline h}^* \le h^*$ on $X^*$. This completes the proof of Theorem~\ref{HKFthm}. \end{proof} The main tool in the proof of Theorem~\ref{HKFthm} was epigraphical analysis. The drawback of this method is that the definition of ${\overline h}$ is not very intuitive. We now discuss a more explicit geometric method of obtaining the function required for Theorem~\ref{THREEthm}, which we can actually express as a biconjugate. The preliminary work is done in Lemma~\ref{RSlem} below, which is of independent interest. \par We shall use Rockafellar's version of the Fenchel duality theorem \big(which originally appeared in Rockafellar, \cite[Theorem~3(a), p.\ 85]{FENCHEL}\big) in the following form: {\em Let $p,u \in {\cal PC}(X)$ and $u$ be continuous. Then} \begin{equation}\label{RTR1} (p + u)^*(0) = \min\nolimits_{x^* \in X^*}\big[p^*(x^*) + u^*(-x^*)\big]. \end{equation} We could have used instead K\"onig's sandwich theorem, a simple application of the Hahn--Banach theorem; see \cite[Theorem 1.7, p.\ 112]{KONIG}. \begin{lemma}\label{RSlem} Let $p \in {\cal PC}(X)$. Let $s\colon\ X \to \,]{-}\infty,\infty]$ be lower semicontinuous, $s \le p$ on $X$ and $s(0) > 0$. Then: \par\noindent {\rm (a)}\enspace There exists $K \in [0,\infty[$ such that $p + K\|\cdot\| \ge 0$ on $X$. \par\noindent {\rm (b)}\enspace There exists $x^* \in X^*$ such that $p^*(x^*) \le 0$. \end{lemma} \begin{proof} (a)\enspace Since the result is obvious if $p \ge 0$ on $X$, we can and will suppose that there exists $w \in X$ such that $p(w) < 0$. Let $\theta \in \mathbb R$ with $\theta < s(w)$. It follows that $\theta < s(w) \le p(w) < 0$. Since $s$ is lower semicontinuous, there exists $m \ge 1$ such that $\inf_{y \in X,\ s(y) \le 0}\|y\| \ge {\textstyle\frac{1}{m}}$\quad\hbox{and}\quad $\inf_{z \in X,\ s(z) \le \theta}\|z - w\| \ge {\textstyle\frac{1}{m}}$. Let $\alpha := p(w) - \theta > 0$. Let $y \in X$. We will show that \begin{equation}\label{RS1} p(y) + \alpha m^2\|w\|\|y\| - \theta m\|y\| \ge 0. \end{equation} This gives the desired result, with $K := \alpha m^2\|w\| - \theta m$. \par\noindent {\bf Case 1.} ($p(y) \ge 0$)\enspace In this case, \eqref{RS1} is obvious since $\alpha > 0$ and $\theta < 0$. \par\noindent {\bf Case 2.} ($\theta \le p(y) < 0$)\enspace In this case, $s(y) < 0$, and so $\|y\| \ge {\textstyle\frac{1}{m}}$, hence $m\|y\| - 1 \ge 0$.
Again since $\alpha > 0$ and $\theta < 0$, \begin{equation*} p(y) + \alpha m^2\|w\|\|y\| - \theta m\|y\| \ge \theta - \theta m\|y\| = (-\theta)(m\|y\| - 1) \ge 0, \end{equation*} which gives \eqref{RS1}. \par\noindent {\bf Case 3.} ($p(y) < \theta$)\enspace Let $\beta := \theta - p(y) > 0$. $\beta$ (unlike $\alpha$) depends on $y$. Here, the convexity of $p$ and the fact that $s \le p$ on $X$ imply that \begin{equation*} s\left(\frac{\alpha y + \beta w}{\alpha + \beta}\right) \le p\left(\frac{\alpha y + \beta w}{\alpha + \beta}\right) \le \frac{\alpha p(y) + \beta p(w)}{\alpha + \beta} = \frac{\alpha(\theta - \beta) + \beta(\alpha + \theta)}{\alpha + \beta} = \theta. \end{equation*} Thus, from the choice of $m$ again, \begin{equation*} \frac{\alpha(\|y\| + \|w\|)}{\alpha + \beta} \ge \left\|\frac{\alpha(y - w)}{\alpha + \beta}\right\| = \left\|\frac{\alpha y + \beta w}{\alpha + \beta} - w\right\| \ge \frac1m. \end{equation*} This is equivalent to the statement $\alpha m\|w\| + \alpha m\|y\| - \alpha - \beta \ge 0$. Substituting $\beta = \theta - p(y)$, we see that \begin{equation}\label{RS2} p(y) \ge \theta + \alpha - \alpha m\|w\| - \alpha m\|y\|. \end{equation} We still have $m\|y\| - 1 \ge 0$, and also $\alpha m\|w\| - \theta - \alpha = \alpha m\|w\| - p(w) > 0$. It follows that $(\alpha m\|w\| - \theta - \alpha)(m\|y\| - 1) \ge 0$. Equivalently, \begin{equation*} \alpha m^2\|w\|\|y\| - \theta m\|y\| \ge \alpha m\|y\| + \alpha m\|w\| - \theta - \alpha. \end{equation*} \eqref{RS1} now follows by adding this to \eqref{RS2}. \par\noindent (b)\enspace Now let $u := K\|\cdot\|$. From (a), $p + u \ge 0$ on $X$, and so $(p + u)^*(0) \le 0$. \eqref{RTR1} now gives $x^* \in X^*$ such that $p^*(x^*) + u^*(-x^*) \le 0$. Since $u(0) = 0$, $u^*(-x^*) \ge 0$, and thus we obtain (b). \end{proof} \begin{theorem}\label{HKF2thm} Let $h \in {\cal PC}(X)$. Let $k\colon\ X \to \,]{-}\infty,\infty]$ be lower semicontinuous and $k \le h$ on $X$. For all $x \in X$, let $f(x) := \sup\nolimits_{x^* \in X^*}\big[\bra{x}{x^*} - h^*(x^*)\big]$, {\em i.e.}, $f(x) := h^{**}(\widehat x)$. Then: \par\noindent {\rm (a)}\enspace $f \ge k$ on $X$, and so $f\colon\ X \to \,]{-}\infty,\infty]$. \par\noindent {\rm (b)}\enspace $f \in {\cal PCLSC}(X)$ and $f^* = h^*$ on $X^*$. \end{theorem} \begin{proof} (a)\enspace Let $x \in X$, $\lambda \in \mathbb R$ and $\lambda < k(x)$. Let $p(y) := h(y + x) - \lambda$ and $s(y) := k(y + x) - \lambda$, so $s(0) = k(x) - \lambda > 0$. Lemma~\ref{RSlem}(b) now gives $x^* \in X^*$ such that $p^*(x^*) \le 0$. It is easily seen that this is equivalent to the statement that $\bra{x}{x^*} - h^*(x^*) \ge \lambda$. (a) now follows by letting $\lambda \to k(x)$. \par (b)\enspace From the Fenchel--Young inequality, for all $x^* \in X^*$, ${x^*} - h^*(x^*) \le h$ on $X$, thus $f \le h$ on $X$, and so $f \in {\cal PCLSC}(X)$ and $f^* \ge h^*$ on $X^*$. On the other hand, for all $x^* \in X^*$, $f \ge {x^*} - h^*(x^*)$ on $X$, {\em i.e.}, ${x^*} - f \le h^*(x^*)$ on $X$; thus, for all $x^* \in X^*$, $f^*(x^*) = \sup\nolimits_{X}\big[{x^*} - f\big] \le h^*(x^*)$. Thus $f^* = h^*$ on $X^*$, completing the proof of (b). \end{proof} \section{$E \times E^*$, $q_L$, $r_L$ and quasidensity}\label{EEsec} Now let $E$ be a nonzero Banach space. For all $(x,x^*) \in E \times E^*$, let\quad $\|(x,x^*)\| := \sqrt{\|x\|^2 + \|x^*\|^2}$,\quad and represent $(E \times E^*)^*$ by $E^* \times E^{**}$, under the pairing \begin{equation*} \Bra{(x,x^*)}{(y^*,y^{**})} := \bra{x}{y^*} + \bra{x^*}{y^{**}}.
\end{equation*} Define the linear map $L\colon\ E \times E^* \to E^* \times E^{**}$ by \begin{equation}\label{Ldef} L(x,x^*) := (x^*,\widehat{x}). \end{equation} Then \begin{equation*} \hbox{for all}\ a,b \in E \times E^*,\quad \bra{a}{Lb} = \bra{b}{La}. \end{equation*} We define the even real functions $q_L$ and $r_L$ on $E \times E^*$ by\quad $q_L(x,x^*) := \bra{x}{x^*}$\quad and \begin{equation}\label{RL1} r_L(x,x^*) := {\textstyle\frac{1}{2}}\|x\|^2 + {\textstyle\frac{1}{2}}\|x^*\|^2 + \bra{x}{x^*} = {\textstyle\frac{1}{2}}\|(x,x^*)\|^2 + q_L(x,x^*). \end{equation} For all $(x,x^*) \in E \times E^*$,\quad $|q_L(x,x^*)| = |\bra{x}{x^*}| \le \|x\|\|x^*\| \le {\textstyle\frac{1}{2}}\|(x,x^*)\|^2$,\quad so \begin{equation}\label{RL3} 0 \le r_L \le \|\cdot\|^2\ \hbox{on}\ E \times E^*. \end{equation} We note for future reference that, \begin{equation}\label{RL2} \hbox{for all}\ b,c \in E \times E^*,\quad q_L(b - c) = q_L(b) + q_L(c) - \bra{b}{Lc}. \end{equation} \begin{definition}\label{QDdef} Let $A \subset E \times E^*$. We say that $A$ is {\em quasidense} (in $E \times E^*$) if \begin{equation}\label{QD1} c \in E \times E^* \quad\Longrightarrow\quad \inf r_L(A - c) \le 0 \iff \inf r_L(A - c) = 0. \end{equation} (The ``$\iff$'' above follows since $r_L \ge 0$.) In longhand, \eqref{QD1} can be rewritten: \begin{equation}\label{EE1} (x,x^*) \in E \times E^*\Longrightarrow \inf_{(s,s^*) \in A}\big[{\textstyle\frac{1}{2}}\|s - x\|^2 + {\textstyle\frac{1}{2}}\|s^* - x^*\|^2 + \bra{s - x}{s^* - x^*}\big] \le 0. \end{equation} \end{definition} \begin{example}[Subdifferentials]\label{SUBex} Let $f\colon\ E \to \,]{-}\infty,\infty]$ be proper, convex and lower semicontinuous and $\partial f$ be the usual subdifferential. Then $G(\partial f)$ is\break quasidense. There is an ``elementary'' proof of this in \cite[Theorem 4.6]{PARTTWO}. There is also a more sophisticated proof based on Theorem~\ref{FSTARthm} below in \cite[Theorem 7.5, p.\ 1033]{PARTONE}. We shall see in Theorem~\ref{RLMAXthm} below that this result generalizes Rockafellar's maximal monotonicity theorem. \par In fact, the ``elementary'' proof mentioned above can be generalized to some more general subdifferentials for non--convex functions. See Simons--Wang,\break \cite[Definition 2.1, p.\ 633]{SW} and \cite[Theorem 3.2, pp.\ 634--635]{SW}. \end{example} The dual norm on $E^* \times E^{**}$ is given by $\|(y^*,y^{**})\| := \sqrt{\|y^*\|^2 + \|y^{**}\|^2}$. Define the linear map ${\widetilde L}\colon\ E^* \times E^{**} \to E^{**} \times E^{***}$ by ${\widetilde L}(x^*,x^{**}) := \big(x^{**},\widehat{x^*}\big)$. Then $q_{\widetilde L}(y^*,y^{**}) = \bra{y^*}{y^{**}}$ and $r_{\widetilde L}(y^*,y^{**}) := {\textstyle\frac{1}{2}}\|y^*\|^2 + {\textstyle\frac{1}{2}}\|y^{**}\|^2 + \bra{y^*}{y^{**}} = {\textstyle\frac{1}{2}}\|(y^*,y^{**})\|^2 + q_{\widetilde L}(y^*,y^{**})$. \par One can easily verify the following generalization of \eqref{RL2}: \begin{equation}\label{QD2} c \in E \times E^*\hbox{ and }c^* \in E^* \times E^{**} \Longrightarrow q_{\widetilde L}(c^* + Lc) = q_{\widetilde L}(c^*) + \bra{c}{c^*} + q_L(c). \end{equation} \smallbreak Lemma~\ref{SLlem} below gives a very nice relationship between $L$ and quasidensity. It is the first of two preliminary results leading to the main result of this section, Theorem~\ref{NIthm}. \begin{lemma}\label{SLlem} $L(E \times E^*)$ is quasidense in $E^* \times E^{**}$.
In other words: \begin{equation}\label{SL3} c^* \in E^* \times E^{**}\quad\Longrightarrow\quad\inf\nolimits_{c \in E \times E^*}r_{\widetilde L}(Lc - c^*) = 0. \end{equation} {\rm In longhand, this can be rewritten:} for all $(y^*,y^{**}) \in E^* \times E^{**}$, \begin{equation}\label{SL2} \inf\nolimits_{(x,x^*) \in E \times E^*}\big[{\textstyle\frac{1}{2}}\|y^* - x^*\|^2 + {\textstyle\frac{1}{2}}\|y^{**} - \widehat x\|^2 + \bra{y^* - x^*}{y^{**} - \widehat x}\big] = 0. \end{equation} \end{lemma} \begin{proof} Let $(y^*,y^{**}) \in E^* \times E^{**}$. For all $\varepsilon > 0$, the definition of $\|y^{**}\|$ provides $z^* \in E^*$ such that $\|z^*\| \le \|y^{**}\|$ and $\bra{z^*}{y^{**}} \le -\|y^{**}\|^2 + \varepsilon$, from which\quad ${\textstyle\frac{1}{2}}\|z^*\|^2 + {\textstyle\frac{1}{2}}\|y^{**}\|^2 + \bra{z^*}{y^{**}} \le \|y^{**}\|^2 + \bra{z^*}{y^{**}} \le \varepsilon$.\quad So \begin{align*} 0 &\le \inf\nolimits_{(x,x^*) \in E \times E^*}\big[{\textstyle\frac{1}{2}}\|y^* - x^*\|^2 + {\textstyle\frac{1}{2}}\|y^{**} - \widehat x\|^2+ \bra{y^* - x^*}{y^{**} - \widehat x}\big]\\ &= \inf\nolimits_{(x,z^*) \in E \times E^*}\big[{\textstyle\frac{1}{2}}\|z^*\|^2 + {\textstyle\frac{1}{2}}\|y^{**} - \widehat x\|^2 + \bra{z^*}{y^{**} - \widehat x}\big]\\ &\le \inf\nolimits_{z^* \in E^*}\big[{\textstyle\frac{1}{2}}\|z^*\|^2 + {\textstyle\frac{1}{2}}\|y^{**}\|^2 + \bra{z^*}{y^{**}}\big] \le 0. \end{align*} This establishes \eqref{SL2}, and hence \eqref{SL3}. \end{proof} \begin{lemma}\label{BBlem} Let $b \in E \times E^*$ and $b^* \in E^* \times E^{**}$. Then \begin{equation}\label{BB1} q_{\widetilde L}(Lb + b^*) \le r_L(b) + r_{\widetilde L}(b^*). \end{equation} Let $a,c \in E \times E^*$ and $c^* \in E^* \times E^{**}$. Then \begin{equation}\label{BB2} q_{\widetilde L}(La - c^*) \le r_L(a - c) + r_{\widetilde L}(Lc - c^*). \end{equation} \end{lemma} \begin{proof} From \eqref{QD2}, \begin{gather*} r_L(b) + r_{\widetilde L}(b^*) - q_{\widetilde L}(Lb + b^*)\\ = q_L(b) + {\textstyle\frac{1}{2}}\|b\|^2 + q_{\widetilde L}(b^*) + {\textstyle\frac{1}{2}}\|b^*\|^2 - q_L(b) - \bra{b}{b^*} - q_{\widetilde L}(b^*)\\ = {\textstyle\frac{1}{2}}\|b\|^2 + {\textstyle\frac{1}{2}}\|b^*\|^2 - \bra{b}{b^*} \ge {\textstyle\frac{1}{2}}\|b\|^2 + {\textstyle\frac{1}{2}}\|b^*\|^2 - \|b\|\|b^*\| \ge 0. \end{gather*} This completes the proof of \eqref{BB1}, and \eqref{BB2} follows from \eqref{BB1} with $b := a - c$ and $b^* := Lc - c^*$. \end{proof} We have the following fundamental result: \begin{theorem}\label{NIthm} Let $A \subset E \times E^*$ and $A$ be quasidense in $E \times E^*$. Then, for all $c^* \in E^* \times E^{**}$, $\inf q_{\widetilde L}(L(A) - c^*) \le 0$. \end{theorem} \begin{proof} Let $c^* \in E^* \times E^{**}$ and $\varepsilon > 0$. Then, from Lemma~\ref{SLlem} and Definition~\ref{QDdef}, there exist $c \in E \times E^*$ and then $a \in A$ such that $r_{\widetilde L}(Lc - c^*) < {\textstyle\frac{1}{2}}\varepsilon$ and $r_L(a - c) < {\textstyle\frac{1}{2}}\varepsilon$. From \eqref{BB2}, $q_{\widetilde L}(La - c^*) < \varepsilon$. \end{proof} The following definition was made in \cite[Definition 10,\ p.\ 183]{RANGE}: \begin{definition} Let $A \subset E \times E^*$. Then $A$ is of {\em type (NI)} if, \begin{equation}\label{NI1} \hbox{for all}\ (y^*,y^{**}) \in E^* \times E^{**},\quad\inf\nolimits_{(s,s^*) \in A}\bra{s^* - y^*}{\widehat s - y^{**}} \le 0.
\end{equation} In our current notation, \eqref{NI1} can be rephrased as \begin{equation}\label{NI2} \hbox{for all}\ c^* \in E^* \times E^{**},\quad\inf\nolimits_{a \in A}q_{\widetilde L}(La - c^*) \le 0. \end{equation} ``(NI)'' stands for ``negative infimum''. We note that $A$ is not constrained to be monotone in this definition. \end{definition} \begin{corollary}\label{NIcor} Let $A \subset E \times E^*$ and $A$ be quasidense in $E \times E^*$. Then $A$ is of type (NI). \end{corollary} \begin{proof} This is immediate from Theorem~\ref{NIthm} and \eqref{NI2}. \end{proof} There is another way of viewing Theorem~\ref{NIthm}. In order to explain this, we introduce the function $\Theta_A$. (Compare \cite[Definition 6.2, p.\ 1029]{PARTONE}.) \begin{definition}\label{THAdef} Let $A \subset E \times E^*$ and $A \ne \emptyset$. We define the function\break $\Theta_A\colon\ E^* \times E^{**} \to \,]{-}\infty,\infty]$ by: \begin{equation*} \hbox{for all}\ c^* \in E^* \times E^{**},\quad \Theta_A(c^*) := \sup\nolimits_{A}\big[c^* - q_L\big] = \sup\nolimits_{a \in A}\big[\bra{a}{c^*} - q_L(a)\big]. \end{equation*} In longhand: for all $(y^*,y^{**}) \in E^* \times E^{**}$, \begin{equation*} \Theta_A(y^*,y^{**}) := \sup\nolimits_{(s,s^*) \in A}\big[\bra{s}{y^*} + \bra{s^*}{y^{**}} - \bra{s}{s^*}\big]. \end{equation*} \end{definition} \begin{corollary}\label{THAcor} Let $A \subset E \times E^*$ and $A$ be quasidense in $E \times E^*$. Then $\Theta_A \ge q_{\widetilde L}$ on $E^* \times E^{**}$. \end{corollary} \begin{proof} Let $c^* \in E^* \times E^{**}$. Then, from Definition~\ref{THAdef} and \eqref{QD2}, \begin{align*} \Theta_A(c^*) - q_{\widetilde L}(c^*) &= \sup\nolimits_{a \in A}\big[\bra{a}{c^*} - q_L(a) - q_{\widetilde L}(c^*)\big]\\ &= -\inf\nolimits_{a \in A}\big[q_{\widetilde L}(c^*) - \bra{a}{c^*} + q_L(a)\big] = -\inf\nolimits_{a \in A}q_{\widetilde L}(La - c^*). \end{align*} The result now follows since, from Theorem~\ref{NIthm}, $\inf\nolimits_{a \in A}q_{\widetilde L}(La - c^*) \le 0$. \end{proof} \begin{remark}\label{NIrem} Corollary~\ref{THAcor} will be used in Lemma~\ref{FSTARlem} and Theorem~\ref{THthm}. The converses of Corollaries~\ref{NIcor} and \ref{THAcor} are true for maximally monotone sets. (See Theorem~\ref{THthm}). We give an example where the converse of Corollary~\ref{NIcor} fails without the hypothesis of maximal monotonicity in Example~\ref{QDneNI} below. Example~\ref{QDneNI} depends on the following simple fact: \end{remark} \begin{fact}\label{NIFACT} {\em Let $E$ be reflexive, $S\colon\ E \rightrightarrows E^*$, $D(S) = E$ and $A := G(S)$. Then $A$ is of type (NI).} \end{fact} \begin{proof} Let $(y^*,y^{**}) \in E^* \times E^{**}$. Since $E$ is reflexive, there exists $s \in E$ such that $\widehat s = y^{**}$. Since $D(S) = E$ and $G(S) = A$, there exists $s^* \in E^*$ such that $(s,s^*) \in A$. Since $\bra{s^* - y^*}{\widehat s - y^{**}} = \bra{s^* - y^*}{0} = 0$, $A$ is of type (NI). \end{proof} \begin{example}\label{QDneNI} Let $E = \mathbb R$. 
If $(s,s^*),(x,x^*) \in \mathbb R \times \mathbb R$ then \begin{align*} r_L\big((s,s^*) - (x,x^*)\big) &= {\textstyle\frac{1}{2}}\|s - x\|^2 + {\textstyle\frac{1}{2}}\|s^* - x^*\|^2 + \bra{s - x}{s^* - x^*}\\ &= {\textstyle\frac{1}{2}}(s - x)^2 + {\textstyle\frac{1}{2}}(s^* - x^*)^2 + (s - x)(s^* - x^*)\\ &= {\textstyle\frac{1}{2}}(s + s^* - x - x^*)^2. \end{align*} Let $A := \big\{(\lambda,-\lambda)\colon \lambda \in \mathbb R\big\} \subset \mathbb R \times \mathbb R$ and $(x,x^*) := (1,0) \in \mathbb R \times \mathbb R$. Then, for all $(s,s^*) \in A$, $r_L\big((s,s^*) - (1,0)\big) = {\textstyle\frac{1}{2}}(s - s - 1 - 0)^2 = {\textstyle\frac{1}{2}}$. Thus $A$ is not quasidense. However, from Fact \ref{NIFACT}, $A$ is of type (NI). \end{example} \section{Quasidense sets determined by the coincidence sets of convex functions}\label{RLsec} \begin{definition}\label{FCdef} If $f \in {\cal PC}(E \times E^*)$ and $f \ge q_L$ on $E \times E^*$, we write ${\rm coinc}[f]$ for the ``coincidence set'' \begin{equation*} \big\{b \in E \times E^*\colon\ f(b) = q_L(b)\big\}. \end{equation*} The notation ``$M_f$'' has been used for this set in the literature. We have avoided the ``$M_f$'' notation because it leads to superscripts and subscripts on subscripts, and consequently makes the analysis harder to read. If $g$ is a proper, convex function on $E^* \times E^{**}$ and $g \ge q_{\widetilde L}$ on $E^* \times E^{**}$, we write ${\rm dcoinc}[g]$ for the ``dual coincidence set'' \begin{equation*} \big\{b^* \in E^* \times E^{**}\colon\ g(b^*) = q_{\widetilde L}(b^*)\big\}. \end{equation*} \end{definition} Lemmas~\ref{EXNlem} and \ref{RLlem} lead to the main result of the section, Theorem~\ref{FCthm}: \begin{lemma}[A boundedness result]\label{EXNlem} Let $X$ be a nonzero real Banach space and $g \in {\cal PC}(X)$. Suppose, further, that $\inf\nolimits_{x \in X}\big[g(x) + {\textstyle\frac{1}{2}}\|x\|^2\big] = 0$, $y,z \in X$, $g(y) + {\textstyle\frac{1}{2}}\|y\|^2 \le 1$ and $g(z) + {\textstyle\frac{1}{2}}\|z\|^2 \le 1$. Then $\|y\| \le \|z\| + \sqrt8$. \end{lemma} \begin{proof} We have $\textstyle\frac{1}{8}\big[\|y\| - \|z\|\big]^2 = \textstyle\frac{1}{4}\|y\|^2 + \textstyle\frac{1}{4}\|z\|^2 - \textstyle\frac{1}{8}\big[\|y\| + \|z\|\big]^2$ and \begin{equation*} 0 \le g\big({\textstyle\frac{1}{2}} y + {\textstyle\frac{1}{2}} z\big) + {\textstyle\frac{1}{2}}\|{\textstyle\frac{1}{2}} y + {\textstyle\frac{1}{2}} z\|^2 \le {\textstyle\frac{1}{2}} g(y) + {\textstyle\frac{1}{2}} g(z) + \textstyle\frac{1}{8}\big[\|y\| + \|z\|\big]^2. \end{equation*} Thus, by addition, \begin{align*} \textstyle\frac{1}{8}\big[\|y\| - \|z\|\big]^2 \le {\textstyle\frac{1}{2}} g(y) + {\textstyle\frac{1}{2}} g(z) + \textstyle\frac{1}{4}\|y\|^2 + \textstyle\frac{1}{4}\|z\|^2 \le {\textstyle\frac{1}{2}} + {\textstyle\frac{1}{2}} = 1. \end{align*} This gives the required result. \end{proof} \begin{lemma}\label{RLlem} Let $b,d \in E \times E^*$. Then: \begin{equation*} r_L(b + d) \le r_L(b) + 2\|b\|\|d\| + r_L(d) \le \|b\|^2 + 2\|b\|\|d\| + r_L(d). \end{equation*} \end{lemma} \begin{proof} Let $b = (x,x^*)$ and $d = (z,z^*)$. From the Cauchy--Schwarz inequality, we have\quad $\|x\|\|z\| + \|x^*\|\|z^*\| \le \sqrt{\|x\|^2 + \|x^*\|^2}\sqrt{\|z\|^2 + \|z^*\|^2} = \|b\|\|d\|$.\quad From the triangle inequality,\quad $\|x + z\|^2 \le \big(\|x\| + \|z\|\big)^2 = \|x\|^2 + 2\|x\|\|z\| + \|z\|^2$\quad and\quad $\|x^* + z^*\|^2 \le \big(\|x^*\| + \|z^*\|\big)^2 = \|x^*\|^2 + 2\|x^*\|\|z^*\| + \|z^*\|^2$.
Thus \begin{align*} {\textstyle\frac{1}{2}}\|b + d\|^2 &= {\textstyle\frac{1}{2}}\|x + z\|^2 + {\textstyle\frac{1}{2}}\|x^* + z^*\|^2\\ &\le {\textstyle\frac{1}{2}}\|x\|^2 + \|x\|\|z\| + {\textstyle\frac{1}{2}}\|z\|^2 + {\textstyle\frac{1}{2}}\|x^*\|^2 + \|x^*\|\|z^*\| + {\textstyle\frac{1}{2}}\|z^*\|^2\\ &\le {\textstyle\frac{1}{2}}\|b\|^2 + \|b\|\|d\| + {\textstyle\frac{1}{2}}\|d\|^2. \end{align*} Also, from \eqref{RL2} with $c := -d$ and the fact that $\|Ld\| = \|d\|$, \begin{align*} q_L(b + d) = q_L(b) + \bra{b}{Ld} + q_L(d) \le q_L(b) + \|b\|\|d\| + q_L(d). \end{align*} The result now follows by addition, \eqref{RL1} and \eqref{RL3}. \end{proof} \begin{theorem}[Primal condition for quasidensity]\label{FCthm} Let $f \in {\cal PCLSC}(E \times E^*)$ and $f \ge q_L$ on $E \times E^*$. For all $c,b \in E \times E^*$, let \begin{equation}\label{FC0} f_c(b) := f(b + c) - \bra{b}{Lc} - q_L(c) = (f - q_L)(b + c) + q_L(b) \ge q_L(b). \end{equation} {\em \big(The first expression shows that $f_c \in {\cal PCLSC}(E \times E^*)$.\big)} Then {\rm(a)}$\iff${\rm(b)}: \par\noindent {\rm(a)}\enspace ${\rm coinc}[f]$ is quasidense. \par\noindent {\rm(b)}\enspace For all $c \in E \times E^*$, $\inf\nolimits_{b \in E \times E^*}\big[f_c(b) + {\textstyle\frac{1}{2}}\|b\|^2\big] \le 0$. \end{theorem} \begin{proof} Let $A := {\rm coinc}[f]$. Let $c \in E \times E^*$. Since $f_c = q_L$ on $A - c$, \begin{align*} \inf\nolimits_{b \in E \times E^*}\big[f_c(b) + {\textstyle\frac{1}{2}}\|b\|^2\big] &\le \inf\nolimits_{b \in A - c}\big[f_c(b) + {\textstyle\frac{1}{2}}\|b\|^2\big]\\ &= \inf\nolimits_{b \in A - c}\big[q_L(b) + {\textstyle\frac{1}{2}}\|b\|^2\big]\\ &= \inf\nolimits_{b \in A - c}r_L(b) = \inf r_L(A - c) = 0, \end{align*} and so it follows that (a)$\Longrightarrow$(b). \par Suppose now that (b) is satisfied and $c \in E \times E^*$. Let $c_0 := c$, so that $\inf\nolimits_{b \in E \times E^*}\big[f_{c_0}(b) + {\textstyle\frac{1}{2}}\|b\|^2\big] \le 0$. From \eqref{FC0}, $f_{c_0} + {\textstyle\frac{1}{2}}\|\cdot\|^2 \ge q_L + {\textstyle\frac{1}{2}}\|\cdot\|^2 \ge 0$ on $E \times E^*$, so in fact $\inf\nolimits_{b \in E \times E^*}\big[f_{c_0}(b) + {\textstyle\frac{1}{2}}\|b\|^2\big] = 0$. From Lemma~\ref{EXNlem}, there exists $M \in \mathbb R$ such that \begin{equation}\label{FC1} f_{c_0}(b) + {\textstyle\frac{1}{2}}\|b\|^2 \le 1 \quad\Longrightarrow\quad \|b\| \le M. \end{equation} Let $0 < \varepsilon < 1$. Let $1 \ge \varepsilon_1 \ge \varepsilon_2 \ge \varepsilon_3 \ge \dots > 0$ and $\sum_{n = 1}^\infty \varepsilon_n \le \varepsilon$. We now define inductively $c_1,c_2,\dots \in E \times E^*$. Suppose that $n \ge 0$ and $c_{n}$ is known. By hypothesis, $\inf\nolimits_{b \in E \times E^*}\big[f_{c_{n}}(b) + {\textstyle\frac{1}{2}}\|b\|^2\big] \le 0$, and so there exists $b_n \in E \times E^*$ such that $f_{c_{n}}(b_n) + {\textstyle\frac{1}{2}}\|b_n\|^2 \le \varepsilon_{n + 1}^2$. Let $c_{n + 1} := b_n + c_{n}$. This completes the inductive construction. \par Since $b_n = c_{n + 1} - c_{n}$, we now have $c_0,c_1,c_2,\dots,$ such that, \begin{equation}\label{FC2} \hbox{for all}\ n \ge 0,\quad f_{c_{n}}(c_{n + 1} - c_{n}) + {\textstyle\frac{1}{2}}\|c_{n + 1} - c_{n}\|^2 \le \varepsilon_{n + 1}^2. \end{equation} From \eqref{FC0}, $f_{c_{n}}(c_{n + 1} - c_{n}) = (f - q_L)(c_{n + 1}) + q_L(c_{n + 1} - c_{n})$ and so, \begin{equation*} \hbox{for all}\ n \ge 0,\quad (f - q_L)(c_{n + 1}) + r_L(c_{n + 1} - c_{n}) \le \varepsilon_{n + 1}^2.
\end{equation*} Since $f \ge q_L$ and, from \eqref{RL3}, $r_L \ge 0$ on $E \times E^*$, this implies that, \begin{equation}\label{FC4} \hbox{for all}\ n \ge 0,\quad (f - q_L)(c_{n + 1}) \le \varepsilon_{n + 1}^2 \quad\hbox{and}\quad r_L(c_{n + 1} - c_{n}) \le \varepsilon_{n + 1}^2. \end{equation} We now prove that, \begin{equation}\label{FC5} \hbox{for all}\ n \ge 1,\quad\|c_{n + 1} - c_{n}\| \le \sqrt{10}\varepsilon_{n}. \end{equation} Let $n \ge 1$. Since $f$ is convex, \eqref{FC4} gives \begin{equation*} 2f({\textstyle\frac{1}{2}} c_{n + 1} + {\textstyle\frac{1}{2}} c_{n}) \le f(c_{n + 1}) + f(c_{n}) \le q_L(c_{n + 1}) + \varepsilon_{n + 1}^2 + q_L(c_{n}) + \varepsilon_{n}^2. \end{equation*} Since $f \ge q_L$ on $E \times E^*$ and $\varepsilon_{n + 1}^2 \le \varepsilon_{n}^2$, it follows that \begin{equation*} 2q_L({\textstyle\frac{1}{2}} c_{n + 1} + {\textstyle\frac{1}{2}} c_{n}) - q_L(c_{n + 1}) - q_L(c_{n}) \le 2\varepsilon_{n}^2. \end{equation*} Thus, from the quadraticity of $q_L$, ${\textstyle\frac{1}{2}} q_L(c_{n + 1} + c_{n}) - q_L(c_{n + 1}) - q_L(c_{n}) \le 2\varepsilon_{n}^2$. Since $q_L(c_{n + 1}) + q_L(c_{n}) = {\textstyle\frac{1}{2}} q_L(c_{n + 1} + c_{n}) + {\textstyle\frac{1}{2}} q_L(c_{n + 1} - c_{n})$, we see that \begin{equation*} - q_L(c_{n + 1} - c_{n}) \le 4\varepsilon_{n}^2. \end{equation*} From \eqref{FC4}, $q_L(c_{n + 1} - c_{n}) + {\textstyle\frac{1}{2}}\|c_{n + 1} - c_{n}\|^2 = r_L(c_{n + 1} - c_{n}) \le \varepsilon_{n + 1}^2$. Thus \begin{equation*} \hbox{for all}\ n \ge 1,\quad{\textstyle\frac{1}{2}}\|c_{n + 1} - c_{n}\|^2 \le 4\varepsilon_{n}^2 + \varepsilon_{n + 1}^2 \le 5\varepsilon_{n}^2. \end{equation*} Thus we obtain \eqref{FC5}. We will also need an estimate for $\|c_{1} - c_{0}\|$. This is not covered by \eqref{FC5}. Now \eqref{FC5} used the inequality $f(c_n) \le q_L(c_{n}) + \varepsilon_{n}^2$. A similar analysis for $\|c_{1} - c_{0}\|$ is not available, because we have no knowledge about $f(c_0)$ --- there is no {\em a priori} reason why $f(c_0)$ should even be finite. This issue is partially resolved by \eqref{FC7} below. \par It follows from \eqref{FC5} that $\lim_{n \to \infty}c_n$ exists. Let $a_\varepsilon := \lim_{n \to \infty}c_n$. Clearly, $a_\varepsilon - c_1 = \sum_{n = 1}^\infty (c_{n + 1} - c_{n})$ and so, from \eqref{FC5}, \begin{equation}\label{FC6} \left.\begin{aligned} \|a_\varepsilon - c_{1}\| &= \big\|\textstyle\sum_{n = 1}^\infty (c_{n + 1} - c_{n})\big\|\\ &\le\textstyle \sum_{n = 1}^\infty \|c_{n + 1} - c_{n}\| \le \sqrt{10}\sum_{n = 1}^\infty\varepsilon_{n} \le 4\varepsilon. \end{aligned} \right\} \end{equation} From \eqref{FC4}, the lower semicontinuity of $f$, and the continuity of $q_L$, $f(a_\varepsilon) \le q_L(a_\varepsilon)$, and so $a_\varepsilon \in {\rm coinc}[f]$. We must now estimate $r_L(a_\varepsilon - c)$. \eqref{FC2} with $n = 0$ gives $f_{c_0}(c_{1} - c_{0}) + {\textstyle\frac{1}{2}}\|c_{1} - c_{0}\|^2 \le \varepsilon_{1}^2 \le 1$ and so, from \eqref{FC1}, \begin{equation}\label{FC7} \|c_{1} - c\| = \|c_{1} - c_{0}\| \le M. \end{equation} Furthermore, \eqref{FC4} with $n = 0$ gives \begin{equation}\label{FC8} r_L(c_{1} - c) = r_L(c_{1} - c_{0}) \le \varepsilon_{1}^2 \le \varepsilon.
\end{equation} From Lemma~\ref{RLlem} with $b = a_\varepsilon - c_1$ and $d = c_1 - c$, \eqref{FC6}, \eqref{FC7} and \eqref{FC8}, \begin{align*} r_L(a_\varepsilon - c) &\le \|a_\varepsilon - c_1\|^2 + 2\|a_\varepsilon - c_1\|\|c_1 - c\| + r_L(c_1 - c)\\ &\le 16\varepsilon^2 + 8\varepsilon M + \varepsilon \le 16\varepsilon + 8\varepsilon M + \varepsilon = (17 + 8M)\varepsilon. \end{align*} Letting $\varepsilon \to 0$, we see that\quad $\inf r_L({\rm coinc}[f] - c) \le 0$.\quad Thus ${\rm coinc}[f]$ is\break quasidense, and (a) holds. \end{proof} \begin{remark} An inspection of the above proof shows that we have, in fact, proved that if ${\rm coinc}[f]$ is quasidense then ${\rm coinc}[f]$ satisfies the stronger\break condition that, for all $c \in E \times E^*$, there exists $K_c \ge 0$ such that \begin{equation*} \inf\big\{r_L(a - c)\colon\ a \in {\rm coinc}[f],\ \|a - c\| \le K_c\big\} \le 0. \end{equation*} \end{remark} It is clear from \eqref{FC0} that, for all $b,c \in E \times E^*$, $(f_c - q_L)(b) = (f - q_L)(b + c)$. In light of this, the result of Lemma~\ref{FClem} below is very pleasing: \begin{lemma}\label{FClem} Let $f \in {\cal PCLSC}(E \times E^*)$ and $f_c$ be as in \eqref{FC0}. Then, for all $c \in E \times E^*$ and $b^* \in E^* \times E^{**}$, $({f_c}^* - q_{\widetilde L})(b^*) = (f^* - q_{\widetilde L})(b^* + Lc)$. \end{lemma} \begin{proof} From \eqref{FC0}, the substitution $d = b + c$, \eqref{RL2} and \eqref{QD2}, \begin{align*} ({f_c}^* &- q_{\widetilde L})(b^*) = \sup\nolimits_{b \in E \times E^*}\big[\bra{b}{b^*} - (f - q_L)(b + c) - q_L(b) - q_{\widetilde L}(b^*)\big]\\ &= \sup\nolimits_{d \in E \times E^*}\big[\bra{d - c}{b^*} - (f - q_L)(d) - q_L(d - c)- q_{\widetilde L}(b^*)\big] \\ &= \sup\nolimits_{d \in E \times E^*}\big[\bra{d}{b^*} - f(d) + q_L(d) - q_L(d - c) - \bra{c}{b^*} - q_{\widetilde L}(b^*)\big]\\ &= \sup\nolimits_{d \in E \times E^*}\big[\bra{d}{b^* + Lc} - f(d) - q_L(c) - \bra{c}{b^*} - q_{\widetilde L}(b^*)\big]\\ &= f^*(b^* + Lc) - q_{\widetilde L}(b^* + Lc). \end{align*} This gives the required result. \end{proof} \begin{lemma}\label{FSTARlem} Let $f \in {\cal PC}(E \times E^*)$, $f \ge q_L$ on $E \times E^*$ and ${\rm coinc}[f]$ be quasidense. Then $f^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$. \end{lemma} \begin{proof} Let $A := {\rm coinc}[f]$. Let $c^* \in E^* \times E^{**}$. Then, since $f = q_L$ on $A$, \begin{equation*} f^*(c^*) = \sup\nolimits_{E \times E^*}\big[{c^*} - f\big] \ge \sup\nolimits_{A}\big[{c^*} - f\big] = \sup\nolimits_{A}\big[{c^*} - q_L\big]. \end{equation*} Thus, from Definition~\ref{THAdef} and Corollary~\ref{THAcor}, $f^*(c^*) \ge \Theta_A(c^*) \ge q_{\widetilde L}(c^*)$. \end{proof} \begin{theorem}[Dual condition for quasidensity]\label{FSTARthm} Let $f \in {\cal PCLSC}(E \times E^*)$ and $f \ge q_L$ on $E \times E^*$. Then\quad ${\rm coinc}[f]$ is quasidense $\iff f^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$. \end{theorem} \begin{proof} By virtue of Lemma~\ref{FSTARlem}, we only have to prove the implication ($\Longleftarrow$). So assume that $f^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$. Let $c \in E \times E^*$. Let $f_c \in {\cal PCLSC}(E \times E^*)$ be as in \eqref{FC0}. From Lemma~\ref{FClem}, ${f_c}^* \ge q_{\widetilde L} \ge -{\textstyle\frac{1}{2}}\|\cdot\|^2$ on $E^* \times E^{**}$, thus ${f_c}^* + {\textstyle\frac{1}{2}}\|\cdot\|^2 \ge 0$ on $E^* \times E^{**}$. We now derive from \eqref{RTR1} that $\inf\nolimits_{b \in E \times E^*}\big[f_c(b) + {\textstyle\frac{1}{2}}\|b\|^2\big] \le 0$. 
Thus, from Theorem~\ref{FCthm}, ${\rm coinc}[f]$ is quasidense, as required. \end{proof} \begin{definition}\label{FATdef} Let $f \in {\cal PC}(E \times E^*)$. We define the function $f^@$ on $E \times E^*$ by $f^@ := f^* \circ L$. Explicitly, for all $a \in E \times E^*$, \begin{equation}\label{FAT} f^@(a) := \sup\nolimits_{E \times E^*}\big[{La} - f\big] = \sup\nolimits_{b \in E \times E^*}\big[\bra{b}{La} - f(b)\big]. \end{equation} \end{definition} Lemma~\ref{Llem} will be used in Theorem~\ref{THREEthm}, Lemma~\ref{PSlem} and Theorem~\ref{COINCthm}. \begin{lemma}\label{Llem} Let $f,f^@ \in {\cal PC}(E \times E^*)$, $f \ge q_L$ and $f^@ \ge q_L$ on $E \times E^*$. Then ${\rm coinc}[f] \subset {\rm coinc}[f^@]$. \end{lemma} \begin{proof} Let $a \in {\rm coinc}[f]$, $b \in \hbox{\rm dom}\,f$, $\lambda,\mu > 0$ and $\lambda + \mu = 1$. Then \begin{align*} \lambda\mu q_L(a) &= \mu q_L(a) - \mu^2q_L(a) = \mu f(a) - \mu^2q_L(a)\\ &\ge f(\lambda b + \mu a) - \lambda f(b) - \mu^2q_L(a) \ge q_L(\lambda b + \mu a) - \lambda f(b) - \mu^2q_L(a)\\ &= \lambda^2q_L(b) + \lambda\mu\bra{b}{La} - \lambda f(b). \end{align*} Dividing by $\lambda$ and letting $\lambda \to 0$, we see that\quad $q_L(a) \ge \bra{b}{La} - f(b)$.\quad If we now take the supremum over $b$ and use \eqref{FAT}, we see that $q_L(a) \ge f^@(a)$.\quad Consequently, $a \in {\rm coinc}[f^@]$. \end{proof} The important thing about the next result is that $h$ is {\em not} required to be lower semicontinuous. \begin{theorem}[The theorem of the three functions]\label{THREEthm} Let $h \in {\cal PC}(E \times E^*)$, \begin{equation}\label{THREE1} h \ge q_L\ \hbox{on}\ E \times E^*\hbox{ and }h^* \ge q_{\widetilde L}\ \hbox{on}\ E^* \times E^{**}. \end{equation} Then $h^@ \ge q_L$ on $E \times E^*$ and ${\rm coinc}[h^@]$ is closed and quasidense. \end{theorem} \begin{proof} From \eqref{THREE1},\quad $h^@ = h^* \circ L \ge q_{\widetilde L} \circ L = q_L$ on $E \times E^*$,\quad as required. From Theorem~\ref{HKFthm} or Theorem~\ref{HKF2thm} with $k = q_L$, there exists $f \in {\cal PCLSC}(E \times E^*)$ such that $f \ge q_L$ on $E \times E^*$ and $f^* = h^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$, from which $f^@ = h^@ \ge q_L$ on $E \times E^*$. Thus Theorem~\ref{FSTARthm} and Lemma~\ref{Llem} imply that ${\rm coinc}[f]$ is quasidense and ${\rm coinc}[f] \subset {\rm coinc}[f^@]$. Consequently, ${\rm coinc}[f^@]$ is quasidense. Since $f^@ = h^@$ on $E \times E^*$, ${\rm coinc}[h^@]$ is quasidense also. Since $q_L$ is continuous and $h^@$ is lower semicontinuous, ${\rm coinc}[h^@]$ is closed. \end{proof} \section{The coincidence sets of partial episums}\label{EPIsec} Let $E$ and $F$ be nonzero Banach spaces and $f, g \in {\cal PCLSC}(E \times F)$. Then we define the functions $(f {\,\mathop{\oplus}\nolimits_2\,} g)$ and $(f {\,\mathop{\oplus}\nolimits_1\,} g)$ by \begin{equation}\label{DD1} (f {\,\mathop{\oplus}\nolimits_2\,} g)(x,y) := \inf\nolimits_{\eta \in F}\big[f(x,y - \eta) + g(x,\eta)\big] \end{equation} and \begin{equation*} (f {\,\mathop{\oplus}\nolimits_1\,} g)(x,y) := \inf\nolimits_{\xi \in E}\big[f(x - \xi,y) + g(\xi,y)\big]. \end{equation*} We substitute the symbol ${\,\mathop{\oplus}\nolimits_2^e\,}$ for ${\,\mathop{\oplus}\nolimits_2\,}$ and ${\,\mathop{\oplus}\nolimits_1^e\,}$ for ${\,\mathop{\oplus}\nolimits_1\,}$ if the infimum is {\em exact}, that is to say, can be replaced by a minimum.
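\par As a simple illustration of these definitions (this example is ours, and is not taken from the sources cited below): let $E = F = \mathbb R$ and $f(x,y) = g(x,y) := {\textstyle\frac{1}{2}} x^2 + {\textstyle\frac{1}{2}} y^2$. Then \begin{equation*} (f {\,\mathop{\oplus}\nolimits_2\,} g)(x,y) = \inf\nolimits_{\eta \in \mathbb R}\big[{\textstyle\frac{1}{2}} x^2 + {\textstyle\frac{1}{2}}(y - \eta)^2 + {\textstyle\frac{1}{2}} x^2 + {\textstyle\frac{1}{2}}\eta^2\big] = x^2 + \textstyle\frac{1}{4} y^2, \end{equation*} the infimum being attained at $\eta = {\textstyle\frac{1}{2}} y$, so in this case the infimum is exact and we may write $f {\,\mathop{\oplus}\nolimits_2^e\,} g$.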
Lemma~\ref{SZlem} below first appeared in Simons--Z\u{a}linescu \cite[Section~4, pp.\ 8--10]{SZNZ}, and appeared subsequently in\break \cite[Section~16, pp.~67--69]{HBM}. It was later generalized in \cite[Theorem 9, p.\ 882]{QUAD} and \cite[Corollary 5.4, pp.\ 121--122]{AST}. We will be applying Lemmas~\ref{SZlem} and \ref{SZBISlem} below with $F := E^*$. We define the projection maps $\pi_1$ and $\pi_2$ by $\pi_1(x,y) := x$ and $\pi_2(x,y) := y$ \big($(x,y) \in E \times F$\big). \medbreak \begin{lemma}\label{SZlem} Let $f, g \in {\cal PCLSC}(E \times F)$, $f {\,\mathop{\oplus}\nolimits_2\,} g \in {\cal PC}(E \times F)$ and \begin{equation*} \textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[\pi_1\,\hbox{\rm dom}\,f - \pi_1\,\hbox{\rm dom}\,g\big]\ \hbox{be a closed subspace of}\ E. \end{equation*} Then\quad $(f {\,\mathop{\oplus}\nolimits_2\,} g)^* = f^* {\,\mathop{\oplus}\nolimits_1^e\,} g^*\ \hbox{on}\ E^* \times F^*$. \end{lemma} \begin{theorem}\label{Dthm} Let $f, g \in {\cal PCLSC}(E \times E^*)$, $f,g \ge q_L$ on $E \times E^*$, \begin{equation}\label{D1} \textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[\pi_1\,\hbox{\rm dom}\,f - \pi_1\,\hbox{\rm dom}\,g\big]\hbox{ be a closed subspace of }E, \end{equation} and ${\rm coinc}[f]$ and ${\rm coinc}[g]$ be quasidense. Then $(f {\,\mathop{\oplus}\nolimits_2\,} g)^@ \ge q_L$ on $E \times E^*$, ${\rm coinc}[(f {\,\mathop{\oplus}\nolimits_2\,} g)^@]$ is closed and quasidense, and \begin{equation}\label{DD2} \left.\begin{gathered} (y,y^*) \in {\rm coinc}[(f {\,\mathop{\oplus}\nolimits_2\,} g)^@] \iff\\ \hbox{there exist}\ u^*,v^* \in E^*\ \hbox{such that}\\ (y,u^*) \in {\rm coinc}[f^@],\ (y,v^*) \in {\rm coinc}[g^@]\hbox{ and }u^* + v^* = y^*. \end{gathered}\right\} \end{equation} \end{theorem} \begin{proof} Let $h := f {\,\mathop{\oplus}\nolimits_2\,} g$. Since $f,g \ge q_L$ on $E \times E^*$, for all $(x,x^*) \in E \times E^*$, \begin{align*} h(x,x^*) &= \inf\nolimits_{\xi^* \in E^*}\big[f(x,x^* - \xi^*) + g(x,\xi^*)\big]\\ &\ge \inf\nolimits_{\xi^* \in E^*}\big[q_L(x,x^* - \xi^*) + q_L(x,\xi^*)\big]\\ &= \inf\nolimits_{\xi^* \in E^*}\big[\bra{x}{x^* - \xi^*} + \bra{x}{\xi^*}\big] = \bra{x}{x^*} = q_L(x,x^*). \end{align*} From \eqref{D1}, $\pi_1\,\hbox{\rm dom}\,f \cap \pi_1\,\hbox{\rm dom}\,g \ne \emptyset$, and so there exist $x_0 \in E$, $y_0^* \in E^*$ and $z_0^* \in E^*$ such that $(x_0,y_0^*) \in \hbox{\rm dom}\,f$ and $(x_0,z_0^*) \in \hbox{\rm dom}\,g$. It now follows from \eqref{DD1} that\quad $(f {\,\mathop{\oplus}\nolimits_2\,} g)(x_0,y_0^* + z_0^*) \le f(x_0,y_0^*) + g(x_0,z_0^*) < \infty$.\quad To sum up: \begin{equation}\label{D2} h \in {\cal PC}(E \times E^*)\hbox{ and } h \ge q_L\ \hbox{on}\ E \times E^*. \end{equation} Note that we do not assert in \eqref{D2} that $h \in {\cal PCLSC}(E \times E^*)$. Since ${\rm coinc}[f]$ and ${\rm coinc}[g]$ are quasidense, Lemma~\ref{FSTARlem} implies that \begin{equation}\label{D3} f^* \ge q_{\widetilde L}\ \hbox{on}\ E^* \times E^{**} \quad\hbox{and}\quad g^* \ge q_{\widetilde L}\ \hbox{on}\ E^* \times E^{**}, \end{equation} from which \begin{equation}\label{DD4} f^@ \ge q_L\ \hbox{on}\ E \times E^* \quad\hbox{and}\quad g^@ \ge q_L\ \hbox{on}\ E \times E^*. \end{equation} From Lemma~\ref{SZlem} and \eqref{D3}, for all $(y^*,y^{**}) \in E^* \times E^{**}$, \begin{equation}\label{D4} \left. 
\begin{gathered} h^*(y^*,y^{**}) = \min\nolimits_{z^* \in E^*}\big[{f}^*(y^* - z^*,y^{**}) + {g}^*(z^*,y^{**})\big]\\ \ge \inf\nolimits_{z^* \in E^*}\big[\bra{y^* - z^*}{y^{**}} + \bra{z^*}{y^{**}}\big] = \bra{y^*}{y^{**}} = q_{\widetilde L}(y^*,y^{**}). \end{gathered} \right\} \end{equation} Thus $h^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$, and so \eqref{D2} and Theorem~\ref{THREEthm} imply that $h^@ \ge q_L$ on $E \times E^*$ and ${\rm coinc}[h^@]$ is closed and quasidense, as required. \par We now establish \eqref{DD2}. If $(y,y^*) \in E \times E^*$ and we use \eqref{DD4} and specialize \eqref{D4} to the case when $y^{**} = \widehat y$, we obtain \begin{equation}\label{DD3} \left. \begin{gathered} h^@(y,y^*) = h^*(y^*,\widehat y) = \min\nolimits_{z^* \in E^*}\big[f^@(y,y^* - z^*) + g^@(y,z^*)\big]\\ \ge \inf\nolimits_{z^* \in E^*}\big[\bra{y}{y^* - z^*} + \bra{y}{z^*}\big] = \bra{y}{y^*} = q_L(y,y^*). \end{gathered} \right\} \end{equation} If $(y,y^*) \in {\rm coinc}[h^@]$ then this provides $v^* \in E^*$ such that \begin{equation*} f^@(y,y^* - v^*) + g^@(y,v^*) = \bra{y}{y^* - v^*} + \bra{y}{v^*}. \end{equation*} Let $u^* := y^* - v^*$. Then $u^* + v^* = y^*$ and $f^@(y,u^*) + g^@(y,v^*) = \bra{y}{u^*} + \bra{y}{v^*}$.\break From \eqref{DD4}, $(y,u^*) \in {\rm coinc}[f^@]$ and $(y,v^*) \in {\rm coinc}[g^@]$. This completes the proof of the implication ($\Longrightarrow$) of \eqref{DD2}. If, conversely, there exist $u^*,v^* \in E^*$ such that $(y,u^*) \in {\rm coinc}[f^@]$, $(y,v^*) \in {\rm coinc}[g^@]$ and $u^* + v^* = y^*$ then, from \eqref{DD3}, \begin{gather*} h^@(y,y^*) \le f^@(y,u^*) + g^@(y,v^*) = \bra{y}{u^*} + \bra{y}{v^*} = \bra{y}{y^*}. \end{gather*} It now follows from \eqref{DD3} that $(y,y^*) \in {\rm coinc}[h^@]$. This completes the proof of the implication ($\Longleftarrow$) of \eqref{DD2}, and thus the proof of Theorem~\ref{Dthm}. \end{proof} By interchanging the roles of ${\,\mathop{\oplus}\nolimits_2\,}$ and ${\,\mathop{\oplus}\nolimits_1\,}$ in the statement of Lemma~\ref{SZlem}, we can prove the following result: \begin{lemma}\label{SZBISlem} Let $f, g \in {\cal PCLSC}(E \times F)$, $f {\,\mathop{\oplus}\nolimits_1\,} g \in {\cal PC}(E \times F)$ and \begin{equation*} \textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[\pi_2\,\hbox{\rm dom}\,f - \pi_2\,\hbox{\rm dom}\,g\big]\ \hbox{be a closed subspace of}\ F. \end{equation*} Then\quad$(f {\,\mathop{\oplus}\nolimits_1\,} g)^* = f^* {\,\mathop{\oplus}\nolimits_2^e\,} g^*\ \hbox{on}\ E^* \times F^*$. \end{lemma} \begin{theorem}\label{Rthm} Let $f, g \in {\cal PCLSC}(E \times E^*)$, $f,g \ge q_L$ on $E \times E^*$, \begin{equation}\label{R1} \textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[\pi_2\,\hbox{\rm dom}\,f - \pi_2\,\hbox{\rm dom}\,g\big]\hbox{ be a closed subspace of }E^*, \end{equation} and ${\rm coinc}[f]$ and ${\rm coinc}[g]$ be quasidense. Then $(f {\,\mathop{\oplus}\nolimits_1\,} g)^@ \ge q_L$ on $E \times E^*$, ${\rm coinc}[(f {\,\mathop{\oplus}\nolimits_1\,} g)^@]$ is closed and quasidense and, \begin{equation}\label{R2} \left.\begin{gathered} (y,y^*) \in {\rm coinc}[(f {\,\mathop{\oplus}\nolimits_1\,} g)^@] \iff\\ \hbox{there exist}\ u^{**},v^{**} \in E^{**}\ \hbox{such that}\\ (y^*,u^{**}) \in {\rm dcoinc}[f^*],\ (y^*,v^{**}) \in {\rm dcoinc}[g^*]\hbox{ and }u^{**} + v^{**} = \widehat y. \end{gathered} \right\} \end{equation} \end{theorem} \begin{proof} Let $h := f {\,\mathop{\oplus}\nolimits_1\,} g$. 
By interchanging the variables in the proofs already given of \eqref{D2} and \eqref{D3} in Theorem~\ref{Dthm}, we can prove that, \begin{equation}\label{R3} h \in {\cal PC}(E \times E^*)\hbox{ and } h \ge q_L\ \hbox{on}\ E \times E^* \end{equation} and \begin{equation}\label{R4} f^* \ge q_{\widetilde L}\ \hbox{on}\ E^* \times E^{**} \quad\hbox{and}\quad g^* \ge q_{\widetilde L}\ \hbox{on}\ E^* \times E^{**}. \end{equation} From Lemma~\ref{SZBISlem} and \eqref{R4}, for all $(y^*,y^{**}) \in E^* \times E^{**}$, \begin{equation}\label{R5} \left. \begin{gathered} h^*(y^*,y^{**}) = \min\nolimits_{z^{**} \in E^{**}}\big[{f}^*(y^*,y^{**} - z^{**}) + {g}^*(y^*,z^{**})\big]\\ \ge \inf\nolimits_{z^{**} \in E^{**}}\big[\bra{y^*}{y^{**} - z^{**}} + \bra{y^*}{z^{**}}\big] = \bra{y^*}{y^{**}} = q_{\widetilde L}(y^*,y^{**}). \end{gathered} \right\} \end{equation} Thus $h^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$, and so \eqref{R3} and Theorem~\ref{THREEthm} imply that $h^@ \ge q_L$ on $E \times E^*$ and ${\rm coinc}[h^@]$ is closed and quasidense, as required. If we now let $(y,y^*) \in E \times E^*$ and specialize \eqref{R5} to the case when $y^{**} = \widehat y$, we obtain \begin{equation}\label{R7} \left.\begin{gathered} h^@(y,y^*) = \min\nolimits_{z^{**} \in E^{**}}\big[f^*(y^*,\widehat y - z^{**}) + g^*(y^*,z^{**})\big]\\ \ge \inf\nolimits_{z^{**} \in E^{**}}\big[\bra{y^*}{\widehat y - z^{**}} + \bra{y^*}{z^{**}}\big] = \bra{y}{y^*} = q_L(y,y^*). \end{gathered}\right\} \end{equation} We now establish \eqref{R2}. If $(y,y^*) \in {\rm coinc}[h^@]$ then \eqref{R7} provides $v^{**} \in E^{**}$ such that \begin{equation*} f^*(y^*,\widehat y - v^{**}) + g^*(y^*,v^{**}) = \bra{y^*}{\widehat y - v^{**}} + \bra{y^*}{v^{**}}. \end{equation*} Let $u^{**} := \widehat y - v^{**}$. Then we have $u^{**} + v^{**} = \widehat y$ and $f^*(y^*,u^{**}) + g^*(y^*,v^{**}) = \bra{y^*}{u^{**}} + \bra{y^*}{v^{**}}$. From \eqref{R4}, $(y^*,u^{**}) \in {\rm dcoinc}[f^*]$ and $(y^*,v^{**}) \in {\rm dcoinc}[g^*]$. This completes the proof of the implication ($\Longrightarrow$) of \eqref{R2}. If, conversely, there exist $u^{**},v^{**} \in E^{**}$ such that $(y^*,u^{**}) \in {\rm dcoinc}[f^*]$, $(y^*,v^{**}) \in {\rm dcoinc}[g^*]$ and $u^{**} + v^{**} = \widehat y$ then, from \eqref{R7}, \begin{equation*} h^@(y,y^*) \le f^*(y^*,u^{**}) + g^*(y^*,v^{**}) = \bra{y^*}{u^{**}} + \bra{y^*}{v^{**}} = \bra{y^*}{\widehat y} = \bra{y}{y^*}. \end{equation*} It now follows from \eqref{R7} that $(y,y^*) \in {\rm coinc}[h^@]$. This completes the proof of the implication ($\Longleftarrow$) of \eqref{R2}, and thus the proof of Theorem~\ref{Rthm}. \end{proof} \section{Monotone sets and multifunctions}\label{MONsec} Let $\emptyset \ne A \subset E \times E^*$. It is easy to see that \begin{gather} A\hbox{\em\ is monotone if, and only if, for all } a,b \in A,\ q_L(a - b) \ge 0\label{QLMON}\\ \hbox{\em\ if, and only if, } L(A)\hbox{\em\ is a monotone subset of }E^* \times E^{**}.\label{LMON} \end{gather} \begin{theorem}[Quasidensity and maximality]\label{RLMAXthm} Let $A$ be a closed, quasidense monotone subset of $E \times E^*$. Then $A$ is maximally monotone. \end{theorem} \begin{proof} Let $c \in E \times E^*$ and $A \cup \{c\}$ be monotone. Let $\varepsilon > 0$, and choose $a \in A$ so that $r_L(a - c) < \varepsilon$. Since $q_L(a - c) \ge 0$, it follows that \begin{align*} {\textstyle\frac{1}{2}}\|a - c\|^2 \le {\textstyle\frac{1}{2}}\|a - c\|^2 + q_L(a - c) = r_L(a - c) < \varepsilon.
\end{align*} Letting $\varepsilon \to 0$ and using the fact that $A$ is closed, $c \in A$. \end{proof} The following important property of coincidence sets was first proved in Burachik--Svaiter, \cite[Theorem~3.1, pp. 2381--2382]{BS} and Penot, \cite[Proposition 4(h)$\Longrightarrow$(a), pp. 860--861]{PENOT}. Here, we give a short proof using the criterion for monotonicity that appeared in \eqref{QLMON}. \begin{lemma}\label{CONTlem} Let $f \in {\cal PC}(E \times E^*)$ and $f \ge q_L$ on $E \times E^*$. Then ${\rm coinc}[f]$ is monotone. \end{lemma} \begin{proof} Let $a,b \in {\rm coinc}[f]$. Then \begin{align*} \textstyle\frac{1}{4} q_L(a - b) &= {\textstyle\frac{1}{2}} q_L(a) + {\textstyle\frac{1}{2}} q_L(b) - \textstyle\frac{1}{4} q_L(a + b) = {\textstyle\frac{1}{2}} f(a) + {\textstyle\frac{1}{2}} f(b) - \textstyle\frac{1}{4} q_L(a + b)\\ &\ge f\big({\textstyle\frac{1}{2}}(a + b)\big) - q_L\big({\textstyle\frac{1}{2}}(a + b)\big) \ge 0. \end{align*} This establishes \eqref{QLMON} and completes the proof of Lemma~\ref{CONTlem}. \end{proof} In order to simplify some notation in the sequel, if $S\colon\ E \rightrightarrows E^*$, we will say that $S$ is {\em closed} if its graph, $G(S)$, is closed in $E \times E^*$, and we will say that $S$ is \emph{quasidense} if $G(S)$ is quasidense in $E \times E^*$. \par Our analysis depends on the following definition: \begin{definition}[The definition of $\theta_S$]\label{THdef} Let $S\colon\ E \rightrightarrows E^*$ be a monotone\break multifunction and $G(S) \ne \emptyset$. We define the function $\theta_S \in {\cal PCLSC}(E^* \times E^{**})$ by $\theta_S := \Theta_{G(S)}$. (See Definition~\ref{THAdef}.) Explicitly: \begin{equation}\label{TH1} \hbox{for all}\ c^* \in E^* \times E^{**},\quad \theta_S(c^*) := \sup\nolimits_{G(S)}[c^* - q_L]. \end{equation} In longhand, for all $(y^*,y^{**}) \in E^* \times E^{**}$, \begin{equation}\label{THLONG} \theta_S(y^*,y^{**}) := \sup\nolimits_{(s,s^*) \in G(S)}\big[\bra{s}{y^*} + \bra{s^*}{y^{**}} - \bra{s}{s^*}\big]. \end{equation} \end{definition} We now show how $\theta_S$ determines the {\em Fitzpatrick function}, $\varphi_S$, that acts on $E \times E^*$ (rather than on $E^* \times E^{**}$). \begin{definition}[The definition of $\varphi_S$]\label{PHdef} Let $S\colon\ E \rightrightarrows E^*$ be a monotone multifunction and $G(S) \ne \emptyset$. We define the function $\varphi_S \in {\cal PCLSC}(E \times E^*)$ by \begin{equation}\label{TH2} \varphi_S = \theta_S \circ L. \end{equation} Explicitly, \begin{align} \hbox{for all}\ b \in E \times E^*,\quad\varphi_S(b) &:= \sup\nolimits_{G(S)}[Lb - q_L]\label{PH1}\\ &= q_L(b) - \inf q_L\big(G(S) - b\big).\label{PH2} \end{align} In longhand, for all $(x,x^*) \in E \times E^*$, \begin{equation}\label{PH5} \varphi_S(x,x^*) := \sup\nolimits_{(s,s^*) \in G(S)}\big[\bra{s}{x^*} + \bra{x}{s^*} - \bra{s}{s^*}\big]. \end{equation} \end{definition} \begin{remark}\label{FDEFrem} The Fitzpatrick function was originally introduced in the\break Banach space setting in \cite[(1988)]{FITZ}, but lay dormant until it was rediscovered by Mart\'\i nez-Legaz and Th\'era in \cite[(2001)]{MLT}. It had been previously considered in the finite--dimensional setting by Krylov in \cite[(1982)]{KRYLOV}. The generalization of the Fitzpatrick function to {\em Banach SN spaces} can be found in \cite[Definition 6.2, p.\ 1029]{PARTONE}. \end{remark} \begin{lemma}\label{PHlem} Let $S\colon\ E \rightrightarrows E^*$ be maximally monotone.
Then: \begin{equation}\label{PH3} \varphi_S \in {\cal PCLSC}(E \times E^*),\ \varphi_S \ge q_L\ \hbox{on}\ E \times E^*\quad\hbox{and}\quad {\rm coinc}[\varphi_S] = G(S). \end{equation} \end{lemma} \begin{proof} If $b \in E \times E^*$ and $\varphi_S(b) \le q_L(b)$ then \eqref{PH2} gives $\inf q_L\big(G(S) - b\big) \ge 0$. From the maximality, $b \in G(S)$ and so we derive from the monotonicity that $\inf q_L\big(G(S) - b\big) = 0$, from which $\varphi_S(b) = q_L(b)$. Since $\varphi_S$ is obviously convex and lower semicontinuous, this completes the proof of \eqref{PH3}. \end{proof} We now come to the ``${\varphi_S}^*$ criterion'' for a maximally monotone set to be quasidense. \begin{theorem}\label{PHthm} Let $S\colon\ E \rightrightarrows E^*$ be maximally monotone. Then: \begin{equation}\label{PH4} S\hbox{ is quasidense } \iff {\varphi_S}^* \ge q_{\widetilde L}\ \hbox{on}\ E^* \times E^{**}. \end{equation} \end{theorem} \begin{proof} This is immediate from \eqref{PH3} and Theorem~\ref{FSTARthm}. \end{proof} \begin{corollary}[First partial converse to Theorem~\ref{RLMAXthm}]\label{SURJcor} Let $S\colon\ E \rightrightarrows E^*$ be maximally monotone and surjective. Then $S$ is quasidense. \end{corollary} \begin{proof} Suppose that $(y^*,y^{**}) \in E^* \times E^{**}$. Let $x \in S^{-1}y^*$. Then, from \eqref{PH3}, \begin{align*} {\varphi_S}^*(y^*,y^{**}) &\ge \bra{x}{y^*} + \bra{y^*}{y^{**}} - \varphi_S(x,y^*)\\ &= \bra{x}{y^*} + \bra{y^*}{y^{**}} - \bra{x}{y^*} = \bra{y^*}{y^{**}} = q_{\widetilde L}(y^*,y^{**}). \end{align*} It now follows from \eqref{PH4} that $S$ is quasidense. \end{proof} \begin{remark} Once one knows the (highly nontrivial) result that a maximally\break monotone multifunction is quasidense if, and only if, it is {\em of type (FP), {\em or} locally maximally monotone}, see \cite[Theorem 10.3, p.\ 21]{PARTTWO}, then Corollary~\ref{SURJcor} follows from Fitzpatrick--Phelps, \cite[Theorem 3.7, pp.\ 67--68]{FITZTWO}. \end{remark} In Theorem~\ref{THthm}, we will give the ``$\theta_S$ criterion'' for a maximally monotone set to be quasidense. We start with a preliminary lemma of independent interest, which will be used in Corollary~\ref{PHIVcor}. Lemma~\ref{THlem} raises the following problem: \begin{problem}\label{THprob} Is there a maximally monotone multifunction $S\colon\ E \rightrightarrows E^*$ such that ${\varphi_S}^* \ne \theta_S$? \end{problem} \begin{lemma}\label{THlem} Let $S\colon\ E \rightrightarrows E^*$ be maximally monotone. Then: \begin{equation}\label{TH3} {\varphi_S}^* \ge \theta_S\ \hbox{on}\ E^* \times E^{**}. \end{equation} If, further, \begin{equation}\label{TH5} \hbox{\rm dom}\,\varphi_S \subset G(S) \end{equation} then \begin{equation}\label{TH4} {\varphi_S}^* = \theta_S\ \hbox{on}\ E^* \times E^{**}. \end{equation} \end{lemma} \begin{proof} Let $c^* \in E^* \times E^{**}$. From \eqref{PH3} and \eqref{TH1}, \begin{equation*} {\varphi_S}^*(c^*) = \sup\nolimits_{E \times E^*}[{c^*} - \varphi_S] \ge \sup\nolimits_{G(S)}[{c^*} - \varphi_S] = \sup\nolimits_{G(S)}[{c^*} - q_L] = \theta_S(c^*), \end{equation*} which gives \eqref{TH3}. Now suppose that \eqref{TH5} is satisfied. If $b \in E \times E^* \setminus \hbox{\rm dom}\,\varphi_S$ then $\bra{b}{c^*} - \varphi_S(b) = -\infty \le \theta_S(c^*)$. If, on the other hand, $b \in \hbox{\rm dom}\,\varphi_S$ then \eqref{TH5} implies that $b \in G(S)$, and so \eqref{PH3} gives $\varphi_S(b) = q_L(b)$. Thus, using \eqref{TH1}, $\bra{b}{c^*} - \varphi_S(b) = \bra{b}{c^*} - q_L(b) \le \theta_S(c^*)$. 
Combining these two observations, we see that, \begin{equation*} \hbox{for all}\ b \in E \times E^*,\quad \bra{b}{c^*} - \varphi_S(b) \le \theta_S(c^*). \end{equation*} Taking the supremum over $b \in E \times E^*$, ${\varphi_S}^*(c^*) \le \theta_S(c^*)$. Thus ${\varphi_S}^* \le \theta_S$ on $E^* \times E^{**}$, and \eqref{TH4} follows from \eqref{TH3}. \end{proof} \begin{theorem}\label{THthm} Let $S\colon\ E \rightrightarrows E^*$ be maximally monotone. Then: \begin{equation}\label{PHIA5} S\hbox{ is quasidense } \iff \theta_S \ge q_{\widetilde L}\ \hbox{on}\ E^* \times E^{**}. \end{equation} \end{theorem} \begin{proof} If $S$ is quasidense then $G(S)$ is a quasidense subset of $E \times E^*$ and so, from Corollary~\ref{THAcor}, $\theta_S = \Theta_{G(S)} \ge q_{\widetilde L}$ on $E^* \times E^{**}$. If, conversely, $\theta_S \ge q_{\widetilde L}$ on $E^* \times E^{**}$ then, from \eqref{TH3}, ${\varphi_S}^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$, and it follows from Theorem~\ref{PHthm} that $S$ is quasidense. \end{proof} \begin{corollary}[Second partial converse to Theorem~\ref{RLMAXthm}]\label{CONVcor} Let $E$ be reflexive and $S\colon\ E \rightrightarrows E^*$ be maximally monotone. Then $S$ is quasidense. \end{corollary} \begin{proof} Suppose that $(y^*,y^{**}) \in E^* \times E^{**}$. Choose $y \in E$ such that $\widehat y = y^{**}$. Then $(y^*,y^{**}) = (y^*,\widehat y) = L(y,y^*)$ and so, from \eqref{TH2} and \eqref{PH3}, \begin{equation*} \theta_S(y^*,y^{**}) = \theta_S\circ L(y,y^*) = \varphi_S(y,y^*) \ge q_L(y,y^*) = q_{\widetilde L}(y^*,y^{**}). \end{equation*} It now follows from Theorem~\ref{THthm} that $S$ is quasidense. \end{proof} We end this section by giving a result in Theorem~\ref{COINCthm} that will be used in our discussion of the Fitzpatrick extension in Section~\ref{FITZEXTsec}. We start with a preliminary lemma. \begin{lemma}\label{PSlem} Let $S\colon\ E \rightrightarrows E^*$ be maximally monotone. Then: \begin{gather} {\varphi_S}^@ \ge \varphi_S\ \ge q_L\ \hbox{on}\ E \times E^*\quad\hbox{and}\quad {\rm coinc}[{\varphi_S}^@] = G(S).\label{PS2}\\ {\theta_S}^@ \ge {\varphi_S}^* \ge \theta_S\ \hbox{on}\ E^* \times E^{**}.\label{PS4} \end{gather} \end{lemma} \begin{proof} It follows by composing \eqref{TH3} with $L$ and using Definition~\ref{FATdef} and \eqref{TH2} that ${\varphi_S}^@ \ge \varphi_S$ on $E \times E^*$. Furthermore, \eqref{PH3} implies that $\varphi_S \ge q_L$ on $E \times E^*$ and $G(S) = {\rm coinc}[\varphi_S] \supset {\rm coinc}[{\varphi_S}^@]$. Lemma~\ref{Llem} implies that ${\rm coinc}[\varphi_S] \subset {\rm coinc}[{\varphi_S}^@]$, which completes the proof of \eqref{PS2}. \par For all $c^* \in E^*\times E^{**}$, ${\theta_S}^@(c^*) = \sup\nolimits_{E^* \times E^{**}}[{{\widetilde L} c^*} - \theta_S]$. Thus, from \eqref{TH2}, \begin{align*} {\theta_S}^@(c^*) &\ge \sup\nolimits_{b \in E \times E^*}\big[\bra{Lb}{{\widetilde L} c^*} - \theta_S(Lb)\big]\\ &= \sup\nolimits_{b \in E \times E^*}\big[\bra{b}{c^*} - \varphi_S(b)\big] = {\varphi_S}^*(c^*), \end{align*} which gives the first inequality in \eqref{PS4}, and the second inequality in \eqref{PS4} has already been established in \eqref{TH3}. \end{proof} \begin{theorem}\label{COINCthm} Let $S\colon\ E \rightrightarrows E^*$ be maximally monotone and quasidense. Then\quad ${\rm dcoinc}[{\theta_S}] = {\rm dcoinc}[{\varphi_S}^*] = {\rm dcoinc}[{\theta_S}^@]$. 
\end{theorem} \begin{proof} From \eqref{PS4} and \eqref{PHIA5},\quad ${\theta_S}^@ \ge {\varphi_S}^* \ge \theta_S \ge q_{\widetilde L}$ on $E^* \times E^{**}$.\quad It follows that\quad ${\rm dcoinc}[{\theta_S}^@] \subset {\rm dcoinc}[{\varphi_S}^*] \subset {\rm dcoinc}[{\theta_S}]$.\quad However, if we apply\break Lemma~\ref{Llem} (to $E^* \times E^{**}$ instead of $E \times E^*$), we see that ${\rm dcoinc}[\theta_S] \subset {\rm dcoinc}[{\theta_S}^@]$. This gives the desired result. \end{proof} \begin{problem} Theorem~\ref{COINCthm} leads to the question: {\em if $S$ is maximally monotone and ${\theta_S}^@ \ge q_{\widetilde L}$ on $E^* \times E^{**}$ then is $S$ necessarily quasidense?} \end{problem} \section{Sum theorem with domain constraints}\label{DSUMSsec} \begin{notation} Let $S\colon\ E \rightrightarrows E^*$. In what follows, we write \begin{equation*} D(S) := \big\{x \in E\colon\ Sx \ne \emptyset\big\} = \pi_1G(S)\hbox{ and }R(S) := \textstyle\bigcup_{x \in E}Sx = \pi_2G(S). \end{equation*} \end{notation} We will use the following computational rules in the sequel: \begin{lemma}\label{PHISlem} Let $S\colon\ E \rightrightarrows E^*$ be closed, quasidense and monotone. Then \begin{align} D(S) \subset \pi_1\hbox{\rm dom}\,\varphi_S &\quad\hbox{and}\quad R(S) \subset \pi_2\hbox{\rm dom}\,\varphi_S.\label{PHIS2} \end{align} \end{lemma} \begin{proof} This is immediate from Theorem~\ref{RLMAXthm} and \eqref{PH3}. \end{proof} \begin{theorem}[Sum theorem with domain constraints]\label{STDthm} Let $S,T\colon\ E \rightrightarrows E^*$ be closed, quasidense and monotone. Then {\rm(a)$\Longrightarrow$(b)$\Longrightarrow$(c)$\Longrightarrow$(d)}: \par \noindent {\rm(a)}\enspace $D(S) \cap \hbox{\rm int}\,D(T) \ne \emptyset$ or $\hbox{\rm int}\,D(S) \cap D(T) \ne \emptyset$. \par \noindent {\rm(b)}\enspace $\textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[D(S) - D(T)\big] = E$. \par \noindent {\rm(c)}\enspace $\textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[\pi_1\,\hbox{\rm dom}\,\varphi_S - \pi_1\,\hbox{\rm dom}\,\varphi_T\big]$\quad is a closed subspace of $E$. \par \noindent {\rm(d)}\enspace $S + T$\quad is closed, quasidense and monotone. \end{theorem} \begin{proof} It is immediate from \eqref{PHIS2} that (a)$\Longrightarrow$(b)$\Longrightarrow$(c). Now suppose that (c) is satisfied. From Theorem~\ref{RLMAXthm}, $S$ and $T$ are maximally monotone, and so \eqref{PH3} and \eqref{PS2} imply that $\varphi_S, \varphi_T \in {\cal PCLSC}(E \times E^*)$, $\varphi_S, \varphi_T \ge q_L$ on $E \times E^*$, ${\rm coinc}[\varphi_S] = {\rm coinc}[{\varphi_S}^@] = G(S)$ and ${\rm coinc}[\varphi_T] = {\rm coinc}[{\varphi_T}^@] = G(T)$, and we can apply Theorem~\ref{Dthm} with $f := \varphi_S$ and $g := \varphi_T$. Thus $(\varphi_S {\,\mathop{\oplus}\nolimits_2\,} \varphi_T)^@ \ge q_L$ on $E \times E^*$, ${\rm coinc}[(\varphi_S {\,\mathop{\oplus}\nolimits_2\,} \varphi_T)^@]$ is closed and quasidense, and $(y,y^*) \in {\rm coinc}[(\varphi_S {\,\mathop{\oplus}\nolimits_2\,} \varphi_T)^@]$ if, and only if, there exist $u^*,v^* \in E^*$ such that $(y,u^*) \in G(S)$, $(y,v^*) \in G(T)$ and $u^* + v^* = y^*$. This is exactly equivalent to the statement that $(y,y^*) \in G(S + T)$. Finally, it is obvious that $S + T$ is monotone. \end{proof} \begin{remark}\label{VZrem} Theorem~\ref{STDthm} above has applications to the classification of maximally monotone multifunctions. See \cite[Theorems~7.2 and 8.1]{PARTTWO}.
Theorem~\ref{STDthm} can also be deduced from Voisei--Z\u{a}linescu \cite[Corollary~3.5,\ p.\ 1024]{VZ}. \end{remark} \section{The Fitzpatrick extension}\label{FITZEXTsec} \begin{definition}[The Fitzpatrick extension]\label{FITZdef} Let $S\colon\ E \rightrightarrows E^*$ be a closed quasidense monotone multifunction. We now introduce the {\em Fitzpatrick extension}, $S^{\mathbb F}\colon\ E^* \rightrightarrows E^{**}$, of $S$. From Theorem~\ref{RLMAXthm} and \eqref{PH3}, ${\rm coinc}[\varphi_S] = G(S)$, and so we see from Theorem~\ref{FSTARthm} that ${\varphi_S}^* \ge q_{\widetilde L}$ on $E^* \times E^{**}$. Using our current notation, the multifunction $S^{\mathbb F}$ was defined in \cite[Definition 5.1]{PARTTWO} by \begin{equation}\label{PHSTCRIT} G(S^{\mathbb F}) := {\rm dcoinc}[{\varphi_S}^*]. \end{equation} \big(There is a more abstract version of this in \cite[Definition 8.5, p.\ 1037]{PARTONE}.\big) From Theorem~\ref{COINCthm}, we can also write \begin{equation}\label{THCRIT} G(S^{\mathbb F}) = {\rm dcoinc}[{\theta_S}] = {\rm dcoinc}[{\theta_S}^@]. \end{equation} The word {\em extension} is justified by the fact that $L(a) \in G(S^{\mathbb F}) \iff a \in G(S)$. Indeed, from \eqref{THCRIT}, \eqref{TH2} and \eqref{PH3}, \begin{equation}\label{EXT1} \left.\begin{gathered} L(a) \in G(S^{\mathbb F}) \iff \theta_S\big(L(a)\big) = q_{\widetilde L}\big(L(a)\big)\\ \iff \varphi_S(a) = q_L(a) \iff a \in G(S). \end{gathered} \right\} \end{equation} \end{definition} \begin{theorem}\label{AFMAXthm} Let $S\colon\ E \rightrightarrows E^*$ be closed, quasidense and monotone. Then $S^{\mathbb F}$ is maximally monotone. \end{theorem} \begin{proof} From Lemma~\ref{CONTlem} (applied to the function ${\varphi_S}^*$ on $E^* \times E^{**}$), $S^{\mathbb F}$ is monotone. Now let $c^* \in E^* \times E^{**}$ and, for all $a^* \in G(S^{\mathbb F})$, $q_{\widetilde L}(c^* - a^*)\ge 0$. From \eqref{EXT1}, for all $a \in G(S)$, $q_{\widetilde L}\big(c^* - L(a)\big)\ge 0$. Now \eqref{QD2} gives $q_{\widetilde L}\big(c^* - L(a)\big) = q_{\widetilde L}(c^*) - \bra{a}{c^*} + q_L(a)$ and so, for all $a \in G(S)$, $q_{\widetilde L}(c^*) \ge \bra{a}{c^*} - q_L(a)$. Taking the supremum over $a$ and using \eqref{TH1}, $q_{\widetilde L}(c^*) \ge \theta_S(c^*)$. From Theorem~\ref{THthm}, $\theta_S(c^*) = q_{\widetilde L}(c^*)$, and so $c^* \in {\rm dcoinc}[\theta_S]$. Thus, from \eqref{THCRIT}, $c^* \in G(S^{\mathbb F})$. This completes the proof of the maximal monotonicity of $S^{\mathbb F}$. \end{proof} \begin{remark} It is interesting to speculate (see \cite[Problem 12.7, p.\ 1047]{PARTONE}) whether $S^{\mathbb F}$ is actually quasidense. We shall see in Example~\ref{TAILex}, Theorems~\ref{SFTthm}(b) and \ref{SPECTthm} that this is not generally the case. However, it is the case in one important situation. We observed in Example~\ref{SUBex} that if $f\colon\ E \to \,]{-}\infty,\infty]$ is proper, convex and lower semicontinuous then $\partial f\colon\ E \rightrightarrows E^*$ is quasidense. However, it was shown in \cite[Theorem 5.7]{PARTTWO} that $(\partial f)^{\mathbb F} = \partial(f^*)$, so the multifunction $(\partial f)^{\mathbb F}\colon\ E^* \rightrightarrows E^{**}$ is quasidense. \end{remark} \begin{remark}\label{GOSSrem} It follows from \eqref{THCRIT} that $y^{**} \in S^{\mathbb F}(y^*)$ exactly when $(y^{**},y^*)$ is in the {\em Gossez extension} of $G(S)$ \big(see \cite[Lemma~2.1, p.\ 275]{GOSSEZ}\big). 
\end{remark} Our next result gives a situation in which we can obtain an explicit description of $S^{\mathbb F}$, as well as an inverse of the operation $S \mapsto S^{\mathbb F}$. Theorem~\ref{TSthm} is an extension to the nonlinear case of \cite[Theorem 2.1, pp.\ 297--298]{BUS12}. It will be important in our construction of examples. \begin{theorem}\label{TSthm} Let $T\colon\ E^* \rightrightarrows E^{**}$ and $R(T) \subset \widehat E$. Let $S = G^{-1}L^{-1}G(T)$, {\em i.e.}, $S\colon\ E \rightrightarrows E^*$ is defined by $G(S) = L^{-1}G(T)$. Then: \par\noindent {\rm(a)}\enspace $G(T) \subset L\big(G(S)\big)$. {\em(The opposite inclusion is trivially true.)} \par\noindent {\rm(b)}\enspace Suppose in addition that $T$ is maximally monotone. Then $S$ is maximally monotone. \par\noindent {\rm(c)}\enspace Suppose in addition that $T$ is maximally monotone and $D(T) = E^*$. Then $S$ is maximally monotone and quasidense, and $S^{\mathbb F} = T$. {\em Put another way, for multifunctions like $T$}, $G^{-1}L^{-1}G$ is the inverse of $\cdot^{\mathbb F}$. \end{theorem} \begin{proof} (a)\enspace Let $(y^*,y^{**}) \in G(T)$. Since $R(T) \subset \widehat E$, there exists $y \in E$ such that $y^{**} = \widehat y$. But then $(y^*,y^{**}) = L(y,y^*)$, and so $(y,y^*) \in L^{-1}\{(y^*,y^{**})\} \subset L^{-1}G(T) = G(S)$, from which $(y^*,y^{**}) = L(y,y^*) \in L\big(G(S)\big)$. \par (b)\enspace Now let $b_1,b_2 \in G(S)$. Then $Lb_1,Lb_2 \in G(T)$, and so $q_{\widetilde L}(Lb_1 - Lb_2) \ge 0$. Equivalently, $q_L(b_1 - b_2) \ge 0$. Thus $S$ is monotone. We now prove that $S$ is maximally monotone. To this end, let $c \in E \times E^*$ and $\inf q_L\big(G(S) - c\big) \ge 0$. Equivalently, $\inf q_{\widetilde L}\big(L\big(G(S)\big) - Lc\big) \ge 0$. From (a), $\inf q_{\widetilde L}\big(G(T) - Lc\big) \ge 0$. The maximal monotonicity of $T$ now implies that $Lc \in L\big(G(S)\big)$. Since $L$ is injective, $c \in G(S)$. Thus $S$ is maximally monotone. \par (c)\enspace Let $y^* \in E^* = D(T)$. Arguing as in (a), there exist $y^{**} \in E^{**}$ and $y \in E$ such that $L(y,y^*) = (y^*,y^{**}) \in L\big(G(S)\big)$. Since $L$ is injective, $(y,y^*) \in G(S)$. Thus $R(S) = E^*$, and the quasidensity of $S$ follows from Corollary~\ref{SURJcor}. \eqref{EXT1} and (a) now imply that $G(S^{\mathbb F}) \supset L\big(G(S)\big) \supset G(T)$, and the assumed maximal monotonicity of $T$ now gives $S^{\mathbb F} = T$, as required. \end{proof} The following result appears in Phelps--Simons, \cite[Corollary 2.6, p.\ 306]{PS}. We do not know the original source of the result. We almost certainly learned about it by personal communication with Robert Phelps. We give a proof for completeness. \begin{fact}\label{FOLKLORE} Let $T\colon\ E \to E^*$ be monotone and linear. Then $T$ is maximally monotone. \end{fact} \begin{proof} Let $(y,y^*) \in E \times E^*$ and, for all $x \in E$, $\bra{x - y}{Tx - y^*} \ge 0$. We first prove that, for all $z \in E$ and for all $\lambda \in \mathbb R$, \begin{equation}\label{FOLK1} \lambda\bra{z}{Ty - y^*} + \lambda^2\bra{z}{Tz} \ge 0. \end{equation} To this end, let $z \in E$ and $\lambda \in \mathbb R$. By direct computation, \begin{equation*} \lambda\bra{z}{Ty - y^*} + \lambda^2\bra{z}{Tz} = \bra{y + \lambda z - y}{T(y + \lambda z) - y^*}. \end{equation*} \eqref{FOLK1} now follows from our assumption, with $x = y + \lambda z$.
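For the reader's convenience, the direct computation above can be spelled out: since $T$ is linear, \begin{equation*} \bra{y + \lambda z - y}{T(y + \lambda z) - y^*} = \bra{\lambda z}{Ty + \lambda Tz - y^*} = \lambda\bra{z}{Ty - y^*} + \lambda^2\bra{z}{Tz}, \end{equation*} and the left-hand side is nonnegative by our assumption on $(y,y^*)$.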
From \eqref{FOLK1}, for all $z \in E$, the quadratic expression $\lambda \mapsto \lambda\bra{z}{Ty - y^*} + \lambda^2\bra{z}{Tz}$ attains a minimum at $\lambda = 0$ so, from elementary calculus, for all $z \in E$, $\bra{z}{Ty - y^*} = 0$. Consequently, $Ty - y^* = 0 \in E^*$. Thus $y^* = Ty$. This completes the proof of the maximal monotonicity of $T$. \end{proof} Theorem~\ref{LINVWthm} will be applied in Example~\ref{TAILex} and Theorem~\ref{SFTthm}. \begin{theorem}\label{LINVWthm} Let $T\colon\ E^* \to E^{**}$ be a monotone linear map and $R(T) \subset \widehat E$. Let $S = G^{-1}L^{-1}G(T)$. Then $S$ is maximally monotone and quasidense, and $S^{\mathbb F} = T$. \end{theorem} \begin{proof} Fact \ref{FOLKLORE} (with $E$ replaced by $E^*$) implies that $T$ is maximally monotone. The result now follows from Theorem~\ref{TSthm}(c). \end{proof} \begin{example}\label{TAILex} Let $E = c_0$, so that $E^* = \ell_1$ and $E^{**} = \ell_\infty$, and define $T\colon\ \ell_1 \to \ell_\infty = {\ell_1}^*$ by $(Tx)_n = \sum_{k \ge n} x_k$. $T$ is the ``tail operator''. Let $S = G^{-1}L^{-1}G(T)$. It was proved in \cite[Example 7.10, pp.\ 1034--1035]{PARTONE} that $T$ is not quasidense. Thus, from Theorems~\ref{AFMAXthm} and \ref{LINVWthm}, $S$ is maximally monotone and quasidense, but $S^{\mathbb F}$ is maximally monotone and not quasidense. This example answers in the negative the question posed in \cite[Problem 12.7, p.\ 1047]{PARTONE} as to whether the Fitzpatrick extension of a quasidense maximally monotone multifunction is necessarily quasidense. $S$ can be represented in matrix form by \begin{equation*} \left(\begin{matrix} (Sx)_1\\(Sx)_2\\(Sx)_3\\(Sx)_4\\(Sx)_5\\\vdots \end{matrix} \right) = \left(\begin{matrix} 1&-1&0&0&0&\cdots\\ 0&1&-1&0&0&\cdots\\ 0&0&1&-1&0&\cdots\\ 0&0&0&1&-1&\cdots\\ 0&0&0&0&1&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{matrix} \right) \left(\begin{matrix} x_1\\x_2\\x_3\\x_4\\x_5\\\vdots \end{matrix} \right), \end{equation*} and $D(S) = \big\{x \in c_0\colon\ \textstyle\sum_{i = 1}^\infty|x_i - x_{i + 1}| < \infty\big\}$. \end{example} \section{Sum theorem with range constraints}\label{RSUMSsec} Theorem~\ref{STRthm} below has applications to the classification of maximally monotone multifunctions. See \cite[Theorems~8.2 and 10.3]{PARTTWO}. \begin{theorem}[Sum theorem with range constraints]\label{STRthm} Let $S,T\colon\ E \rightrightarrows E^*$ be closed, quasidense and monotone. Then {\rm(a)$\Longrightarrow$(b)$\Longrightarrow$(c)$\Longrightarrow$(d)}: \par \noindent {\rm(a)}\enspace $R(S) \cap \hbox{\rm int}\,R(T) \ne \emptyset$ or $\hbox{\rm int}\,R(S) \cap R(T) \ne \emptyset$. \par \noindent {\rm(b)}\enspace $\textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[R(S) - R(T)\big] = E^*$. \par \noindent {\rm(c)}\enspace $\textstyle\bigcup\nolimits_{\lambda > 0}\lambda\big[\pi_2\,\hbox{\rm dom}\,\varphi_S - \pi_2\,\hbox{\rm dom}\,\varphi_T\big]$\quad is a closed subspace of $E^*$. \par \noindent {\rm(d)}\enspace The multifunction $E \rightrightarrows E^*$ defined by $y \mapsto (S^{\mathbb F} + T^{\mathbb F})^{-1}(\widehat y)$ is closed,\break quasidense and monotone. \par \noindent {\rm(e)}\enspace If, further, $R(T^{\mathbb F}) \subset \widehat E$, then the {\em parallel sum} $(S^{-1} + T^{-1})^{-1}$ is closed, monotone and quasidense. \end{theorem} \begin{proof} It is immediate \big(using \eqref{PHIS2}\big) that (a)$\Longrightarrow$(b)$\Longrightarrow$(c). Now suppose that (c) is satisfied.
From Theorem~\ref{RLMAXthm}, $S$ and $T$ are maximally monotone, and so \eqref{PH3} implies that $\varphi_S, \varphi_T \in {\cal PCLSC}(E \times E^*)$, $\varphi_S, \varphi_T \ge q_L$ on $E \times E^*$, ${\rm coinc}[\varphi_S] = G(S)$ and ${\rm coinc}[\varphi_T] = G(T)$, and we can apply Theorem~\ref{Rthm} with $f := \varphi_S$ and $g := \varphi_T$. Thus $(\varphi_S {\,\mathop{\oplus}\nolimits_1\,} \varphi_T)^@ \ge q_L$ on $E \times E^*$, ${\rm coinc}[(\varphi_S {\,\mathop{\oplus}\nolimits_1\,} \varphi_T)^@]$ is closed and quasidense, and $(y,y^*) \in {\rm coinc}[(\varphi_S {\,\mathop{\oplus}\nolimits_1\,} \varphi_T)^@]$ if, and only if, there exist $u^{**},v^{**} \in E^{**}$ such that \begin{equation}\label{DCON1} (y^*,u^{**}) \in {\rm dcoinc}[{\varphi_S}^*],\ (y^*,v^{**}) \in {\rm dcoinc}[{\varphi_T}^*]\hbox{ and }u^{**} + v^{**} = \widehat y. \end{equation} From \eqref{PHSTCRIT}, this is equivalent to the statement: ``$u^{**} \in S^{\mathbb F}(y^*)$, $v^{**} \in T^{\mathbb F}(y^*)$ and $u^{**} + v^{**} = \widehat y$\,'', that is to say, ``$\widehat y \in (S^{\mathbb F} + T^{\mathbb F})(y^*)$''. This gives (d). \par (e)\enspace Now suppose that $R(T^{\mathbb F}) \subset \widehat E$ and $(y,y^*) \in {\rm coinc}[(\varphi_S {\,\mathop{\oplus}\nolimits_1\,} \varphi_T)^@]$. Then the element $v^{**}$ in \eqref{DCON1} is actually in $\widehat E$, and so there exists $v \in E$ such that\break $\widehat v = v^{**} \in T^{\mathbb F}(y^*)$. \eqref{EXT1} now implies that $(v,y^*) \in G(T)$, that is to say\break $v \in T^{-1}y^*$. From \eqref{DCON1} again, $u^{**} = \widehat{y - v}$, and a repetition of the argument above gives $y - v \in S^{-1}y^*$. Consequently, we have $y = v + (y - v) \in (S^{-1} + T^{-1})y^*$, that is to say $y^* \in (S^{-1} + T^{-1})^{-1}y$. Thus we have proved that ${\rm coinc}[(\varphi_S {\,\mathop{\oplus}\nolimits_1\,} \varphi_T)^@] \subset G\big((S^{-1} + T^{-1})^{-1}\big)$. On the other hand, from \eqref{EXT1} and \eqref{DCON1}, we always have $G\big((S^{-1} + T^{-1})^{-1}\big) \subset {\rm coinc}[(\varphi_S {\,\mathop{\oplus}\nolimits_1\,} \varphi_T)^@]$, completing the proof of (e). \end{proof} \section{Another maximally monotone non--quasidense multifunction}\label{ANOTHERsec} In Bueno--Svaiter, \cite[Proposition 1, pp.\ 84--85]{BUS13} an example is given of a skew linear operator from a subspace of $c_0$ into $\ell_1$ which is maximally monotone but not {\em of type (D)}, thus answering in the negative a conjecture of J. Borwein. As observed in \cite[Remark 10.4, pp.\ 21--22]{PARTTWO}, a maximally monotone multifunction is of type (D) if, and only if, it is quasidense, so the Bueno--Svaiter example provides a maximally monotone non--quasidense multifunction on $c_0$. In this section, we discuss a slight modification of this multifunction. Ironically, it is easier to establish the non--quasidensity than the maximal monotonicity. \begin{definition}\label{Qdef} If $(x_n)$ is a real sequence such that $\textstyle\sum_{k = 1}^\infty x_k$ is convergent, we define the {\em tail sequence} of $x$, $(t(x)_n)$, by, for all $n \ge 1$, $t(x)_n = \sum_{k = n}^\infty x_k$. Clearly \begin{equation*} t(x) \in c_0\hbox{\quad and,\quad for all }j \ge 1,\quad x_j = t(x)_{j} - t(x)_{j + 1}. \end{equation*} Let \begin{equation}\label{S0} K:= \big\{x = (x_i)_{i \ge 1}\colon\ \textstyle\sum_{i = 1}^\infty x_i = 0\quad\hbox{and}\quad \textstyle\sum_{p = 1}^\infty|t(x)_{p} + t(x)_{p + 1}| < \infty\big\}. \end{equation} $K$ is a vector subspace of $c_0$. Let $x \in K$.
For all $j \ge 1$, let \begin{equation}\label{S1} (Sx)_j := -t(x)_{j} - t(x)_{j + 1}. \end{equation} Clearly, $Sx \in \ell_1$. $S$ can be represented in matrix form by \begin{equation}\label{S2} \left(\begin{matrix} (Sx)_1\\(Sx)_2\\(Sx)_3\\(Sx)_4\\(Sx)_5\\\vdots \end{matrix} \right) = \left(\begin{matrix} -1&-2&-2&-2&-2&\cdots\\ 0&-1&-2&-2&-2&\cdots\\ 0&0&-1&-2&-2&\cdots\\ 0&0&0&-1&-2&\cdots\\ 0&0&0&0&-1&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{matrix} \right) \left(\begin{matrix} x_1\\x_2\\x_3\\x_4\\x_5\\\vdots \end{matrix} \right). \end{equation} If $x \in K$ then $t(x)_1 = 0$ and so, for all $k \ge 1$, \begin{equation}\label{S3} \left.\begin{aligned} \textstyle\sum_{j = 1}^{k}x_j(Sx)_j &= \textstyle\sum_{j = 1}^{k}\big(t(x)_{j} - t(x)_{j + 1}\big)\big(-t(x)_{j} - t(x)_{j + 1}\big)\\ &= t(x)_{k + 1}^2 - t(x)_{1}^2 = t(x)_{k + 1}^2. \end{aligned} \right\} \end{equation} Letting $k \to \infty$ in \eqref{S3}, for all $x \in K$, \begin{equation}\label{S4} \bra{x}{Sx} = \lim\nolimits_{k \to \infty}\textstyle\sum_{j = 1}^{k}x_j(Sx)_j = \lim\nolimits_{k \to \infty}t(x)_{k + 1}^2 = 0. \end{equation} If $x \in c_0 \setminus K$, we define $Sx := \emptyset$. Thus $S\colon\ c_0 \rightrightarrows \ell_1$ is at most single--valued, linear and skew and $D(S) = K$. \par If $i \ge 1$, write $\e{i}$ for the sequence $(0,\dots, 0,1,0,0,\dots)$, with the 1 in the $i$th place. \end{definition} \begin{lemma}\label{SMAXlem} Let $j \ge 1$. Then \begin{equation*} \e{j} - \e{j + 1} \in K\quad\hbox{and}\quad S\big(\e{j} - \e{j + 1}\big) = \e{j} + \e{j + 1}. \end{equation*} In other words, \begin{equation*} (\e{j} - \e{j + 1},\e{j} + \e{j + 1}) \in G(S). \end{equation*} \end{lemma} \begin{proof} Let $x := \e{j} - \e{j + 1}$. It is easily seen that $t(x) = - \e{j + 1}$. So, for all $p \ge 1$, \begin{equation*} (Sx)_p = \e{j + 1}_p + \e{j + 1}_{p + 1} = \begin{cases}0 + 0 = 0&\hbox{if }p < j;\\ 0 + 1 = 1&\hbox{if }p = j;\\ 1 + 0 = 1&\hbox{if }p = j + 1;\\ 0 + 0 = 0&\hbox{if }p > j + 1. \end{cases} \end{equation*} This gives the desired result. Alternatively, we can simply subtract the $(j + 1)$st column from the $j$th column of the matrix in \eqref{S2}. \end{proof} \begin{theorem}\label{SMAXthm} $S$ is skew and maximally monotone but not quasidense. \end{theorem} \begin{proof} From \eqref{S4}, $S$ is skew. Now let $(x,x^*) \in c_0 \times \ell_1$ and, \begin{equation}\label{S5} \hbox{for all}\ z \in K,\ \bra{z - x}{Sz - x^*} \ge 0. \end{equation} From \eqref{S4}, $\bra{z}{Sz} = 0$, and so \eqref{S5} reduces to $\bra{x}{Sz} + \bra{z}{x^*} \le \bra{x}{x^*}$. Since $K$ is a vector space, this implies that \begin{equation}\label{S6} \bra{x}{x^*} \ge 0\hbox{\quad and,\quad }\hbox{for all}\ z \in K,\ \bra{x}{Sz} = -\bra{z}{x^*}. \end{equation} Lemma~\ref{SMAXlem} and \eqref{S6} imply that, for all $j \ge 1$, \begin{equation*} x_j + x_{j + 1} = \Bra{x}{\e{j} + \e{j + 1}} = -\Bra{\e{j} - \e{j + 1}}{x^*} = - x^*_j + x^*_{j + 1}. \end{equation*} Consequently, for all $n \ge 1$, \begin{equation*} x_{1} + x_{2} + \cdots + x_{2n - 1} + x_{2n} = -x_{1}^* + x_{2}^* + \cdots - x_{2n - 1}^* + x_{2n}^*. \end{equation*} Adding $x_{2n + 1}$ to both sides of this equation, \begin{equation*} x_{1} + x_{2} + \cdots + x_{2n - 1} + x_{2n} + x_{2n + 1} = -x_{1}^* + x_{2}^* + \cdots - x_{2n - 1}^* + x_{2n}^* + x_{2n + 1}. \end{equation*} Using the fact that $x \in c_0$, $x^* \in \ell_1$ and a simple interleaving argument, we see that $\textstyle\sum_{i = 1}^{\infty}x_i = \textstyle\sum_{i = 1}^{\infty}(-1)^{i}x^*_i$.
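In detail: the first of the two displays above shows that the even partial sums of $\textstyle\sum_{i}x_i$ coincide with the even partial sums of $\textstyle\sum_{i}(-1)^{i}x^*_i$, and the second shows that the odd partial sums of $\textstyle\sum_{i}x_i$ differ from them by the single term $x_{2n + 1}$, which tends to $0$ because $x \in c_0$; since $x^* \in \ell_1$, the series $\textstyle\sum_{i}(-1)^{i}x^*_i$ converges, and so $\textstyle\sum_{i}x_i$ converges to the same value.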
Since we now know that $\textstyle\sum_{i = 1}^{\infty}x_i$ is convergent, we can use the notation of Definition~\ref{Qdef}. Thus \begin{equation}\label{S7} t(x)_1 = \textstyle\sum_{i = 1}^{\infty}(-1)^{i}x^*_i. \end{equation} Let $j \ge 1$. Using the same argument as above but starting the summation at $i = j$ instead of $i = 1$, $t(x)_{j} = -x_{j}^* + x_{j + 1}^* - x_{j + 2}^* + \cdots$. Replacing $j$ by $j + 1$, $t(x)_{j + 1} = -x_{j + 1}^* + x_{j + 2}^* - x_{j + 3}^* + \cdots$ and so, by addition, $x_{j}^* = -t(x)_{j} - t(x)_{j + 1}$. From \eqref{S1}, $x_j^* = (Sx)_j$. So $Sx = x^* \in \ell_1$. Furthermore, \begin{equation*} \bra{x}{x^*} = \textstyle\sum_{j = 1}^\infty x_jx^*_j = \textstyle\sum_{j = 1}^\infty\big(t(x)_{j} - t(x)_{j + 1}\big)\big(-t(x)_{j} - t(x)_{j + 1}\big) = - t(x)_{1}^2, \end{equation*} and \eqref{S6} now gives $t(x)_{1} = 0$. Thus, from \eqref{S0}, $x \in K$ and $(x,x^*) \in G(S)$. This completes the proof of the maximal monotonicity of $S$. \par We now prove that $S$ is not quasidense. To this end, let $x \in K$. Then, from \eqref{S1}, $\textstyle\sum_{j = 1}^{\infty}(-1)^{j}(Sx)_j = \big(t(x)_{1} + t(x)_{2}\big) - \big(t(x)_{2} + t(x)_{3}\big) + \big(t(x)_{3} + t(x)_{4}\big) - \cdots = t(x)_1 = 0$, thus \begin{equation*} (Sx)_1 = \textstyle\sum_{j = 2}^{\infty}(-1)^{j}(Sx)_j,\hbox{ from which }|(Sx)_1| \le \textstyle\sum_{j = 2}^{\infty}|(Sx)_j|. \end{equation*} Thus $2|(Sx)_1| \le \textstyle\sum_{j = 1}^{\infty}|(Sx)_j| = \|Sx\|_1$. Since $(Sx)_1 = -t(x)_1 - t(x)_2 =\break t(x)_1 - t(x)_2 = x_1$, $\|Sx\|_1^2 \ge 4x_1^2$. From \eqref{S4}, $\bra{x - \e{1}}{Sx} = \bra{x}{Sx} - (Sx)_1 = -x_1$, and so \begin{align*} r_L\big((x,Sx) - (\e{1},0)\big) &= {\textstyle\frac{1}{2}}\|x - \e{1}\|_\infty^2 + {\textstyle\frac{1}{2}}\|Sx\|_1^2 + \bra{x - \e{1}}{Sx}\\ &\ge {\textstyle\frac{1}{2}}(x_1 - 1)^2 + 2x_1^2 - x_1 = \textstyle\frac{5}{2}x_1^2 - 2x_1 + {\textstyle\frac{1}{2}}\\ &= \textstyle\frac{5}{2}(x_1 - \frac{2}{5})^2 + \frac{1}{10} \ge \frac{1}{10}. \end{align*} This completes the proof that $S$ is not quasidense. \end{proof} \begin{remark} As we observed above, $D(S) = K \ne c_0$. On the other hand, the tail operator, $T$, defined in Example~\ref{TAILex} has full domain. This leads to the following problem. \end{remark} \begin{problem} Is every maximally monotone multifunction $T\colon\ c_0 \rightrightarrows \ell_1$ such that $D(T) = c_0$ quasidense? \end{problem} It is natural to ask whether Theorem~\ref{TSthm}(b) can be used to establish the\break maximal monotonicity of $S$ in Theorem~\ref{SMAXthm}. Theorem~\ref{TNOTMAXthm} below shows that this is impossible. \begin{lemma}\label{SOlem} Let $S\colon\ c_0 \rightrightarrows \ell_1$ be as in Definition~\ref{Qdef}, $(x,x^*) \in G(S)$ and $\omega^{**} := (-1,1,-1,1,-1,\dots) \in \ell_\infty$. Then $\bra{x^*}{\omega^{**}} = 0$. \end{lemma} \begin{proof} $(x,x^*) \in c_0 \times \ell_1$ and, from \eqref{S0} and \eqref{S1}, $\textstyle\sum_{i = 1}^\infty x_i = 0$ and, for all $j \ge 1$, $x^*_j := -t(x)_{j} - t(x)_{j + 1}$. Thus, for all $j \ge 1$, \begin{equation*} - x^*_j + x^*_{j + 1} = t(x)_{j} + t(x)_{j + 1}-t(x)_{j + 1} - t(x)_{j + 2} = x_j + x_{j + 1}. \end{equation*} Consequently, \begin{align*} \bra{x^*}{\omega^{**}} &= (- x^*_1 + x^*_2) + (- x^*_3 + x^*_4) + (- x^*_5 + x^*_6) + \dots\\ &= (x_1 + x_2) + (x_3 + x_4) + (x_5 + x_6) + \dots = \textstyle\sum_{i = 1}^\infty x_i = 0. \end{align*} This gives the desired result.
\end{proof} \begin{theorem}\label{TNOTMAXthm} Let $S$ be as in {\em Theorem~\ref{SMAXthm}}, $T\colon\ \ell_1 \rightrightarrows \ell_\infty$, $R(T) \subset \widehat{c_0}$ and $S = G^{-1}L^{-1}G(T)$. Then $T$ is not maximally monotone. \end{theorem} \begin{proof} Let $(y^*,y^{**}) \in G(T)$. From the proof of Theorem~\ref{TSthm}(a), there exists $(y,y^*) \in G(S)$ such that $\widehat y = y^{**}$, and Lemma~\ref{SOlem} implies that $\bra{y^*}{\omega^{**}} = 0$. From \eqref{S4}, $\bra{y^*}{y^{**}} = \bra{y}{y^*} = \bra{y}{Sy} = 0$, from which $\bra{y^* - 0}{y^{**} - \omega^{**}} =\break \bra{y^*}{y^{**}} - \bra{y^*}{\omega^{**}} = 0$. Thus $(0,\omega^{**})$ is monotonically related to $G(T)$.\break However, $\omega^{**} \not\in \widehat{c_0} \supset R(T)$, and so $(0,\omega^{**}) \not\in G(T)$. This completes the proof of Theorem~\ref{TNOTMAXthm}. \end{proof} \section{The Bueno--Svaiter construction}\label{NONQDEXTsec} In Example~\ref{TAILex}, we gave an example of a quasidense maximally monotone\break multifunction with a non-quasidense Fitzpatrick extension. In this section, we give a construction, due to Bueno and Svaiter, that produces another example of a similar phenomenon. Definition~\ref{Kdef} is patterned after Bueno, \cite[Theorem 2.7, pp.\ 13--14]{BUENO}. It would be interesting to find a scheme that includes both the example of Example~\ref{TAILex} and the examples of the kind considered in this section. \begin{definition}\label{Kdef} Let $E$ be a Banach space and $e^{**} \in E^{**} \setminus \widehat E$. We define $k\colon\ E^* \to \mathbb R$ by $k(y^*) = \bra{y^*}{e^{**}}^2$. $k$ is a convex, continuous function on $E^*$. Let $T\colon\ E^* \to E^{**}$ be a linear map and $R(T) \subset \widehat E$. Suppose that \begin{equation}\label{GEN2} \hbox{for all}\ x^* \in E^*,\ \bra{x^*}{Tx^*} = k(x^*) \ge 0. \end{equation} \end{definition} In what follows, ``$\hbox{\rm lin}$'' stands for ``linear hull of''. \begin{lemma}\label{LMlem} $\hbox{\rm dom}\,k^* = \hbox{\rm lin}\{e^{**}\}$ and, for all $\mu \in \mathbb R$, $k^*(2\mu e^{**}) = \mu^2$. \end{lemma} \begin{proof} If $z^{**} \not\in \hbox{\rm lin}\{e^{**}\}$ then, from a well known algebraic result, there\break exists $z^* \in E^*$ so that $\bra{z^*}{e^{**}} = 0$ but $\bra{z^*}{z^{**}} \ne 0$. Thus, for all $\lambda \in \mathbb R$,\break $k^*(z^{**}) \ge \bra{\lambda z^*}{z^{**}} - \bra{\lambda z^*}{e^{**}}^2 = \lambda \bra{z^*}{z^{**}}$, and by taking $\lambda$ large and of the appropriate sign, $k^*(z^{**}) = \infty$. Thus $\hbox{\rm dom}\,k^* \subset \hbox{\rm lin}\{e^{**}\}$. If now $\mu \in \mathbb R$ then $k^*(2\mu e^{**}) = \sup_{y^* \in E^*}\big[2\mu\bra{y^*}{e^{**}} - \bra{y^*}{e^{**}}^2\big]$. Since $e^{**} \ne 0$, as $y^*$ runs through $E^*$, $\bra{y^*}{e^{**}}$ runs through $\mathbb R$, and so (by elementary calculus or\break completing the square: $2\mu\lambda - \lambda^2 = \mu^2 - (\lambda - \mu)^2$) $k^*(2\mu e^{**}) = \sup_{\lambda \in \mathbb R}\big[2\mu\lambda - \lambda^2\big] = \mu^2$. \end{proof} \begin{theorem}\label{VWthm} $T$ is not quasidense. \end{theorem} \begin{proof} We start off by proving that \begin{equation}\label{W1} \hbox{If }z^{***} \in E^{***},\ \Bra{\widehat E}{z^{***}} = \{0\}\hbox{ and }\lambda \in \mathbb R\hbox{ then }\theta_T(e^{**},\lambda z^{***}) = \textstyle\frac{1}{4}. \end{equation} To this end, let $z^{***}$ and $\lambda$ be as in \eqref{W1}.
From \eqref{GEN2} and the definition of $T$, for all $x^* \in E^*$, $\bra{x^*}{Tx^*} = k(x^*)$, and \eqref{THLONG} and Lemma~\ref{LMlem} give \begin{align*} \theta_{T}(e^{**},\lambda z^{***}) &= \sup\nolimits_{x^* \in E^*}\big[\bra{x^*}{e^{**}} + \lambda\bra{Tx^*}{z^{***}} - \bra{x^*}{Tx^*}\big]\\ &= \sup\nolimits_{x^* \in E^*}\big[\bra{x^*}{e^{**}} + 0 - k(x^*)\big] = k^*(e^{**}) = \textstyle\frac{1}{4}. \end{align*} This completes the proof of \eqref{W1}. If $T$ were quasidense then, from \eqref{W1} and Corollary~\ref{THAcor}, if $z^{***} \in E^{***}$ and $\Bra{\widehat E}{z^{***}} = \{0\}$ then, for all $\lambda \in \mathbb R$, \begin{equation*} \textstyle\frac{1}{4} = \theta_{T}(e^{**},\lambda z^{***}) \ge \bra{e^{**}}{\lambda z^{***}} = \lambda\bra{e^{**}}{z^{***}}. \end{equation*} Letting $\lambda \to \pm \infty$, $\bra{e^{**}}{z^{***}} = 0$. So we would have $\bra{e^{**}}{z^{***}} = 0$ whenever $\Bra{\widehat E}{z^{***}} = \{0\}$. Since $\widehat E$ is a closed subspace of $E^{**}$, it would follow that $e^{**} \in \widehat E$, violating the assumption in Definition~\ref{Kdef}. \end{proof} \begin{theorem}\label{SFTthm} Let $S = G^{-1}L^{-1}G(T)$ {\em(see Theorem~\ref{TSthm})}. Then: \par\noindent {\rm(a)}\enspace $S$ is maximally monotone and quasidense, and $S^{\mathbb F} = T$. \par\noindent {\rm(b)}\enspace $S^{\mathbb F}$ is maximally monotone but not quasidense. \end{theorem} \begin{proof} (a) is immediate from Definition~\ref{Kdef} and Theorem~\ref{LINVWthm}, and (b) is\break immediate from Theorem~\ref{AFMAXthm}, (a) and Theorem~\ref{VWthm}. \end{proof} For the rest of this section, we shall consider some of the more technical properties of $\theta_S$, with $S = G^{-1}L^{-1}G(T)$ as in Theorems~\ref{TSthm} and \ref{SFTthm}. \begin{lemma}\label{XYlem} For all $x^*,y^* \in E^*$, $\bra{y^*}{Tx^*} = \Bra{x^*}{2\bra{y^*}{e^{**}}e^{**} - Ty^*}$. \end{lemma} \begin{proof} We have \begin{align*} \bra{y^*}{Tx^*} + \bra{x^*}{Ty^*} &= {\textstyle\frac{1}{2}}\bra{x^* + y^*}{Tx^* + Ty^*} - {\textstyle\frac{1}{2}}\bra{x^* - y^*}{Tx^* - Ty^*}\\ &= {\textstyle\frac{1}{2}} k(x^* + y^*) - {\textstyle\frac{1}{2}} k(x^* - y^*) = 2\bra{x^*}{e^{**}}\bra{y^*}{e^{**}}. \end{align*} The result follows easily from this. \end{proof} \begin{theorem}\label{NPHWthm} Let $(y^*,y^{**}) \in E^* \times E^{**}$. Then \begin{equation}\label{NPHW1} (y^*,y^{**}) \in \hbox{\rm dom}\,\theta_S \iff 2\bra{y^*}{e^{**}}e^{**} - Ty^* + y^{**} \in \hbox{\rm lin}\{e^{**}\}. \end{equation} It follows that $\hbox{\rm dom}\,\theta_S$ is a linear subspace of $E^* \times E^{**}$. Furthermore, for all $(y^*,y^{**}) \in \hbox{\rm dom}\,\theta_S$, there exists a unique value of $\mu \in \mathbb R$ such that \begin{equation}\label{NPHW3} 2\bra{y^*}{e^{**}}e^{**} - Ty^* + y^{**} = 2\mu e^{**},\hbox{ and then }\theta_S(y^*,y^{**}) = \mu^2. \end{equation} \end{theorem} \begin{proof} It follows from \eqref{THLONG} and \eqref{GEN2} that \begin{align*} \theta_S(y^*,y^{**}) &=\sup\nolimits_{x^* \in E^*}\big[\bra{y^*}{Tx^*} + \bra{x^*}{y^{**}} - k(x^*)\big]. \end{align*} Thus, from Lemma~\ref{XYlem}, \begin{align*} \theta_S(y^*,y^{**}) &=\sup\nolimits_{x^* \in E^*}\big[\Bra{x^*}{2\bra{y^*}{e^{**}}e^{**} - Ty^* + y^{**}} - k(x^*)\big]\\ &= k^*\big(2\bra{y^*}{e^{**}}e^{**} - Ty^* + y^{**}\big). \end{align*} \eqref{NPHW1} now follows from Lemma~\ref{LMlem}.
Since $e^{**} \ne 0$, for all $(y^*,y^{**}) \in \hbox{\rm dom}\,\theta_S$ there exists a unique $\mu \in \mathbb R$ such that $2\bra{y^*}{e^{**}}e^{**} -Ty^* + y^{**} = 2\mu e^{**}$, and the rest of \eqref{NPHW3} follows from another application of Lemma~\ref{LMlem}. \end{proof} \begin{corollary}\label{PHIVcor} $\hbox{\rm dom}\,\varphi_S = G(S)$ and $\theta_S = {\varphi_S}^*$ on $E^* \times E^{**}$. \end{corollary} \begin{proof} Let $(x,x^*) \in \hbox{\rm dom}\,\varphi_S$. From \eqref{TH2}, $(x^*, \widehat x) \in \hbox{\rm dom}\,\theta_S$. Theorem~\ref{NPHWthm} now gives a unique value of $\mu \in \mathbb R$ such that $2\bra{x^*}{e^{**}}e^{**} - Tx^* + \widehat x = 2\mu e^{**}$. Thus $\widehat E \ni \widehat x - Tx^* = 2\big(\mu - \bra{x^*}{e^{**}}\big)e^{**}$. From Definition~\ref{Kdef}, $e^{**} \not\in \widehat E$, and so $\mu - \bra{x^*}{e^{**}} = 0$, from which $\widehat x - Tx^* = 0$. It follows that $(x,x^*) \in G(S)$. Thus $\hbox{\rm dom}\,\varphi_S \subset G(S)$. The result now follows from Lemma~\ref{THlem}. \end{proof} Since $S$ is quasidense, it follows from Theorem~\ref{COINCthm} that \begin{equation}\label{TELE2} {\rm dcoinc}[{\theta_S}] = {\rm dcoinc}[{\varphi_S}^*] = {\rm dcoinc}[{\theta_S}^@]. \end{equation} Of course, we know the first equality in \eqref{TELE2} from Corollary~\ref{PHIVcor}. The second equality in \eqref{TELE2} leads naturally to the conjecture that ${\theta_S }^@ = {\varphi_S}^*$ on $E^* \times E^{**}$. As we show in Theorem~\ref{THATthm} below, this conjecture fails in a spectacular way. This raises the question of finding the exact value of $\hbox{\rm dom}\,{\theta_S}^@$. \begin{theorem}\label{THATthm} Since $e^{**} \ne 0$, there exists $y^* \in E^*$ so that $\bra{y^*}{e^{**}} = 1$. Define $y^{**} \in E^{**}$ by $y^{**} := Ty^* - 2{e^{**}}$. Let $\lambda \in \mathbb R$. Then \begin{equation}\label{THAT1} \theta_S(\lambda y^*,\lambda y^{**}) = 0,\hbox{ in particular, }{\varphi_S}^*(y^*,y^{**}) = \theta_S(y^*,y^{**}) = 0, \end{equation} but \begin{equation}\label{THAT2} {\theta_S}^@(y^*,y^{**}) = \infty. \end{equation} \end{theorem} \begin{proof} We note that $2\bra{\lambda y^*}{e^{**}}e^{**} - T\lambda y^* + \lambda y^{**} = \lambda(2e^{**} - Ty^* + y^{**}) = 0$, so \eqref{THAT1} follows from \eqref{NPHW3}. Let $\lambda < 0$. From \eqref{FAT} and \eqref{THAT1}, \begin{align*} {\theta_S}^@(y^*,y^{**}) &= \sup\nolimits_{(x^*,x^{**}) \in E^* \times E^{**}}\big[\Bra{(x^*,x^{**})}{(y^{**},\widehat{y^*})} - \theta_S(x^*,x^{**})\big]\\ &\ge \bra{\lambda y^*}{y^{**}} + \bra{y^*}{\lambda y^{**}} - \theta_S(\lambda y^*,\lambda y^{**}) = 2\lambda\bra{y^*}{y^{**}}. \end{align*} However, $\bra{y^*}{y^{**}} = \bra{y^*}{Ty^* - 2{e^{**}}} = \bra{y^*}{Ty^*} - 2\bra{y^*}{{e^{**}}}$. It now\break follows from \eqref{GEN2} that $\bra{y^*}{y^{**}} = \bra{y^*}{e^{**}}^2 - 2\bra{y^*}{{e^{**}}} = 1 - 2 = -1$, and so ${\theta_S}^@(y^*,y^{**}) \ge -2\lambda$, and we obtain \eqref{THAT2} by letting $\lambda \to -\infty$. \end{proof} \section{A specific non--quasidense Fitzpatrick\\ extension}\label{SPECsec} If $x^* \in \ell_1$ and $j \ge 1$, let $\tau_j :=\textstyle\sum_{i = j}^\infty x^*_i$. Define the linear map $T\colon\ \ell_1 \to \ell_\infty$ by \begin{equation}\label{TELE1} \hbox{for all}\ j \ge 1,\ (Tx^*)_j = \tau_j + \tau_{j + 1}. \end{equation} Clearly $R(T) \subset \widehat{c_0}$. Let $e^{**} := (1,1,1,1,\dots) \in \ell_\infty \setminus \widehat{c_0}$.
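We note for later use that, for all $j \ge 1$, \begin{equation*} x^*_j = \tau_j - \tau_{j + 1}; \end{equation*} this identity is used in the proof of Lemma~\ref{Ulem} below.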
\begin{remark} $T$ can be represented by \begin{equation*} \left(\begin{matrix} (Tx^*)_1\\(Tx^*)_2\\(Tx^*)_3\\(Tx^*)_4\\(Tx^*)_5\\\vdots \end{matrix} \right) = \left(\begin{matrix} 1&2&2&2&2&\cdots\\ 0&1&2&2&2&\cdots\\ 0&0&1&2&2&\cdots\\ 0&0&0&1&2&\cdots\\ 0&0&0&0&1&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{matrix} \right) \left(\begin{matrix} x_1^*\\x_2^*\\x_3^*\\x_4^*\\x_5^*\\\vdots \end{matrix} \right) \end{equation*} \end{remark} \begin{lemma}\label{Ulem} For all $x^* \in \ell_1$, $\bra{x^*}{Tx^*} = \bra{x^*}{e^{**}}^2 \ge 0$. \end{lemma} \begin{proof} Let $j \ge 1$. Then $x^*_j(Tx^*)_j = (\tau_j + \tau_{j + 1})(\tau_j - \tau_{j + 1}) = \tau_{j}^2 - \tau_{j + 1}^2$. Since $x^* \in \ell_1$, $\lim_{k \to \infty}\tau_{k} = 0$. Thus \begin{align*} \textstyle\sum_{j = 1}^\infty x^*_j(Tx^*)_j &= \lim\nolimits_{k \to \infty}\textstyle\sum_{j = 1}^{k} x^*_j(Tx^*)_j= \lim\nolimits_{k \to \infty}\textstyle\sum_{j = 1}^{k} (\tau_{j}^2 - \tau_{j + 1}^2) = \tau_{1}^2, \end{align*} as required. \end{proof} \begin{theorem}\label{SPECTthm} Let $S = G^{-1}L^{-1}G(T)$. Then $S$ is maximally monotone and quasidense, and $S^{\mathbb F} = T$ is maximally monotone but not quasidense. \end{theorem} \begin{proof} This is immediate from Lemma~\ref{Ulem} and Theorem~\ref{SFTthm}. \end{proof} \begin{remark} In this case we can give a direct proof that $T$ is not quasidense. For all $x^* \in \ell_1$, $Tx^* \in \widehat{c_0}$ and so $\|Tx^* - e^{**}\|_\infty \ge 1$, and $\bra{x^*}{Tx^* - e^{**}} = \bra{x^*}{Tx^*} - \bra{x^*}{e^{**}} = \bra{x^*}{e^{**}}^2 - \bra{x^*}{e^{**}}$. Thus \begin{align*} r_L((x^*,Tx^*) - (0,e^{**})) &= {\textstyle\frac{1}{2}}\|x^*\|_1^2 + {\textstyle\frac{1}{2}}\|Tx^* - e^{**}\|_\infty^2 + \bra{x^*}{Tx^* - e^{**}}\\ &\ge 0 + {\textstyle\frac{1}{2}} + \bra{x^*}{e^{**}}^2 - \bra{x^*}{e^{**}}\\ &= \textstyle\frac{1}{4} + \textstyle\frac{1}{4}(2\bra{x^*}{e^{**}} - 1)^2 \ge \textstyle\frac{1}{4}. \end{align*} Thus $T$ is not quasidense. \end{remark} \begin{remark} Define $x \in c_0$ by, for all $j \ge 1$, $x_j = (Tx^*)_j$. Clearly, $x^* = Sx$. Using the fact that $x \in c_0$, $x^* \in \ell_1$ and an interleaving argument similar to that used in Theorem~\ref{SMAXthm}, we see that, for all $j \ge 1$, $\tau_{j} = \textstyle\sum_{i = j}^\infty(-1)^{i + j}x_i$. It follows that $S$ can be represented in matrix form on the appropriate domain by \begin{equation*} \left(\begin{matrix} (Sx)_1\\(Sx)_2\\(Sx)_3\\(Sx)_4\\(Sx)_5\\\vdots \end{matrix} \right) = \left(\begin{matrix} 1&-2&2&-2&2&\cdots\\ 0&1&-2&2&-2&\cdots\\ 0&0&1&-2&2&\cdots\\ 0&0&0&1&-2&\cdots\\ 0&0&0&0&1&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{matrix} \right) \left(\begin{matrix} x_1\\x_2\\x_3\\x_4\\x_5\\\vdots \end{matrix} \right). \end{equation*} \end{remark}
\section{Introduction}\label{intro} \IEEEPARstart{O}{ver} the past few years, various types of streaming platforms in the form of video on demand (VoD), 360-degree streaming, and live streaming services have become dramatically popular. Compared to traditional cable broadcast that users can view on television, video streaming is ubiquitous and provides viewers with the flexibility of watching video content on various devices. In most cases, such services have vast video catalogs for users to browse and watch anytime. Given the enormous volume of content and viewers' limited time, it is often challenging for users to find relevant content. This considerable growth has increased the need for technologies that enable users to browse the vast and ever-growing content collections and quickly retrieve the content of interest. The development of new techniques for generating animated Graphics Interchange Format (GIF) images and artistic static thumbnails is part of this demand \cite{song2016click, yuan2019sentence, xu2021gif}. Almost every streaming platform uses artistic media to provide a quick and decisive glimpse of video content. The artistic static thumbnail provides viewers with a quick video preview. Meanwhile, the animated GIF provides a condensed preview of the video for 3--15 seconds \cite{bakhshi2016fast}. Figure \ref{fig:intro_gif} illustrates artistic media for sports videos: 1) an animated GIF that is played when the user hovers the mouse over the artistic thumbnail (above), and 2) the most preferred frame, which is selected from the feature-length video as the artistic thumbnail (below). Viewers often decide whether to watch or skip the video based on its static thumbnail and animated GIF. Due to their importance, there is a growing interest in automatically creating compelling and expressive artistic media. \begin{figure}[t] \centering \includegraphics[width=\linewidth,keepaspectratio]{figures/fig1.PNG} \caption{\label{fig:intro_gif} Artistic media in the form of static thumbnails and animated GIFs are universally used in the most popular streaming platforms to highlight recommended videos. Animated GIFs are played whenever a user hovers over the static thumbnail (above). Generally, the most preferred events are selected according to the video category to attract users and increase the video's views (below).} \end{figure} Click-through rate (CTR) is a prominent metric for boosting the popularity of newly published feature-length videos on streaming platforms. However, many streaming platforms (such as YouTube) provide only one thumbnail and a single GIF for a given video, without prioritizing user preferences. Recent studies have shown that personalized artistic media (thumbnails and animated GIFs) can play a significant role in video selection and improve the CTR of videos \cite{mujtaba2019client, mujtaba2021GIF}. However, manually creating static thumbnails and animated GIFs is time-consuming, and their quality is not guaranteed. Their ubiquitous adoption and prevalence have increased the demand for methods that can automatically generate personalized artistic media from feature-length videos. Nowadays, some popular video streaming sites are investigating server-side solutions to automatically generate personalized artistic media.
There are four key concerns when it comes to server-based solutions: (i) owing to finite computing capabilities, personalized artistic media may not be generated simultaneously and in a timely manner for multiple users; (ii) consumer privacy is prone to invasion in a personalized approach; (iii) user behavior has to be tracked by recommendation algorithms; and (iv) current solutions process the entire video (all frames) to generate GIFs, which increases the overall computational duration and requires significant computational resources. As personalization is one of the key elements for early media content adoption, we focus on the personalization and lightweight processing aspects of artistic media generation. Figure \ref{fig:2} shows a general overview and comparison of the traditional and proposed methods. \begin{figure}[t] \centering \includegraphics[width=\linewidth,keepaspectratio]{figures/fig2.png} \caption{\label{fig:2} Traditionally, personalized artistic media (thumbnails and GIFs) is generated using server-based techniques. In this paper, we propose a new lightweight technique to create personalized artistic media on the client device.} \end{figure} With the observations above in mind, we propose an innovative, computationally efficient client-driven method that can generate personalized artistic media simultaneously for multiple users. Considering that computational resources are limited, we use lightweight thumbnail containers (LTC) of the corresponding feature-length sports video instead of processing the entire video (all frames). Since every sports video has key events (e.g., penalty shots in soccer videos), we utilize LTC to detect these events, which reduces the overall processing time. Therefore, we aim to reduce the overall computational load and processing time while generating personalized thumbnails and GIFs from feature-length videos. In the proposed method, twenty-three publicly broadcast sports videos were examined to evaluate the effectiveness of the model\footnote{Here, we focused on long videos of six different sports matches, namely, baseball, basketball, boxing, cricket, football, and tennis. However, the proposed method can also be used for other sporting events}. The main contributions of this research are summarized as follows: \begin{itemize} \item We propose a new lightweight client-driven technique to automatically create artistic media for feature-length sports videos. To the best of our knowledge, this is the first work to address this novel and challenging problem in the literature. \item To support the study, we have collected twenty-three feature-length videos with a total duration of approximately $2,818.96$ minutes, in six different sports categories, namely, baseball, basketball, boxing, cricket, football, and tennis. \item We designed an effective 2D Convolutional Neural Network (CNN) model that can detect personalized events from feature-length videos. \item Extensive quantitative and qualitative analyses were conducted using feature-length sports videos. The quantitative results indicated that the computational complexity of the proposed method is 3.57 times lower than that of the state-of-the-art (SoA) approach on the resource-constrained Nvidia Jetson TX2 device (detailed in Section \ref{sec:level4.3}). Additionally, qualitative evaluations were conducted in collaboration with nine participants (detailed in Section \ref{sec:level4.4}).
\end{itemize} To the best of our knowledge, this is the first attempt to generate artistic media using LTC on end-user devices for streaming platforms\footnote{The code and trained models are publicly available on GitHub at \url{https://github.com/iamgmujtaba/LTC-GIF}.}. The rest of this paper is organized as follows: Section II provides an overview of related literature. Section III details the proposed client-driven method. Section IV presents the qualitative and quantitative results, along with the relevant discussions. Finally, the conclusions of this study are presented in Section V. \section{Related Work}\label{sec:level2} This paper focuses on artistic media generation methods, event recognition, and video analysis. This section briefly reviews works associated with these topics. \subsection{Animated GIF Generation Methods}\label{sec:level2.1} Animated GIFs, first created in 1987, have been widely used in recent years. Specifically, in \cite{bakhshi2016fast}, animated GIFs were reported to be more engaging than other forms of media, including photos and videos, on social media platforms such as Tumblr. The authors identified some important factors that contribute to users' fascination with GIFs, such as animation, storytelling capability, and emotional expression. In addition, several studies \cite{chen2017gifgif+, jou2014predicting} have trained models for predicting viewers' perceptual sentiments toward animated GIFs. Despite this engagement, it was discovered in \cite{jiang2018perfect} that viewers may have diverse interpretations of animated GIFs used in communication. The authors predicted facial expressions, histograms, and aesthetic features and compared them to \cite{jou2014predicting} to find the most appropriate video features for expressing useful emotions in GIFs. In another approach \cite{liu2020sentiment}, sentiment analysis was used to estimate emotion scores from the annotated text and visual content of GIFs. From an aesthetic perspective, in \cite{song2016click}, frames were picked by measuring various subjective and objective metrics of the video frames (such as visual quality and aesthetics) to generate GIFs. In a recent study \cite{mujtaba2021GIF}, the authors proposed a client-driven method to mitigate privacy issues while designing a lightweight method for streaming platforms to create GIFs. Instead of processing the full-length video content, the authors used acoustic features to reduce the overall computation time on resource-constrained devices. \subsection{Event Recognition Methods}\label{sec:level2.2} Event recognition is the problem of detecting and classifying video segments according to a predefined set of action or activity classes and is central to video understanding. Most methods adopt temporal segments \cite{yang2019exploring} to prune and classify videos. Recent research has focused on exploiting context to further improve event recognition. Context captures both spatio-temporal information and attention, which helps in learning adaptive confidence scores from the surrounding information \cite{heilbron2017scc}. More advanced methods for temporal integration and motion-aware sequence learning have used other neural networks such as long short-term memory (LSTM) networks and recurrent neural networks (RNNs) \cite{agethen2019deep, pei2017temporal}. LSTM convolutional networks have been designed in combination with attention-based mechanisms to support multiple convolutional kernels and layers.
Attention models have also been used to improve the integration of spatio-temporal information. Recent studies have used two attention mechanisms within a spatio-temporal analysis framework \cite{peng2018two}. The first is a spatial-level attention model that determines critical areas within a frame, while the second is a temporal-level attention model that identifies the important frames in a video. \subsection{Video Understanding Methods}\label{sec:level2.3} Understanding videos is a prominent field in computer vision research. Event (action) recognition \cite{carreira2017quo} and temporal event localization \cite{farha2019ms} are the two main issues addressed in the literature pertaining to video understanding. Action recognition involves recognizing an action from a cropped video clip, which is accomplished via various methods such as two-stream networks \cite{simonyan2014two}, 3D CNNs \cite{tran2015learning}, and RNNs \cite{donahue2015long}. Another popular action recognition method extends 3D CNNs with a two-stream structure \cite{carreira2017quo}. It is obtained by pretraining a 2D CNN model on the ImageNet \cite{deng2009imagenet} dataset and inflating the 2D CNN into a 3D CNN by repeating the weights in a depth-wise manner. The extracted features are either local descriptors aggregated using the bag-of-words method or global descriptors retrieved by CNNs. Among existing methods, HECATE \cite{song2016click} is the most similar to the proposed approach, as it can generate both types of artistic media. Lightweight client-driven techniques for generating artistic media are still in the early stages of development, and more effective methods are needed to bridge the semantic gap between video understanding and personalization. Most modern client devices have limited computational capabilities. Moreover, inspecting a full-length video to create artistic media is time-consuming and not reasonable for real-time solutions \cite{song2016click}. This paper proposes an effective artistic media generation scheme that considers user preferences and resource-constrained devices. The following section explains the artistic media generation process in detail. \begin{figure}[t] \centering \includegraphics[width=\linewidth,keepaspectratio]{figures/fig4.png} \caption{\label{fig:propsed_framwework} High-level system architecture of the proposed client-driven LTC artistic media generation method.} \end{figure} \section{Proposed Method}\label{sec:level3} According to a recent study \cite{cisco2020cisco}, streaming platforms have become more popular than ever compared to traditional platforms. CTR is a particularly important metric for streaming platforms, especially for videos newly broadcast on the platform. Meanwhile, artistic media is vital for streaming platforms as well. There is a strong correlation between artistic media and personalization: if the artistic media is relevant to the video and the user's interests, the click rate will be higher. Consequently, artistic media has become increasingly important in the video selection process. However, currently, it is generated via a one-size-fits-all framework, without user feedback. Users may dislike a particular artistic media item because it is not congruent with their interests, which can lead them to skip the video and reduce its CTR significantly. Owing to the recent popularity of artistic media on streaming platforms, a need for methods that create artistic media based on user preferences with minimal computational requirements has emerged.
This paper proposes a new technique to advance research on generating anticipated artistic media using a client-driven approach. The proposed method uses LTC\footnote{Thumbnail containers are widely used in streaming platforms for timeline navigation of videos (refer to Figure \ref{fig:3}) \cite{mujtaba2020client}. An example thumbnail container can be obtained from \url{https://www.youtube.com/watch?v=kn5uevla61U}.} instead of the entire video to analyze personalized events. Subsequently, artistic media is created within an adequate processing duration for client-side devices such as the Nvidia Jetson TX2, an embedded AI computing device. Figure \ref{fig:propsed_framwework} depicts the high-level system architecture of the proposed artistic media method. On the streaming server, the size and orientation of the LTC and video segments are identical to those used in previous work \cite{mujtaba2020client}. Artistic media is generated in two phases, each producing a different media type. In the first phase, the LTC is analyzed using the \textit{Thumbnail Container Analyzer} module and artistic thumbnails are obtained. The information from the first phase is used to generate the artistic animated GIF from the given video segment in the second phase, which consists of the \textit{Animated GIF Generation} module. The proposed method and its relevant components are described in the following subsections. \begin{figure}[t] \centering \includegraphics[width=\linewidth, height=3.5cm]{figures/fig3.PNG} \caption{\label{fig:3} Example of the thumbnail container of a selected video (left) and the use of thumbnails to instantly preview lengthy videos in web-based players (right).} \end{figure} \begin{figure*}[t] \centering \includegraphics[keepaspectratio, width = 17.5cm]{figures/fig5.png} \caption{\label{fig:deep_lerning} Architecture of the proposed 2D convolutional neural network.} \end{figure*} \subsubsection{Thumbnail Containers Analyzer Module} \label{sec:level3.2.2} The LTC analyzer module detects personalized events from thumbnail containers. A 2D CNN model, trained on the UCF-101 dataset \cite{soomro2012ucf101}, was designed to examine the thumbnails. The dataset comprises 13,320 videos categorized into 101 different action classes. The state-of-the-art Xception image annotation model, pre-trained on the ImageNet dataset \cite{deng2009imagenet}, was used to extract frame-level features \cite{chollet2017xception}. The architecture of the proposed 2D CNN is depicted in Figure \ref{fig:deep_lerning}. Vortex pooling was used as an attention module to enhance the efficiency of the proposed neural network \cite{xie2018vortex}. The module aggregates contextual information using multi-branch convolution with different dilation rates, making it more effective. Data augmentation was applied to reduce overfitting in the proposed approach. The first train/test partition of the UCF-101 dataset was used, as recommended in \cite{soomro2012ucf101}. Each video was subsampled to at most 40 frames to train the model on the UCF-101 dataset. Before being used as network input, all images were pre-processed by cropping their central area and resizing them to 244$\times$244 pixels. The augmentation comprised shear transformations with an angle of 20°, random rotations of up to 10°, horizontal and vertical shifts of 0.2, and random horizontal flipping.
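As an illustration, this augmentation can be realized with the Keras toolbox used in this work roughly as sketched below; the rescaling factor and the directory layout are illustrative assumptions rather than reported settings.
\begin{verbatim}
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation as described above: 20 degree shear, up to 10 degree
# rotation, 0.2 horizontal/vertical shifts, and random horizontal flips.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # assumption: scale pixel values to [0, 1]
    shear_range=20,          # shear angle in degrees
    rotation_range=10,       # random rotation range in degrees
    width_shift_range=0.2,   # horizontal shift (fraction of width)
    height_shift_range=0.2,  # vertical shift (fraction of height)
    horizontal_flip=True,    # random horizontal inversion
)

# Hypothetical directory of center-cropped training images, resized to
# the 244x244 network input size described above.
train_gen = train_datagen.flow_from_directory(
    "data/ucf101_train", target_size=(244, 244), batch_size=32)
\end{verbatim}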
The stochastic gradient descent optimizer with decoupled weight decay (SGDW) \cite{loshchilov2017decoupled} was used to train the model, with a learning rate of 0.01, a momentum of 0.9, and the default weight decay value. In the experiment, an early-stopping mechanism with a patience of ten was applied during the training process. Training data were provided in mini-batches of size 32 with a learning rate of 0.001 to minimize the cost; 1,000 iterations were performed to learn the sequence patterns in the data. The Keras toolbox was used for deep feature extraction, and a GeForce RTX 2080 Ti GPU was used for implementation. Section \ref{sec:level4.2} provides a detailed accuracy analysis of the proposed action recognition model. \subsubsection{Animated GIFs Generation Module} \label{sec:level3.2.3} The animated GIF generation module reads the segment numbers from the text-based file generated from the detected thumbnails. This information is then used to obtain the corresponding segments from the HTTP Live Streaming (HLS) server and create animated GIFs \cite{mujtaba2020client}. The proposed method uses the first 3 seconds of each segment in the animated GIF generation process. FFmpeg is used in the proposed method to create GIFs from segments \cite{ffmpeg}. Here, the duration of all generated GIFs is fixed; however, the approach is extendable to generate GIFs of a specific length. Section \ref{sec:level4.1.2} provides a detailed description of GIF generation using the proposed method. \section{Experimental Results and Discussion}\label{sec:level4} In this section, we present an extensive experimental evaluation of the baseline and proposed approaches. First, the hardware configurations are explained. Next, the entire artistic media generation process is described from the user's perspective. The experimental scheme is then explained together with the baseline methods. The accuracy of the proposed event recognition model is then assessed by comparing its performance to those of prominent action recognition methods on the UCF-101 dataset. Next, the proposed and baseline methods are compared qualitatively and quantitatively. Finally, the overall results of the proposed and baseline methods are discussed. \subsection{Experimental Setup}\label{sec:level4.1} \subsubsection{Hardware Configuration}\label{sec:level4.1.1} The HLS server and HLS client hardware devices were configured locally for the experimental evaluations. For the HLS clients, two end-user devices with different hardware configurations were used: a high computational resource (HCR) end-user device running the open-source Ubuntu 18.04 LTS operating system, and a low computational resource (LCR) end-user machine utilizing an Nvidia Jetson TX2 device. The proposed and baseline approaches were set up separately on the HCR and LCR machines. The HLS server machine ran the Windows 10 operating system and was used in all experiments. The campus network of our university (SKKU) was used to connect all hardware machines locally. Table \ref{tab:hardware_specs} shows the specifications of the hardware devices used in all experiments. The complete artistic media creation process using the proposed approach is described in the next subsection from the user's perspective.
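As a concrete illustration of the segment-to-GIF step of the \textit{Animated GIF Generation} module (Section \ref{sec:level3.2.3}), the sketch below converts the first 3 seconds of a downloaded HLS segment into a GIF with FFmpeg \cite{ffmpeg}; the palette-based filter graph, frame rate, output width, and file names are illustrative assumptions rather than the exact settings used.
\begin{verbatim}
import subprocess

def segment_to_gif(segment_path: str, gif_path: str, duration: float = 3.0,
                   fps: int = 10, width: int = 480) -> None:
    """Convert the first `duration` seconds of an HLS segment into a GIF.

    Uses ffmpeg's palettegen/paletteuse filters for reasonable GIF quality;
    the fps and width values are illustrative defaults, not reported
    settings.
    """
    filters = f"fps={fps},scale={width}:-1:flags=lanczos"
    subprocess.run(
        ["ffmpeg", "-y", "-t", str(duration), "-i", segment_path,
         "-filter_complex",
         f"[0:v]{filters},split[a][b];[a]palettegen[p];[b][p]paletteuse",
         gif_path],
        check=True)

# Hypothetical usage with a segment downloaded from the HLS server.
segment_to_gif("segments/seg_00042.ts", "gifs/event_00042.gif")
\end{verbatim}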
\begin{table}[ht] \centering \caption{\label{tab:hardware_specs} Hardware specifications of the HLS server and HLS client devices.} \begin{tabular}{|P{20pt}|P{80pt}|P{75pt}|c|} \hline Device & CPU & GPU & RAM \\ \Xhline{3\arrayrulewidth} HLS Server & Intel Core i7-8700K & GeForce GTX 1080 & 32 GB \\ \hline HCR Client & Quad-core 2.10 GHz Xeon & GeForce RTX 2080 Ti & 62 GB \\ \hline LCR Client & HMP Dual Denver 2/2MB L2 + Quad ARM A57/2MB L2 & Nvidia Pascal 256 CUDA cores & 8 GB\\ \hline \end{tabular} \end{table} \subsubsection{Proposed Artistic Media Generation Process} \label{sec:level4.1.2} This section describes the entire artistic media generation process from the user's perspective. The process is demonstrated using twenty-three feature-length sports videos obtained from the streaming platform YouTube. The videos are split into six categories based on their content, namely, baseball, basketball, boxing, cricket, football, and tennis. Table \ref{tab:video_title} provides complete descriptions of the selected videos. View counts were collected in November 2021. All videos used in the experiments had a resolution of $640\times480$ pixels. All selected videos were examined using ten different events selected from the action list provided in the UCF-101 dataset\footnote{It should be noted that the proposed method is not bound to these events; additional events can be included according to the video content.}. The ten selected events were basketball, basketball dunk, boxing punching bag, boxing speed bag, cricket bowling, cricket shot, punch, soccer juggling, soccer penalty, and tennis swing. These events were selected based on the video content. \begin{table*}[t] \centering \caption{List of selected videos utilized for analysis in the proposed approach.} \label{tab:video_title} \begin{tabular}{|c|c|l|c|c|c|c|c|c|c|} \hline S/N & Category & \multicolumn{1}{c|}{Title} & Playtime & FPS & \# Frames & \# LTC & \# Thumbnails & Views & YouTube ID \\ \Xhline{3\arrayrulewidth} 1 & \multirow{6}{*}{Football} & \begin{tabular}[c]{@{}l@{}} Belgium vs. Japan \end{tabular} & 1h 52m 14s & 30 & 202,036 & 270 & 6734 & 1,141,707 & ervkVzoFJ5w \\ \cline{1-1} \cline{3-10} 2 & & \begin{tabular}[c]{@{}l@{}} Brazil vs. Belgium \end{tabular} & 1h 50m 50s & 30 & 199,506 & 267 & 6650 & 935,399 & 5OJfbYQtKtk \\ \cline{1-1} \cline{3-10} 3 & & \begin{tabular}[c]{@{}l@{}} France vs. Argentina \end{tabular} & 1h 50m 26s & 25 & 165,653 & 266 & 6626 & 2,660,920 & J41d0cHAfSM \\ \cline{1-1} \cline{3-10} 4 & & \begin{tabular}[c]{@{}l@{}} France vs. Croatia \end{tabular} & 1h 54m 1s & 30 & 205,243 & 274 & 6841 & 1,367,451 & 7Fau-IwbuJc \\ \cline{1-1} \cline{3-10} 5 & & \begin{tabular}[c]{@{}l@{}} Germany vs. Mexico \end{tabular} & 1h 48m 56s & 30 & 196,106 & 262 & 6536 & 1,111,419 & 3fYpcapas0k \\ \cline{1-1} \cline{3-10} 6 & & \begin{tabular}[c]{@{}l@{}} Portugal vs. Spain \end{tabular} & 1h 50m 25s & 30 & 198,556 & 266 & 6625 & 1,792,000 & Xhu5Bz1xDf0 \\ \hline 7 & \multirow{4}{*}{Basketball} & \begin{tabular}[c]{@{}l@{}}France vs USA \end{tabular} & 2h 14m 39s & 30 & 242,135 & 324 & 8079 & 1,171,512 & 8YSrNfcKvA0 \\ \cline{1-1} \cline{3-10} 8 & & \begin{tabular}[c]{@{}l@{}}Golden State Warriors vs. \\ Brooklyn Nets \end{tabular} & 1h 40m 52s & 30 & 181,574 & 243 & 6052 & 585,904 & KAZ-U8vYqZg \\ \cline{1-1} \cline{3-10} 9 & & \begin{tabular}[c]{@{}l@{}}Los Angeles Lakers vs.
\\Houston Rockets \end{tabular} & 1h 54m 19s & 30 & 205,586 & 275 & 6859 & 312,224 & aHVd9vVWVSQ \\ \cline{1-1} \cline{3-10} 10 & & \begin{tabular}[c]{@{}l@{}}USA vs. Spain - \\Men's Gold Final \end{tabular} & 2h 53m 54s & 25 & 260,886 & 418 & 10434 & 17,722,044 & l9wUr-CK1Y4 \\ \hline 11 & \multirow{4}{*}{Boxing} & \begin{tabular}[c]{@{}l@{}}Canelo vs. Daniel Jacobs \end{tabular} & 53m 55s & 30 & 96,968 & 130 & 3235 & 11,834,396 & 1VbXe9ZjzTM \\ \cline{1-1} \cline{3-10} 12 & & \begin{tabular}[c]{@{}l@{}}Davis vs. Gamboa Full Fight \end{tabular} & 1h 3m 2s & 30 & 113,368 & 152 & 3782 & 3,135,174 & KZtVQo8lpqY \\ \cline{1-1} \cline{3-10} 13 & & \begin{tabular}[c]{@{}l@{}}Dirrell vs. Davis Full Fight \end{tabular} & 47m 29s & 30 & 85,392 & 114 & 2849 & 165,015 & sVtzzpvaEjc \\ \cline{1-1} \cline{3-10} 14 & & \begin{tabular}[c]{@{}l@{}}Floyd Mayweather Jr. vs. \\Marcos Maidana \end{tabular} & 56m 50s & 25 & 85,259 & 137 & 3410 & 13,569,484 & KYvOC7MBuUw \\ \hline 15 & \multirow{3}{*}{Baseball} & \begin{tabular}[c]{@{}l@{}}Giants vs. Dodgers \end{tabular} & 2h 11m 42s & 30 & 236,827 & 317 & 7902 & 168,309 & ScmHL8YVM5E \\ \cline{1-1} \cline{3-10} 16 & & \begin{tabular}[c]{@{}l@{}}Giants vs. Royals \end{tabular} & 2h 36m 50s & 30 & 282,024 & 377 & 9410 & 6,448,368 & YJmwofDYOeo \\ \cline{1-1} \cline{3-10} 17 & & \begin{tabular}[c]{@{}l@{}}Toronto Blue Jays vs. \\Boston Red Sox \end{tabular} & 2h 40m 50s & 30 & 289,221 & 387 & 9650 & 19,006 & psL-FvRg9jM \\ \hline 18 & \multirow{2}{*}{Cricket} & \begin{tabular}[c]{@{}l@{}}India vs. Pakistan \end{tabular} & 1h 25m 2s & 30 & 153,065 & 205 & 5102 & 36,562,893 & uSGCAJS6qWg \\ \cline{1-1} \cline{3-10} 19 & & \begin{tabular}[c]{@{}l@{}}Peshawar Zalmi vs. \\ Islamabad United \end{tabular} & 2h 17m 15s & 30 & 205,170 & 274 & 6845 & 372,182 & uzErZgKuuSM \\ \hline 20 & \multirow{4}{*}{Tennis} & \begin{tabular}[c]{@{}l@{}}Maria Sharapova vs. \\Caroline Wozniacki \end{tabular} & 2h 10m 6s & 30 & 233,962 & 313 & 7806 & 745,690 & 72VhC9biEFk \\ \cline{1-1} \cline{3-10} 21 & & \begin{tabular}[c]{@{}l@{}}Novak Djokovic vs. \\Daniil Medvedev \end{tabular} & 2h 1m 6s & 25 & 181,654 & 291 & 7266 & 902,442 & MG-RjlqyaJI \\ \cline{1-1} \cline{3-10} 22 & & \begin{tabular}[c]{@{}l@{}}Novak Djokovic vs. \\ Roger Federer \end{tabular} & 4h 58m 38s & 25 & 447,961 & 717 & 17918 & 4,841,514 & TUikJi0Qhhw \\ \cline{1-1} \cline{3-10} 23 & & \begin{tabular}[c]{@{}l@{}}Roger Federer vs. Rafael Nadal \end{tabular} & 3h 5m 37s & 25 & 278,448 & 446 & 11137 & 4,991,304 & wZnCcqm\_g-E \\ \hline \end{tabular} \end{table*} To obtain artistic media for a specific video, the user first selects the video from the web interface. The end-user device then requests and downloads the LTC of the corresponding video. The downloaded LTC covers the entire duration of the video. The total number of frames, the frames per second (FPS), and the numbers of LTCs and corresponding thumbnails for each video are shown in Table \ref{tab:video_title}. A single LTC contains 25 thumbnails. The size of the LTCs is considerably smaller than that of the video frames; hence, a significantly lower bitrate is required during transmission. The proposed method uses the canvas to capture every thumbnail separately from the transmitted thumbnail containers. The event(s) of interest are selected using the web interface, and a user can select more than one event during the GIF generation process. The proposed 2D CNN model requires two inputs during the recognition process: the thumbnail and the preferred event.
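The client-side thumbnail extraction step can be sketched as follows; the 5$\times$5 tiling of the 25 thumbnails is an illustrative assumption (the actual layout depends on how the platform packs its containers), and in the proposed method this step is performed on the canvas of the web interface.
\begin{verbatim}
from PIL import Image

def extract_thumbnails(container_path: str, rows: int = 5, cols: int = 5):
    """Split one lightweight thumbnail container into its thumbnails.

    Assumes the container tiles rows*cols thumbnails in a regular grid;
    a 5x5 grid matches the 25 thumbnails per LTC described above.
    """
    container = Image.open(container_path)
    w, h = container.size
    tw, th = w // cols, h // rows  # size of a single thumbnail tile
    thumbs = []
    for r in range(rows):
        for c in range(cols):
            box = (c * tw, r * th, (c + 1) * tw, (r + 1) * th)
            thumbs.append(container.crop(box))
    return thumbs

# Hypothetical usage: the extracted thumbnails are then passed one by
# one to the 2D CNN together with the user's preferred event(s).
thumbnails = extract_thumbnails("ltc/container_0001.jpg")
\end{verbatim}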
The deep learning model analyzes each extracted thumbnail individually based on the event(s) selected by the user. The proposed method selects personalized artistic thumbnails from the analyzed LTC. The artistic thumbnails are selected based on a threshold that is set to maintain the quality of the artistic media. A text-based file is generated for all selected personalized artistic thumbnails obtained from the LTC. This file is used to provide personalized artistic thumbnails according to the user's preferences regarding the video category, and its entries are ordered chronologically. To obtain the segment for a selected thumbnail, the text-based file is analyzed, and the end-user machine requests the specific segments from the HLS server with distinct timestamps. The HTTP server sends these segments immediately in response to the client device's request. Subsequently, the segments are used to create an animated GIF. FFmpeg \cite{ffmpeg} is used in the proposed method to create an artistic GIF from a given segment. Algorithm \ref{code:generate_GIF} depicts the processing steps required to generate a GIF from a video with the proposed method. \begin{algorithm} [t] \DontPrintSemicolon\SetAlgoLined \noindent\rule{7.5cm}{0.4pt} \KwData{Input thumbnail containers} - N: number of thumbnails \textit{T} inside thumbnail containers \textit{LTC}\; \SetKw{KwInit}{Initialization:}\KwInit - Personalized events \textit{P}; Segments \textit{S}; threshold = 80\; \normalem \textbf{Main loop}: \While{ i $<$ (N)} { Extract \textit{T} from \textit{LTC}\; \textit{determineEvents}(\textit{T}, \textit{P}, threshold) \; Identify the \textit{S} number from text-based file \; Download \textit{S} \; Generate animated GIF from \textit{S} \; } \SetKwProg{fn}{Function}{}{} \fn{determineEvents (T, P, threshold)}{ Analyze \textit{T} as per \textit{P}\; Select artistic \textit{T} according to threshold\; Prepare text-file of selected \textit{T} \; }\textbf{return} text-based selected \textit{T} list\; {} \KwResult{Generated Artistic Media} \noindent\rule{7.5cm}{0.4pt} \ULforem \caption{\label{code:generate_GIF} Process of analyzing personalized events from thumbnail containers to generate artistic media.} \end{algorithm} \subsubsection{Baseline Methods}\label{sec:level4.1.3} This section describes the baseline methods that are compared with the proposed artistic media generation method. As explained in Section \ref{sec:level2}, some well-known approaches use the entire video to generate animated GIFs. The baseline approaches are as follows: \begin{itemize} \item \textbf{HECATE} \cite{song2016click}: It analyzes aesthetic features obtained from video frames. The corresponding video is stored locally on the device. During the process, the frames are extracted, temporarily stored, and then analyzed. This method only supports a fixed duration and number of GIFs. Here, ten artistic thumbnails and GIFs were generated for each video. \item \textbf{AV-GIF} \cite{mujtaba2021GIF}: It analyzes the entire audio and video files to create animated GIFs, as described in \cite{mujtaba2021GIF}. With this method, only one GIF was generated for each video using the default parameters described by the authors. \item \textbf{CL-GIF} \cite{mujtaba2021GIF}: It uses acoustic features to analyze the audio climax portion and employs segments to generate GIFs.
This is the SoA client-driven animated GIF generation method. Similar to AV-GIF, only one GIF was generated per video using the default parameters. \item \textbf{FB-GIF}: Instead of analyzing the LTC, this method uses the video frames of the corresponding video to detect personalized scenes. First, frames are extracted from the video; then, the 2D CNN model is used to detect the corresponding events from the extracted frames. \end{itemize} \subsection{Experimental Evaluation of Action Recognition}\label{sec:level4.2} This subsection presents an evaluation of existing 2D CNN approaches on the UCF-101 dataset. To the best of our knowledge, \cite{mujtaba2020client} is the only previous method that uses thumbnail containers to recognize events, and it performed best on the UCF-101 dataset in this setting. The proposed CNN model performed 2.5\% better in terms of validation accuracy than \cite{mujtaba2020client}, with 51.32 million floating-point operations per second. The total number of parameters of the proposed CNN model is 25.6 million. The experimental results of the proposed and baseline approaches on the UCF-101 dataset are listed in Table \ref{tab:methods_comparisons}. All CNN models \cite{chollet2017xception, sandler2018mobilenetv2, howard2019searching, huang2017densely, szegedy2016rethinking} were trained on the UCF-101 dataset with similar configurations, without the attention module described in Section \ref{sec:level3.2.2}. The proposed CNN model was used in all experiments to identify personalized events from thumbnails. \begin{table} [ht] \centering \setlength{\tabcolsep}{3pt} \caption{\label{tab:methods_comparisons} Comparisons between the proposed CNN action recognition model and other approaches.} \begin{tabular}{ l P{120pt} } \hline CNN Methods& Overall validation accuracy (\%) \\ \Xhline{3\arrayrulewidth} MobileNetV2 \cite{sandler2018mobilenetv2} & 59.06\%\\ \hline MobileNetV3Small \cite{howard2019searching} & 68.75\%\\ \hline MobileNetV3Large \cite{howard2019searching} & 71.88\%\\ \hline DenseNet121 \cite{huang2017densely} & 65.31\%\\ \hline InceptionV3 \cite{szegedy2016rethinking} & 61.25\%\\ \hline Karpathy, Andrej, et al. 2014 \cite{karpathy2014large}& 65.40\%\\ \hline Shu, Yu, et al. 2018\cite{shu2018odn}& 76.07\%\\ \hline Mujtaba, et al. 2020 \cite{mujtaba2020client}& 73.75\%\\ \hline Xception \cite{chollet2017xception}& 68.44\%\\ \Xhline{3\arrayrulewidth} \textbf{Proposed} & \textbf{76.25\%} \\ \Xhline{3\arrayrulewidth} \end{tabular} \end{table} \subsection{Performance Analysis of the Proposed Method}\label{sec:level4.3} In this section, the performance of the proposed LTC artistic media generation method is evaluated against the baseline approaches described in Section \ref{sec:level4.1.3}. This evaluation was conducted using twenty-three feature-length sports videos (Table \ref{tab:video_title}). The computation time of the proposed method was calculated considering (i) downloading the thumbnail containers, (ii) extracting the thumbnails from the thumbnail containers, (iii) recognizing the personalized event(s) from the thumbnails, (iv) selecting high-accuracy artistic static thumbnails, (v) estimating the segment numbers and downloading the segments, and (vi) creating the artistic animated GIFs from these segments.
All thumbnails were selected with a recognition confidence exceeding the 80.0\% threshold, which was set to maintain the artistic media quality. In the first experiment, we evaluated the computation time required to generate artistic static thumbnails using the proposed and baseline approaches. The HECATE \cite{song2016click} baseline method was used with its default configuration, and the HCR device was used for this evaluation. Table \ref{tab:thumb_hcr} shows the number of artistic thumbnails and the computation time (in minutes) required to generate them using the proposed and baseline methods. The proposed approach required considerably less computation time than the HECATE \cite{song2016click} baseline method. It is important to note that all artistic thumbnails obtained using the proposed method contain personalized events, whereas the artistic thumbnails generated using HECATE \cite{song2016click} follow a one-size-fits-all framework. The artistic thumbnails generated using the proposed and baseline methods are depicted in Figure \ref{fig:thumb_sampel}. \begin{table}[t] \centering \caption{Computation times required (in minutes) to generate artistic thumbnails using the baseline and proposed methods on the HCR device.} \label{tab:thumb_hcr} \begin{tabular}{|c|cc|cc|} \hline \multirow{2}{*}{S/N} & \multicolumn{2}{c|}{HECATE \cite{song2016click}} & \multicolumn{2}{c|}{Proposed} \\ \cline{2-5} & \multicolumn{1}{c|}{\#Thumbnails} & Total & \multicolumn{1}{c|}{\#Thumbnails} & Total \\ \hline 1 & \multicolumn{1}{c|}{10} & 50.19 & \multicolumn{1}{c|}{\textbf{438}} & \textbf{1.75} \\ \hline 2 & \multicolumn{1}{c|}{10} & 86.59 & \multicolumn{1}{c|}{\textbf{465}} & \textbf{1.64} \\ \hline 3 & \multicolumn{1}{c|}{10} & 41.34 & \multicolumn{1}{c|}{\textbf{130}} & \textbf{1.65} \\ \hline 4 & \multicolumn{1}{c|}{10} & 60.17 & \multicolumn{1}{c|}{\textbf{584}} & \textbf{1.73} \\ \hline 5 & \multicolumn{1}{c|}{10} & 44.78 & \multicolumn{1}{c|}{\textbf{117}} & \textbf{1.03} \\ \hline 6 & \multicolumn{1}{c|}{10} & 73.16 & \multicolumn{1}{c|}{\textbf{712}} & \textbf{1.67} \\ \hline 7 & \multicolumn{1}{c|}{10} & 130.16 & \multicolumn{1}{c|}{\textbf{984}} & \textbf{2.01} \\ \hline 8 & \multicolumn{1}{c|}{10} & 67.40 & \multicolumn{1}{c|}{\textbf{961}} & \textbf{1.51} \\ \hline 9 & \multicolumn{1}{c|}{10} & 66.35 & \multicolumn{1}{c|}{\textbf{1040}} & \textbf{1.77} \\ \hline 10 & \multicolumn{1}{c|}{10} & 158.84 & \multicolumn{1}{c|}{\textbf{1184}} & \textbf{2.64} \\ \hline 11 & \multicolumn{1}{c|}{10} & 14.70 & \multicolumn{1}{c|}{\textbf{928}} & \textbf{0.82} \\ \hline 12 & \multicolumn{1}{c|}{10} & 19.63 & \multicolumn{1}{c|}{\textbf{845}} & \textbf{0.96} \\ \hline 13 & \multicolumn{1}{c|}{10} & 13.33 & \multicolumn{1}{c|}{\textbf{897}} & \textbf{0.72} \\ \hline 14 & \multicolumn{1}{c|}{10} & 14.05 & \multicolumn{1}{c|}{\textbf{1375}} & \textbf{0.86} \\ \hline 15 & \multicolumn{1}{c|}{10} & 87.20 & \multicolumn{1}{c|}{\textbf{1225}} & \textbf{1.99} \\ \hline 16 & \multicolumn{1}{c|}{10} & 81.25 & \multicolumn{1}{c|}{\textbf{1020}} & \textbf{2.36} \\ \hline 17 & \multicolumn{1}{c|}{10} & 78.41 & \multicolumn{1}{c|}{\textbf{1160}} & \textbf{2.40} \\ \hline 18 & \multicolumn{1}{c|}{10} & 25.29 & \multicolumn{1}{c|}{\textbf{18}} & \textbf{1.28} \\ \hline 19 & \multicolumn{1}{c|}{10} & 48.70 & \multicolumn{1}{c|}{\textbf{14}} & \textbf{1.60} \\ \hline 20 & \multicolumn{1}{c|}{10} & 65.19 & \multicolumn{1}{c|}{\textbf{158}} & \textbf{1.95} \\ \hline
21 & \multicolumn{1}{c|}{10} & 34.41 & \multicolumn{1}{c|}{\textbf{124}} & \textbf{1.84} \\ \hline 22 & \multicolumn{1}{c|}{10} & 178.28 & \multicolumn{1}{c|}{\textbf{45}} & \textbf{4.42} \\ \hline 23 & \multicolumn{1}{c|}{10} & 74.86 & \multicolumn{1}{c|}{\textbf{22}} & \textbf{2.78} \\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth, height=4.5cm]{figures/fig6.PNG} \caption{\label{fig:thumb_sampel} Artistic thumbnails generated using the proposed and baseline methods.} \end{figure} In the second experiment, we compared the computation time required to generate artistic animated GIFs using the proposed and baseline approaches. The HCR device was used for all approaches; the detailed device specifications are given in Table \ref{tab:hardware_specs}. The computation times (in minutes) required to generate GIFs using the proposed and baseline approaches are listed in Table \ref{tab:baseline_hcr}. Table \ref{tab:thumb_compare} shows the numbers of events and segments detected by the baseline LTC approach and the proposed method. Table \ref{tab:proposed_hcr} lists the computation time (in seconds) required for each step when creating artistic media on the HCR device. The HECATE \cite{song2016click} method analyzes every frame in the video and determines aesthetic features that can be used for generating GIFs. The number of thumbnails is significantly smaller than the number of frames of each video, as shown in Table \ref{tab:video_title}. AV-GIF \cite{mujtaba2021GIF} uses the entire video and audio content to generate animated GIFs, whereas CL-GIF \cite{mujtaba2021GIF} uses segments and the audio climax portion. The proposed method analyzes considerably smaller images (thumbnails) to detect personalized events, which results in a significantly lower computation time for generating animated GIFs.
\begin{table}[t] \centering \caption{Computation times required (in minutes) to generate artistic animated GIFs using the baseline and proposed methods on the HCR device.} \label{tab:baseline_hcr} \begin{tabular}{| c | c | P{30pt} | P{30pt} |c|c|} \hline S/N & HECATE \cite{song2016click} & AV-GIF \cite{mujtaba2021GIF} & CL-GIF \cite{mujtaba2021GIF} & FB-GIF & Proposed \\ \Xhline{3\arrayrulewidth} 1 & 51.52 & 21.60 & 8.16 & 70.67 & \textbf{2.20} \\ \hline 2 & 89.79 & 21.36 & 8.56 & 65.31 & \textbf{2.02} \\ \hline 3 & 45.69 & 21.09 & 0.77 & 54.81 & \textbf{2.09} \\ \hline 4 & 103.63 & 20.20 & 8.26 & 117.72 & \textbf{2.13} \\ \hline 5 & 45.29 & 22.04 & 8.29 & 63.74 & \textbf{1.49} \\ \hline 6 & 76.34 & 42.88 & 7.66 & 65.60 & \textbf{2.03} \\ \hline 7 & 199.44 & 26.36 & 8.22 & 137.77 & \textbf{2.38} \\ \hline 8 & 97.36 & 16.24 & 7.41 & 127.98 & \textbf{1.90} \\ \hline 9 & 97.86 & 19.14 & 7.86 & 177.38 & \textbf{2.29} \\ \hline 10 & 245.67 & 47.64 & 12.55 & 84.30 & \textbf{3.09} \\ \hline 11 & 16.24 & 9.58 & 3.52 & 42.63 & \textbf{1.22} \\ \hline 12 & 33.12 & 10.86 & 4.87 & 64.33 & \textbf{1.37} \\ \hline 13 & 20.92 & 8.07 & 3.04 & 43.35 & \textbf{1.17} \\ \hline 14 & 14.13 & 10.92 & 3.66 & 29.21 & \textbf{1.36} \\ \hline 15 & 93.92 & 29.68 & 9.38 & 155.61 & \textbf{2.47} \\ \hline 16 & 132.03 & 104.24 & 15.52 & 98.34 & \textbf{2.83} \\ \hline 17 & 88.27 & 30.01 & 13.83 & 94.66 & \textbf{2.85} \\ \hline 18 & 35.08 & 17.38 & 6.68 & 48.44 & \textbf{1.71} \\ \hline 19 & 49.70 & 23.92 & 9.93 & 69.22 & \textbf{2.03} \\ \hline 20 & 79.53 & 31.44 & 10.48 & 90.68 & \textbf{2.34} \\ \hline 21 & 35.18 & 41.99 & 10.98 & 58.26 & \textbf{2.20} \\ \hline 22 & 128.32 & 31.37 & 20.79 & 152.01 & \textbf{4.86} \\ \hline 23 & 79.24 & 41.05 & 13.87 & 181.49 & \textbf{3.18} \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Numbers of events and segments detected from the LTC using the proposed method compared with the previous method.} \label{tab:thumb_compare} \begin{tabular}{|c|cP{40pt}|cP{40pt}|} \hline \multirow{2}{*}{S/N} & \multicolumn{2}{c|}{Mujtaba, et al.
2020 \cite{mujtaba2020client}} & \multicolumn{2}{c|}{Proposed Method} \\ \cline{2-5} & \multicolumn{1}{P{45pt}|}{Events} & Segments & \multicolumn{1}{P{45pt}|}{Events} & Segments \\ \Xhline{3\arrayrulewidth} 1 & \multicolumn{1}{c|}{403} & 203 & \multicolumn{1}{c|}{\textbf{1849}} & \textbf{417} \\ \hline 2 & \multicolumn{1}{c|}{465} & 211 & \multicolumn{1}{c|}{\textbf{2819}} & \textbf{491} \\ \hline 3 & \multicolumn{1}{c|}{130} & 82 & \multicolumn{1}{c|}{\textbf{1540}} & \textbf{389} \\ \hline 4 & \multicolumn{1}{c|}{584} & 223 & \multicolumn{1}{c|}{\textbf{3084}} & \textbf{499} \\ \hline 5 & \multicolumn{1}{c|}{117} & 71 & \multicolumn{1}{c|}{\textbf{2238}} & \textbf{447} \\ \hline 6 & \multicolumn{1}{c|}{1712} & 412 & \multicolumn{1}{c|}{\textbf{3926}} & \textbf{520} \\ \hline 7 & \multicolumn{1}{c|}{1082} & 330 & \multicolumn{1}{c|}{\textbf{2930}} & \textbf{497} \\ \hline 8 & \multicolumn{1}{c|}{2425} & 417 & \multicolumn{1}{c|}{\textbf{2461}} & \textbf{421} \\ \hline 9 & \multicolumn{1}{c|}{1140} & 351 & \multicolumn{1}{c|}{\textbf{3912}} & \textbf{541} \\ \hline 10 & \multicolumn{1}{c|}{1184} & 344 & \multicolumn{1}{c|}{\textbf{3376}} & \textbf{540} \\ \hline 11 & \multicolumn{1}{c|}{1528} & 242 & \multicolumn{1}{c|}{\textbf{1719}} & \textbf{270} \\ \hline 12 & \multicolumn{1}{c|}{1489} & 283 & \multicolumn{1}{c|}{\textbf{1341}} & \textbf{261} \\ \hline 13 & \multicolumn{1}{c|}{1149} & 218 & \multicolumn{1}{c|}{\textbf{1477}} & \textbf{241} \\ \hline 14 & \multicolumn{1}{c|}{1875} & 274 & \multicolumn{1}{c|}{\textbf{2295}} & \textbf{301} \\ \hline 15 & \multicolumn{1}{c|}{2123} & 468 & \multicolumn{1}{c|}{\textbf{2599}} & \textbf{557} \\ \hline 16 & \multicolumn{1}{c|}{1619} & 425 & \multicolumn{1}{c|}{\textbf{2044}} & \textbf{512} \\ \hline 17 & \multicolumn{1}{c|}{1959} & 535 & \multicolumn{1}{c|}{\textbf{3328}} & \textbf{692} \\ \hline 18 & \multicolumn{1}{c|}{8} & 4 & \multicolumn{1}{c|}{\textbf{25}} & \textbf{12} \\ \hline 19 & \multicolumn{1}{c|}{10} & 7 & \multicolumn{1}{c|}{\textbf{22}} & \textbf{17} \\ \hline 20 & \multicolumn{1}{c|}{218} & 90 & \multicolumn{1}{c|}{\textbf{364}} & \textbf{134} \\ \hline 21 & \multicolumn{1}{c|}{146} & 62 & \multicolumn{1}{c|}{\textbf{124}} & \textbf{66} \\ \hline 22 & \multicolumn{1}{c|}{56} & 34 & \multicolumn{1}{c|}{\textbf{82}} & \textbf{53} \\ \hline 23 & \multicolumn{1}{c|}{25} & 20 & \multicolumn{1}{c|}{\textbf{87}} & \textbf{51} \\ \hline \end{tabular} \end{table} Since this study focuses on generating artistic media using resource-constrained end-user devices, subsequent experiments were conducted by implementing the proposed and baseline methods on the LCR device, namely, the Nvidia Jetson TX2. Table \ref{tab:baseline_lcr} shows the computation times (in minutes) required to create artistic GIFs for the first six feature-length sports videos when implementing the baseline and proposed methods on the LCR device. In this experiment, the HECATE \cite{song2016click} and AV-GIF \cite{mujtaba2021GIF} approaches were also considered; however, they could not be used in practice because processing the lengthy videos requires computational resources beyond those of the LCR device. Only the CL-GIF \cite{mujtaba2021GIF} method could be used on the LCR device to generate a GIF. The overall processing time of the proposed method is significantly shorter than that of CL-GIF \cite{mujtaba2021GIF}.
\begin{table}[ht] \centering \caption{Computation times required (in minutes) to generate artistic GIFs using the baseline and proposed methods on the LCR device.} \label{tab:baseline_lcr} \begin{tabular}{| P{15pt} | P{92pt} | P{93pt} |} \hline S/N & CL-GIF \cite{mujtaba2021GIF} & Proposed \\ \Xhline{3\arrayrulewidth} 1 & 38.71& \textbf{10.08} \\ \hline 2 & 36.17& \textbf{9.85}\\ \hline 3 & 35.40& \textbf{9.32}\\ \hline 4 & 40.06& \textbf{10.45} \\ \hline 5 & 37.96& \textbf{13.96} \\ \hline 6 & 35.60& \textbf{8.92}\\ \hline \end{tabular} \end{table} From the communication and storage perspectives, the proposed approach is more effective than the baseline methods. The HECATE approach requires a locally stored video file to begin processing \cite{song2016click}. Similarly, the corresponding full-length audio file and video segments must be downloaded when using the CL-GIF method to generate a GIF \cite{mujtaba2021GIF}. In contrast, the proposed method requires only a lightweight thumbnail container to be downloaded for the same process. For example, the video and audio sizes of the Germany vs. Mexico match were 551 MB and 149 MB, respectively, whereas the thumbnail container size was 22.2 MB for the same video. Thus, the proposed method significantly reduces the download time and storage requirements compared to the baseline methods. \begin{figure*}[t] \centering \includegraphics[keepaspectratio, width=\linewidth]{figures/fig7.png} \caption{\label{fig:frame_samples} Sample frames taken from the GIFs generated using the proposed and baseline methods.} \end{figure*} The total playtime of the twenty-three feature-length videos was $2,818.96$ minutes. To create artistic thumbnails for these videos on the HCR end-user device, HECATE~\cite{song2016click} required $1514.27$ minutes, whereas the proposed method required $41.35$ minutes. Therefore, the analysis of these twenty-three videos indicates that, on average, the proposed method is $36.62$ times faster than HECATE \cite{song2016click} when generating personalized artistic thumbnails. To create the corresponding GIFs on the HCR end-user device, HECATE \cite{song2016click}, AV-GIF \cite{mujtaba2021GIF}, CL-GIF \cite{mujtaba2021GIF}, FB-GIF, and the proposed method required $1858.25$, $649.07$, $204.31$, $2093.52$, and $51.20$ minutes, respectively. Moreover, for the first six videos, CL-GIF \cite{mujtaba2021GIF} and the proposed method needed $223.92$ and $62.59$ minutes, respectively, on the LCR device (Table~\ref{tab:baseline_lcr}). Therefore, the analysis of these twenty-three videos indicates that, on average, the proposed method is $36.29$, $40.88$, $12.67$, and $3.99$ times faster than the HECATE \cite{song2016click}, FB-GIF, AV-GIF \cite{mujtaba2021GIF}, and CL-GIF \cite{mujtaba2021GIF} methods, respectively, when using the HCR device. Similarly, when using the LCR device, the proposed method is $3.57$ times faster than the CL-GIF \cite{mujtaba2021GIF} method. The proposed approach also generates more GIFs than the baseline methods: while AV-GIF and CL-GIF each generate one GIF and HECATE \cite{song2016click} generates $10$ GIFs, the proposed method can generate $25$ GIFs. In summary, these outcomes demonstrate that the proposed approach is more computationally efficient than the baseline methods on both the HCR and LCR devices.
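For concreteness, these speedup factors follow directly from the aggregate computation times reported above, {\em e.g.},
\begin{equation*}
\frac{1514.27}{41.35} \approx 36.62, \qquad \frac{1858.25}{51.20} \approx 36.29, \qquad \frac{223.92}{62.59} \approx 3.57,
\end{equation*}
for thumbnail generation with HECATE \cite{song2016click} on the HCR device, GIF generation with HECATE \cite{song2016click} on the HCR device, and GIF generation with CL-GIF \cite{mujtaba2021GIF} on the LCR device, respectively.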
\begin{table}[!b] \centering \caption{Average ratings (1$\sim$10) assigned by participants for the proposed and baseline methods.} \label{tab:quantitative_eva} \begin{tabular}{|P{15pt} |c|c|c| P{35pt} |} \hline S/N & YouTube & HECATE \cite{song2016click} & CL-GIF \cite{mujtaba2021GIF} & Proposed\\ \Xhline{3\arrayrulewidth} 1 & 4.67& 6.78 & 5.67 & \textbf{8.11} \\ \hline 2 & 4.67& 6.22 & 7.00 & \textbf{8.56} \\ \hline 3 & 4.78& 7.56 & 5.33 & \textbf{8.44} \\ \hline 4 & 5.56& 5.44 & 5.22 & \textbf{5.78} \\ \hline 5 & 4.22& 6.33 & 5.00 & \textbf{7.44} \\ \hline 6 & 6.11& 6.44 & 5.67 & \textbf{6.56} \\ \hline \end{tabular} \end{table} \subsection{Qualitative Evaluation} \label{sec:level4.4} This section evaluates the quality of the GIFs created using the proposed approach compared with those obtained from YouTube or created using the baseline approaches. The evaluation was conducted through a survey with nine participants; a group of students was selected based on their interest in sports. The survey was based on the first six videos (Table \ref{tab:video_title}). The quality of the created GIFs was assessed on a fixed rating scale: the participants were asked to grade the GIFs based on perceived enjoyment. An anonymous questionnaire was designed to prevent participants from determining the method used to create a given GIF. The participants were requested to view all GIFs and rate them on a scale of 1 to 10 (1 being the lowest and 10 being the highest rating). Table \ref{tab:quantitative_eva} lists the participants' ratings of the four methods. For the six videos, the average ratings for YouTube, HECATE \cite{song2016click}, CL-GIF \cite{mujtaba2021GIF}, and the proposed method were $5.0$, $6.46$, $5.65$, and $7.48$, respectively. Sample frames from the GIFs generated using the proposed and baseline methods are presented in Figure \ref{fig:frame_samples}. \begin{table*}[t] \centering \caption{Computation time required (in seconds) at each step when implementing the proposed method using the HCR device.
} \label{tab:proposed_hcr} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline S/N & Download LTC & Extract Thumbnails & Events & Thumbnail Selection & Download Segments & Generate GIFs & Total (sec)\\ \Xhline{3\arrayrulewidth} 1 & 5.05 & 7.42 & 92.54 & 0.1 & 4.14 & 22.62 & 131.87 \\ \hline 2 & 5.34 & 7.43 & 85.58 & 0.1 & 3.71 & 19.11 & 121.27 \\ \hline 3 & 5.59 & 7.58 & 85.87 & 0.1 & 4.45 & 21.82 & 125.31 \\ \hline 4 & 5.66 & 7.82 & 90.05 & 0.1 & 3.77 & 20.45 & 127.85 \\ \hline 5 & 5.96 & 7.51 & 48.45 & 0.1 & 4.37 & 23.29 & 89.68 \\ \hline 6 & 5.15 & 7.42 & 87.42 & 0.1 & 3.05 & 18.83 & 121.97 \\ \hline 7 & 7.16 & 9.12 & 104.19 & 0.1 & 3.95 & 18.26 & 142.78 \\ \hline 8 & 5.73 & 7.35 & 77.78 & 0.1 & 4.13 & 18.83 & 113.92 \\ \hline 9 & 7.36 & 8.39 & 90.68 & 0.1 & 4.09 & 26.67 & 137.29 \\ \hline 10 & 10.75 & 12.29 & 135.26 & 0.1 & 4.47 & 22.6 & 185.47 \\ \hline 11 & 3.31 & 3.87 & 42.17 & 0.1 & 3.94 & 19.99 & 73.38 \\ \hline 12 & 3.79 & 4.42 & 49.12 & 0.1 & 3.71 & 21.24 & 82.38 \\ \hline 13 & 2.7 & 3.36 & 37.09 & 0.1 & 3.7 & 23.12 & 70.07 \\ \hline 14 & 3.84 & 4.32 & 43.37 & 0.1 & 4.62 & 25.69 & 81.94 \\ \hline 15 & 7.1 & 9.02 & 102.99 & 0.1 & 4.6 & 24.43 & 148.24 \\ \hline 16 & 8.45 & 10.79 & 122.24 & 0.1 & 3.59 & 24.82 & 169.99 \\ \hline 17 & 8 & 10.85 & 124.88 & 0.1 & 2.85 & 24.27 & 170.95 \\ \hline 18 & 3.96 & 5.59 & 67.23 & 0.1 & 2.66 & 23.45 & 102.99 \\ \hline 19 & 5.08 & 7.42 & 83.39 & 0.1 & 2.64 & 23 & 121.63 \\ \hline 20 & 6.81 & 8.9 & 100.99 & 0.1 & 3.94 & 19.59 & 140.33 \\ \hline 21 & 5.82 & 7.84 & 96.69 & 0.1 & 3.33 & 18.13 & 131.91 \\ \hline 22 & 15.7 & 20.74 & 228.8 & 0.1 & 2.86 & 23.62 & 291.82 \\ \hline 23 & 9.61 & 13.16 & 143.79 & 0.1 & 3.74 & 20.61 & 191.01 \\ \hline \end{tabular} \end{table*} \subsection{Discussion} \label{sec:level4.5} The overall effectiveness of the proposed method was evaluated through comparisons with the baseline methods. The proposed method achieved significantly better performance and shorter computation times on both the HCR and LCR devices because it uses thumbnail containers and video segments to generate artistic media instead of processing the entire video. The proposed method was shown to be $36.62$ times faster than HECATE \cite{song2016click} when generating artistic thumbnails on the HCR device. Meanwhile, during artistic animated GIF generation on the HCR device, the proposed method was shown to be $36.29$, $40.88$, $12.67$, and $3.99$ times faster than the HECATE \cite{song2016click}, FB-GIF, AV-GIF \cite{mujtaba2021GIF}, and CL-GIF \cite{mujtaba2021GIF} methods, respectively. Similarly, when using the LCR device, the proposed method is $3.57$ times faster than the CL-GIF \cite{mujtaba2021GIF} method when analyzing the six videos. The proposed method thus reduces the overall computational power and time required to produce GIFs on client devices. In the qualitative experiment involving participants, detailed in Section \ref{sec:level4.4}, the proposed approach obtained a higher average rating than the other methods, mainly because the GIFs are generated based on user interests. In addition, the proposed method can generate more than one GIF, which can then be used randomly to obtain a greater CTR for the corresponding video.
In practical applications, the proposed method can significantly increase the CTR of newly broadcast full-length soccer and other sports videos. The proposed system can be used on a wide range of client devices with different computational resource capabilities. Thanks to its simplicity and scalability across multiple device configurations \cite{li2020energy}, it can be easily adapted to other animated image formats, such as WebP, to recommendation methods \cite{mu2020auxiliary, zhang2020social}, and to other streaming protocols. In addition, by reducing the computational load of the servers, the proposed approach can act as a privacy protection solution by utilizing effective encryption methods \cite{mujtaba2019, ryu2011home, ryu2008towards} in three-screen TV solutions \cite{kim2019360, jeong2019towards}. Various client-based real-time GIF generation scenarios for smartphones or set-top boxes can be considered. For example, when its battery is fully charged, an iPhone utilizes its computing resources to analyze the photos/videos from specific dates and generates the so-called ``memories'' video summary. Animated GIFs can be generated similarly using end-user devices. Client-based GIF generation technology is in the early stages of development, and new methods considering different scenarios need to be researched. \section{Conclusions} This paper proposes a new lightweight method for generating artistic media using the computational resources of end-user devices. The proposed method analyzes thumbnails to recognize personalized events and uses the corresponding video segments to generate artistic thumbnails and animated GIFs. This improves computational efficiency and reduces the demand for communication and storage resources on resource-constrained devices. Extensive experimental results based on a set of twenty-three videos show that the proposed approach is 3.99 and 3.57 times faster than the SoA method when using the HCR and LCR devices, respectively. The qualitative evaluation indicated that the proposed method outperformed the existing methods and received higher overall ratings. In the future, the proposed method could be extended to other sports categories by considering various events on resource-constrained devices. \normalem \bibliographystyle{IEEEtran}
\section{Introduction\label{Sec:Intro}} Conventional imaging systems are developed to {\em capture} more data, such as high resolutions and large fields-of-view. However, to save these captured data, image/video compression methods are immediately applied due to the limited memory and bandwidth. This ``capturing images first and processing afterwards'' paradigm cannot meet the unprecedented demands arising from the recent explosive growth of artificial intelligence and robotics. To address these challenges, computational imaging~\cite{Altmanneaat2298,Mait18CI} constructively combines optics, electronics and algorithms for optimized performance~\cite{BradyNature12,Brady18Optica,Ouyang2018DeepLM} or to provide new abilities~\cite{Brady15AOP,Tsai15OL} to imaging systems. Different from conventional imaging, these computational imaging systems usually capture the data in an indirect manner, mostly compressed or coded. \begin{figure*}[!htbp] \begin{center} \includegraphics[width=1\linewidth]{video_color_sci.pdf} \end{center} \vspace{-3mm} \caption{Schematic of a color video SCI system and its snapshot measurement (shown in Bayer RGB mode). A ``RGGB'' Bayer pattern is shown.} \label{fig:video_color_sci} \end{figure*} This paper considers one important branch of computational imaging with promising applications, snapshot compressive imaging (SCI)~\cite{Patrick13OE,Wagadarikar08CASSI}, which utilizes a two-dimensional (2D) camera to capture 3D video or spectral data in a snapshot. Such imaging systems adopt \emph{compressed sampling} on a set of consecutive images--video frames ({\em i.e.}, CACTI~\cite{Patrick13OE,Yuan14CVPR}) or spectral channels ({\em i.e.}, CASSI~\cite{Wagadarikar09CASSI})--in accordance with an encoding procedure and \emph{integrate} these sampled signals along time or spectrum to obtain the final compressed measurements. With this technique, SCI systems can capture high-speed motion~\cite{Hitomi11ICCV,Reddy11CVPR,Yuan16BOE,Deng19_sin} and high-resolution spectral information~\cite{Gehm07,Miao19ICCV,Yuan15JSTSP} with low memory, low bandwidth, low power and potentially low cost. In this work, we focus on video SCI. There are two critical challenges in SCI and other computational imaging systems. The first is the hardware imaging system that captures the compressed measurements, and the second is the reconstruction algorithm that retrieves the desired signal. From the encoder-decoder perspective, we call the imaging system the ``hardware encoder'' and the reconstruction algorithm the ``software decoder''. For the first challenge in video SCI, different hardware encoders have been built, and the underlying principle is to modulate the high-speed scene at a higher frequency than the sampling speed of the camera (Fig.~\ref{fig:video_color_sci}). Various coding strategies have been proposed, such as using a spatial light modulator (SLM), including a digital micromirror device (DMD)~\cite{Hitomi11ICCV,Qiao2020_APLP,Reddy11CVPR,Sun17OE}, or a dynamic mask~\cite{Patrick13OE,Yuan14CVPR}. The DMD patterns change tens of times during one exposure time of the camera to impose the compression; alternatively, a physical mask moves within one exposure time so that different variants of the mask are imposed on the high-speed scene to achieve the high-speed modulation. Regarding the second challenge of the software decoder, various algorithms have been employed and developed for SCI reconstruction.
In addition to the widely used TwIST~\cite{Bioucas-Dias2007TwIST}, the Gaussian mixture model (GMM) methods in~\cite{Yang14GMMonline,Yang14GMM} assume that the pixels within a spatial-temporal patch are drawn from a GMM. GAP-TV~\cite{Yuan16ICIP_GAP} adopts the idea of total variation (TV) minimization under the generalized alternating projection (GAP)~\cite{Liao14GAP} framework. Recently, DeSCI proposed in~\cite{Liu18TPAMI} has led to state-of-the-art results. However, the slow speed of DeSCI precludes its use in real applications, especially for the HD ($1280\times720$), FHD ($1920\times1080$) or UHD ($3840\times1644$ and $3840\times2160$ in Fig.~\ref{fig:comp_largescale}) videos that are now commonly used in our daily life. Recall that DeSCI needs more than one hour to reconstruct a $256\times256\times8$ video from a snapshot measurement. GAP-TV, by contrast, is fast but cannot provide reconstructions of sufficient quality for real applications (in general, this requires a PSNR $>$30dB). An alternative solution is to train an end-to-end network to reconstruct the videos~\cite{Ma19ICCV,Qiao2020_APLP,Li2020ICCP,Cheng20ECCV_BIRNAT} for the SCI system. On one hand, this approach can finish the task within seconds (after training), and with the appropriate usage of multiple GPUs, an end-to-end sampling and reconstruction system can be built. On the other hand, this method loses {\em robustness}: whenever the sensing matrix (encoding process) changes, a new network has to be re-trained. Moreover, it cannot be used in adaptive video sensing~\cite{Yuan13ICIP}. Therefore, it is desirable to devise an {\em efficient} and {\em flexible} algorithm for video SCI reconstruction, especially for large-scale problems. This will pave the way for applying SCI in our daily life. Towards this end, this paper develops plug-and-play (PnP) algorithms for SCI. \subsection{Related Work \label{Sec:Related}} From the hardware side, in addition to capturing high-speed videos, various other SCI systems have been developed to capture 3D multi/hyper-spectral images~\cite{Cao16SPM,Wang18PAMI,Yuan15JSTSP,Meng2020_OL_SHEM,Meng20ECCV_TSAnet}, 4D spectral-temporal~\cite{Tsai15OL}, spatial-temporal~\cite{Qiao2020_CACTI}, depth~\cite{Llull15Optica,Yuan16AO} and polarization~\cite{Tsai15OE} images, etc. These systems share the similar principle of modulating the high-dimensional signals using high-frequency patterns. From the algorithm side, early systems usually employed algorithms developed for inverse problems in other applications, such as compressive sensing~\cite{Candes06ITT,Donoho06ITT}. In general, SCI reconstruction is an ill-posed problem, and diverse priors and regularization methods have been used. Among these priors, TV~\cite{Rudin92_TV} and sparsity~\cite{Patrick13OE} are widely used. Representative algorithms include TwIST~\cite{Bioucas-Dias2007TwIST} and GAP-TV~\cite{Yuan16ICIP_GAP}. Recently developed algorithms specifically for SCI include GMM~\cite{Yang14GMMonline,Yang14GMM} and DeSCI~\cite{Liu18TPAMI}, where the GMM methods use a mixture of Gaussian distributions to model video patches and DeSCI applies weighted nuclear norm minimization~\cite{Gu14CVPR} on video patches within the alternating direction method of multipliers (ADMM)~\cite{Boyd11ADMM} framework. As mentioned before, one main bottleneck of these optimization algorithms is the slow running speed.
Inspired by recent advances of deep learning on image restoration~\cite{zhang2017beyond}, researchers have started using deep learning in computational imaging~\cite{Iliadis18DSPvideoCS,Jin17TIP,Kulkarni2016CVPR,LearningInvert2017,George17lensless,Yuan18OE}. Some networks have been proposed for SCI reconstruction~\cite{Ma19ICCV,Miao19ICCV,Qiao2020_APLP,Yoshida18ECCV,Li2020ICCP}. After training, these algorithms can provide results instantaneously and thus can lead to end-to-end systems~\cite{Meng20ECCV_TSAnet,Qiao2020_CACTI} for SCI. However, these end-to-end deep learning methods rely heavily on the training data and, furthermore, are not flexible. Specifically, a network trained for a specific SCI system cannot be used in other SCI systems with different modulation patterns or different compression rates. In summary, optimization methods are slow, while deep learning algorithms are not flexible. To cope with these issues, researchers have most recently started to integrate the advantages of both by applying deep denoisers within the PnP framework~\cite{Venkatakrishnan_13PnP,Sreehari16PnP,Chan2017PlugandPlayAF,Ryu2019PlugandPlayMP}. Though PnP dates back to 2013~\cite{Venkatakrishnan_13PnP}, it has become powerful in real inverse problems owing to the usage of advanced deep denoising networks~\cite{Zhang17SPM_deepdenoise,Zhang18TIP_FFDNet}. Recently, great successes have been achieved by PnP in other applications. Bearing these concerns in mind, in this work, we integrate various denoisers into the PnP framework for SCI reconstruction. Our PnP algorithms can not only provide excellent results but are also robust to different coding processes and thus can be used in adaptive sensing and large-scale problems~\cite{Yuan20CVPR}. \subsection{Contributions of This Work} Generally speaking, reconstruction of SCI aims to solve the trilemma of speed, accuracy and flexibility. To address this challenge, our preliminary work~\cite{Yuan20CVPR} applied a {\em frame-wise image denoiser} within the PnP framework to achieve excellent results in video SCI. Specifically, we made the following contributions in~\cite{Yuan20CVPR}. \begin{itemize} \item[1)] Inspired by the plug-and-play ADMM~\cite{Chan2017PlugandPlayAF} framework, we extend it to SCI and show that PnP-ADMM converges to a fixed point by considering the hardware constraints and the special structure of the sensing matrix in SCI~\cite{Jalali19TIT_SCI}. \item[2)] We propose an efficient PnP-GAP algorithm by plugging various denoisers (Fig.~\ref{fig:demo}) into the generalized alternating projection~\cite{Liao14GAP,Yuan16ICIP_GAP} framework, which has a lower computational workload than PnP-ADMM. We prove that, under proper assumptions, the solution of PnP-GAP also converges to a fixed point. \item[3)] By employing the deep image denoiser FFDNet~\cite{Zhang18TIP_FFDNet} in PnP-GAP, we show that an FHD color video (1920$\times$1080$\times$3$\times$30, with 3 denoting the RGB channels and 30 the frame number) can be recovered from a snapshot measurement (Fig.~\ref{fig:comp_largescale}) efficiently, with a PSNR close to 30dB. Compared with an end-to-end network~\cite{Qiao2020_APLP}, dramatic amounts of resources are saved since no re-training is required. This further makes UHD compression using SCI feasible (a 3840$\times$1644$\times$3$\times$40 video is reconstructed with a PSNR above 30dB in Fig.~\ref{fig:comp_largescale}).
\item[4)] We apply our developed PnP algorithms to extensive simulation and real datasets (captured by real SCI cameras) to verify the efficiency and robustness of the proposed algorithms. We show that the proposed algorithms can obtain results on par with DeSCI but with a significant reduction of computational time. \end{itemize} Since videos are image sequences and are highly correlated, it is expected that a {\em video denoiser} will boost the results of video SCI reconstruction. Moreover, color image/video denoising algorithms have not been fully exploited for color-video SCI. In particular, we make the following additional contributions in this paper. \begin{itemize} \item[5)] In addition to PnP-FFDNet~\cite{Yuan20CVPR}, which integrates the image denoiser FFDNet~\cite{Zhang18TIP_FFDNet} into PnP-GAP, we further integrate the most recent video denoiser, FastDVDnet~\cite{Tassano_2020_CVPR}, into PnP-GAP to achieve better results than those reported in \cite{Yuan20CVPR} (Fig.~\ref{fig:demo}). \item[6)] We propose joint reconstruction and demosaicing for color SCI video reconstruction. In color SCI, since each pixel captures only one of the red (R), green (G) or blue (B) channels, previous methods reconstruct each channel (as grayscale images/videos) separately and then use off-the-shelf demosaicing methods to get the final color video. To overcome the limitations of these two steps, we jointly reconstruct and demosaic the color video in one shot, achieving better results. We also build a mid-scale RGB dataset as benchmark data for the color video SCI problem. \item[7)] We verify the proposed PnP-FastDVDnet on measurements captured by the newly built SCI camera in~\cite{Qiao2020_APLP} at compressive sampling rates varying from 10 to 50. This clearly demonstrates the feasibility and flexibility of the proposed algorithm on large-scale data. Furthermore, this verifies that a video SCI system can capture high-speed videos at 2500 frames per second (fps) using a camera working at 50 fps. \end{itemize} \begin{figure}[!htbp] \begin{center} \includegraphics[width=1\linewidth]{Figures/fig01_tradeoff.pdf} \end{center} \vspace{-4mm} \caption{Trade-off between quality and speed of various plug-and-play denoising algorithms for SCI reconstruction. {The average PSNR on the six grayscale datasets~\cite{Yuan20CVPR} is shown.}} \label{fig:demo} \end{figure} \subsection{Organization of This Paper} The rest of this paper is organized as follows. Sec.~\ref{Sec:SCImodel} introduces the mathematical models of both grayscale and color SCI. Sec.~\ref{Sec:PnP_ADMM} develops PnP-ADMM under the SCI hardware constraints and shows that PnP-ADMM converges to a fixed point. Sec.~\ref{Sec:PnP_GAP} proposes the PnP-GAP algorithm and proves its convergence\footnote{We observed an error in the proof of the global convergence of PnP-GAP in~\cite{Yuan20CVPR}. Specifically, the lower bound of the second term in Eq. (25) in~\cite{Yuan20CVPR} should be 0. Therefore, the global convergence of PnP-GAP does not hold anymore. Instead, we provide another convergence proof of PnP-GAP in this paper.}. Sec.~\ref{Sec:P3} integrates various denoisers into the PnP framework for SCI reconstruction and develops the joint demosaicing and reconstruction for color SCI. Extensive results on both simulation (grayscale benchmark, mid-scale color and large-scale color) and real data are presented in Sec.~\ref{Sec:results} and Sec.~\ref{Sec:realdata}, respectively. Sec.~\ref{Sec:Con} concludes the paper.
\begin{figure*}[!htbp] \begin{center} \includegraphics[width=1\linewidth]{Figures/pnpsci_joint_recon_flowchart.pdf} \end{center} \vspace{-2mm} \caption{Reconstruction of color SCI using mosaic sensor measurements. (a) Color SCI reconstruction by independently reconstructing the RGGB channels using grayscale {\em image} denoising and then performing demosaicing (we proposed this in~\cite{Yuan20CVPR}). The raw measurement (and the mask) is divided into four color channels, R (red), G1 (green), G2 (green) and B (blue), and these channels are reconstructed separately using PnP-GAP with FFDNet. Then these channels are interleaved and demosaiced to obtain the final color video. (b) Proposed (in this paper) joint reconstruction and demosaicing for color SCI. The raw measurement (and the mask) is fed to the proposed PnP framework using GAP/ADMM with {\em color denoising} by FFDNet or FastDVDnet to output the desired color video directly. Note that the demosaicing and {\em color video denoising} are embedded in each iteration.} \label{fig:Bayer_sci} \end{figure*} \section{Mathematical Model of SCI~\label{Sec:SCImodel}} As depicted in Fig.~\ref{fig:video_color_sci}, in a video SCI system, {\em e.g.}, CACTI~\cite{Patrick13OE}, consider that a $B$-frame (grayscale) video ${\boldsymbol X} \in \mathbb{R}^{n_x \times n_y \times B}$ is modulated and compressed by $B$ sensing matrices (masks) ${\boldsymbol C}\in \mathbb{R}^{n_x \times n_y \times B}$; the measurement frame $\Ymat \in \mathbb{R}^{n_x\times n_y} $ can then be expressed as~\cite{Patrick13OE,Yuan14CVPR} \begin{equation}\label{Eq:System} \Ymat = \sum_{b=1}^B {\boldsymbol C}_b\odot {\boldsymbol X}_b + {\boldsymbol Z}, \end{equation} where ${\boldsymbol Z} \in \mathbb{R}^{n_x \times n_y }$ denotes the noise; ${\boldsymbol C}_b = {\boldsymbol C}(:,:,b)$ and ${\boldsymbol X}_b = {\boldsymbol X}(:,:,b) \in \mathbb{R}^{n_x \times n_y}$ represent the $b$-th sensing matrix (mask) and the corresponding video frame, respectively, and $\odot$ denotes the Hadamard (element-wise) product. Mathematically, the measurement in \eqref{Eq:System} can be expressed by \begin{equation}\label{Eq:ghf} \boldsymbol{y} = \Hmat \boldsymbol{x} + \boldsymbol{z}, \end{equation} where $\boldsymbol{y} = \text{Vec}(\Ymat) \in \mathbb{R}^{n_x n_y}$ and $\boldsymbol{z}= \text{Vec}({\boldsymbol Z}) \in \mathbb{R}^{n_x n_y}$ with $\text{Vec}(\cdot)$ vectorizing the ensued matrix by stacking columns. Correspondingly, the video signal $\boldsymbol{x} \in \mathbb{R}^{n_x n_y B}$ is \begin{equation} \boldsymbol{x} = \text{Vec}({\boldsymbol X}) = [\text{Vec}({\boldsymbol X}_1)^{\mathsf{T}},..., \text{Vec}({\boldsymbol X}_B)^{\mathsf{T}}]^{\mathsf{T}}. \end{equation} Unlike the global transformation based compressive sensing~\cite{Candes05compressed,donoho2006compressed}, the sensing matrix $\Hmat \in \mathbb{R}^{n_x n_y \times n_x n_y B}$ in video SCI is sparse and is constituted by a concatenation of diagonal matrices \begin{equation}\label{Eq:Hmat_strucutre} \Hmat = [{\boldsymbol D}_1,...,{\boldsymbol D}_B], \end{equation} where ${\boldsymbol D}_b = \text{diag}(\text{Vec}({\boldsymbol C}_b)) \in {\mathbb R}^{n \times n}$ with $n = n_x n_y$, for $b =1,\dots,B$. Consequently, the {\em sampling rate} here is equal to $1/B$. It has been proved recently in~\cite{Jalali18ISIT,Jalali19TIT_SCI} that the reconstruction error of SCI is bounded even when $B>1$.
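To make the forward model concrete, the following NumPy sketch (purely illustrative; the variable names, mask statistics and sizes are our assumptions, not part of any released code) simulates the mask-and-sum measurement in \eqref{Eq:System} and verifies that it coincides with the vectorized form in \eqref{Eq:ghf} without ever forming $\Hmat$ explicitly.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the video SCI forward model (noise omitted).
n_x, n_y, B = 256, 256, 8
n = n_x * n_y
rng = np.random.default_rng(0)
X = rng.random((n_x, n_y, B))                        # B frames, values in [0, 1]
C = (rng.random((n_x, n_y, B)) > 0.5).astype(float)  # binary masks, p_1 = 0.5

# Snapshot measurement: modulate each frame by its mask, then sum over time.
Y = np.sum(C * X, axis=2)

def H_times(x):
    # H = [D_1, ..., D_B] with D_b = diag(Vec(C_b)), so H x reduces to
    # sum_b Vec(C_b) * Vec(X_b); H is never built explicitly.
    return sum(C[:, :, b].ravel(order='F') * x[b * n:(b + 1) * n]
               for b in range(B))

x = np.concatenate([X[:, :, b].ravel(order='F') for b in range(B)])
assert np.allclose(H_times(x), Y.ravel(order='F'))

# H H^T is diagonal with entries R_j = sum_b Vec(C_b)_j^2, a structure
# exploited by the GAP projection step later in the paper.
R = np.sum(C**2, axis=2).ravel(order='F')
\end{verbatim}
Note that the memory footprint stays at the size of the masks themselves; the $n_x n_y \times n_x n_y B$ matrix $\Hmat$ never needs to be stored.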
In the color video case, as shown {in Figs.~\ref{fig:Bayer_sci}, \ref{fig:comp_frames_midscale}, \ref{fig:comp_largescale} and \ref{fig:real_color_hammer}}, the raw data captured by the widely used Bayer pattern sensors have ``RGGB'' channels. Since the mask is imposed on each pixel, the generated measurement can be treated as a grayscale image as in Fig.~\ref{fig:real_chopperwheel}; when it is shown in color, the demosaicing procedure cannot generate the right colors due to the mask modulation (Fig.~\ref{fig:Bayer_sci}). In previous papers, during reconstruction, we first recovered each of these four channels independently and then performed demosaicing on the reconstructed videos (upper part of Fig.~\ref{fig:Bayer_sci}). The final demosaiced RGB video is the desired signal~\cite{Yuan14CVPR,Yuan20CVPR}. In this case, the raw measurement is decoupled into four components $\{\Ymat^{(r)},\Ymat^{(g_1)},\Ymat^{(g_2)},\Ymat^{(b)}\} \in {\mathbb R}^{\frac{n_x}{2}\times \frac{n_y}{2}}$. Similarly, the corresponding masks and videos are denoted by $\{{\boldsymbol C}^{(r)},{\boldsymbol C}^{(g_1)},{\boldsymbol C}^{(g_2)},{\boldsymbol C}^{(b)}\} \in {\mathbb R}^{\frac{n_x}{2}\times \frac{n_y}{2}\times B}$ and $\{{\boldsymbol X}^{(r)},{\boldsymbol X}^{(g_1)},{\boldsymbol X}^{(g_2)},{\boldsymbol X}^{(b)}\} \in {\mathbb R}^{\frac{n_x}{2}\times \frac{n_y}{2}\times B}$, respectively. The forward model for each channel is now \begin{eqnarray} \Ymat^{(r)} &=& \sum_{b=1}^B {\boldsymbol C}^{(r)}_b\odot {\boldsymbol X}^{(r)}_b + {\boldsymbol Z}^{(r)}, \\ \Ymat^{(g_1)} &=& \sum_{b=1}^B {\boldsymbol C}^{(g_1)}_b\odot {\boldsymbol X}^{(g_1)}_b + {\boldsymbol Z}^{(g_1)}, \\ \Ymat^{(g_2)} &=& \sum_{b=1}^B {\boldsymbol C}^{(g_2)}_b\odot {\boldsymbol X}^{(g_2)}_b + {\boldsymbol Z}^{(g_2)}, \\ \Ymat^{(b)} &=& \sum_{b=1}^B {\boldsymbol C}^{(b)}_b\odot {\boldsymbol X}^{(b)}_b + {\boldsymbol Z}^{(b)}. \end{eqnarray} In this color case, the desired signal is ${\boldsymbol X}^{(rgb)}\in \mathbb {R}^{n_x\times n_y\times 3\times B}$, where $3$ denotes the R, G and B channels of the color video. The demosaicing is basically an interpolation process from ${\boldsymbol X}^{(r)}$ to $\tilde{{\boldsymbol X}}^{(r)} \in \mathbb {R}^{n_x\times n_y\times B}$, from $\{{\boldsymbol X}^{(g_1)},{\boldsymbol X}^{(g_2)}\}$ to $\tilde{{\boldsymbol X}}^{(g)} \in \mathbb {R}^{n_x\times n_y\times B}$ and from ${\boldsymbol X}^{(b)}$ to $\tilde{{\boldsymbol X}}^{(b)} \in \mathbb {R}^{n_x\times n_y\times B}$. Note that the interpolation rate for the red and blue channels is from 1 pixel to 4 pixels, whereas for the green channel it is from 2 pixels to 4 pixels. Utilizing the vectorized formulation, let $\{\tilde{\boldsymbol{x}}^{(r)},\tilde{\boldsymbol{x}}^{(g)},\tilde{\boldsymbol{x}}^{(b)}\}\in {\mathbb R}^{n_xn_yB}$ denote the vectorized representations of $\{\tilde{{\boldsymbol X}}^{(r)},\tilde{{\boldsymbol X}}^{(g)},\tilde{{\boldsymbol X}}^{(b)}\}$, $\{{\boldsymbol{y}}^{(r)},{\boldsymbol{y}}^{(b)}\}\in {\mathbb R}^{\frac{n_xn_y}{4}}$ denote the vectorized representations of $\{\Ymat^{(r)},{\Ymat}^{(b)}\}$, and $\boldsymbol{y}^{(g)} = \left[\begin{array}{c}\boldsymbol{y}^{(g_1)}\\ \boldsymbol{y}^{(g_2)}\end{array}\right]\in {\mathbb R}^{\frac{n_xn_y}{2}} $ denote the concatenated vector representation of $\{\Ymat^{(g_1)} ,\Ymat^{(g_2)} \}$. Similar notations are also used for the noise terms.
We arrive at \begin{eqnarray} \boldsymbol{y}^{(r)}&=& \Hmat^{(r)} \tilde{\boldsymbol{x}}^{(r)} + \boldsymbol{z}^{(r)},\\ \boldsymbol{y}^{(g)}&=& \Hmat^{(g)} \tilde{\boldsymbol{x}}^{(g)} + \boldsymbol{z}^{(g)},\\ \boldsymbol{y}^{(b)}&=& \Hmat^{(b)} \tilde{\boldsymbol{x}}^{(b)}+ \boldsymbol{z}^{(b)}, \end{eqnarray} where $\{\Hmat^{(r)},\Hmat^{(b)}\} \in {\mathbb R}^{\frac{n_xn_y}{4} \times n_xn_yB}$ and $\Hmat^{(g)}\in {\mathbb R}^{\frac{n_xn_y}{2} \times n_xn_yB}$. The structures of $\{\Hmat^{(r)},\Hmat^{(g)},\Hmat^{(b)}\}$ are similar to \eqref{Eq:Hmat_strucutre} for grayscale video but include the down-sampling (mosaic) process, which decimates pixels in an interleaving way following the mosaic pattern of the sensor. Following this, let the captured mosaic compressed measurement and the desired color video be $\boldsymbol{y} \in{\mathbb R}^{n_xn_y}$ and $\boldsymbol{x} \in{\mathbb R}^{3n_xn_yB}$, respectively. We have \begin{eqnarray} \boldsymbol{y} = \left[\begin{array}{l} \boldsymbol{y}^{(r)} \\ \boldsymbol{y}^{(g)}\\ \boldsymbol{y}^{(b)} \end{array}\right], \quad \boldsymbol{x} = \left[\begin{array}{l} \tilde{\boldsymbol{x}}^{(r)} \\ \tilde{\boldsymbol{x}}^{(g)}\\ \tilde{\boldsymbol{x}}^{(b)} \end{array}\right], \end{eqnarray} and the full forward model of color-video SCI is \begin{eqnarray} \label{eq:forward_color} \boldsymbol{y} = \underbrace{\left[\begin{array}{ccc} \Hmat^{(r)} & {\bf 0} & {\bf 0} \\ {\bf 0}& \Hmat^{(g)} &{\bf 0} \\ {\bf 0}& {\bf 0}& \Hmat^{(b)} \end{array}\right]}_{\Hmat} \boldsymbol{x} + \boldsymbol{z}. \end{eqnarray} This formulation, along with the grayscale one, is the unique forward model of video SCI; apparently, the color one is more challenging. As depicted in the top part of Fig.~\ref{fig:Bayer_sci}, previous studies usually first reconstruct the four Bayer channels of the video independently and then employ an off-the-shelf demosaicing algorithm to get the desired color videos~\cite{Liu18TPAMI,Yuan20CVPR}. However, the final performance of the reconstructed video will be limited by both steps (channel-wise reconstruction and demosaicing). In this paper, we derive a joint reconstruction and demosaicing framework for color video SCI (lower part of Fig.~\ref{fig:Bayer_sci}) directly based on the model derived in \eqref{eq:forward_color}. More importantly, it is a unified PnP framework, where different demosaicing and color denoising algorithms can be used. Please refer to Sec.~\ref{Sec:jointcsci} for details. \section{Plug-and-Play ADMM for SCI~\label{Sec:PnP_ADMM}} The inversion problem of SCI can be modeled as \begin{equation} {\hat \boldsymbol{x}} = \arg\!\min_{\boldsymbol{x}} f(\boldsymbol{x}) + \lambda g(\boldsymbol{x}), \label{Eq:uncontr} \end{equation} where $f(\boldsymbol{x})$ can be seen as the forward imaging model, {\em i.e.}, $\|\boldsymbol{y}-\Hmat\boldsymbol{x}\|_2^2$, and $g(\boldsymbol{x})$ is the prior being used. This prior usually plays the role of a regularizer. While diverse priors have been used in SCI, such as TV, sparsity and low-rank, in this work we focus on the deep denoising prior, which has recently shown superiority on various image restoration tasks. Note that, since SCI systems aim to reconstruct high-speed video, a deep video denoising prior is desired~\cite{Tassano_19ICIP_DVDnet,Tassano_2020_CVPR}. On the other hand, since videos are essentially sequences of consecutive images, recently advanced deep denoising priors for images can also be used~\cite{Zhang17SPM_deepdenoise,Zhang18TIP_FFDNet}.
It has been shown in our preliminary paper~\cite{Yuan20CVPR} that an efficient image denoising prior can lead to good results for SCI. However, this frame-wise image denoising prior~\cite{Zhang18TIP_FFDNet} limits the performance of video denoising since it ignores the strong temporal correlation between neighbouring frames. In this work, we employ the most recent video denoiser, FastDVDnet~\cite{Tassano_2020_CVPR}, as the denoising prior in our PnP framework, and it leads to better results than those reported in~\cite{Yuan20CVPR}. \subsection{Review of Plug-and-Play ADMM} Using ADMM~\cite{Boyd11ADMM}, by introducing an auxiliary variable $\boldsymbol{v}$, the unconstrained optimization in Eq.~\eqref{Eq:uncontr} can be converted into \begin{equation} \label{Eq:ADMM_xv} ({\hat \boldsymbol{x}}, {\hat \boldsymbol{v}}) = \arg\!\min_{\boldsymbol{x},\boldsymbol{v}} f(\boldsymbol{x}) + \lambda g(\boldsymbol{v}), {\text{ subject to }} \boldsymbol{x} = \boldsymbol{v}. \end{equation} This minimization can be solved by the following sequence of sub-problems~\cite{Chan2017PlugandPlayAF} \begin{align} \boldsymbol{x}^{(k+1)} &= \arg\!\min_{\boldsymbol{x}} f(\boldsymbol{x}) + \frac{\rho}{2} \|\boldsymbol{x} - (\boldsymbol{v}^{(k)}-\frac{1}{\rho} \uv^{(k)})\|_2^2, \label{Eq:solvex}\\ \boldsymbol{v}^{(k+1)} &= \arg\!\min_{\boldsymbol{v}} \lambda g(\boldsymbol{v}) + \frac{\rho}{2}\|\boldsymbol{v} - (\boldsymbol{x}^{(k+1)}+\frac{1}{\rho} \uv^{(k)})\|_2^2, \label{Eq:solvev}\\ \uv^{(k+1)} &= \uv^{(k)} + \rho (\boldsymbol{x}^{(k+1)} - \boldsymbol{v}^{(k+1)}), \label{Eq:u_k+1} \end{align} where the superscript $^{(k)}$ denotes the iteration number. In SCI and other inversion problems, $f(\boldsymbol{x})$ is usually of a quadratic form and there are various solutions to Eq.~\eqref{Eq:solvex}. In PnP-ADMM, the solution of Eq.~\eqref{Eq:solvev} is replaced by an {\em off-the-shelf} denoising algorithm, to yield \begin{equation} { \boldsymbol{v}^{(k+1)} = {\cal D}_{\sigma_k} (\boldsymbol{x}^{(k+1)}+\frac{1}{\rho} \uv^{(k)})}, \end{equation} where ${\cal D}_{\sigma_k}$ denotes the denoiser being used, with $\sigma_k$ the standard deviation of the assumed additive white Gaussian noise in the $k$-th iteration. In~\cite{Chan2017PlugandPlayAF}, the authors proposed to update $\rho$ in each iteration by $\rho_{k+1} = \gamma_k \rho_k$ with $\gamma_k \ge 1$ and to set $\sigma_k = \sqrt{\lambda/\rho_k}$ for the denoiser. This essentially imposes the {\em non-increasing denoiser} in Assumption~\ref{Ass:non_in} defined in Sec.~\ref{Sec:Conv_gap}. Chan {\em et al.}~\cite{Chan2017PlugandPlayAF} defined the {\em bounded denoiser} and proved the {\em fixed-point} convergence of PnP-ADMM. \begin{definition} (Bounded Denoiser~\cite{Chan2017PlugandPlayAF}): A bounded denoiser with a parameter $\sigma$ is a function ${\cal D}_{\sigma}: {\mathbb R}^n \rightarrow {\mathbb R}^n$ such that for any input $\boldsymbol{x}\in {\mathbb R}^{n}$, \begin{equation} \frac{1}{n}\|{\cal D}_{\sigma}(\boldsymbol{x}) - \boldsymbol{x}\|_2^2 \le \sigma^2 C, \end{equation} for some universal constant $C$ independent of $n$ and $\sigma$.
\label{Definition1} \end{definition} With this definition (a constraint on the denoiser) and the assumption that $f:[0,1]^n \rightarrow {\mathbb R}$ has bounded gradients, {\em i.e.}, for any $\boldsymbol{x} \in [0,1]^n$ there exists $L < \infty$ such that $\|\nabla f(\boldsymbol{x})\|_2/\sqrt{n} \le L$, the authors of~\cite{Chan2017PlugandPlayAF} proved that the iterates of PnP-ADMM demonstrate a fixed-point convergence. That is, there exists $(\boldsymbol{x}^*, \boldsymbol{v}^*, \uv^*)$ such that $\|\boldsymbol{x}^{(k)} - \boldsymbol{x}^*\|_2 \rightarrow 0$, $\|\boldsymbol{v}^{(k)} - \boldsymbol{v}^*\|_2 \rightarrow 0$, and $\|\uv^{(k)} - \uv^*\|_2 \rightarrow 0$ as $ k\rightarrow \infty$. \subsection{PnP-ADMM for SCI} In the following derivation, we focus on the grayscale case; it readily extends to the color SCI case. In SCI, with the model stated in Eq.~\eqref{Eq:ghf} and $\boldsymbol{x} \in {\mathbb R}^{nB}$, we consider the loss function $f(\boldsymbol{x})$ as \begin{equation} f(\boldsymbol{x}) = \frac{1}{2}\|\boldsymbol{y} - \Hmat \boldsymbol{x}\|_2^2. \end{equation} We assume all pixel values are normalized into $[0,1]$. \begin{lemma} In SCI, the function $f(\boldsymbol{x}) = \frac{1}{2}\|\boldsymbol{y}-\Hmat\boldsymbol{x}\|_2^2$ has bounded gradients, {\em i.e.}, $\|\nabla f(\boldsymbol{x})\|_2\leq B \|\boldsymbol{x}\|_2$. \label{Lemma:fx_grad} \end{lemma} \begin{proof} The gradient of $f(\boldsymbol{x})$ in SCI is \begin{equation} \nabla f(\boldsymbol{x}) = \Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}-\Hmat^{\mathsf{T}}\boldsymbol{y}, \end{equation} where $\Hmat$ is a concatenation of diagonal matrices of size $n\times nB$ as shown in Eq.~\eqref{Eq:Hmat_strucutre}. \begin{list}{\labelitemi}{\leftmargin=12pt \topsep=0pt \parsep=0pt} \item The term $\Hmat^{\mathsf{T}}\boldsymbol{y}$ is a non-negative constant vector since both the measurement $\boldsymbol{y}$ and the mask are non-negative in nature. \item Now let us focus on $\Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}$. Since \begin{align} \label{eq_sesci_PTP} \Hmat^{\mathsf{T}}\Hmat&= \left[ \begin{matrix} {\boldsymbol D}_1 \\ \vdots \\ {\boldsymbol D}_B \end{matrix} \right] \left[ \begin{matrix} {\boldsymbol D}_1 \cdots {\boldsymbol D}_B \end{matrix} \right]\\ & = \left[ \begin{matrix} {\boldsymbol D}_1^2& {\boldsymbol D}_1{\boldsymbol D}_2 & \cdots & {\boldsymbol D}_1{\boldsymbol D}_B\\ {\boldsymbol D}_1 {\boldsymbol D}_2& {\boldsymbol D}^2_2 & \cdots & {\boldsymbol D}_2{\boldsymbol D}_B\\ \vdots & \vdots & \ddots & \vdots\\ {\boldsymbol D}_1 {\boldsymbol D}_B& {\boldsymbol D}_2{\boldsymbol D}_B & \cdots & {\boldsymbol D}^2_B \end{matrix} \right], \end{align} \end{list} due to this special structure, $\Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}$ is a weighted sum of $\boldsymbol{x}$ and $\|\Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}\|_2\leq B C_{\rm max}\|\boldsymbol{x}\|_2$, where $C_{\rm max}$ is the maximum value in the sensing matrix. Usually, the sensing matrix is normalized to $[0,1]$, which leads to $C_{\rm max}=1$ and therefore $\|\Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}\|_2\leq B \|\boldsymbol{x}\|_2$. Thus, $\nabla f(\boldsymbol{x})$ is bounded.
Furthermore, \begin{itemize} \item If the mask element $D_{i,j}$ is drawn from a binary distribution with entries \{0,1\}, with probability $p_1 \in (0,1)$ of being 1, then \begin{eqnarray} \|\Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}\|_2\leq p_1 B \|\boldsymbol{x}\|_2 \end{eqnarray} with {\em high probability}; usually, $p_1 = 0.5$ and thus $\|\Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}\|_2\leq 0.5 B \|\boldsymbol{x}\|_2$. \item If the mask element $D_{i,j}$ is drawn from a Gaussian distribution ${\cal N}(0, \sigma^2)$ as in~\cite{Jalali18ISIT,Jalali19TIT_SCI}, though it is not practical to obtain negative modulation (values of $D_{i,j}$) in hardware, then \begin{eqnarray} \|\Hmat^{\mathsf{T}}\Hmat\boldsymbol{x}\|_2\leq B\sigma^2 \|\boldsymbol{x}\|_2\stackrel{\sigma = 1}{=} B\|\boldsymbol{x}\|_2 \end{eqnarray} with {\em high probability}, where the concentration of measure is used. \end{itemize} \end{proof} % Lemma~\ref{Lemma:fx_grad}, along with the bounded denoiser in Definition~\ref{Definition1}, gives us the following corollary. \begin{corollary} \label{Coro1} Consider the sensing model of SCI in \eqref{Eq:ghf}. Given $\{\Hmat,\boldsymbol{y}\}$, if ${\boldsymbol{x}}$ is solved iteratively via PnP-ADMM with a bounded denoiser, then $(\boldsymbol{x}^{(k)}, \boldsymbol{v}^{(k)}, \uv^{(k)})$ will converge to a fixed point. \end{corollary} \begin{proof} The proof follows \cite{Chan2017PlugandPlayAF} and is thus omitted here. \end{proof} \section{Plug-and-Play GAP for SCI \label{Sec:PnP_GAP}} In this section, following the generalized alternating projection (GAP) algorithm~\cite{Liao14GAP} and the above conditions on PnP-ADMM, we propose PnP-GAP for SCI, which has a lower computational workload (and is thus faster) than PnP-ADMM. \subsection{Algorithm} Different from ADMM in Eq.~\eqref{Eq:ADMM_xv}, GAP solves SCI via the following problem \begin{equation} \label{Eq:GAP_xv} ({\hat \boldsymbol{x}}, {\hat \boldsymbol{v}}) = \arg\!\min_{\boldsymbol{x},\boldsymbol{v}} \frac{1}{2}\|\boldsymbol{x} - \boldsymbol{v}\|_2^2 + \lambda g(\boldsymbol{v}), ~{\text{s.t.}}~~ \boldsymbol{y} = \Hmat\boldsymbol{x}. \end{equation} Similarly to ADMM, the minimizer in Eq.~\eqref{Eq:GAP_xv} is solved by a sequence of subproblems, and we again let $k$ denote the iteration number. \begin{list}{\labelitemi}{\leftmargin=10pt \topsep=0pt \parsep=0pt} \item Solving $\boldsymbol{x}$: given $\boldsymbol{v}$, $\boldsymbol{x}^{(k+1)}$ is updated via an Euclidean projection of $\boldsymbol{v}^{(k)}$ on the linear manifold ${\cal M}: \boldsymbol{y} = \Hmat \boldsymbol{x}$, \begin{equation} \boldsymbol{x}^{(k+1)} = \boldsymbol{v}^{(k)} + \Hmat^{\mathsf{T}} (\Hmat \Hmat^{\mathsf{T}})^{-1} (\boldsymbol{y} - \Hmat \boldsymbol{v}^{(k)}). \label{Eq:x_k+1} \end{equation} Recalling~\eqref{Eq:Hmat_strucutre}, each ${\boldsymbol D}_i$ in $\{{\boldsymbol D}_i\}_{i=1}^B$ is a diagonal matrix \begin{equation} {\boldsymbol D}_i = {\rm diag} (D_{i,1}, \dots, D_{i,n}). \nonumber \end{equation} Thereby, $\Hmat\Hmat^{\mathsf{T}}$ is a diagonal matrix, {\em i.e.}, \begin{equation} {\Rmat = \Hmat\Hmat^{\mathsf{T}} = {\rm diag}(R_1, \dots, R_n)},\label{eq:R} \end{equation} where $ R_{j} = \sum_{b=1}^{B} D^2_{b,j}, \forall j = 1,\dots,n$. Eq.~\eqref{Eq:x_k+1} can thus be solved efficiently. \item Solving $\boldsymbol{v}$: given $\boldsymbol{x}$, updating $\boldsymbol{v}$ can be seen as a denoising problem, \begin{equation} { \boldsymbol{v}^{(k+1)} = {\cal D}_{\sigma}(\boldsymbol{x}^{(k+1)}).} \label{Eq:Denoise_GAP} \end{equation} Here, various denoisers can be used, with $\sigma = \sqrt{\lambda}$.
\end{list} \begin{algorithm}[!htbp] \caption{Plug-and-Play GAP} \begin{algorithmic}[1] \REQUIRE $\Hmat$, $\boldsymbol{y}$. \STATE Initialize $\boldsymbol{v}^{(0)}$, $\lambda_0$, $\xi<1$, $\eta\in[0,1)$. \WHILE{Not Converged} \STATE Update $\boldsymbol{x}$ by Eq.~\eqref{Eq:x_k+1}. \STATE Update $\boldsymbol{v}$ by the denoiser $\boldsymbol{v}^{(k+1)} = {\cal D}_{\sigma_k}(\boldsymbol{x}^{(k+1)})$. \IF {$\Delta_{k+1}\ge \eta \Delta_k$} \STATE {$\lambda_{k+1} = \xi \lambda_k$} \ELSE \STATE {$\lambda_{k+1} = \lambda_k$} \ENDIF \ENDWHILE \end{algorithmic} \label{algo:PP_GAP} \end{algorithm} We can see that in each iteration, the only parameter to be tuned is $\lambda$, and we thus set $\lambda_{k+1} = \xi_k \lambda_k$ with $\xi_k\le 1$. Inspired by PnP-ADMM, we update $\lambda$ by one of the following two rules: \begin{list}{\labelitemi}{\leftmargin=12pt \topsep=0pt \parsep=0pt} \item [a)] Monotone update by setting $\lambda_{k+1} = \xi \lambda_k$, with $\xi<1$. \item [b)] Adaptive update by considering the relative residue: \begin{eqnarray} {\textstyle \Delta_{k+1} = \frac{1}{\sqrt{nB}}\left(\|\boldsymbol{x}^{(k+1)} - \boldsymbol{x}^{(k)}\|_2 + \|\boldsymbol{v}^{(k+1)} - \boldsymbol{v}^{(k)}\|_2\right)}.\nonumber \label{eq:Delta} \end{eqnarray} For any $\eta \in [0,1)$ and a constant $\xi<1$, $\lambda_k$ is conditionally updated according to the following rules: \begin{list}{\labelitemi}{\leftmargin=14pt \topsep=0pt \parsep=0pt} \item [i)] If $\Delta_{k+1}\ge \eta \Delta_k$, then $\lambda_{k+1} = \xi \lambda_k$. \item [ii)] If $\Delta_{k+1}< \eta \Delta_k$, then $\lambda_{k+1} = \lambda_k$. \end{list} \end{list} With this adaptive updating of $\lambda_k$, the full PnP-GAP algorithm for SCI is summarized in Algorithm~\ref{algo:PP_GAP}. \subsection{Convergence \label{Sec:Conv_gap}} \begin{assumption}\label{Ass:non_in} (Non-increasing denoiser) The denoiser in each iteration of PnP-GAP, ${\cal D}_{\sigma_{k}}: {\mathbb R}^{nB} \rightarrow {\mathbb R}^{nB}$, performs denoising with a non-increasing noise level, {\em i.e.}, $\sigma_{k+1}\le \sigma_k$. Further, $\sigma_k \rightarrow 0$ as $k\rightarrow+\infty$. \end{assumption} This assumption makes sense: as the algorithm proceeds, we expect its estimate of the underlying signal to become more accurate, which means that the denoiser needs to deal with a less noisy signal. This is also guaranteed by the $\lambda$ setting in Algorithm~\ref{algo:PP_GAP} and imposed by the $\rho$ setting in PnP-ADMM~\cite{Chan2017PlugandPlayAF}. With this assumption, we have the following convergence result for PnP-GAP. \begin{theorem} \label{The:GAP_SCI_bound} Consider the sensing model of SCI. Given $\{\Hmat,\boldsymbol{y}\}$, if ${\boldsymbol{x}}$ is solved by PnP-GAP with a bounded denoiser applied in a non-increasing (noise-level) order, then $\boldsymbol{x}^{(k)}$ converges. \end{theorem} \begin{proof} From \eqref{Eq:x_k+1}, $\boldsymbol{x}^{(k+1)} = \boldsymbol{v}^{(k)} + \Hmat^{\mathsf{T}} (\Hmat \Hmat^{\mathsf{T}})^{-1} (\boldsymbol{y} - \Hmat \boldsymbol{v}^{(k)})$, we have \begin{equation} \boldsymbol{x}^{(k+1)}-\boldsymbol{x}^{(k)} = \boldsymbol{v}^{(k)}-\boldsymbol{x}^{(k)} + \Hmat^{\mathsf{T}} \Rmat^{-1} (\boldsymbol{y} - \Hmat \boldsymbol{v}^{(k)}).
\end{equation} Following this, \begin{align} &\|\boldsymbol{x}^{(k+1)} - \boldsymbol{x}^{(k)}\|_2^2 \nonumber\\ =&\|\boldsymbol{v}^{(k)} + \Hmat^{\mathsf{T}} \Rmat^{-1} (\boldsymbol{y} - \Hmat \boldsymbol{v}^{(k)}) - \boldsymbol{x}^{(k)} \|^2_2 \\ =& \|\boldsymbol{v}^{(k)} + \Hmat^{\mathsf{T}} \Rmat^{-1} (\Hmat\boldsymbol{x}^{(k)} - \Hmat \boldsymbol{v}^{(k)}) - \boldsymbol{x}^{(k)} \|^2_2 \nonumber\\ =& \|({\boldsymbol I} - \Hmat^{\mathsf{T}} \Rmat^{-1} \Hmat) (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})\|^2_2 \label{eq:mid_step}\\ =& \|\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}\|_2^2 - \|\Rmat^{-\frac{1}{2}}\Hmat (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})\|_2^2 \label{Eq:xk_vkminus1}\\ \le & \|\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}\|_2^2 \\ =& \| {\cal D}_{\sigma_{k}} (\boldsymbol{x}^{(k)}) - \boldsymbol{x}^{(k)}\|_2^2 \\ \le& \sigma_k^2 nBC \label{Eq:convg_C}, \end{align} where $\Rmat = \Hmat\Hmat^{\mathsf{T}}$ as defined in \eqref{eq:R}; the second equality uses $\boldsymbol{y} = \Hmat\boldsymbol{x}^{(k)}$, which holds for $k\ge 1$ since $\boldsymbol{x}^{(k)}$ lies on the manifold ${\cal M}$ after the projection \eqref{Eq:x_k+1} of the previous iteration. The following shows the derivation from \eqref{eq:mid_step} to \eqref{Eq:xk_vkminus1}: \begin{align} &\|({\boldsymbol I} - \Hmat^{\mathsf{T}} \Rmat^{-1} \Hmat) (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})\|^2_2 \nonumber\\ &= (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})^{\mathsf{T}} [{\boldsymbol I} - \Hmat^{\mathsf{T}} (\Hmat \Hmat^{\mathsf{T}})^{-1} \Hmat]^{\mathsf{T}} \nonumber\\ &\quad\, \cdot [{\boldsymbol I} - \Hmat^{\mathsf{T}} (\Hmat \Hmat^{\mathsf{T}})^{-1} \Hmat](\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}) \nonumber\\ &= (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})^{\mathsf{T}} [{\boldsymbol I} - 2\Hmat^{\mathsf{T}} \Rmat^{-1} \Hmat +\Hmat^{\mathsf{T}} \Rmat^{-1}\Rmat \Rmat^{-1} \Hmat ]\nonumber\\ &\quad\, \cdot (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}) \nonumber\\ & = (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})^{\mathsf{T}} ({\boldsymbol I} - \Hmat^{\mathsf{T}} \Rmat^{-1} \Hmat )(\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}) \nonumber\\ &= \|\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}\|_2^2 - (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})^{\mathsf{T}} \Hmat^{\mathsf{T}} \Rmat^{-1} \Hmat (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}) \nonumber\\ &= \|\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)}\|_2^2- \|\Rmat^{-\frac{1}{2}}\Hmat (\boldsymbol{v}^{(k)} - \boldsymbol{x}^{(k)})\|_2^2. \nonumber \end{align} In \eqref{Eq:convg_C} we used the bounded denoiser property. Using Assumption \ref{Ass:non_in} (non-increasing denoiser), we have $\sigma_k \rightarrow 0$ and thus $\|\boldsymbol{x}^{(k+1)} - \boldsymbol{x}^{(k)}\|_2^2 \rightarrow 0$; hence $\boldsymbol{x}^{(k)}$ converges. \end{proof} \subsection{PnP-ADMM vs. PnP-GAP} Comparing PnP-GAP in Eqs.~\eqref{Eq:x_k+1} and \eqref{Eq:Denoise_GAP} with PnP-ADMM in Eqs.~\eqref{Eq:solvex}-\eqref{Eq:u_k+1}, we can see that PnP-GAP has only two subproblems (rather than three as in PnP-ADMM) and thus its computation is faster. It was pointed out (and mathematically proved) in~\cite{Liu18TPAMI} that in the noise-free case, ADMM and GAP perform the same under appropriate parameter settings. However, in the noisy case, ADMM usually performs better since it considers noise in the model; below we give a geometrical explanation. \begin{figure}[htbp!] \begin{center} \includegraphics[width=\linewidth]{Figures/ADMM_GAP_comp.pdf} \end{center} \vspace{-3mm} \caption{Demonstration of the solutions of ADMM and GAP for a two-dimensional sparse signal, where $\boldsymbol{x}^*$ denotes the truth. (a) In the noise-free case, both ADMM and GAP have a large chance to converge to the true signal.
(b) In the noisy case, GAP will converge to the green dot (the cross-point of the dashed green line and the vertical axis), whereas the solution of ADMM will be one of the two red dots where the red circle crosses the vertical axis.} \label{fig:ADMM_GAP} \end{figure} In Fig.~\ref{fig:ADMM_GAP}, we use a two-dimensional sparse signal (with the $\ell_1$ assumption shown as the diamond shape in blue lines in Fig.~\ref{fig:ADMM_GAP}) as an example to compare ADMM and GAP. Note that the key difference is that, in both the noise-free and noisy cases, GAP always imposes the solution $\hat\boldsymbol{x}$ on the line $\boldsymbol{y} = \Hmat\hat\boldsymbol{x}$ by Eq.~\eqref{Eq:x_k+1}. In the noise-free case in Fig.~\ref{fig:ADMM_GAP}(a), we can see that since GAP imposes $\hat\boldsymbol{x}$ on the green line, it will converge to the true signal $\boldsymbol{x}^*$. ADMM does not have this constraint but minimizes $\|\boldsymbol{y}-\Hmat\boldsymbol{x}\|_2^2$; the solution might thus be a little off the true signal $\boldsymbol{x}^*$. However, with appropriate parameter settings and a good initialization, it also has a large chance to converge to the true signal. In the noisy case, GAP still imposes $\hat\boldsymbol{x}$ on the line $\boldsymbol{y} = \Hmat\hat\boldsymbol{x}$, shown by the dashed green line in Fig.~\ref{fig:ADMM_GAP}(b). In this case, due to noise, this line might deviate from the solid green line on which the true signal lies. GAP will thus converge to the green point where the dashed green line crosses the vertical axis. On the other hand, by minimizing $\|\boldsymbol{y}-\Hmat\boldsymbol{x}\|_2^2$, the solution of ADMM can lie within the dashed red circle, depending on the initialization. Considering the sparse constraint, the final solution of ADMM would be one of the two red dots where the red circle crosses the vertical axis. Therefore, in the noisy case, the Euclidean distance between the GAP solution and the true signal ($\|\hat{\boldsymbol{x}} - \boldsymbol{x}^*\|_2$) might be larger than that of ADMM. However, the final solution of ADMM depends on the initialization, and it is not guaranteed to be more accurate than GAP. The PnP framework can be recognized as a deep denoising network plus an inverse problem solver. Other solvers such as TwIST~\cite{Bioucas-Dias2007TwIST} and FISTA~\cite{Beck09IST} can also be used~\cite{Zheng20_PRJ_PnP-CASSI} and may also lead to convergence results under proper conditions. In our experience, TwIST usually converges slowly and FISTA sometimes gets stuck at limited performance. Hence, in the experiments, we use PnP-GAP for simulation data and PnP-ADMM for real data. \section{Integrate Various Denoisers into PnP for SCI Reconstruction\label{Sec:P3}} The above derivation assumed the existence of a denoiser; in this section, we briefly introduce different denoisers, which differ in speed and quality. \subsection{Non-deep Denoiser} In conventional denoising algorithms, a prior is usually employed to impose piece-wise constancy (by TV), sparsity (by bases or learnable dictionaries) or low-rank structure (on similar patch groups). These algorithms usually have a clear objective function. In the following, we briefly categorize these algorithms into the following classes. For a detailed review, please refer to~\cite{Zha2020TIP_JPG,Zha2020TIP_NSSP}.
\begin{list}{\labelitemi}{\leftmargin=8pt \topsep=2pt \parsep=1pt} \item Global constraint based algorithms such as TV~\cite{Rudin92_TV,Stanley05_TV} minimize the total variation of the entire image and have been extended to videos~\cite{yang2013efficient}. \item Global sparsity based algorithms impose sparsity on the coefficients of the image under a specific basis such as wavelets~\cite{Crouse98_wavelet} or curvelets~\cite{Starck02_Curvelet}. \item Patch based algorithms usually learn a dictionary for image patches using methods such as K-SVD~\cite{Aharon06TSP} and then impose sparsity on the coefficients. \item Patch-group based algorithms exploit the nonlocal similarity of image patches and impose sparsity~\cite{Mairal_09ICCV_LSSC} or low-rank structure~\cite{Gu14CVPR,Gu17IJCV} on these similar patch groups. \end{list} Among these denoisers, a fast one, {\em e.g.}, TV, is very efficient but cannot provide high-quality results. Mid-range algorithms, {\em e.g.}, K-SVD and BM3D~\cite{Dabov07BM3D}, can provide decent results at the cost of a longer running time. More advanced denoising algorithms such as WNNM~\cite{Gu14CVPR,Gu17IJCV} can provide better results but are even slower. On the other hand, while extensive denoising algorithms for images have been developed, VBM4D~\cite{Maggioni2012VideoDD} is still one of the state-of-the-art algorithms for video denoising. In our previous work, we extended WNNM to SCI for grayscale videos, leading to the state-of-the-art algorithm (DeSCI~\cite{Liu18TPAMI}) for SCI, which performs better than PnP-VBM4D as shown in Fig.~\ref{fig:demo}, but at the price of a longer running time. \subsection{Deep Denoiser} Another line of emerging denoising approaches is based on deep learning~\cite{XieNIPS2012_deepDN,zhang2017beyond}, which can provide decent results within a short time after training; however, such denoisers are usually not robust to varying noise levels, and in highly noisy cases the results are not good. Since this paper does not focus directly on video denoisers, we do not provide a detailed survey of deep learning based video denoising; interested readers can refer to other recent papers. Different from conventional denoising problems, in SCI reconstruction the noise level in each iteration usually goes from large to small, and the dynamic range can span from 150 to 1, considering pixel values within $\{0,1,\dots, 255\}$. Therefore, a flexible denoiser that is robust to the input noise level is desired. Fortunately, FFDNet~\cite{Zhang18TIP_FFDNet} has provided us a fast and flexible solution under various noise levels. However, since FFDNet is developed for images, we perform the denoising step in PnP for SCI frame-wise; for the color SCI problem, we used grayscale denoising for each channel in~\cite{Yuan20CVPR}. As discussed before and shown in Fig.~\ref{fig:Bayer_sci}, joint demosaicing and reconstruction employing the color denoiser of FFDNet instead of the grayscale one can improve the results significantly. Please refer to Table~\ref{Tab:results_midscale} and Fig.~\ref{fig:comp_frames_midscale}. Most recently, we have noticed that FastDVDnet~\cite{Tassano_2020_CVPR} also satisfies these desired (fast and flexible) properties. More importantly, FastDVDnet is developed for video denoising, which takes account of the strong temporal correlation within consecutive video frames. By integrating FastDVDnet into PnP, we have achieved even better results on both grayscale and color videos than those of FFDNet.
{Please refer to Table~\ref{Tab:results_4video} for grayscale video SCI results and Table~\ref{Tab:results_midscale} for color SCI.} \subsection{Hybrid Denoiser} By integrating these denoising algorithms into PnP-GAP/ADMM, we obtain different algorithms (Table~\ref{Tab:results_4video} and Fig.~\ref{fig:demo}) with different results. It is worth noting that DeSCI can be seen as PnP-WNNM, and its best results are achieved by exploiting the correlation across different video frames. On the other hand, most existing deep denoising priors are still based on images. Therefore, it is not unexpected that the results of PnP-GAP/ADMM-FFDNet are not as good as DeSCI. As mentioned above, by using video denoising priors such as FastDVDnet, the results can be improved. In addition, these different denoisers can be used jointly, {\em i.e.}, one after the other within a single GAP/ADMM iteration, or sequentially, {\em i.e.}, the first $K_1$ iterations using FFDNet and the next $K_2$ iterations using WNNM, to achieve better results. This is a good way to balance performance and running time. Please refer to the performance and running time in Fig.~\ref{fig:demo} and {Table~\ref{Tab:results_4video}}. These different denoising priors can also serve as complementary priors in image/video denoising~\cite{zha2020power,Qiao2020_APLP}. \subsection{Joint Demosaicing and SCI Reconstruction \label{Sec:jointcsci}} For the color SCI described in Eq.~\eqref{eq:forward_color}, though it is easy to directly use the PnP algorithm derived above for grayscale videos by changing the forward matrix, according to our experiments this does not lead to good results when a deep denoiser such as FFDNet or FastDVDnet is used only for denoising. This might be due to the fact that video SCI can be recognized as temporal interpolation, whereas demosaicing is spatial interpolation; jointly performing these two tasks is too challenging for the deep denoiser, especially for videos~\cite{GharbiACM16}. To cope with this challenge, we rewrite the forward model of color SCI as \begin{equation} \boldsymbol{y} = \Hmat \Tmat_{\cal M} \boldsymbol{x}, \end{equation} where $\Tmat_{\cal M} $ is the mosaicing and deinterleaving process shown in Fig.~\ref{fig:Bayer_sci} that translates the RGB video $\boldsymbol{x}$ to the Bayer pattern video $\boldsymbol{x}^{\rm (rggb)}$. Recovering $\boldsymbol{x}^{\rm (rggb)}$ from $\boldsymbol{y}$ is exactly the same problem as grayscale video SCI. Different algorithms exist for image demosaicing, {\em i.e.}, going from $\boldsymbol{x}^{\rm (rggb)}$ to $\boldsymbol{x}$~\cite{LiDemosaicing08}. Recently, deep learning based demosaicing algorithms have also been developed~\cite{Brady:20,GharbiACM16}. The key principle of PnP algorithms for color SCI is to make full use of the color denoising algorithm, rather than channel-wise grayscale denoising. Bearing this in mind, we decompose the denoising step in PnP-GAP \eqref{Eq:Denoise_GAP} into the following two steps: \begin{eqnarray} \tilde{\boldsymbol{v}}^{(k+1)} &=& {\cal D}_{\cal M} (\boldsymbol{x}^{(k+1)}), \label{Eq:demosaic}\\ {\boldsymbol{v}}^{(k+1)} &=& {\cal D}_{\sigma}(\tilde{\boldsymbol{v}}^{(k+1)}), \label{Eq:denoise_color} \end{eqnarray} where ${\cal D}_{\cal M}$ is the demosaicing algorithm being used\footnote{Demosaicing is conducted after interleaving. Similarly, mosaicing and then deinterleaving are conducted before the projection in Eq.~\eqref{Eq:x_k+1}, as shown in Fig.~\ref{fig:Bayer_sci}(b).} and ${\cal D}_{\sigma}$ now denotes the color denoising algorithm.
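To make this two-step update concrete, the following Python sketch shows one PnP-GAP iteration for color SCI under the above model. It is a minimal illustration under our own assumptions: \texttt{demosaic\_rggb} and \texttt{color\_denoise} are hypothetical placeholder callables (standing for, e.g., a Malvar-style demosaicer and a color video denoiser such as FastDVDnet) and do not refer to any released implementation.
\begin{verbatim}
import numpy as np

def mosaic_rggb(v_rgb):
    # Sample the RGB video back onto the RGGB Bayer grid; the
    # deinterleaving step of Fig. (b) is absorbed here for brevity.
    n_x, n_y, _, B = v_rgb.shape
    out = np.empty((n_x, n_y, B))
    out[0::2, 0::2] = v_rgb[0::2, 0::2, 0]  # R
    out[0::2, 1::2] = v_rgb[0::2, 1::2, 1]  # G1
    out[1::2, 0::2] = v_rgb[1::2, 0::2, 1]  # G2
    out[1::2, 1::2] = v_rgb[1::2, 1::2, 2]  # B
    return out

def gap_color_iteration(x, y, C, R, demosaic_rggb, color_denoise, sigma):
    # x: Bayer-domain estimate (n_x, n_y, B); y: snapshot measurement;
    # C: masks (n_x, n_y, B); R = sum_b C_b**2, the diagonal of H H^T.
    # GAP Euclidean projection onto {x : y = H x}, computed per pixel:
    x = x + C * ((y - np.sum(C * x, axis=2)) / R)[..., None]
    # Step 1: interleave and demosaic the Bayer estimate to an RGB video.
    v_rgb = demosaic_rggb(x)
    # Step 2: color video denoising with noise level sigma.
    v_rgb = color_denoise(v_rgb, sigma)
    # Mosaic back so that the next projection acts in the Bayer domain.
    return mosaic_rggb(v_rgb)
\end{verbatim}
The design point is that the expensive projection stays in the (grayscale) Bayer domain, where $\Hmat\Hmat^{\mathsf{T}}$ is diagonal, while the prior acts on the full RGB video, so any demosaicer and any color denoiser can be swapped in without touching the data-fidelity step.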
Note that ${\cal D}_{\cal M}$ is also a plug-and-play operation where different algorithms can be used. In the experiments, we found that both `Malvar04'~\cite{malvar2004high-quality} and `Menon07'~\cite{Menon07} lead to stable results and perform better than fast bilinear interpolation. However, the time consumption of `Menon07' is about 4$\times$ that of `Malvar04', with limited gain for our color SCI problem. Therefore, we used `Malvar04' in our experiments. We believe a deep learning based algorithm can lead to better demosaicing results. However, for our color SCI reconstruction, we noticed that existing deep learning based demosaicing methods are not stable, though they do perform well on the demosaicing task alone. We leave such a robust deep demosaicing network for future work. Nonetheless, our proposed PnP algorithm can adopt any new demosaicing and denoising algorithms to improve the results. To reduce the running time of large-scale color SCI reconstruction, instead of calling the demosaicing algorithm in Eq.~\eqref{Eq:demosaic}, we have also developed a light-weight method that uses the R channel, the B channel and the averaged G channels to construct a small-scale RGB video (with half the number of rows and columns of the desired video) in each iteration; the color denoising operation in~\eqref{Eq:denoise_color} is then performed on this small-scale RGB video and the iterations are conducted until convergence. In this case, we only need to call the demosaicing algorithm once, at the end, to provide the final result. Naturally, the results will not be as good as performing demosaicing in each iteration, but it saves time. \subsection{Online PnP for Sequential Measurements} To speed up the convergence of PnP with a deep denoiser, we notice that a good initialization helps. Usually, we use a few iterations of GAP-TV to warm-start PnP-FastDVDnet, which leads to good results within about 50 iterations. However, for sequential measurements, we empirically find that using the reconstruction result of the previous measurement to initialize the next one also leads to good results. In this case, when handling multiple sequential measurements, we only need a warm start for the first measurement; each subsequent one can use the result of its predecessor. This slightly reduces the reconstruction time. This idea shares a similar spirit with the `group of pictures' concept in MPEG video compression.
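A minimal sketch of this warm-start chaining is given below; \texttt{gap\_tv} and \texttt{pnp\_gap} are hypothetical callables standing for a few GAP-TV iterations and Algorithm~\ref{algo:PP_GAP}, respectively, and their signatures and iteration counts are our assumptions for illustration only.
\begin{verbatim}
def reconstruct_stream(measurements, C, gap_tv, pnp_gap):
    # Online PnP: warm-start only the first measurement with GAP-TV;
    # every later one is initialized with the previous reconstruction,
    # in the spirit of a `group of pictures'.
    videos, v0 = [], None
    for y in measurements:
        if v0 is None:
            v0 = gap_tv(y, C, iters=10)   # warm start, first shot only
        x_hat = pnp_gap(y, C, v_init=v0, iters=50)
        videos.append(x_hat)
        v0 = x_hat                        # reuse as the next initialization
    return videos
\end{verbatim}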
\begin{table*}[!htbp] \caption{Grayscale benchmark dataset: the average results of PSNR in dB (left entry in each cell) and SSIM (right entry in each cell), and run time per measurement/shot in minutes, of different algorithms on 6 benchmark datasets.} \centering \vspace{-3mm} \resizebox{\textwidth}{!} { \begin{threeparttable} \begin{tabular}{cV{2}ccccccV{2}cc} \hlineB{3} Algorithm& \texttt{Kobe} & \texttt{Traffic} & \texttt{Runner} & \texttt{Drop} & \texttt{Crash} & \texttt{Aerial} & Average & Run time (min) \\ \hlineB{3} GAP-TV~\cite{Yuan16ICIP_GAP} & 26.46, 0.8448 & 20.89, 0.7148 & 28.52, 0.9092 & 34.63, 0.9704 & 24.82, 0.8383 & 25.05, 0.8281 & 26.73, 0.8509 & 0.07 \\ {DeSCI~\cite{Liu18TPAMI}} & {\bf 33.25}, {0.9518} & {\bf 28.71}, {0.9250} & {\bf 38.48}, {\bf 0.9693} & 43.10, 0.9925 & {27.04}, {0.9094} & 25.33, 0.8603 & {\bf 32.65}, {0.9347} & 103.0 \\ \hlineB{2} PnP-VBM4D & 30.60, 0.9260 & 26.60, 0.8958 & 30.10, 0.9271 & 26.58, 0.8777 & 25.30, 0.8502 & 26.89, 0.8521 & 27.68, 0.8882 & 7.9 \\ PnP-FFDNet~\cite{Yuan20CVPR} & 30.50, 0.9256 & 24.18, 0.8279 & 32.15, 0.9332 & 40.70, 0.9892 & 25.42, 0.8493 & 25.27, 0.8291 & 29.70, 0.8924 & {0.05 (GPU)} \\ PnP-WNNM-TV & 33.00, 0.9520 & 26.76, 0.9035 & 38.00, 0.9690 & 43.27, 0.9927 & 26.25, 0.8972 & 25.53, 0.8595 & 32.14, 0.9290 & 40.8 \\ PnP-WNNM-VBM4D & 33.08, \bf{0.9537} & 28.05, 0.9191 & 33.73, 0.9632 & 28.82, 0.9289 & 26.56, 0.8874 & {27.74}, {0.8852} & 29.66, 0.9229 & 25.0 \\ PnP-WNNM-FFDNet & 32.54, 0.9511 & 26.00, 0.8861 & 36.31, 0.9664 & \bf{43.45}, \bf{0.9930} & 26.21, 0.8930 & 25.83, 0.8618 & 31.72, 0.9252 & 17.9 \\ \hlineB{2} GAP-TV* & 26.92, 0.8378 & 20.66, 0.6905 & 29.81, 0.8949 & 34.95, 0.9664 & 24.48, 0.7988 & 24.81, 0.8105 & 26.94, 0.8332 & {\bf 0.03} \\ PnP-FFDNet* & 30.33, 0.9252 & 24.01, 0.8353 & 32.44, 0.9313 & 39.68, 0.9864 & 24.67, 0.8330 & 24.29, 0.8198 & 29.21, 0.8876 & {{\bf 0.03} (GPU)} \\ \rowcolor{lightgray} PnP-FastDVDnet* & 32.73, 0.9466 & 27.95, {\bf 0.9321} & 36.29, 0.9619 & 41.82, 0.9892 & {\bf 27.32}, {\bf 0.9253} & {\bf 27.98}, {\bf 0.8966} & 32.35, {\bf 0.9420} & {0.10 (GPU)} \\ \hlineB{3} \end{tabular} \begin{tablenotes} \item[*] Implemented with Python (PyTorch for FFDNet and FastDVDnet), whereas the rest are implemented with MATLAB (MatConvNet for FFDNet). \end{tablenotes} \end{threeparttable} } \label{Tab:results_4video} \end{table*} \section{Simulation Results \label{Sec:results}} We apply the proposed PnP algorithms to both simulation datasets~\cite{Liu18TPAMI,Ma19ICCV} and real datasets captured by SCI cameras~\cite{Patrick13OE,Yuan14CVPR,Qiao2020_APLP}. In addition to the widely used grayscale benchmark datasets~\cite{Yuan20CVPR}, we also build a mid-scale color dataset consisting of 6 color videos (details in Sec.~\ref{sec:sim_color}), which we hope will serve as benchmark data for color SCI problems. This benchmark dataset is used to verify the performance of our proposed PnP-GAP for joint reconstruction and demosaicing compared with other algorithms. Finally, we apply the proposed joint method to the large-scale datasets introduced in~\cite{Yuan20CVPR} and show better results using FastDVDnet for color video denoising. Conventional denoising algorithms including TV~\cite{Yuan16ICIP_GAP}, VBM4D~\cite{Maggioni2012VideoDD} and WNNM~\cite{Gu14CVPR} are used for comparison.
For the deep learning based denoisers, we have tried various algorithms and found that FFDNet~\cite{Zhang18TIP_FFDNet} provides the best results among image denoising methods, while FastDVDnet~\cite{Tassano_2020_CVPR} provides the best results among video denoising approaches. Both PSNR and SSIM~\cite{Wang04imagequality} are employed as metrics to compare different algorithms. Note that in our preliminary paper~\cite{Yuan20CVPR}, all the code was implemented in MATLAB, while in this work we re-wrote the code of GAP-TV and PnP-FFDNet in Python, to be consistent with PnP-FastDVDnet. However, we do notice a difference in FFDNet: the performance of the Python version\footnote{Code from \url{https://github.com/cszn/KAIR}.} is a little worse (0.49dB lower in PSNR for the grayscale benchmark data) than its MATLAB counterpart\footnote{Code from \url{https://github.com/cszn/FFDNet}.}. We also notice that GAP-TV in Python is more than $2\times$ faster than in MATLAB, with slightly better results. We show these in Table~\ref{Tab:results_4video}. \begin{figure}[htbp!] \begin{center} \includegraphics[width=1.0\linewidth]{fig03_comp_frames_fastdvdnet.pdf} \end{center} \vspace{-4mm} \caption{Comparison of reconstructed frames of different PnP-GAP algorithms (GAP-TV~\cite{Yuan16ICIP_GAP}, DeSCI~\cite{Liu18TPAMI}, PnP-FFDNet~\cite{Yuan20CVPR}, and PnP-FastDVDnet) on six simulated grayscale video SCI datasets of spatial size $256\times256$ and $B=8$.} \label{fig:comp_frames_full} \vspace{-2mm} \end{figure} \subsection{Benchmark Data: Grayscale Videos \label{sec:sim_gray}} We follow the simulation setup in~\cite{Liu18TPAMI} with six datasets, {\em i.e.}, \texttt{Kobe, Traffic, Runner, Drop, Crash,} and \texttt{Aerial}~\cite{Ma19ICCV}\footnote{The results of DeSCI (GAP-WNNM) are different from those reported in \cite{Ma19ICCV} because of the parameter settings of DeSCI, specifically the input estimated noise levels for each iteration stage. We use exactly the same parameters as the DeSCI paper~\cite{Liu18TPAMI}, which is publicly available at \href{https://github.com/liuyang12/DeSCI}{https://github.com/liuyang12/DeSCI}.}, where $B=8$ video frames are compressed into a single measurement. Table~\ref{Tab:results_4video} summarizes the PSNR and SSIM results on these 6 benchmark datasets using various denoising algorithms, where DeSCI can be categorized as GAP-WNNM, and PnP-WNNM-FFDNet used 50 iterations of FFDNet followed by 60 iterations of WNNM (similarly for PnP-WNNM-VBM4D). {PnP-FastDVDnet used 60 iterations, and we used 5 neighbouring frames for video denoising.} It can be observed that: \begin{list}{\labelitemi}{\leftmargin=8pt \topsep=2pt \parsep=1pt} \item [$i$)] By using a GPU, PnP-FFDNet is now the fastest algorithm\footnote{{Only a regular GPU is needed to run FFDNet, and since FFDNet is performed in a frame-wise manner, we do not need a large amount of CPU or GPU RAM (no more than 2GB here) compared to other video denoisers using parallelization (even with parallelization, the other algorithms listed here are unlikely to outperform PnP-FFDNet in terms of speed).}}; it is very close to GAP-TV in speed, while providing more than 2dB higher PSNR than GAP-TV. Therefore, PnP-FFDNet can be used as {\em an efficient baseline} in SCI reconstruction. Since the average PSNR is close to 30dB, it is applicable in real cases. This will be further verified in the following subsections on mid-scale and large-scale color datasets.
\item [$ii$)] DeSCI still provides the best results in average PSNR; however, by combining other algorithms with WNNM, comparable results ({\em e.g.}, PnP-WNNM-FFDNet) can be achieved using only $1/6$ of the computational time. \item [$iii$)] {PnP-FastDVDnet provides the best results in average SSIM; regarding PSNR, it is only 0.3dB lower than DeSCI but $1000\times$ faster. PnP-FastDVDnet is the only algorithm that can provide a higher SSIM than DeSCI.} \item [$iv$)] Comparing PnP-FFDNet with PnP-FastDVDnet, we observe that utilizing the temporal correlation improves the results significantly, {\em i.e.}, by more than 3dB in PSNR and 0.05 in SSIM. \end{list} Fig.~\ref{fig:comp_frames_full} plots selected frames of the six datasets using different algorithms. It can be seen that though DeSCI still leads to the highest PSNR, the difference between PnP-FastDVDnet and DeSCI is very small, and in most cases they are close to each other. By exploiting the temporal correlation, PnP-FastDVDnet can provide finer details than PnP-FFDNet and sometimes DeSCI; please refer to the zoomed parts of \texttt{Aerial} in Fig.~\ref{fig:comp_frames_full}. \begin{table*}[!htbp] \caption{Mid-scale Bayer benchmark dataset: the average results of PSNR in dB (left entry in each cell) and SSIM (right entry), and running time per measurement/shot in minutes, of different algorithms on 6 benchmark color Bayer datasets. All code is implemented in Python (PyTorch for deep denoising) except DeSCI, which uses MATLAB.} \centering \vspace{-4mm} \resizebox{.955\textwidth}{!} { \begin{tabular}{cV{2}ccccccV{2}cc} \hlineB{3} Algorithm& \texttt{Beauty} & \texttt{Bosphorus} & \texttt{Jockey} & \texttt{Runner} & \texttt{ShakeNDry}& \texttt{Traffic} & Average & Run time (mins) \\ \hlineB{3} GAP-TV & 33.08, 0.9639 & 29.70, 0.9144 & 29.48, 0.8874 & 29.10, 0.8780 & 29.59, 0.8928 & 19.84, 0.6448 &28.46, 0.8635 & 0.3\\ {DeSCI (GAP-WNNM)} & {34.66}, {0.9711} & 32.88, 0.9518 & 34.14, 0.9382 & 36.16, 0.9489 & 30.94, 0.9049 &24.62, 0.8387 & 32.23, 0.9256 & 1544 \\ \hline PnP-FFDNet-gray & 33.21, 0.9629 & 28.43, 0.9046 & 32.30, 0.9182 & 30.83, 0.8875 & 27.87, 0.8606 & 21.03, 0.7113 & 28.93, 0.8742 & { 0.22 (GPU)} \\ PnP-FFDNet-color & 34.15, 0.9670 & 33.06, 0.9569 & 34.80, 0.9432 & 35.32, 0.9398 & 32.37, 0.9401 &24.55, 0.8370 & 32.38, 0.9307 & 1.63 (GPU)\\ \hline PnP-FastDVDnet-gray & 33.01, 0.9628 & 30.95, 0.9342 & 33.51, 0.9279 & 32.82, 0.9004 & 29.92, 0.8920 & 22.81, 0.7764 & 30.50, 0.8989 & { 0.33 (GPU)} \\ \rowcolor{lightgray} PnP-FastDVDnet-color & {\bf 35.27}, {\bf 0.9719} & {\bf 37.24}, {\bf 0.9781} & {\bf 35.63}, {\bf 0.9495} & {\bf 38.22}, {\bf 0.9648} & {\bf 33.71}, {\bf 0.9685} & {\bf 27.49}, {\bf 0.9147} & {\bf 34.60}, {\bf 0.9546} & {1.65 (GPU)} \\ \hlineB{3} \end{tabular} } \label{Tab:results_midscale} \end{table*} \begin{figure*}[htbp!] \begin{center} \includegraphics[width=.955\linewidth]{fig03_comp_frames_midscale.pdf} \end{center} \vspace{-4mm} \caption{Comparison of reconstructed frames of PnP-GAP algorithms (GAP-TV~\cite{Yuan16ICIP_GAP}, DeSCI~\cite{Liu18TPAMI}, PnP-FFDNet~\cite{Yuan20CVPR}, and PnP-FastDVDnet) on six simulated benchmark color video SCI datasets of size $512\times512\times3$ and $B=8$. Please refer to the full videos in the supplementary material.} \label{fig:comp_frames_midscale} \vspace{-3mm} \end{figure*} \begin{figure*}[htbp!]
\centering \includegraphics[width=1.0\linewidth]{fig03_comp_frames_largescale.pdf} \vspace{-6mm} \caption{Reconstructed frames of PnP-GAP algorithms (GAP-TV~\cite{Yuan16ICIP_GAP}, PnP-FFDNet~\cite{Yuan20CVPR}, and PnP-FastDVDnet) on four simulated large-scale video SCI datasets. Please refer to the full videos in the supplementary material.} \vspace{-3mm} \label{fig:comp_largescale} \end{figure*} \subsection{Benchmark Data: Color RGB-Bayer Videos \label{sec:sim_color}} As mentioned before, in this paper we propose joint reconstruction and demosaicing using the PnP framework for color SCI, shown in the lower part of Fig.~\ref{fig:Bayer_sci}. To verify the performance, we hereby generate a color RGB video dataset with 6 scenes of spatial size $512\times512\times3$, where $3$ denotes the RGB channels. Similar to the grayscale case, we use a compression rate of $B=8$. The schematic of a color video SCI system is shown in Fig.~\ref{fig:video_color_sci}. Every 8 consecutive video frames are first interleaved into mosaic frames of size $512\times 512 \times 8$; then these mosaic frames are modulated by shifting binary masks of size $512\times 512 \times 8$ and finally summed to get the compressed mosaic measurement of size $512\times 512$. For each dataset, we have 4 compressed measurements and thus in total 32 RGB video frames. As shown in Fig.~\ref{fig:comp_frames_midscale}, these datasets include \texttt{Beauty}, \texttt{Bosphorus}, \texttt{Jockey}, \texttt{ShakeNDry}\footnote{\texttt{Beauty}, \texttt{Bosphorus}, \texttt{Jockey}, \texttt{ShakeNDry} are downloaded from \href{http://ultravideo.cs.tut.fi/\#testsequences}{http://ultravideo.cs.tut.fi/\#testsequences}.}, \texttt{Runner}\footnote{{Downloaded from \href{https://www.videvo.net/video/elite-runner-slow-motion/4541}{https://www.videvo.net/video/elite-runner-slow-motion/4541}.}} and \texttt{Traffic}\footnote{Downloaded from \href{http://dyntex.univ-lr.fr/database.html}{http://dyntex.univ-lr.fr/database.html}.}. In order to keep the video quality, we crop (instead of resizing) the video frames to a spatial size of $512\times 512$. We dub these datasets the `mid-scale color data', as their size lies in between the small-size grayscale benchmark data and the large-scale data discussed in the next subsection. For the other algorithms, we perform the reconstruction and demosaicing separately. The R, G1, G2, and B channels are reconstructed separately, and then we employ the `\texttt{demosaic}' function in MATLAB to get the final RGB video. To verify the performance of the joint procedure, we compare PnP-FFDNet/FastDVDnet using joint processing (color denoising) against separate reconstruction (grayscale denoising), as shown in Fig.~\ref{fig:Bayer_sci}. {Table~\ref{Tab:results_midscale} summarizes the PSNR and SSIM results on these datasets. We have the following observations.} \begin{list}{\labelitemi}{\leftmargin=8pt \topsep=2pt \parsep=1pt} \item [$i)$] When using color denoising for joint reconstruction and demosaicing, both PnP-FFDNet-color and PnP-FastDVDnet-color outperform DeSCI. \item [$ii)$] Color denoising significantly improves the results over grayscale denoising, {\em i.e.}, for FFDNet the improvement is 3.45dB, and for FastDVDnet it is even 4.1dB in PSNR. \item [$iii)$] Regarding the running time, both PnP-FastDVDnet and PnP-FFDNet need about 1.6 minutes per measurement, while they only need about 0.3 minutes for their grayscale counterparts. Therefore, most of the time is consumed by demosaicing.
As mentioned in Sec.~\ref{Sec:jointcsci}, we hope deep-learning-based demosaicing will provide faster and better results in the future. \end{list} \begin{table*}[htbp!] \caption{Running time (minutes) of large-scale data using different algorithms.} \vspace{-3mm} \resizebox{1\linewidth}{!} { \begin{tabular}{ccV{3}ccccc} \hlineB{3} Large-scale dataset & Pixel resolution & GAP-TV & PnP-FFDNet-gray & PnP-FFDNet-color & PnP-FastDVDnet-gray & PnP-FastDVDnet-color \\ \hlineB{3} {\tt Messi} color & $1920\times1080\times3\times20$ & 15.1 & 5.2 & 42.2 & 7.6 & 43.1 \\ {\tt Hummingbird} color & $1920\times1080\times3\times30$ & 20.3 & 6.6 & 61.2 & 10.6 & 54.0 \\ {\tt Swinger} color & $3840\times2160\times3\times15$ & 39.2 & 13.2 & 138.8 & 21.3 & 138.4 \\ {\tt Football} color & $3840\times1644\times3\times40$ & 83.0 & 30.6 & 308.8 & 50.7 & 298.1 \\ \hlineB{3} \end{tabular} } \label{Table:time_largescale} \end{table*} {Figure~\ref{fig:comp_frames_midscale} plots selected reconstruction frames of different algorithms for these 6 RGB Bayer datasets, with the snapshot measurement shown on the far left. Note that, due to the Bayer pattern of the sensor, the captured measurement is actually grayscale, as shown on the lower-right, whereas due to the coding in the imaging system, the demosaiced measurement depicts wrong colors, as shown in the upper-left part of the measurement. It can be seen from Fig.~\ref{fig:comp_frames_midscale} that PnP-FastDVDnet-color and PnP-FFDNet-color provide smooth motions and fine spatial details. Some color mismatch exists in GAP-TV, DeSCI, PnP-FFDNet-gray and PnP-FastDVDnet-gray. For instance, in the {\texttt{Traffic}} data, the color of the cars is incorrectly reconstructed by these methods; a similar case exists for the water drops in {\texttt{ShakeNDry}}. Overall, GAP-TV provides blurry results and DeSCI sometimes over-smooths the background, such as the lawn in the {\texttt{Jockey}} data. PnP-FastDVDnet-color provides the finest details in complicated backgrounds such as the trees in \texttt{Bosphorus}. } \subsection{Large-scale Data} Similar to the benchmark data, we simulate the color video SCI measurements for large-scale data with four YouTube slow-motion videos, {\em i.e.}, \texttt{Messi}\footnote{\href{https://www.youtube.com/watch?v=sbPrevs6Pd4}{https://www.youtube.com/watch?v=sbPrevs6Pd4}}, \texttt{Hummingbird}\footnote{\href{https://www.youtube.com/watch?v=RtUQ_pz5wlo}{https://www.youtube.com/watch?v=RtUQ\_pz5wlo}}, \texttt{Swinger}\footnote{\href{https://www.youtube.com/watch?v=cfnbyX9G5Rk}{https://www.youtube.com/watch?v=cfnbyX9G5Rk}}, and \texttt{Football}\footnote{\href{https://www.youtube.com/watch?v=EGAuWZYe2No}{https://www.youtube.com/watch?v=EGAuWZYe2No}}. A sequence of color scenes is coded by the corresponding shifted random binary masks at each time step and finally summed up to form a snapshot measurement on the color Bayer RGB sensor (with a ``RGGB'' Bayer color filter array)\footnote{ Note these results are different from the ones reported in~\cite{Yuan20CVPR}. The reason is that the measurements are generated in different ways. In~\cite{Yuan20CVPR}, we up-sampled the raw video by putting each color channel as the mosaic R, G1, G2, and B channels. This leads to two identical G channels, and the size of the reconstructed and demosaiced image is doubled (both in width and height).
For example, for the UHD color video \texttt{Football} with an original image size of $3840\times1644$, the reconstructed video frames have a size of $7680\times3288$ (demosaiced). This is different from the Bayer pattern model described in Sec.~\ref{Sec:jointcsci}. After researching camera designs, in this paper we follow the Bayer RGGB pattern color video SCI model developed in Sec.~\ref{Sec:SCImodel} to generate the new measurements used in the experiments. We are convinced that this is more appropriate and closer to real color cameras. }. To verify the flexibility of the proposed PnP algorithm, we consider different spatial sizes and different compression rates $B$. \begin{list}{\labelitemi}{\leftmargin=8pt \topsep=2pt \parsep=2pt} \item \texttt{Messi20} color: A $1920\times1080\times3\times20$ video reconstructed from a snapshot. \item \texttt{Hummingbird30} color: A $1920\times1080\times3\times30$ video reconstructed from a snapshot. \item \texttt{Swinger15} color: A $3840\times2160\times3\times15$ video reconstructed from a snapshot. \item \texttt{Football40} color: A $3840\times1644\times3\times40$ video reconstructed from a snapshot. \end{list} Due to the extremely long running time of the other algorithms, we hereby only show the results of GAP-TV, PnP-FFDNet and PnP-FastDVDnet; both FFDNet and FastDVDnet use color denoising as in the mid-scale benchmark data. Note that only grayscale FFDNet denoising was used in \cite{Yuan20CVPR}. {Figure~\ref{fig:comp_largescale} plots selected reconstruction frames of these three algorithms, where we can see that $i$) due to many fine details, GAP-TV cannot provide high quality results; $ii$) both PnP-FFDNet and PnP-FastDVDnet lead to significant improvements over GAP-TV (at least 3.69 dB in PSNR); and $iii$) PnP-FastDVDnet leads to the best results on the first three datasets, while for the last one, {\texttt{Football40}}, it is 0.39dB lower than PnP-FFDNet. This might be due to the crowded players in the scene, which favors FFDNet denoising.} Importantly, for all these large-scale datasets, with compression rates varying from 15 to 40, we can always get reconstructions up to (or at least close to) 30dB. This demonstrates that video SCI can be applied to everyday videos. Regarding the running time, as shown in Table~\ref{Table:time_largescale}, for all these large-scale datasets, PnP with grayscale denoising (PnP-FFDNet-gray and PnP-FastDVDnet-gray) can finish the reconstruction within one hour. However, when the color denoising algorithms are used, the running time is $10\times$ longer. Again, as mentioned for the simulations, most of the time is consumed by the demosaicing algorithms, and we expect that a robust deep demosaicing network can speed up the reconstruction. Due to this, the running time of PnP-FastDVDnet-color is very similar to that of PnP-FFDNet-color. Even so, for the HD ($1920\times1080\times3$) video data with $B$ up to 30, the reconstruction can be finished within 1 hour, whereas other algorithms such as DeSCI are not feasible as they would take days. For the UHD ($3840\times1644\times3$) videos, even at $B=40$, the reconstruction can be finished in hours. Note that since spatial pixels in video SCI are decoupled, we can also use multiple CPUs or GPUs operating on blocks rather than the entire frame to speed up the reconstruction. \begin{figure}[!htbp]
\begin{center} \includegraphics[width=0.8\linewidth]{Figures/quality_vary_codenum.pdf} \end{center} \vspace{-5mm} \caption{Reconstruction quality (PSNR in dB (a) and SSIM (b), higher is better) for compression rates $B$ varying from 8 to 48, for the proposed PnP methods (PnP-FFDNet and PnP-FastDVDnet) and GAP-TV~\cite{Yuan16ICIP_GAP}.} \label{fig:quality_vary_codenum} \end{figure} % In addition to these large-scale data with different spatial sizes and compression rates, another way to construct large-scale data for a specific SCI system is to fix the spatial size but vary the compression rate. In this case, the data scale with $B$, which is also challenging for other algorithms, including deep learning ones\footnote{In~\cite{Qiao2020_APLP}, training deep neural networks failed for $B>30$ even with a spatial size of $512\times512$, due to the limited GPU memory.}. We therefore conduct simulations of the \texttt{Hummingbird} data with different compression rates $B \in \{8,16,24,32,40,48\}$, with results shown in Fig.~\ref{fig:quality_vary_codenum}. It can be seen that even at $B$=48, both PnP-FFDNet and PnP-FastDVDnet can reconstruct the video at a PSNR close to 27dB; regarding SSIM, PnP-FastDVDnet achieves 0.85 at $B$=48, which is $>$0.15 higher than PnP-FFDNet and $>$0.25 higher than GAP-TV. Therefore, our proposed PnP algorithms are robust and flexible with respect to different compression rates. This will be further verified by the real data in Sec.~\ref{Sec:gray_hand}. \section{Real Data \label{Sec:realdata}} We now apply the proposed PnP framework to real data captured by SCI cameras to verify the robustness of the proposed algorithms. Different data captured by different video SCI cameras are used; these data are of different spatial sizes, have different compression rates and use different modulation patterns. We first verify PnP using grayscale data~\cite{Patrick13OE,Sun17OE} with a fixed $B$; then we conduct experiments on grayscale data with different compression rates captured by the same system~\cite{Qiao2020_CACTI}. Lastly, we show the results of color data captured by the SCI system in~\cite{Yuan14CVPR}. Note that, in Sec.~\ref{Sec:gray_hand}, for the first time, we show a $512\times512\times50$ {\texttt{Hand}} video reconstructed from a snapshot in high quality, with motion in every frame. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1\linewidth]{fig05_real_cacti_chopperwheel_full.pdf}\\ \end{center} \vspace{-5mm} \caption{Real data: \texttt{chopper wheel} ($256\times256\times14$).} \vspace{-4mm} \label{fig:real_chopperwheel} \end{figure} \subsection{Grayscale Videos with Fixed Compression Rate} In this subsection, we verify the proposed PnP algorithm on the following data: \begin{list}{\labelitemi}{\leftmargin=8pt \topsep=2pt \parsep=2pt} \item The {\texttt{Chopper wheel}} data captured in the original CACTI paper~\cite{Patrick13OE}, of spatial size $256\times256$ and $B=14$. The results of GAP-TV, DeSCI, PnP-FFDNet and PnP-FastDVDnet are shown in Fig.~\ref{fig:real_chopperwheel}, where we can see that DeSCI, PnP-FFDNet and PnP-FastDVDnet can all provide good results. Due to the temporal correlation of video exploited by FastDVDnet, the results of PnP-FastDVDnet have consistent brightness and smooth motion. \item The {\texttt{UCF data}} captured by the video SCI system built in~\cite{Sun17OE}, of large size $1100\times 850$ with $B=10$.
The results are shown in Fig.~\ref{fig:real_ucf}; the scene has a complicated background and a dropping ball on the left. It can be seen clearly that PnP-FastDVDnet provides a clean background with fine details. \end{list} Again, since these data are of different sizes and compression rates, it is challenging to use the recently developed end-to-end deep neural networks~\cite{Cheng20ECCV_BIRNAT} to perform all the tasks. For instance, the training time for each task would be weeks, and it consumes a significant amount of power and memory to train the network for large-scale data such as {\texttt{UCF}}. By contrast, in our proposed PnP framework, the same pre-trained FFDNet or FastDVDnet is used for all these tasks and the results are obtained in seconds. \begin{figure}[htbp!] \begin{center} \includegraphics[width=1.0\linewidth]{fig06_real_cacti_ucf.pdf} \end{center} \vspace{-4mm} \caption{Real data: \texttt{UCF} high-speed video SCI ($1100\times850\times10$).} \label{fig:real_ucf} \vspace{-3mm} \end{figure} \begin{table}[htbp!] \caption{Running time (seconds) of real data using different algorithms.} \vspace{-3mm} \resizebox{1\columnwidth}{!} { \begin{tabular}{c cV{3}cccc} \hlineB{3} Real dataset & Pixel resolution & {GAP-TV} & {DeSCI} & {PnP-FFDNet} & {PnP-FastDVDnet} \\ \hlineB{3} \texttt{chopperwheel} & $256\times256\times14$ & 11.6 & 3185.8 & \textbf{2.7} & 18.3 \\ \hline \texttt{hammer} color & $512\times512\times22$ & 94.5 & 4791.0 & \textbf{12.6} & 136.6 \\ \hline \texttt{UCF} & $1100\times850\times10$ & 300.8 & 2938.8 & \textbf{12.5} & 132.6 \\ \hlineB{3} \texttt{hand10} & $512\times512\times10$ & 37.8 & 2880.0 & \textbf{19.3} & 29.5 \\ \texttt{hand20} & $512\times512\times20$ & 88.7 & 4320.0 & \textbf{42.4} & 63.9 \\ \texttt{hand30} & $512\times512\times30$ & 163.0 & 6120.0 & \textbf{74.7} & 107.7 \\ \texttt{hand50} & $512\times512\times50$ & 303.4 & 12600.0 & \textbf{144.5} & 203.9 \\ \hlineB{3} \end{tabular} } \label{Table:time_real} \vspace{-2mm} \end{table} \begin{figure*}[htbp!] \begin{center} \includegraphics[width=1.0\linewidth]{fig09_real_cacti_hand.pdf} \end{center} \vspace{-3mm} \caption{Real data: \texttt{Hand} high-speed video SCI ($512\times512\times B$) with compression rates $B$ varying from 10 to 50. Dashed grids are added to aid the visualization of motion details. PnP-FastDVDnet is used for the reconstruction.} \label{fig:real_hand} \end{figure*} The running time of different algorithms for these real data is shown in Table~\ref{Table:time_real}. We can see that PnP-FFDNet, which only takes a few seconds for the reconstruction of these grayscale datasets, can provide comparable results to DeSCI, which needs hours even when performed in a frame-wise manner. PnP-FFDNet is significantly better than the speed runner-up GAP-TV (for the top two datasets) in terms of motion-blur reduction and detail preservation, as shown in Figs.~\ref{fig:real_chopperwheel} and \ref{fig:real_ucf}. PnP-FFDNet is at least $4\times$ faster than GAP-TV, and as the data size gets larger, its running time increases more slowly than that of GAP-TV. In this way, PnP algorithms for SCI achieve a good balance of efficiency and flexibility, and PnP-FFDNet could serve as a baseline for SCI recovery. PnP-FastDVDnet takes about $10\times$ longer than PnP-FFDNet (upper part of Table~\ref{Table:time_real}), which is the price for a higher reconstruction quality. We also notice that the running time of PnP-FastDVDnet for the large-scale data {\texttt{UCF}} is shorter than that of GAP-TV.
This shows another gain of PnP-based algorithms, {\em i.e.}, they are ready to scale up. This will be further verified by the following {\texttt{Hand}} data. Therefore, we recommend PnP-FFDNet as a new baseline, and if a higher quality result is desired, PnP-FastDVDnet is a good choice with a longer running time (but still $20\times$ shorter than DeSCI). \subsection{Grayscale Videos with Various Compression Rates } \label{Sec:gray_hand} Next, we test the PnP algorithms on the data captured by a recently built video SCI system~\cite{Qiao2020_APLP}, where similar scenes were captured with different compression rates, {\em i.e.}, $B\in\{10,20,30,40,50\}$\footnote{Data at: \url{https://github.com/mq0829/DL-CACTI}.}. Unlike the data reported in the previous subsection, here the data are of the same spatial size $512\times 512$, but a compression rate of 50 will exhaust the GPU memory of deep neural networks, {as shown in Fig.~\ref{fig:quality_vary_codenum}; this is another way to construct large-scale data.} {Another challenge in video SCI is that, though high compression rate results have been reported before, whether the reconstructed video can resolve such high-speed motion is still a question.} {To address these concerns, we show the reconstruction of the {\texttt{Hand}} data with $B=10,20,30,50$ in Fig.~\ref{fig:real_hand}, where we can see that at $B=50$ each frame is different from the previous one, corresponding to a high-speed motion of at least a few pixels per reconstructed frame.} Regarding the running time, it can be seen from Table~\ref{Table:time_real} that both PnP-FFDNet and PnP-FastDVDnet are faster than GAP-TV. At $B=50$, PnP-FFDNet can finish the reconstruction of one measurement in 2.4 minutes, and PnP-FastDVDnet needs 3.4 minutes while providing better results. By contrast, DeSCI needs 210 minutes (3.5 hours) to reconstruct 50 frames from a snapshot. We have also tried to reconstruct this video by training the networks proposed in~\cite{Qiao2020_APLP} and \cite{Cheng20ECCV_BIRNAT}; however, the quality of the results from the trained networks for this {\texttt{Hand}} dataset is poor when $B>20$, mainly due to the high-speed motions. By contrast, in Fig.~\ref{fig:real_hand}, we can see that clear details are reconstructed by the proposed PnP-FastDVDnet. Therefore, we are confident in stating that the video SCI system, along with our proposed PnP algorithm, can achieve a compression rate of 50. When the camera works at 50 fps~\cite{Qiao2020_CACTI}, the built system can be used to capture high-speed videos at 2500 fps with high quality reconstruction. Due to the space limit, we do not show results of other data here; more results can be found in the supplementary videos. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1\linewidth]{Figures/fig07_real_cacti_hammer.pdf} \end{center} \vspace{-3mm} \caption{Real data: \texttt{Hammer} color video SCI ($512\times512\times 3\times22$).} \vspace{-4mm} \label{fig:real_color_hammer} \end{figure} \subsection{Color Videos} Lastly, we verify the proposed algorithm on the color video SCI data captured by~\cite{Yuan14CVPR}, which follows the same model as described in Section~\ref{Sec:jointcsci}. Following the procedure for the mid-scale color data, an RGB video of $B=22$ frames with size $512\times 512 \times 3$ is reconstructed from a single Bayer mosaic measurement of the data \texttt{hammer}, shown in Fig.~\ref{fig:real_color_hammer}.
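Schematically, and in our own shorthand rather than the exact notation of Sec.~\ref{Sec:SCImodel}, such a Bayer mosaic measurement follows
\begin{equation}
Y = \sum_{b=1}^{B} C_b \odot \mathcal{M}(X_b) + G,
\end{equation}
where $X_b$ denotes the $b$-th RGB frame, $\mathcal{M}(\cdot)$ the RGGB mosaicing, $C_b$ the shifted binary mask at time step $b$, $\odot$ the element-wise product, and $G$ the measurement noise.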
Along with the running time in Table~\ref{Table:time_real}, we can see that PnP-FFDNet, which only takes about 12 seconds for the reconstruction, can provide comparable results to DeSCI, which needs hours. GAP-wavelet~\cite{Yuan14CVPR} cannot remove the noise in the background, and GAP-TV shows blurry results. PnP-FFDNet shows sharper edges than DeSCI with a clean background. PnP-FastDVDnet reconstructs sharper boundaries and finer details of the hammer than DeSCI and PnP-FFDNet, but needs 136 seconds (2.27 minutes) for the reconstruction. We do notice the greenish background of PnP-FastDVDnet, which may come from smoothing artifacts of the brightness across different frames. We will test more color video SCI data using the proposed PnP algorithms in the future. \section{Conclusions \label{Sec:Con}} We proposed plug-and-play algorithms for the reconstruction of snapshot compressive video imaging systems. By integrating deep denoisers into the PnP framework, we not only obtain excellent results on both simulation and real datasets, but also provide reconstruction in a short time with sufficient flexibility. Convergence results of PnP-GAP are proved, and we show for the first time that SCI can be used for large-scale (HD, FHD and UHD) daily-life videos. This paves the way for practical applications of SCI. Regarding future work, one direction is to incorporate an efficient demosaicing network to speed up the reconstruction and also improve the video quality. The other direction is to build a real large-scale video SCI system to be used in advanced cameras~\cite{brady2012multiscale,Brady:20}. \section*{Acknowledgments.} The work of Jinli Suo and Qionghai Dai is partially supported by NSFC 61722110, 61931012, 61631009 and the Beijing Municipal Science \& Technology Commission (BMSTC) (No. Z181100003118014). X. Yuan and Y. Liu contributed equally to this paper. \bibliographystyle{IEEEtran}
\section{Introduction} Precise non-perturbative calculation in heavy quark physics is one of the long-standing goals of lattice QCD. For quantities involving heavy quarks, the discretization effects may become more significant than in the light quark sector. With a naive order counting they appear as powers of $am$, the heavy quark mass in units of the lattice cutoff $1/a$, which is not much smaller than one. While effective theories for heavy quarks in non-relativistic kinematics have been developed and used on the lattice, a brute-force approach of taking $a$ as small as possible would also be powerful, since there is no need for additional matching of parameters. This may be combined with the Symanzik improvement of the lattice fermion action to eliminate the leading discretization effects. The JLQCD collaboration is currently generating 2+1-flavor gauge configurations at fine lattice spacings of $a^{-1}=$ 2.4--4.8~GeV using a chirally symmetric fermion formulation for light quarks \cite{Kaneko:2013jla}. For the valence heavy quarks, we plan to use other fermion formulations that may have better scaling towards the continuum limit. In this work we investigate some choices of the lattice fermion action to be used for valence quarks, focusing on their discretization effects and continuum scaling for heavy quarks. In this initial study, we mainly consider the charm quark mass region, and leave an extension towards heavier masses for future study. The quantities to be studied are the energy-momentum dispersion relation, the hyperfine splitting and the decay constants of heavy-heavy mesons. For this purpose, we are generating a series of quenched gauge configurations that have a roughly matched physical volume (about 1.6~fm) and cover a range of lattice spacings between $1/a$ = 2 and 4~GeV. Since these lattices do not contain sea quarks and have a small physical volume, we do not expect precise agreement with the corresponding experimental data for the charm quark; rather, we are interested in the scaling towards the continuum limit. The gauge configurations are generated with the tree-level $O(a^2)$-improved Symanzik action, so far at $\beta$ = 4.41 and 4.66 on $16^3\times 32$ and $24^3\times 48$ lattices, respectively. Using the energy density expectation value after the Wilson flow, we determine the lattice spacing with an input $w_0$ = 0.176(2)~fm \cite{Borsanyi:2012zs} as $1/a$ = 1.97(2) and 2.81(3)~GeV for the two lattices. (Note that this input value is given through the $\Omega$ baryon mass in 2+1-flavor QCD in \cite{Borsanyi:2012zs}.) For each $\beta$ value we have analysed 100 independent gauge configurations. In the following we mainly discuss the newly developed $\mathcal{O}(a^2)$-improved Brillouin fermion action, and present our preliminary studies of the dispersion relation and hyperfine splitting. We also analyse the heavy-heavy decay constant calculated with the domain-wall fermion action in the valence sector. \section{$\mathcal{O}\left(a^{2}\right)$-improved Brillouin fermions} We develop a new class of lattice fermion action which is free from $\mathcal{O}\left(a\right)$ and $\mathcal{O}\left(a^{2}\right)$ discretization effects at tree level. The action is based on the isotropic derivative and the Brillouin Laplacian studied in \cite{Creutz:2010bm,Durr:2010ch}.
The Dirac operator is defined as \begin{equation} D^{bri}(x,y) = \sum_{\mu}\gamma_{\mu}\nabla^{iso}_{\mu}(x,y) - \frac{a}{2} \Delta^{bri}(x,y)+m_{0}\delta_{x,y}, \end{equation} where $\nabla^{iso}_{\mu}(x,y)$ and $\Delta^{bri}(x,y)$ include 1-, 2-, 3- and 4-hop terms in a $3^4$ hypercube defined by $|x_\mu-y_\mu|\le 1$, and the resulting Dirac operator is ultralocal. The leading discretization effect contained in $\nabla^{iso}_{\mu}(x,y)$ is $\mathcal{O}(a^{2})$ and is isotropic. The Brillouin Laplacian $\Delta^{bri}(x,y)$ is designed such that all fifteen doublers have the same mass $2/a$ at tree level. This can be seen from the eigenvalue spectrum on the complex plane, as shown in Figure~\ref{fig:tree_eigen} for the free case. For the Wilson fermion, the spectrum has five branches, {\it i.e.} one corresponding to the physical mode (real part = 0) and the others to doublers ($2/a$, $4/a$, $6/a$ and $8/a$). The Brillouin operator approximately gives a unit circle centered at (1,0), which resembles the Ginsparg-Wilson-type fermions. This suggests that the Brillouin operator approximately satisfies the Ginsparg-Wilson relation. \begin{figure}[tb] \begin{center} \includegraphics[width=7cm,clip=on]{./eigen.pdf} \end{center} \caption{Eigenvalue distributions of the Wilson (open circles), the Brillouin (open squares) and the improved Brillouin (open diamonds) operators. } \label{fig:tree_eigen} \end{figure} With the Brillouin operator, it is found that the energy-momentum dispersion relation calculated at tree level follows very precisely that of the continuum theory \cite{Durr:2010ch}, as demonstrated in Figure~\ref{fig:dis} (left: massless; right: massive, $ma=0.5$). With the Wilson fermion, the deviation from the continuum is already seen at $ap\sim 0.5$ in the massless case, while with the Brillouin fermion it does not start until around $ap\sim 1.5$. This has also been confirmed nonperturbatively using the dispersion relation of mesons and baryons calculated on quenched lattices \cite{Durr:2012dw}. \begin{figure}[tb] \begin{center} \includegraphics[width=7cm,clip=on]{./tree_dis_m_0_0.pdf} \includegraphics[width=7cm,clip=on]{./tree_dis_m_0_5.pdf} \end{center} \caption{ Dispersion relation calculated at tree level for different fermion formulations, {\it i.e.} Wilson (green), Brillouin (blue), improved Brillouin (magenta). The results at $ma=0.0$ (left) and $ma=0.5$ (right) are plotted. } \label{fig:dis} \end{figure} In this work, we further improve the Brillouin fermion by eliminating its leading discretization effects. For instance, with the Brillouin operator, the relation between the static energy $E$, defined through a pole of the free propagator, and the bare mass $m$ is given as \begin{equation} \left(Ea\right)^{2} = (ma)^{2} - (ma)^{3}+\frac{11}{12}(ma)^{4} - \frac{5}{6}(ma)^{5} + \mathcal{O}\left((ma)^{6}\right) \end{equation} at finite lattice spacing $a$; the error starts at the $(ma)^3$ term, which represents a relative $O(a)$ effect. Such a deviation from the continuum is seen in the plot of Figure~\ref{fig:dis} (right) at $ap=0$, where the case of $ma=0.5$ is plotted. Here, the Brillouin operator has a discretization effect similar to that of the Wilson fermion, which gives $Ea=\ln(1+ma)$.
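Indeed, squaring the Wilson relation (our own expansion, easily verified) gives
\begin{equation}
\left(Ea\right)^{2} = \left[\ln(1+ma)\right]^{2} = (ma)^{2} - (ma)^{3} + \frac{11}{12}(ma)^{4} - \frac{5}{6}(ma)^{5} + \mathcal{O}\left((ma)^{6}\right),
\end{equation}
which reproduces, term by term, the Brillouin expansion quoted above; the two operators thus share the same mass-dependent cutoff effects through this order.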
In order to make the Brillouin fermion consistent with the Symanzik improvement, we eliminate the leading discretization errors by modifying the action as \begin{equation} D^{imp} = \sum_{\mu}\gamma_{\mu} \left(1-\frac{a^{2}}{12}\Delta^{bri}\right) \nabla^{iso}_{\mu} \left(1-\frac{a^{2}}{12}\Delta^{bri}\right) +c_{imp}a^{3}(\Delta^{bri})^{2}+m_{0}. \end{equation} The terms $(1-a^2\Delta^{bri}/12)$ sandwiching $\nabla_\mu^{iso}$ are introduced to eliminate the $a^2$ errors while keeping the $\gamma_5$ hermiticity property. The Wilson-like term is simply squared so that its effect starts from $O(a^3)$. The relation between the energy and the bare mass becomes \begin{equation} \left(Ea\right)^{2}=(ma)^{2}+c_{imp}(ma)^{5}+\mathcal{O}\left((ma)^{6}\right), \end{equation}% and the leading error starts from $O(a^3)$ as expected. For this improved Brillouin fermion action, we observe a good dispersion relation also in the massive case (see the plot in the right panel of Figure~\ref{fig:dis}). The difference from the continuum is invisible below $ap\sim 1.5$. The eigenvalues of the improved operator $D^{imp}$ no longer lie on the unit circle, as shown in Figure~\ref{fig:tree_eigen} (open diamonds), because the operator approaches the continuum limit, which is in this case the imaginary axis. These eigenvalues indeed lie closer to the imaginary axis. Numerical implementation of the Brillouin operator is complicated once the gauge links are introduced, because one has to preserve the symmetries under cubic rotations for the 2-, 3- and 4-hop terms. We explicitly average over all possible paths of minimal length. The computational code is implemented in the IroIro++ package \cite{Cossu:2013ola}. \section{Nonperturbative studies on quenched lattices} Our scaling studies on the quenched configurations are ongoing. In the following we show the results for the dispersion relation and hyperfine splitting of heavy-heavy mesons obtained with the improved Brillouin fermion, as well as a study of the heavy-heavy decay constant using the domain-wall fermion. For a heavy-heavy meson, we calculate an effective speed of light extracted from the energy at finite momentum $\vec{p}$ as \begin{equation} c_{\rm eff}^{2}(\vec{p}) = \frac{E^{2}(\vec{p})-E^{2}(\vec{0})}{\vec{p}^{2}}. \end{equation} The heavy quark mass is tuned until the spin-averaged 1S mass becomes 3~GeV, and $c_{\rm eff}^2$ is calculated for the pseudo-scalar channel. The Wilson and improved Brillouin fermions are used on the quenched configurations at $1/a$ = 1.97 and 2.81~GeV. Three steps of stout smearing \cite{Morningstar:2003gk} are applied to the gauge links. \begin{figure} \begin{center} \includegraphics[scale=0.3,clip=on]{./spl_M_3_00_v16.pdf} \includegraphics[scale=0.3,clip=on]{./spl_M_3_00_v24.pdf} \end{center} \caption{ Effective speed of light as a function of normalized momentum squared at $a^{-1}=1.97$ GeV (left) and $a^{-1}=2.81$ GeV (right). In each panel, data for improved Brillouin (filled circles) and Wilson (filled diamonds) fermions are plotted. } \label{fig:spl} \end{figure} \begin{figure} \begin{center} \includegraphics[width=9cm,clip=on]{./sclspl.pdf} \end{center} \caption{ Scaling of $c_{\rm eff}^{2}$ calculated at $|\vec{p}|^2$ = 1 (upper), 2 (middle) and 3 (lower) in units of $(2\pi/L)^2$. In each panel, data for improved Brillouin (filled circles) and Wilson (filled diamonds) are plotted as a function of $a^2$ [GeV$^{-2}$]. } \label{fig:sclspl} \end{figure} The results are shown in Figure~\ref{fig:spl}.
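(As an aside, the extraction of $c_{\rm eff}^{2}$ from the fitted energies is elementary; the following minimal sketch, with placeholder numbers rather than our measured energies, merely fixes the momentum-unit convention used in the figures.)
\begin{verbatim}
import numpy as np

# Sketch of the effective speed-of-light extraction defined above;
# the energies are placeholders, not our measured values.
L = 24                                       # spatial lattice extent
p2 = (2.0 * np.pi / L)**2 * np.arange(1, 4)  # |p|^2 = 1, 2, 3 in (2pi/L)^2
E0 = 1.52                                    # E(0) in lattice units
Ep = np.array([1.54, 1.56, 1.58])            # E(p) in lattice units
c_eff2 = (Ep**2 - E0**2) / p2                # continuum expectation: 1
\end{verbatim}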
In Figure~\ref{fig:spl}, $c_{\rm eff}^2$ is plotted against $|\vec{p}|^2$ for the Wilson (black) and the improved Brillouin (red) fermions. Already at $1/a$ = 2.0~GeV (left) the dispersion relation of the 3-GeV meson follows that of the continuum theory, $c=1$, very precisely (within the statistical error) when the improved Brillouin fermion is employed. With the Wilson fermion, the deviation is as large as 30\%. Such a large deviation is allowed in the effective theory approaches \cite{ElKhadra:1996mp}, where the rest mass $m_1$ and the kinetic mass $m_2$ are treated differently and only $m_2$ is taken as physical. With the improved Brillouin fermion, this is not necessary. Scaling towards the continuum limit is demonstrated in Figure~\ref{fig:sclspl}. In the three panels, the results at normalized momenta squared are shown as a function of $a^2$. With the improved Brillouin fermion, we do not see any deviations from the continuum at the level of 1\%, which is the size of the statistical error. \begin{figure} \begin{center} \includegraphics[width=8cm,clip=on]{./sclhyp.pdf} \end{center} \caption{ Continuum scaling of the hyperfine splitting $m_{vec}-m_{ps}$ [GeV]. Results with the Wilson (black) and improved Brillouin (red) fermions are plotted as a function of $a^2$ [GeV$^{-2}$]. } \label{fig:sclhyp} \end{figure} In Figure~\ref{fig:sclhyp}, we show a similar scaling test of the two formulations for the hyperfine splitting $m_{vec}-m_{ps}$ of the 3-GeV heavy-heavy meson. Also for this quantity, the scaling towards the continuum limit is much better with the improved Brillouin fermion. \begin{figure} \begin{center} \includegraphics[width=8cm,clip=on]{./decay_constant.eps} \end{center} \caption{ Heavy-heavy pseudo-scalar meson decay constants calculated at two lattice spacings, 2.0~GeV (red) and 2.8~GeV (black), with the domain-wall fermion. The results for $f_{PS}\sqrt{m_{PS}}$ are normalized by the value at $m_{PS}$ = 1.5~GeV and plotted as a function of $1/m_{PS}$. } \label{fig:hhdc} \end{figure} Finally, we briefly describe a calculation of the heavy-heavy decay constant using the domain-wall fermion. It is well known that the domain-wall fermion mechanism breaks down at large $am$ ($\simeq 0.5$) \cite{Jansen:1992tw,Golterman:1992ub,Christ:2004gc}, but the real question is where it shows up in numerical calculations. In Figure~\ref{fig:hhdc} we plot the decay constant $f_{PS}\sqrt{m_{PS}}$ as a function of $1/m_{PS}$. The data are normalized by the value at $m_{PS}$ = 1.5~GeV, so that the renormalization constant cancels out. We observe a good indication of scaling of $f_{PS}$ with the pseudo-scalar mass $m_{PS} \sim 3$~GeV, which suggests that lattices with $1/a = 2$--$4$~GeV could be used for the direct extraction of the properties of $D$ mesons. A complete continuum limit study of this and alternative heavy quark discretizations is needed to come to a final conclusion on this matter. \section{Summary and plans} Relativistic formulations for heavy quarks have the advantage that no tuning of parameters depending on the heavy quark mass is necessary. Therefore, as fine-lattice dynamical QCD simulations have become realistic, such a brute-force approach could be a powerful alternative to the effective theory approaches, provided that the possible $(am)^n$ corrections are under control. We are performing various scaling tests of relativistic formulations on quenched lattices, and so far have obtained promising results. In the future we plan to extend the study by adding finer lattices and more choices of fermion formulations.
\vspace*{1cm} Numerical simulations are performed on the IBM System Blue Gene Solution (Blue Gene/Q) at the High Energy Accelerator Research Organization (KEK) with the support of its Large Scale Simulation Program (No.~12/13-04). This work is supported in part by the Grant-in-Aid of the Japanese Ministry of Education (No. 21674002) and the SPIRE (Strategic Program for Innovative Research) Field5 project. The research leading to these results has also received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) ERC grant agreement No 279757.
\section{Introduction} We are involved in the research of solar diameter variations and, as is rather usual in such metrological measurements, the knowledge of the absolute value is more complicated. The accurate value of the solar diameter is used mainly in the studies of stellar evolution, as an input to the hydrodynamical codes. The absolute value of the solar diameter is defined by the inflexion points of the Limb Darkening Function.\cite{bib:Hill} Recently this definition has been extended to the ephemerides measurements, made with the timing of solar eclipses and planetary transits.\cite{bib:raponi} Timing in astronomy is more accurate than imaging, and eclipses and planetary transits have been considered the most accurate way to measure the solar diameter: e.g. the transits of Mercury of 2003 and 2006 have been used to measure the solar diameter in the SOHO/MDI 676.78 nm window.\cite{bib:emilio} \section{Heliometric angle calibration} To calibrate the heliometric angle $\theta$ we operated in two ways: one using imaging of a fixed target, and the other using the drift-scan timing of the solar images over the CCD on the focal plane. This calibration can give both the absolute value of the solar diameter and the confirmation that the heliometric angle remains fixed during the years. This last opportunity is of paramount importance for measurements which should be consistent over several decades, in order to carry astrophysical value. \subsection{Reference rod at finite distance} A wooden rod has been provided with two metal spheres and located 116 m from the telescope, in a fixed position in the campus of the Astronomical Observatory. The two spheres act as artificial stars during a sunny day, while when the weather is cloudy the image reflected by the spheres is much larger, corresponding to the whole visible sky. The telescope, observing without the solar filter, can aim at this rod, identifying the pointlike sources. Their distance is measured on the same focal plane, to calibrate the scale therein. The limit of this measurement is set by the local air turbulence, and it can be improved by statistics as much as we need. The first series of calibrations was realized in the month of February 2013 by using the wooden rod with two metal spheres, located at 116 meters from the telescope, on the roof of the main building of the old Observatorio Nacional in the campus of the Observatory of Rio de Janeiro. The image of the rod has been put into focus by using a two-pinhole mask.\cite{bib:Sigismondi2002} The distance of the two spheres in pixels was different when using another mask, with a larger separation between the two pinholes, but this is a simple parallactic effect. A modification of 1 cm in the distance between the two pinholes, with respect to a rod located 116 m away, corresponds to an angle of $1/11600$ radians $\approx 20$ arcsec, and it is consistent with the variation of the pixel distance. Another proof that this is not the effect of imaging along different Petzval surfaces\cite{bib:wiki} at different off-axis positions has been obtained during an ordinary observational session of the solar diameter: the images of the Sun drifting along different paths with respect to the holes yield a constant measured diameter. The measured diameter $D$ in pixels is related to the distance $d$ in pixels between the two images, produced by each half of the parabolic mirror, by the formula $D=H-d$, where $H=\theta\times F$ and $F$ is the focal length.
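As a hedged numerical illustration (combining, for orientation only, the plate scale of 1.168 arcsec/pixel and the drift-scan values reported in the following subsections): with $\theta \simeq 1953.5$ arcsec and a measured diameter $D \simeq 1888.2$ arcsec, the relation $D=H-d$ gives an angular separation between the two solar images of $d = \theta - D \simeq 65$ arcsec, i.e. about 56 pixels on the focal plane.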
The pixel distance between the two spheres of the reference rod is of the same magnitude as the distance between the two images of the Sun. Finally, the measurements made with the rod are a reference for future checks with both masks. But they are critically dependent on the placement of the masks with respect to the axis of the tube, which, due to the geometry of the mirror, is not axisymmetric. The annular heliometer\cite{bib:avila} will be axisymmetric. Moreover, opening the telescope to allow measurements of terrestrial objects, i.e. without the solar filter, exposes the optics to dust, and this is to be avoided as much as possible. \subsection{Drift-scan timing} The measurement made with the drift-scan method is therefore more welcome. The drift-scan is already the ordinary acquisition mode for heliometric images. Usually 50 images are recorded without telescope tracking, in order to identify the heliolatitude of the measured diameter with respect to the East-West drift. The solar image moves on the focal plane at an angular speed depending only on the solar declination and the true solar day duration at that particular instant, all quantities known with high precision from the ephemerides. Therefore the solar image velocity can be used to calibrate the pixel scale of the CCD very accurately by acquiring 200--250 images, in order to have the passage of the four limbs of the two solar images at the edges of the field of view or on each CCD column. The distance $d$ between the two images of the Sun is measured in pixels by the analysis program, and it is related to the solar diameter by the equation $D+d=F\times\theta$, where $\theta$ is the heliometric angle and $F$ the focal length of the telescope. This equation is identical to $D+d=H$ if we consider that $F$ is also invariant. The angular distance $\theta=(D+d)/F$ can be measured by timing with drift scan; $d$ is also measured directly by the heliometer, and $F$ is constant to within one part in $10^5$ because the instrument is made of carbon fiber and its longitudinal coefficient of thermal expansion at 300--350 K is $\lambda \le 10^{-6}/K$.\cite{bib:carbon} The advantage of the drift-scan method is the timing: the solar images drift on the focal plane and, with respect to a given reference on the CCDs, even if there are optical distortions in the line of sight, the distortions act as a systematic error, which is the same for the same direction in space. In other words, the timing is not affected by local optical distortions. This kind of approach to measuring the wedge angle was not exploited with the Solar Disk Sextant (SDS), where ten internal reflections of a laser beam within the heliometric wedge (the prismatic objective) were used in order to determine the angle of this wedge to an accuracy of 0.1 arcsec ($1978.94\pm 0.1$ arcsec).\cite{bib:sofia13} The detectors used in the SDS are seven linear CCDs of 100 pixels each, while in the Heliometer of Rio de Janeiro we can use the whole CCD in VGA mode ($640\times480$ pixels), with a scale of 1.168 arcsec/pixel. The errorbar attributed to the method used to measure the wedge angle is 0.1 arcsec. This errorbar can eventually be as large as 0.2 or 0.3 arcsec. What is important is that this angle remains constant over the years. At the heliometer, with the drift-scan timing, we obtained the heliometric angle $1953.5\pm 1.4$ arcsec, a preliminary value obtained by C.
Sigismondi with the measurement made on June 19, 2013 (three scans after local noon, 12:45 PM) under very clear sky conditions. The associated error is that of two independent measurements of the diameter, which resulted in $1888.2\pm 2.1$ arcsec; the ephemerides reference with the standard solar radius is 1888.85 arcsec for the same instant. The agreement is perfect. To reduce the error, more measurements have to be made on the same series of images. \subsection{Anomalous refraction and uncertainty on the heliometric angle} The uncertainty associated with the measurement of the heliometric angle is the dispersion of the three independent measurements realized on that day. It is well known that each single measurement obtained with drift-scan can differ from another one because of the effect of the local atmospheric turbulence, in particular at frequencies below 0.01 Hz.\cite{bib:sigi3} These fluctuations can produce a slow shift of the image during the transit which affects the final measurement of the diameter,\cite{bib:sigi4} as also verified at the Heliometer with a 100 s continuous observation made by C. Sigismondi on April 16, 2013 at 10 AM local time, the longest series (500 images) available up to now owing to the memory limits of the present acquisition system. These effects have been treated as anomalous refraction\cite{bib:corbard} instead of being considered as the low frequency part of the seeing spectrum.\cite{bib:sigi3} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{icrc2013-0303-03} \caption{The power spectrum of the seeing as measured with the Heliometer of Rio de Janeiro. The unit of measure is the Nyquist frequency, which is 5 Hz in our case. According to the Shannon sampling theorem, half of this frequency is the limiting frequency at which we can get information. There is power at all frequencies, confirming the reliability of the hypothesis of low frequency motions, already verified with the Locarno telescope. \cite{bib:sigi4}} \label{simp_fig} \end{figure} Considering these fluctuations as the low frequency region of the seeing seems a more logical approach, because it avoids invoking strange atmospheric effects acting only in particular places, which are nothing but ad hoc phenomena introduced to explain the mismatch between observations and expectations, too often invoked in past publications on solar astrometry. The accuracy of 1.4 arcsec for these first measurements will allow testing the stability of the heliometric angle in the coming months, within this tolerance. A further complete analysis of the same data will allow reducing the error by a statistical factor of $\sqrt{500}\sim 22$, reaching the desired 0.1 arcsec accuracy. The measurements made with the fixed rod on the top of the main building of the Museu de Astronomia will serve as cross-checks. \section{Discussions and perspectives} The Heliometer of Rio de Janeiro is already bringing new results to solar astrometry. The quantitative discovery of glass filter effects has permitted understanding the shifts between the astrolabes of the R2S3 network and between them and the SDS. H. Neckel\cite{bib:neckel} showed that the variations of the solar diameter, in the continuum, do not exceed 0.07 arcsec over the whole range of visible wavelengths $\lambda$; hence all departures larger than 0.1 arcsec remain unexplained.
The SODISM II experiment\cite{bib:corbard0613} in 2013 confirmed the 0.07 arcsec range, but only after the measured diameters were corrected for the diffraction (changing with $\lambda$) and for the atmospheric turbulence (lower with increasing $\lambda$) acting over continuum limb darkening functions that are steeper for increasing $\lambda$: the solar radii at 535.7 nm and at 607.1 nm are respectively $959.77 \pm 0.25$ and $959.83\pm 0.26$ arcsec. The reduction of the wavebands to a few nm in the PICARD/SODISM satellite limits the influence of emission lines from regions above the photosphere, but the differences of more than 0.5 arcsec within the data of various astrolabes remained unexplained up to our verifications on the glass filter of the Heliometer of Rio de Janeiro. The prediction of space weather with one week of anticipation for satellites in orbit around the Earth is a promising result, to which the reflecting Heliometer will contribute in its observational duties. The study of the low frequency component of the seeing is particularly suitable for the Heliometer configuration: defects of the tracking system, accidental hits, or wind upon the tube act in the same way on the two heliometric images of the Sun. For single-image systems, there is always the doubt of observing a tracking defect of the telescope. Longer-duration monitoring of the seeing will clarify the problem of consecutive meridian transits, whose values are often separated by more than the expected random errorbar determined by high-frequency seeing. \vspace*{0.5cm} \footnotesize{{\bf Acknowledgment:}{ C.S. acknowledges A. Raponi, the CNPq fund 300682/2012-3 and the Notre Dame Jerusalem Center.}}
\section{Introduction} In a small and highly urbanized nation like Singapore dengue outbreaks or epidemics are identified as ``clusters''. A dengue {\bf cluster} or focus of transmission is defined as at least two confirmed cases, with no recent travel history, that are located within 200 m of each other (taken as the flight range of the {\em Aedes aegypti}) and whose dates of the onset of symptoms are within three weeks of each other~\cite{dengue}. Some efforts have been directed towards the characterization of 'SIS' models of infections, or epidemics without immunization~\cite{Mollison,Grassberger,satorras}; that is, the states of the particles are healthy or infected, and particles are susceptible to re-infection after healing, hence the name of the model (SIS: susceptible-infected-susceptible). Analytical and numerical expressions describe the dynamics of the $SIS$ model in terms of the rate of spreading $\lambda$, the evolution of the survival probability of infection $P(t)$, the mean number of infected agents $n(t)$ and the mean square distance of spreading $R^{2}(t)$ in time, which are quantities difficult to compare with real data of epidemics. This work suggests an approach of potential use for comparison with public health data, analyzing a scaling function for {\bf cluster numbers} in a $SIS$ model of infection.\\ The second important ingredient of this work is the mobility of agents, in contrast to most models of epidemics, where the population is modeled by static networks~\cite{Mollison,Grassberger,satorras}. In a previous work~\cite{us} we showed that the $SIS$ model of infection on a system of mobile agents has critical exponents which depend on the density of the system, i.e. spatial correlations and mobility of the agents play an important role. We obtained a crossover from mean field behavior for low densities to static $2D$-lattices for higher densities. Here we use our model of mobile agents to define clusters of infections and analyze their dependency on the rate of infection $\lambda$ (defined in detail below) and on the mobility of the agents.\\ We propose a time-evolving network model: a link between two moving agents is created when they collide with each other and there is transmission of the infection between them (i.e. through infected-susceptible interactions); the link lasts a characteristic time of infection.\\ We find that the network of clusters of infections remains disconnected and, no matter how large the rate of infection, no giant cluster is formed. We show that in the transition to spreading, the moments of the cluster size distribution are described by an exponent $\beta$, which is the exponent that characterizes the fraction of infected mass $F_{IM}=N_{Inf}/N$, defined as the ratio of the number of infected agents ($N_{Inf}$) to the total population ($N$). Thus the number of clusters depends on $\lambda$, and mobility and spatial correlations of the agents influence this dependency. \section{Model} $N$ soft disks, with radius $r_{0}=1$, represent agents which move in a two dimensional cell of linear size $L$, with density $\rho=N/L^{2}$. The system has periodic boundary conditions and is initialized as follows: the agents are placed in the cell with the same velocity modulus $v$ and randomly distributed directions, positions and states: 'infected' or 'susceptible'. If a susceptible agent $i$ collides with an infected agent $j$ (i.e. $|\mbox{\boldmath$r$}_{i}-\mbox{\boldmath$r$}_{j}|\le 2r_{0}$), then $i$ becomes infected.
Each infected agent heals and becomes susceptible again after a fixed number of time steps, called the 'time of infection' ($\Delta t_{inf}$), which is a free parameter of the model.\\ The physical interaction of the agents is modeled by molecular dynamics with a leap-frog integration method \cite{Rapaport}; the interaction potential is a truncated $12$-$6$ Lennard-Jones potential (see more details in \cite{us}).\\ The resulting model is a contact process~\cite{Dickman}, where the infected species becomes extinct unless the infection spreads rapidly enough. The transition between survival and extinction depends on a critical rate of spreading $\lambda_{c}$ that marks the transition into an absorbing state. The infection rate $\lambda$ is defined as the number of agents one agent infects before healing. For this system, \begin{equation} \lambda \equiv \Delta t_{inf}/\tau_{f}, \label{eq:lambda} \end{equation} where $\tau_{f}$ is the characteristic time of flight between two collisions, which is determined by the density ($\rho$) and the mean velocity of the agents ($\langle v \rangle$). The critical exponents of the transition to spreading were presented by us for the same kind of system~\cite{us}, where the study was done in terms of the fraction of infected individuals ($F_{IM}$). Here we go further and characterize the behavior of the clusters of infected individuals. When agent $j$ infects agent $i$, a link is created between them; the link lasts until one of them heals, and meanwhile each of them continues making links with other susceptible agents through the same rule. A cluster is thus defined as a group of infected agents connected by links. Note that in contrast to percolation, where clusters are given by occupied lattice sites connected by nearest-neighbor distances, in this model each cluster represents a group of agents infected in a given period of time, linked by a relation of contagion. Isolated infected agents are regarded as clusters of size unity and any cluster consisting of $s$ connected agents is an $s$-cluster. We borrow the notation from Stauffer's book on percolation theory~\cite{Stauffer} and define here $n_{s}=N_{s}/N$ as the number of $s$-clusters per agent, where $N_{s}$ is the number of clusters of size $s$ and $N$ the total number of agents in the system. For different values of $\lambda$, in the next section we present the results of the calculation of the first three moments of the cluster size distribution, namely $\sum_{s} n_{s}$, $\sum_{s} s n_{s}$, and $\sum_{s} s^{2} n_{s}$. These quantities give us, respectively, information about the total number of clusters, the fraction of infected agents and the mean size of clusters. In order to keep the analogy with the percolation calculation, we sum over all values of $s$ excluding the largest cluster ($S_{major}$). We also present the calculations of $F_{major}=S_{major}/N$, the fraction of agents that belong to the largest cluster, and $F_{IM}=N_{inf}/N$, the fraction of agents that are infected. \begin{figure} \unitlength 1mm \begin{center} \leavevmode {\includegraphics[width=5.75cm]{fig1_left.eps} \includegraphics[width=5.78cm]{fig1_right.eps}} \end{center} \caption{\protect Left: Fraction of infected individuals from surviving trials versus time at $\lambda=\lambda_{c}$, starting with half of the population infected. At the top, the results for $\rho=0.05$ and $\lambda=1.06$ and at the bottom $\rho=0.46$ and $\lambda=0.68$. System sizes $N = 32\times32$, $64\times64$, $128\times128$ (from top to bottom).
Right: Quasi-stationary fraction of infected agents versus $\lambda$ for the same densities (Top: $\rho=0.05$. Bottom: $\rho=0.46$).} \label{fig1} \end{figure} \section{Results} For a fixed density, we vary $\lambda$ (Eq.~\ref{eq:lambda}) by changing the time of infection ($\Delta t_{inf}$). Starting with half of the population infected, for rates of infection near $\lambda_{c}$, a given trial may end in the absorbing state after a few time steps or it may {\em survive}, fluctuating with a quasi-stationary fraction of infected agents, marked with windows on the left side of Fig.~\ref{fig1}. The calculations are made averaging over time in the {\em quasi-stationary state}, which is described by the surviving trials following an initial transient. The number of time steps of this transient depends on $\lambda$ and on the system size $L$ (see left side of Fig.~\ref{fig1}). The data here illustrate how the mean fraction of infected agents $F^{sv}_{IM}(t)$ (the superscript denotes an average restricted to surviving trials) approaches its stationary value $\bar{F}_{IM}(\lambda,N)$ (in the following, we write $\bar{F}_{IM}(\lambda,N)$ simply as $F_{IM}(\lambda)$). On the right side of the same figure we see the graph of $F_{IM}(\lambda)$, which becomes sharper with increasing system size. We analyze in detail the number of clusters for the two density values $\rho=0.05$ and $\rho=0.46$, which have critical rates of spreading $\lambda_{c}=1.06$ and $\lambda_{c}=0.68$, respectively. Note that at the critical rate $\lambda_c$, surviving trials tend to stationary values only in the limit $L \rightarrow \infty$. \\ The left side of Fig.~\ref{fig2} is shown only for pedagogical reasons: in order to illustrate how the clusters look in the quasi-stationary state, we show snapshots of the infection clusters for {\em different} system densities and the {\em same} rate of infection $\lambda=1.5$, here with $N=10\times10$.\\ \begin{figure} \unitlength 1mm \begin{center} \leavevmode {\includegraphics[width=6.25cm]{fig2_all.eps}} {\includegraphics[width=7.5cm]{fig2_right.eps}} \caption{\protect Left: Snapshots of cluster sizes of infected agents for systems with different densities: (a)$\rho=0.05$, (b)$\rho=0.20$, (c)$\rho=0.40$ and (d)$\rho=0.80$, in all cases $\lambda=1.5$. Right: Quasi-stationary fraction of infected agents varying $\lambda$ over three orders of magnitude (Average over $20$ realizations for $\rho=0.05$ and $N=32\times32$). The insets show the fraction of infected agents in the largest cluster (lower value) and the first moment of the cluster size distribution (upper value) vs. time, at $\lambda=1.08$, $\lambda=10.0$ and $\lambda=108.0$. } \label{fig2} \end{center} \end{figure} For $\rho=0.05$ and $\lambda \in [1,200]$, the right side of Fig.~\ref{fig2} shows the variation of $F_{IM}(\lambda)$ for $N=32\times32$, averaged over $20$ different realizations. The insets show the change in time of $F_{major}$ and $\sum_{s} n_{s}$, for only one realization with $\lambda=1.08$, $\lambda=10.0$ and $\lambda=108$. In contrast to percolation results, in this model there is no significant variation of $F_{major}$ with $\lambda$, and the relation $F_{major} \ll F_{IM}$ remains. Moreover, the number of clusters $\sum_{s} n_{s}$ grows considerably only near $\lambda_{c}$.
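As an illustration of this bookkeeping, a minimal sketch (our own, with hypothetical function and variable names, not the production analysis code) of the moment computation, excluding the largest cluster as described above, could read:
\begin{verbatim}
import numpy as np

# Given the sizes of all infection clusters in one snapshot, compute
# n_s = N_s / N and the first three moments, dropping the largest cluster.
def cluster_moments(cluster_sizes, n_agents):
    sizes = np.sort(np.asarray(cluster_sizes))[:-1]   # drop largest cluster
    s, counts = np.unique(sizes, return_counts=True)  # N_s for each size s
    n_s = counts / n_agents
    return n_s.sum(), (s * n_s).sum(), (s**2 * n_s).sum()

# e.g. clusters of sizes 1,1,2,3,7 among N=100 agents (7-cluster dropped):
print(cluster_moments([1, 1, 2, 3, 7], 100))  # approx (0.04, 0.07, 0.15)
\end{verbatim}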
\begin{figure} \unitlength 1mm \begin{center} \leavevmode {\includegraphics[width=13.0cm]{fig3.eps}} \end{center} \caption{\protect First three moments of the cluster size distribution, fraction of agents in the largest cluster ($F_{major}$) and fraction of infected agents ($F_{IM}$) vs. $\lambda$. Average over 50 trials, system size $N=64\times64$. Left: $\rho=0.05$. Right: $\rho=0.46$.} \label{fig3} \end{figure} \begin{figure} \unitlength 1mm \begin{center} \leavevmode {\includegraphics[width=6.75cm]{fig4_left.eps} \includegraphics[width=6.75cm]{fig4_right.eps}} \end{center} \caption{\protect Same results as Fig.~\ref{fig3}, plotted vs. $(\lambda-\lambda_{c})$. The solid lines are regressions of the form $m_{i}(\lambda-\lambda_{c})^{\beta}$ with $m_{i}$ the coefficient of the $i${\em th} moment. Left: $\lambda_{c}=1.06$, $\beta=0.66$, $m_{0}=0.321$, $m_{1}=2m_{0}$, and $m_{2}=6m_{0}$. Right: $\lambda_{c}=0.68$, $\beta=0.56$, $m_{0}=0.386$, $m_{1}=2.3m_{0}$, and $m_{2}=7.5m_{0}$} \label{fig4} \end{figure} In Fig.~\ref{fig3}, for $\rho=0.05$ and $\rho=0.46$, we plot the behavior of the cluster numbers near their respective $\lambda_{c}$. As the largest cluster remains small compared to the total number of agents ($S_{major} \ll N$), we have $F_{IM}(\lambda) \sim \sum_{s} s n_{s}$. Additionally, one can see that $\sum_{s} s n_{s}$ and $\sum_{s} s^{2}n_{s}$ show the same critical behavior as $F_{IM}(\lambda)$, plotted in detail in Fig.~\ref{fig4}. We observe that all the moments of the cluster size distribution present exactly the same critical behavior as $F_{IM}$, namely $\sim(\lambda -\lambda_{c})^{\beta}$, where $\beta$ depends on the density of the system. \section{Conclusions} This work showed that the cluster size distribution of infected individuals is described in terms of the spreading rate ($\lambda$) and the same exponents ($\beta$) previously known for the total mass of infection. Although the agents are free to move, there is a homogeneous size distribution of infected clusters at the critical rate of infection, and we did not find any critical exponent associated with the cluster sizes. Comparing with the traditional $SIS$ model on a static network, we confirm that mobility and spatial correlations change the value of the critical exponent $\beta$ of the fraction of infected population, and to the same extent the cluster size distribution of infection.
\section{Introduction}\label{sec:intro} The statistical fluctuations of the energy levels and the transition strengths measured in highly excited nuclei with excitation energy above the neutron threshold (several MeV) are well described by the random matrix theory \cite{RMT,Porter,Mehta}. For example, the nearest-neighbour level spacing distribution (NND) and the spectral rigidity (or $\Delta_3$ statistics) of the neutron resonance states follow the behaviour predicted by the random matrix theory for the Gaussian orthogonal ensemble (GOE) \cite{RMT,neutronres}. This seems to indicate that such excited nuclei, at least over a time scale associated with the observed energy interval, are an example of a chaotic quantal system, in the sense that GOE fluctuations generally characterize quantum systems which are chaotic in the classical limit \cite{billiard,nuclchaos}. The fluctuation properties at lower excitation energy are less well understood, although several extensive analyses of low-lying levels as well as of near-yrast high spin levels have been reported recently \cite{Abul,Shriner,Al,Sn,Garrett3,Garrett1,Garrett2}. Although the low-lying and low-spin levels generally show level spacing distributions which are intermediate between chaos (the GOE or Wigner limit) and order (the Poisson limit), one observes some systematic behaviour with respect to the mass number and the angular momentum \cite{Shriner}. In particular, it is remarkable that the NND in heavy deformed nuclei is the closest to the Poisson distribution, not only for the low-lying, low-spin levels \cite{Shriner}, but also for the high spin rotational levels lying near the yrast line \cite{Garrett3,Garrett1,Garrett2}. This suggests that both the Poisson and the GOE fluctuations coexist in rotating nuclei and that one should expect a transition from order to chaos with increasing intrinsic excitation energy $U$ (the relative excitation energy measured from the yrast line at given spin). In the present paper, we examine theoretically the level statistics of high spin states in rapidly rotating nuclei as a function of intrinsic excitation energy $U$. In particular, we investigate in detail the level statistics associated with the near-yrast states which may become accessible in future experiments. We limit ourselves to the very high spin region with $I \gtrsim 30$, where static pairing is generally quenched or even vanishes, because our model is not adequate to deal with strong pairing correlations. Although this makes it difficult to make a direct comparison with present experimental data, there are good reasons to expect that much more experimental information will be available in the near future. The high spin states near the yrast line in well deformed nuclei form rotational band structures, as evidenced by experiments. These rotational band states are usually well described by the cranked mean-field models \cite{Bengtsson-Frauendorf,Bengtsson-Ragnarsson,crank-rev}, in which the collective rotation is represented by uniform rotation along the axis of the largest moment of inertia (the axis perpendicular to the elongated direction). The intrinsic structure of a rotating nucleus is described in terms of the mean-field potential with the addition of the cranking term caused by the uniform rotation. Most observed rotational bands are based on intrinsic configurations with a few excited quasiparticles (or particles and holes) defined in the cranked mean-field Hamiltonian.
However, as the intrinsic excitation energy $U$ increases at a given spin, intrinsic configurations with many particles and holes ($n$p-$n$h) will show up and become progressively dominant. Accordingly, the level density increases significantly, reaching a value of around $10^2$ levels/MeV at intrinsic excitation energy $U \sim 1$ MeV above the yrast line in rare earth nuclei. One then expects that the residual two-body interaction begins to play an important role, mixing the $n$p-$n$h configurations, because the size of its matrix elements ($\sim 10$ keV) is of the same order as the mean level spacing. Note also that the phenomenon of rotational damping \cite{Lauritzen}, which sets in at around $U \sim 0.8$ MeV above the yrast line \cite{FAM}, is an important signature of the configuration mixing caused by the residual two-body interaction. The fluctuations of the energy levels will be sensitive to the configuration mixing among the $n$p-$n$h configurations. If the configuration mixing were absent, intrinsic excitations would be specified uniquely by the excited particles and holes. In such a situation, the level fluctuations may follow the Poisson distribution. On the other hand, once the residual two-body interaction is switched on, the $n$p-$n$h configurations interact with each other. If the residual interaction is so strong that many $n$p-$n$h configurations are admixed with complicated amplitudes, one expects that the level fluctuations obey the theory of random matrices. It is therefore important, in studying the level fluctuations as a function of intrinsic excitation energy, to take configuration mixing explicitly into account. We adopt a shell model approach, making use of a reasonable residual two-body interaction on top of a cranked mean field \cite{Aberg, Matsuo96}. Previous work with the cranking model \cite{Aberg} has already discussed some general features of the order to chaos transition, although it used a schematic residual interaction represented by a constant with random sign. We have recently shown that the cranked Nilsson model combined with the surface-delta interaction (SDI) \cite{Mozkowski,Faessler} can reproduce the overall features of rotational damping found in experiment \cite{Matsuo96, Matsuo93, Bracco}. In the present paper we adopt the same model, studying the excited levels lying up to about 2 MeV above the yrast line. We study in particular detail the states close to the yrast line, which are likely to be observed in near-future experiments. Statistical analyses of high spin levels in deformed nuclei on the basis of the interacting boson model \cite{ibm}, the interacting boson fermion model \cite{IBFM}, and the particle-rotor model \cite{Kruppa} have also been reported. These models, however, take into account only limited degrees of freedom ($sd$ collective bosons or high-$j$ nucleons) of the intrinsic excitations in deformed rotating nuclei. \section{Formulation}\label{sec:form} \subsection{The model} We start with the cranked Nilsson single-particle Hamiltonian \begin{equation} h_{crank} = h_{Nilsson} - \omega j_x \label{nilham} \end{equation} in order to define the single-particle basis in a rotating deformed nucleus. Here the quadrupole and hexadecapole deformations are considered. We do not include the static pairing potential in the mean field.
This may be justified for the high spin region ($I \gtrsim 30$) which we are mostly concerned with, since the pairing gap is usually reduced, or even vanishes, due to the rotational perturbation (Mottelson-Valatin effect) \cite{Garrett-pair,Shimizu,Shimizu-Oak}. The eigen-solutions of the cranked Nilsson single-particle Hamiltonian define an adiabatic basis as a function of the rotational frequency $\omega$. However, since the adiabatic orbits sometimes undergo avoided crossings, which cause abrupt changes of the basis wave functions under small changes in $\omega$, we instead use a diabatic single-particle basis, which is constructed by removing the small interactions causing the repulsions at the avoided crossings. Putting $N$ neutrons and $Z$ protons in the diabatic single-particle basis, shell model many-body configurations (labeled by $\mu$) are generated: \begin{equation} \ket{\mu (I)} = \prod_{{\rm occupied}\ i \ {\rm in} \ \mu} a_i^{\dag} \ket{-}. \label{mu} \end{equation} In Eq.~(\ref{mu}), $a_i^{\dag}$ denotes the nucleon creation operator for an occupied diabatic single-particle orbit $i$, which is defined at an average rotational frequency $\omega_I$ corresponding to the given angular momentum $I$. We include all the single-particle orbits within an interval of 3.0 MeV below and above the Fermi surface. The shell model basis $\{ \ket{\mu(I)} \}$ includes the configuration in which the single-particle orbits up to the Fermi surface are fully occupied, as well as all possible $n$p-$n$h configurations with respect to the fully occupied one. The energy of a shell model configuration $\ket{\mu(I)}$ is given, following the standard cranked Nilsson-Strutinsky prescription, by \begin{equation} E_{\mu}(I) = E_{\mu}^{Nils}(I) - E^{smooth}(I) + E^{RLD}(I) \label{Str} \end{equation} where $E_\mu^{Nils}(I)=E'_\mu(\omega) +\omega J_{x,\mu}(\omega)$ with the angular momentum constraint $J_{x,\mu}(\omega) = I$ on the rotational frequency $\omega$. Here $E'_\mu(\omega)=\sum_{i \in \mu} e'_i(\omega)$ and $J_{x,\mu}(\omega)=\sum_{i \in \mu} j_{x,i}(\omega)$ are the total routhian and the expectation value of the angular momentum $J_x$ of the shell model basis $\mu$, respectively. Since we use the diabatic single-particle basis, which depends only weakly on the rotational frequency, the energy expression can be accurately approximated locally by \begin{equation} E_{\mu}^{Nils}(I) = E'_{\mu}(\omega_I) + \omega_I I + {(I - J_{x,\mu}(\omega_I))^2 \over 2 J^{(2)}_{\mu}} \label{eng} \end{equation} referring to the average rotational frequency $\omega_I$. Here $J^{(2)}_{\mu}$ is the dynamical moment of inertia of the configuration. The deviation $\left | J_{x,\mu}(\omega_I) - I \right |$ in the angular momentum expectation value is less than 5 at spin $I=50$ for most configurations in the present calculation. Although the Strutinsky smoothed energy $E^{smooth}(I)$ and the rotating liquid drop energy $E^{RLD}(I)$ correct the absolute excitation energy, they do not affect the level statistics discussed in the present paper. We then introduce a two-body force, mixing the shell-model configurations. We adopt the surface delta interaction (SDI) \cite{Mozkowski} \begin{equation} \label{eq:SDI} v(1,2)^{\rm angle} = - 4\pi V_0 \sum_{LM}Y^{*}_{LM}(\theta_{t,1} \phi_{t,1}) Y_{LM}(\theta_{t,2} \phi_{t,2}) \end{equation} where $(\theta_{t}, \phi_{t})$ are the angle variables in the stretched coordinates.
The strength parameter $V_0$ includes the radial matrix elements, and we use the strength $V_0=27.5/A$ MeV given by Ref.~\cite{Faessler}, which is the same value used for the study of rotational damping in ${}^{168}$Yb\ \cite{Matsuo96,Matsuo93}. The shell model Hamiltonian is given by \begin{equation} H(I)_{\mu\mu'} = E_{\mu}(I) \delta_{\mu\mu'} + V(I)_{\mu\mu'} \end{equation} where $V(I)_{\mu\mu'}$ denotes the matrix elements of the residual two-body interaction of the SDI. The Hamiltonian is diagonalized to obtain the energy eigenstates \begin{equation} \ket{\alpha(I)} = \sum_\mu X^{\alpha}_{\mu}(I) \ket{\mu (I)} \end{equation} which are admixtures of the basis configurations $\{ \ket{\mu (I)}\}$, as well as their energy levels $\{ E_\alpha(I)\}$. The diagonalization is done separately for each $I^\pi$, truncating the basis to the lowest 1000 $\ket{\mu}$ basis states. The resulting lowest 300 states (covering the region up to $U \sim 2.4$ MeV) are rather stable against the truncation of the basis. For further details of the model, we refer to Ref.~\cite{Matsuo96}. In the present paper, we focus on the rare-earth nuclei, and in particular we consider 40 nuclei in the $A = 160-174$ region, listed in Table~\ref{tabdef}, for which a prolate deformed shape stable up to very high spins is suggested by potential energy surface calculations \cite{PES,Werner}. We adopt the equilibrium deformation parameters taken from Ref.~\cite{Def-parm}, as given in Table~\ref{tabdef}, which are similar to those calculated in Ref.~\cite{PES}. In order to make a statistically meaningful analysis, we collect the spacings taken from a certain spin interval in all the 40 nuclei, and we will not discuss the dependence on individual nuclei, spins, and parities. In the following, we use the parity and the signature quantum number $(\pi,\alpha)$ to classify the energy levels of the total system. The signature $\alpha$ is related to the total spin $I$ through the relation $I = I_0 + \alpha$, with $\alpha=0,1$ for even-$A$ systems and $\alpha=\pm1/2$ for odd-$A$ systems. We sometimes use the even integer spin $I_0$ and the signature $\alpha$ in place of the ``true'' spin $I$ when we specify spin intervals. \begin{table} \begin{center} \begin{tabular}{|c|c c|} \hline & $\epsilon_2$ & $\epsilon_4$ \\ \hline $^{160,161}$Dy $^{161,162}$Ho & 0.248 & -0.016 \\ $^{162,163}$Dy $^{163,164}$Ho & 0.261 & -0.007 \\ $^{164,165}$Dy $^{165,166}$Ho & 0.267 & 0.003 \\ $^{162,163}$Er $^{163,164}$Tm & 0.245 & -0.009 \\ $^{164,165}$Er $^{165,166}$Tm & 0.258 & 0.001 \\ $^{166,167}$Er $^{167,168}$Tm & 0.267 & 0.012 \\ $^{166,167}$Yb $^{167,168}$Lu & 0.246 & 0.004 \\ $^{168,169}$Yb $^{169,170}$Lu & 0.255 & 0.014 \\ $^{170,171}$Yb $^{171,172}$Lu & 0.265 & 0.025 \\ $^{172,173}$Yb $^{173,174}$Lu & 0.269 & 0.036 \\ \hline \end{tabular} \caption{\label{tabdef} The quadrupole and hexadecapole deformation parameters $\epsilon_2$ and $\epsilon_4$ used in the present calculations.}\end{center} \end{table} \subsection{Level statistics} In order to perform the statistical analysis of the energy level fluctuations, one must take into account the fact that the level density, and hence the level spacing, depends strongly on the intrinsic excitation energy $U$. In this situation, it is necessary to separate the local level fluctuations from the overall excitation energy dependence of the level spacings. For that purpose, we adopt the unfolding procedure \cite{unfolding,billiard} in a particular form which follows Shriner et al. \cite{Shriner}.
The unfolding procedure measures the local level fluctuations with respect to a smooth average level density. We assume that the average level density is represented by the constant temperature formula \cite{CTF} \begin{equation} \bar{\rho}(E)={1 \over T} \exp\left({E-E_0 \over T}\right) \label{rhoctf} \end{equation} for each spectrum at a given $I^\pi$. To determine the parameters in the formula, we make a fit to the staircase function which represents the cumulative number of levels below energy $E$, \begin{equation} N(E)=\int_{-\infty}^{E} \rho(E')dE' = \sum_{\alpha} \theta(E-E_\alpha) \ , \label{staircase} \end{equation} with a smooth function corresponding to the average level density $\bar{\rho}(E)$, \begin{equation} \bar{N}(E)=\int_{E_0}^{E}\bar{\rho}(E')dE' + N_0 =\exp\left({E-E_0 \over T}\right) -1 + N_0\ \ , \label{staircaseX} \end{equation} by minimizing the quantity \begin{equation} G(T,E_0,N_0)= \int_{E_{min}}^{E_{max}} \left( N(E) -\bar{N}(E) \right)^2 dE \end{equation} with respect to the parameters $T, E_0$ and $N_0$ for each spectrum at a given $I^{\pi}$. Here the energy boundaries $E_{min}$ and $E_{max}$ are the energies of the lowest and the 300-th levels in each spectrum. When we discuss the level statistics for the lowest 20 levels, however, we obtain a better fit by including only the lowest 30 levels. The unfolded spectra $\{ x_\alpha ; \alpha = 1,2,... \}$ are then derived for each $I^{\pi}$ by the transformation \begin{equation} x_\alpha = \bar{N}(E_\alpha). \end{equation} The unfolded spectra have a constant average density $\bar{\rho}_x(x) =1$, provided that the constant temperature formula fits the average level density well. In order to analyze the level fluctuations, we calculate the nearest neighbour level spacing distribution (NND), which is also often used in experimental analyses. We calculate the distribution $P(s)$ for the unfolded spectra, where $s= x_{\alpha+1} -x_\alpha$ is the spacing between neighbouring levels with the same $I^\pi$. By the unfolding procedure, the spacings are normalized such that $\langle s \rangle = 1$. The distribution is represented as a histogram. The NND is calculated for various ensembles of level spacings which are taken from different intervals in excitation energy. The obtained distribution is fitted with the Brody distribution \cite{RMT} \begin{equation} P_w (s) = (1+w)\alpha s^w\exp(-\alpha s^{1+w}),\ \ \alpha = \left(\Gamma\left({ 2+w \over 1+w}\right) \right)^{1+w} \ \ , \label{Brody} \end{equation} parametrized by the Brody parameter $w$. This family of distributions is convenient because the Brody parameter $w=1$ produces the Wigner distribution, while the value $w=0$ corresponds to the Poisson distribution. (Note that the theory of GOE random matrices leads to $w=0.953$ \cite{RMT}, which is not distinguishable from the Wigner limit in the present analysis). The value of $w$ is determined by minimizing the quantity \begin{equation} S(w) = \sum_i \left( {P(i) - P_{w}(i)} \over {\sigma(i)} \right)^2 \ , \end{equation} where $P(i)=N(i)/N$ is the probability in the $i$-th bin $[s_i, s_i+\Delta s]$ of the calculated NND ($N(i)$ being the number of spacings in bin $i$ out of a total of $N$ spacings). $P_w(i)=\int_{s_i}^{s_i+\Delta s} P_w(s')ds'$ is the corresponding probability in the Brody distribution. The statistical error is estimated as $\sigma(i) = \sqrt{N(i)}/N$ for $N(i) > 0$ and $\sigma(i) = 1.15/N$ for $N(i) = 0$, assuming a multinomial distribution.
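The unfolding and the Brody fit described above admit a compact numerical implementation. The following minimal sketch (illustrative, not the code used for the present calculations) fits $\bar N(E)$ to the staircase of a single placeholder spectrum, unfolds it, and extracts $w$ from the binned NND; the fitting routines from `scipy.optimize` are one convenient choice among several.

```python
# Minimal sketch of the unfolding and Brody-parameter extraction described above,
# for one placeholder spectrum at fixed I^pi (all parameter values illustrative).
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.special import gamma as Gamma

def unfold(E):
    """Fit N_bar(E) = exp((E - E0)/T) - 1 + N0 to the staircase, return x = N_bar(E)."""
    grid = np.linspace(E[0], E[-1], 2000)
    staircase = np.searchsorted(E, grid, side="right")        # N(E) on the grid
    dE = grid[1] - grid[0]
    def G(p):                                                 # G(T, E0, N0) as above
        T, E0, N0 = p
        if T <= 0:
            return np.inf
        return np.sum((staircase - (np.exp((grid - E0) / T) - 1 + N0)) ** 2) * dE
    T, E0, N0 = minimize(G, x0=(0.35, E[0], 1.0), method="Nelder-Mead").x
    return np.exp((E - E0) / T) - 1 + N0

def brody_parameter(s, bins=20, smax=4.0):
    """Extract w by minimizing S(w) over the binned NND of the spacings s."""
    N = len(s)
    P, edges = np.histogram(s, bins=bins, range=(0.0, smax))
    P = P / N
    sigma = np.where(P > 0, np.sqrt(P / N), 1.15 / N)         # statistical errors
    def S(w):
        a = Gamma((2 + w) / (1 + w)) ** (1 + w)
        Pw = np.diff(1 - np.exp(-a * edges ** (1 + w)))       # integrated Brody PDF per bin
        return np.sum(((P - Pw) / sigma) ** 2)
    return minimize_scalar(S, bounds=(0.0, 1.0), method="bounded").x

# Placeholder spectrum roughly consistent with a constant-temperature level density
rng = np.random.default_rng(1)
E = np.sort(0.35 * np.log(np.arange(1, 301) + rng.uniform(0, 1, 300)))
x = unfold(E)
print("Brody parameter w =", round(brody_parameter(np.diff(x)), 2))
```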
We also calculate the ensemble average of the $\Delta_3$ statistics \cite{delta3,RMT}, \begin{equation} \bar{\Delta}_3(L)= \left\langle{ 1\over L} \mathop{\min}_{A,B} \int_{x}^{x+L} \left[N_x(x') -Ax'-B\right]^2 dx'\right\rangle \end{equation} i.e., the spectral rigidity. Here $N_x(x)=\sum_\alpha\theta(x-x_\alpha)$ is the staircase function for the unfolded spectra, and the average $\langle ... \rangle$ is calculated over the spectra in a given ensemble and over the intervals $[x,x+L]$, $[x+L/2,x+3L/2]$, ... in a spectrum \cite{billiard}. For the Poisson distribution of levels, \begin{equation} \bar{\Delta}_{3,{\rm Poisson}}(L)=L/15 \ \ , \end{equation} while for the GOE distribution, \begin{equation} \bar{\Delta}_{3,{\rm GOE}}(L) \approx {1 \over \pi^2} (\ln L - 0.0687) \ \ . \end{equation} \section{Results and Discussion}\label{sec:results} \subsection{Order to chaos transition}\label{sec:nnls} We first discuss how the level statistics depends on the intrinsic excitation energy $U$, aiming at extracting the overall dependence on $U$ in a wide interval ranging from $U=0$ (at the yrast line) to $U \sim 2$ MeV. For that purpose, we calculate the NND and $\Delta_3$ for the lowest 300 levels in each spectrum, grouping them in bins of levels. The intrinsic excitation energy of the binned levels approximately covers the region up to $U \sim 2.4$ MeV. \begin{figure} \centerline{\psfig{figure=fig1.eps,height=8cm,angle=-90}} \caption{\label{fig1} The Brody parameter extracted from the NND for energy bins containing the first to 5-th, 6-th to 10-th, 11-th to 20-th, 21-st to 30-th, 31-st to 40-th, ..., and 291-st to 300-th levels of each spectrum. The result is plotted as a function of the intrinsic excitation energy covered by the bins. The solid, dotted, and dot-dashed lines correspond to the different spin intervals $I_0=32-50, 20-30, 52-60$, respectively. } \end{figure} \begin{figure} \centerline{\psfig{figure=fig2.eps,height=8cm,angle=-90}} \caption{\label{fig2} The NND for energy bins containing the first to 5-th, 41-st to 50-th, and 291-st to 300-th levels of each spectrum within the spin interval $I_0=32-50$. } \end{figure} The Brody parameter extracted from the NND for the spin interval $I_0 = 32-50$ is depicted in Fig.~\ref{fig1}. The Brody parameter increases monotonically with increasing intrinsic excitation energy $U$. The NND for the lowest bin (first to 5-th levels at each $I^\pi$) has the Brody parameter $w=0.301\pm0.012$. The corresponding NND, shown in Fig.~\ref{fig2}(a), is much closer to the Poisson than to the Wigner distribution, although one can also notice a small deviation from the Poisson distribution. For the levels from the 10-th to the 40-th, the Brody parameter is about 0.5, which is midway between the Poisson and the Wigner distributions, as can also be seen from the NND plotted in Fig.~\ref{fig2}(b). As the intrinsic excitation energy increases further, the NND approaches the Wigner limit; the bin including the levels from the 291-st to the 300-th has $w=0.888 \pm 0.012$, which is close to the Wigner limit $w=1.0$ or the GOE value $0.953$ (see also the NND shown in Fig.~\ref{fig2}(c)). These results indicate that the transition from order (Poisson fluctuations of the levels) to chaos (Wigner and GOE fluctuations) takes place gradually with increasing intrinsic excitation energy, until the chaotic limit is nearly achieved at around $U \sim 2$ MeV above the yrast line.
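The spectral rigidity defined above can likewise be evaluated directly on an unfolded spectrum. In the sketch below (again illustrative; the placeholder spectrum is Poissonian, so the result should track $L/15$), the minimization over $A$ and $B$ is done by an ordinary least-squares line fit on a dense grid within each window.

```python
# Minimal sketch of Delta_3(L) on an unfolded spectrum `x` (sorted levels).
import numpy as np

def delta3(x, L, n_grid=400):
    """Average over windows [a, a+L] of (1/L) min_{A,B} int [N(x') - Ax' - B]^2 dx'."""
    vals = []
    for a in np.arange(x[0], x[-1] - L, L / 2.0):     # windows shifted by L/2
        grid = np.linspace(a, a + L, n_grid)
        N = np.searchsorted(x, grid, side="right")    # staircase on the window
        A, B = np.polyfit(grid, N, 1)                 # least-squares straight line
        vals.append(np.mean((N - (A * grid + B)) ** 2))
    return np.mean(vals)

rng = np.random.default_rng(2)
x = np.cumsum(rng.exponential(1.0, size=1000))        # placeholder Poisson spectrum
for L in (5.0, 10.0, 20.0):
    print(L, round(delta3(x, L), 2), "Poisson:", round(L / 15.0, 2),
          "GOE:", round((np.log(L) - 0.0687) / np.pi ** 2, 2))
```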
This dependence of the level statistics on excitation energy confirms the results of a previous analysis \cite{Matsuo93} performed with the same model, but without transforming the energy from the rotating frame to the laboratory frame (the last two terms in Eq.~(\ref{eng}) were neglected). \begin{figure} \centerline{\psfig{figure=fig3.eps,height=7cm,angle=-90}} \caption{\label{fig3} The spectral rigidity $\bar{\Delta}_3(L)$ calculated for different energy bins containing 20 or 50 levels in each spectrum with fixed $I^\pi$ within the spin interval $I_0=32-50$, plotted with solid lines. } \end{figure} It is interesting to note the implications of the results for the $\Delta_3$ statistics (Fig.~\ref{fig3}). The GOE limit is obtained only for $L$ values up to some value $L_{\rm max}$, and it is found that $L_{\rm max}$ increases with increasing excitation energy. For the bin at the highest studied excitation energy (\#251 --- \#300, $U \sim 2.3$ MeV), we find that $L_{\rm max}\sim 6$. This implies that an energy eigenstate in this interval follows the GOE correlation only with approximately the ten closest lying states. Thus, the GOE behaviour seen in the NND for the energy levels in this interval (Fig.~\ref{fig2}(c)) indicates a chaotic behaviour of local nature. The NND and $\Delta_3$ carry different types of information on short-range and long-range correlations, as discussed in \cite{Pe95}. Since the spreading width $\Gamma_\mu$ of the $n$p-$n$h shell model basis states is finite, $L_{\rm max}$ could be related to $\rho \Gamma_\mu$ ($\rho$ being the level density) \cite{Aberg,Pe95}, although an estimate based on $L_{\rm max} \approx 2.5 \rho \Gamma_\mu$, which is found in a random matrix model \cite{Pe95}, gives a much larger value than the calculated $L_{\rm max} \sim 6$. Non-generic behaviours of $\Delta_3$ have also been discussed in connection with the shortest periodic orbits \cite{Berry} and the Lyapunov exponent \cite{Arve} in semiclassical analyses, whose relation to the present model is not yet clear. In Fig.~\ref{fig1}, we also show the Brody parameters extracted from ensembles of binned levels taken from the lower and higher spin intervals $I_0 =20-30$ and $I_0 = 52-60$. No significant spin dependence is observed. This is in contrast with the interacting boson fermion model \cite{IBFM} and the particle-rotor model \cite{Kruppa}, which predict a spin dependence caused by the alignment of the high-$j$ orbitals. Note that, besides the high-$j$ orbitals, we include all the other single-particle orbits near the Fermi surface, which do not necessarily align in the considered spin interval. It is interesting to compare our results with the previous theoretical analysis by \AA berg \cite{Aberg}, who used essentially the same model except for the matrix elements of the two-body interaction, which were schematically approximated by a constant with random sign. When the root-mean-square value of the matrix elements was 15 keV, the $\Delta_3$ statistics reached the GOE limit in the excitation energy range $U=1.5 - 2.0$ MeV in ${}^{168}$Yb, a value lower than in the present model. The difference can be traced back to the statistical properties of the two-body matrix elements.
We find that the statistical distribution of the off-diagonal matrix elements of the SDI force is strongly peaked at zero matrix element compared with a Gaussian distribution \cite{Matsuo96,Matsuo93}, indicating a selectivity in the two-body matrix elements related to the intrinsic nature of the SDI and of the cranked Nilsson single-particle orbits. Because of this selectivity, the onset of chaos in our model takes place at a higher excitation energy, although the average root-mean-square value of the off-diagonal matrix elements is about 19 keV in the present calculation. \subsection{Level statistics in the near-yrast region}\label{sec:near} The bin including the lowest 5 levels for each $I^\pi$ covers the interval up to $U \sim 0.7$ MeV above the yrast line. The calculated levels in this energy region mostly form rotational band structures connected by strong stretched E2 transitions \cite{Matsuo96}. These levels are probably those which will be resolved in experiments in the near future, while it will be much harder to resolve excited levels lying in the region of rotational damping ($U \gtrsim 0.8$ MeV). In fact, up to around 10-20 rotational bands are observed in a few rare-earth nuclei \cite{Fitz168Yb,Nord164Yb} (although only a few rotational bands are identified at the highest spins $I \sim 40$). In this subsection, we discuss in detail the level statistics associated with these near-yrast states. For this purpose, we introduce a strict ordering of the spacings according to the excitation energy above the yrast line. The strict ordering $N$ encompasses the four spectra having different parity $\pi$ and signature $\alpha$ for a given reference spin $I_0$ (even integer), that is, the spectra with $I^\pi= I_0^\pm, (I_0+1)^\pm$ for even-$A$ systems and those with $I^\pi= (I_0\pm1/2)^\pm$ for odd-$A$ systems. More precisely, we first define a reference energy $E_{ref}(I)$ by an envelope of the yrast levels, i.e., $E_{ref}(I) = \min \left\{E_{lowest}(I), {E_{lowest}(I+1)+ E_{lowest}(I-1) \over 2} \right\}$, in order to compare the four spectra. We then assign the label $N$ to the levels in the four spectra, counting from the lowest according to the excitation energy $E(I) - E_{ref}(I)$ measured from the reference. By this definition the $N=1$ level represents the ``strictly yrast'' level, in the sense that it refers to only one among the four lowest levels defined separately for the four parity and signature quantum numbers, while the other three are treated as excited levels ($N>1$) with respect to the strictly yrast level. Collecting the spacings of the $N$-th rotational band (these are the spacings between the $N$-th level and the next excited level with the same $I^\pi$) from the spin range $I_0 = 32-50$ in the 40 rare-earth nuclei, we calculate the NND and extract the Brody parameter for each $N$. We also performed the same analysis for the spin intervals $I_0=20-30$ and $I_0=52-60$ in order to study a possible spin dependence, although the present model may not be very realistic for the lower spin interval ($I_0=20-30$) because of the problem of the pairing correlation. \begin{figure} \centerline{\psfig{figure=fig4.eps,height=9cm,angle=-90}} \caption{\label{fig4} The Brody parameter extracted from the NND associated with the lowest 15 near-yrast states. See text for the definition of the strict ordering $N$ of the states. Different symbols represent the spin intervals $I_0=32-50$, $I_0=20-30$, and $I_0=52-60$.
} \end{figure} \begin{figure} \centerline{\psfig{figure=fig5.eps,width=14cm,angle=-90}} \caption{\label{fig5} The NND associated with the lowest, third, and tenth (strict order $N$=1,3,10) levels within the spin interval $I_0=32-50$, for (a), (b), and (c), respectively. (d,e,f) The same as (a,b,c), except that the residual SDI interaction is neglected. } \end{figure} The extracted Brody parameter is plotted in Fig.~\ref{fig4} as a function of the strict level order $N$, for the lowest 15 levels. The spacings analysed in Fig.~\ref{fig4} belong mostly to the first energy bin adopted in the previous subsection, which included the lowest 5 levels at each $I^\pi$. The Brody parameters plotted in Fig.~\ref{fig4} on average agree with the value $w \sim 0.3$ for the lowest bin in Fig.~\ref{fig1}. It is seen in Fig.~\ref{fig4} that the Brody parameter gradually decreases as the levels become closer to the yrast line, except for $N$=1. For the lowest few levels, the Brody parameter is about 0.1-0.2, which is close to the Poisson limit (the corresponding NND's are shown in Fig.~\ref{fig5}(b,c) for the third and tenth lowest levels, $N=3$ and $10$). This indicates that the excitation energy dependence shown in Fig.~\ref{fig1} continues down to the lowest few states near the yrast line. However, a remarkable deviation from the overall excitation energy dependence is clearly noticed for the lowest point at $N=1$, i.e., for the spacings between the yrast band and the next excited band with the same $I^\pi$, for which $w \approx 0.4-0.7$. The NND for $N=1$ is shown in Fig.~\ref{fig5}(a). It is also seen that the spin dependence is not strong, although at $N=1$ the Brody parameter at lower spins shows a more significant deviation from the Poisson limit, becoming close to the Wigner limit. \subsection{Level spacing statistics at yrast}\label{sec:yr} In order to study the origin of the deviation of the first spacings from the Poisson distribution, we perform calculations neglecting the residual two-body interaction, in which case all the states become pure many-particle many-hole mean-field configurations. The NND and the extracted Brody parameter are compared in Figs.~\ref{fig5} and \ref{fig6} with those obtained by inclusion of the residual interaction. \begin{figure} \centerline{\psfig{figure=fig6.eps,height=12cm}} \caption{\label{fig6} The Brody parameter extracted from the NND associated with the lowest 15 near-yrast states in the strict order, obtained from the mean-field calculation without the residual interaction, for the different spin intervals $I_0=20-30$, $I_0=32-50$ and $I_0=52-60$ (points joined by dashed curves). It is compared with the result with the residual interaction (points joined by solid curves; see also Fig.~\ref{fig4}). } \end{figure} The Brody parameter of the lowest spacings ($N=1$) is essentially unaffected by the inclusion of the residual interaction. This indicates that the deviation from the Poisson limit for $N=1$ does not originate from the residual interaction, but from the mean field. On the other hand, Figures \ref{fig5} and \ref{fig6} also show that for the higher spacings above $N \sim 3$, the Brody parameter of the pure mean-field calculation converges to the Poisson limit $w=0$, except for $I_0 = 20-30$ \footnote{For the rotational frequencies corresponding to the spins $I_0=20-30$, many of the cranked Nilsson routhian orbits show very little signature splitting.
This causes frequent near-degeneracies in the $n$p-$n$h configurations, leading to an enhancement of the NND at small spacings and producing negative values of the Brody parameter for $N \gtrsim 3$ in the $I_0=20-30$ case in Fig.~\ref{fig6}.}, while the Brody parameter and the NND calculated with the residual two-body interaction deviate from the Poisson limit. This explicitly indicates that, contrary to the case of $N=1$, the deviation from the Poisson limit at $N \gtrsim 2$ arises from the residual two-body interaction. We look for the origin of the special feature of the very yrast $N=1$ spacings in connection with the single-particle level structure of the cranked Nilsson mean field. To this end, we first remark that for the levels near the yrast line, the configurations are only weakly mixed by the residual two-body force, and most of them have essentially independent-particle configurations. In particular, the excited states near the yrast line often have a 1p1h configuration with respect to the yrast configuration. This means that the relatively large value of the Brody parameter extracted for the yrast levels may not be directly related to the mixing caused by the residual interaction. Furthermore, when the spin is not very large, the angular momentum alignment of the intrinsic excitation is relatively small compared to the level spacings between the states in the yrast band and the next excited states with the same quantum numbers; it is found that the last term in Eq.~(\ref{eng}), representing the alignment effect on the energy, is at least a factor $\gtrsim 2$ smaller than the average level spacing $D \sim 350$ keV associated with the $N=1$ yrast rotational band. Under these conditions, the relative energy of the first excited state having the same $I^\pi$, measured from the yrast state, can be approximated by the 1p1h excitation energy in the single-particle routhian spectrum. Namely, $E_{next}(I^\pi) - E_{yrast}(I^\pi) \sim e'_{p,\alpha\pi}(\omega_I) - e'_{h,\alpha\pi}(\omega_I)$, where $e'_{p,h}$ are the single-particle routhians of the involved particle and hole. The particle and hole orbits necessarily have the same quantum numbers (signature $\alpha$ and parity $\pi$). On the other hand, spacings between excited states (yrare levels) do not bear such a relation to the single-particle spacings. This is illustrated in Fig.~\ref{fig7}, which shows examples of the main mean-field components of the lowest excitation within each parity-signature set of states. It is clearly seen how the excitation from the yrast state in this typical case proceeds by changing the orbit of one particle, keeping its $(\pi,\alpha)$ (as shown in the left panel of Fig.~\ref{fig7}). For yrare levels, excitations relative to the Fermi surface are already present, and the lowest excitation starting from a yrare state will most often proceed by a 2p-2h excitation which connects orbitals with different $(\pi,\alpha)$, as shown in the right panel. \begin{figure} \centerline{\psfig{figure=fig7.eps,height=8cm,angle=-90}} \caption{\label{fig7} Main mean-field configurations of the lowest states with $(\pi,\alpha)=(-,1)$ and $(+,0)$ in ${}^{168}$Yb\ at $\omega=0.319$, corresponding to $I=30,31$. The blobs represent the occupied orbitals in the main configuration of the lowest state for each $(\pi,\alpha)$. At this spin, the yrast state belongs to $(-,1)$. The arrows represent the excitations involved in the second lowest state in each $(\pi,\alpha)$.
The solid, dotted, dashed and dot-dashed lines represent the cranked Nilsson single-particle orbits with $(\pi,\alpha)=(+,1/2),(+,-1/2),(-,1/2)$, and $(-,-1/2)$, respectively. } \end{figure} In the previous subsection, we found that the special feature seen for the yrast bands becomes more prominent as the spin decreases. This feature is consistent with the above interpretation. In addition, we find that there exists an odd-even effect for $N \lesssim 10$ at the low spins $I_0=20-30$; the level statistics for odd-odd nuclei is very close to the Poisson distribution, while even-even or odd-$A$ nuclei show a significant deviation from the Poisson distribution for $N=1$. This also indicates that the level spacings associated with the very yrast states reflect the single-particle level spacings, since the odd-even effect can arise from the fact that many cranked Nilsson routhian orbits retain a two-fold degeneracy (the signature splitting is small) at low rotational frequency. These considerations lead us to investigate the spacing distribution of the {\it single-particle levels} in the cranked Nilsson potential. Figure \ref{fig8} shows the distribution of the level spacings $e'_{p,\alpha\pi}(\omega_I) - e'_{h,\alpha\pi}(\omega_I)$ between the hole and particle orbitals having the same parity and signature quantum numbers, which correspond to the lowest 1p1h excitations, for all 40 nuclei and rotational frequencies corresponding to the spins $I_0=30,32,\ldots,50$. We have not applied the unfolding procedure, since the relevant single-particle orbits lie only in a limited region around the neutron and proton Fermi surfaces ($N =94-105$, $Z=66-71$) of the cranked Nilsson spectrum. It is noticed in Fig.~\ref{fig8} that the spacing distribution is concentrated around the average spacing ($\langle D \rangle \sim 400$ keV) and that there are few spacings smaller than 200 keV, indicating that degeneracies among orbits with the same quantum numbers happen only rarely. This in fact comes from the nature of the cranked Nilsson spectrum (and is believed to hold also for other models, such as the Woods-Saxon potential). One of the relevant properties is that a large part of the deformed mean field is of harmonic-oscillator type, and that the quantum spectrum of the oscillator shows strong level repulsion while the corresponding classical motion is integrable. With a deformation that is not very large ($\epsilon \lesssim 0.3$), the single-particle orbits around the Fermi surface belong to a single major oscillator shell, provided that the parity and the kind of particle are fixed. Taking the neutron spectrum as an example, the negative-parity orbits are dominated by those with the total oscillator quantum number $N_{osc}=5$. Because of the mean-field deformation, the orbits having different $n_3$ (the oscillator quanta along the deformation axis) are split in energy, and this makes degeneracies among orbits with different $n_3$ asymptotic quantum numbers rare. Furthermore, the $l^2$ and $ls$ terms of the mean field cause splittings among the orbits having the same $n_3$. The positive-parity neutron orbits near the Fermi surface are $i_{13/2}$ orbits, and because of the deformation splitting, the $i_{13/2}$ orbits with fixed signature are spaced at finite intervals at any rotational frequency. Therefore degeneracy among the $i_{13/2}$ orbits never happens.
An additional mechanism arises from the fact that the nuclear mean field favours an equilibrium deformation at which the shell energy is lowered, implying that degeneracies of single-particle orbits at the Fermi surface are unfavoured. All these mechanisms favour a Wigner-like distribution of the single-particle spacings at the Fermi surface. Consequently, there exist only a few cases of small spacings, as seen in Fig.~\ref{fig8}, especially for the low spin region $I_0 < 30$. At higher spins (i.e., at high rotational frequency), some specific orbits with large rotational alignment, e.g., proton orbits stemming from $h_{9/2}$ and $i_{13/2}$, intrude into the Fermi surface region around $I\sim 40-50$ and cross sharply with other orbits. Small spacings associated with these highly aligned orbits are present in the distributions shown in Fig.~\ref{fig8}(b) for $I_0 = 32-50$ (and slightly also for $I_0=52-60$), but this does not greatly enhance the probability of small spacings, and the distribution remains Wigner-like. \begin{figure} \begin{minipage}[t]{6cm} \psfig{figure=fig8a.eps,width=7cm,angle=-90} \end{minipage} \hskip 1cm \begin{minipage}[t]{6cm} \psfig{figure=fig8b.eps,width=6cm,angle=-90} \end{minipage} \caption{\label{fig8} (a) The distribution of the spacings of the cranked Nilsson single-particle orbits for the spin interval $I_0=20-60$. The sampling is described in the text. The dotted line represents the distribution which is obtained when the adiabatic basis is adopted. There is no significant difference between the diabatic and adiabatic bases. (b) The same as (a), but the histogram bins are defined with the spacing itself instead of the normalized spacing, and the spin interval is subdivided into $I_0=20-30$, $I_0=32-50$ and $I_0=52-60$. } \end{figure} Consequently, the intrinsic nature of the cranked single-particle spectrum specifically affects the level spacing distribution associated with the yrast band at $N=1$. Figure \ref{fig6} indicates that this remains true to some extent even for the very high spin region with $I=30-60$. By the same token, the present analysis suggests that the singular behaviour of the lowest spacing could be stronger at lower spins. It should be remarked, however, that the present model is not very accurate for describing the near-yrast rotational bands at lower spins, since the pairing correlation, which is important at low spins, is not well taken into account. Thus, the results of the present analysis cannot readily be compared to experiments for the lower spins ($I \lesssim 30$). We have here related the angular momentum dependence of the spacings of the single-particle spectrum in the mean field, displayed in Fig.~\ref{fig8}(b), to the behaviour of specific important orbits in the cranked potential. One may ask whether a more consistent explanation can be achieved from the general properties of the phase space associated with the classical single-particle motion in the rotating potential. So far, studies of chaotic and regular motion in rotating deformed potentials have only been carried out for billiards in two dimensions \cite{Frisk,Traiber}. It is found \cite{Frisk} that rotation may certainly affect the phase space in general. However, especially for the high kinetic energies of nucleon states around the Fermi surface, the predicted effects are small. The special nature of the yrast band relative to the yrare bands of the other parity-signature configurations is worth emphasizing.
We have discussed it within our model, especially by means of Fig.~\ref{fig7}. However, it may actually be less specific to the particular model, since it could result from a general HOMO-LUMO gap in a quantal system, but now for states within an interval of angular momenta. The favoured yrast configuration should then be able to determine the detailed shape and other properties of the nucleus, and the yrare levels then have to adjust to this. \subsection{Analysis without unfolding procedure}\label{sec:nounf} As described in Sect.~\ref{sec:form}, we apply the unfolding procedure in order to separate the overall excitation energy dependence from the local fluctuations in the level spacings. This procedure, however, cannot be used for the analysis of the presently available experimental data in the high spin region. In fact, the number of identified levels at fixed $I^\pi$ is far below 10 at high spins in the experiments performed so far. The experimental analyses in Refs.~\cite{Garrett3,Garrett1,Garrett2} do not apply an unfolding procedure such as that described in Sect.~\ref{sec:form}. In order to facilitate a direct comparison between the theoretical results and experiments, we propose in this subsection another way of analysis which does not use the unfolding procedure, but still takes into account the excitation energy dependence of the level density in an approximate way. The procedure is also applicable to the analysis of experimental data. Although the level density increases exponentially with increasing intrinsic excitation energy $U$, it may be assumed that the level density at given $U$ is rather independent of spin, parity and nuclear species, as far as the high spin states in the same mass region are concerned. In fact, as discussed in Ref.~\cite{Level-density}, the level density at fixed $I^\pi$ can be accounted for by the level density of intrinsic configurations in the cranking model if the spin is sufficiently high (e.g., above $I \gtrsim 12$ for $U< 3$ MeV). In this limit, the level density may be approximated by the Fermi gas formula \cite{Level-density} as a function of the single variable $a U$, where $a$ is the level density parameter related to the single-particle level density at the Fermi surface of the cranked mean field. \begin{figure} \centerline{\psfig{figure=fig9.eps,height=10cm,angle=-90}} \caption{\label{fig9} The Brody parameter extracted from the NND obtained by using the simple normalization without the unfolding procedure for the spin interval $I_0=32-50$, as a function of the strict level ordering $N$ (symbols connected with the solid line), compared with that obtained with the unfolding (dotted line). The inset shows the NND for $N=1$ obtained without unfolding. } \end{figure} Keeping this in mind, let us consider an ensemble of level spacings which is specified by the strict level ordering $N$ introduced in the previous subsections. The level spacings within this ensemble are expected to have a common average value, since the level ordering $N$, and hence the excitation energy, is taken to be the same. It may then be reasonable to define, without using the unfolding procedure, the normalized spacing $s=D/\langle D \rangle$ by simply dividing the spacing $D$ by the average spacing $\langle D \rangle$ calculated for the ensemble specified by $N$. We show in Fig.~\ref{fig9} the NND and the extracted Brody parameter calculated in this way.
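The simple normalization just described amounts to the following few lines; the dictionary `spacings_by_N` is placeholder data standing in for the spacings collected at each strict order $N$ over the 40 nuclei and the chosen spin interval.

```python
# Minimal sketch of the normalization without unfolding: within the ensemble of
# spacings collected at fixed strict order N, each spacing D is divided by the
# ensemble average <D>. `spacings_by_N` is placeholder data, not model output.
import numpy as np

rng = np.random.default_rng(3)
spacings_by_N = {N: rng.exponential(0.40 / N, size=500) for N in range(1, 16)}

for N in (1, 2, 5):
    D = spacings_by_N[N]
    s = D / D.mean()                 # normalized spacings, <s> = 1 by construction
    print(N, "<D> =", round(D.mean(), 3), " P(s < 0.1) =", round(np.mean(s < 0.1), 3))
```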
It is seen that there exists a small but systematic difference for $N\lesssim 5$ between the results calculated with and without the unfolding procedure. The origin of the difference can be understood by noting that the average level spacings for the lowest states are 386, 271, 214, \ldots, 163 keV for $N=1, 2, 3, \ldots, 5$, which are not very small compared with the temperature parameter $T \sim 350$ keV in the fitted level density. In other words, the smooth level density $\bar{\rho}(E)$ varies significantly over the energy interval of a single spacing, especially where the lowest few $N$'s are concerned. This causes a difference in the profile of the NND depending on whether we adopt the unfolding procedure or not. However, it should be stressed that, in spite of the difference depending on the way of analysis, the Wigner-like property associated with the yrast spacings ($N=1$) is present in both analyses. It becomes even more significant with the simple way of analysis without the unfolding procedure. Except for the lowest several spacings in the strict ordering, the NND obtained by means of the simple normalization agrees with that obtained with the unfolding procedure. As another illustration, we consider an ensemble of level spacings specified by the level ordering $n$ defined in each spectrum for each $I^\pi$ (note the difference between $n$ and the strict ordering $N$), and calculate the NND and the Brody parameter for the spin interval $I_0=32-50$ by means of the simple normalization procedure described above. In this case, the average spacing $\langle D \rangle$ is calculated for each $n$. The result is compared in Fig.~\ref{fig10} with the Brody parameter (Fig.~\ref{fig1} in Subsect.~\ref{sec:nnls}) analysed for the level bins $n=1-5, 6-10, 11-20, ...$ by using the unfolding procedure. Both ways of analysis agree very well with each other, leading to the same conclusion about the overall excitation energy dependence of the NND. The NND's for $n=1$ and 2 are plotted in Fig.~\ref{fig11}(a,b). \begin{figure} \centerline{\psfig{figure=fig10.eps,height=8cm,angle=-90}} \caption{\label{fig10} The Brody parameter extracted from the NND associated with the $n$-th level at each $I^\pi$ for the spin interval $I_0=32-50$ (see text). Here the unfolding procedure is not applied. The result is compared with the one with the unfolding procedure, which is calculated for the bins of levels (same as the solid line in Fig.~\ref{fig1}). } \end{figure} \subsection{Relation to experimental analysis} The experimental NND obtained by Shriner et al. \cite{Shriner} for the low-lying low-spin states ($I \lesssim 5 \hbar$) in rare-earth nuclei displays a Brody parameter around 0.3. A more recent analysis which includes the observed rotational states at relatively high spins (most of the analysed levels have $I \lesssim 30$) reports a NND which is close to the Poisson distribution \cite{Garrett3,Garrett1,Garrett2}. Our theoretical calculations for higher spins $I \gtrsim 30$ also favour a Poisson-like NND for the levels near the yrast line. In the following we try to perform our theoretical analysis in a way similar to the procedure adopted by Garrett et al. \cite{Garrett3,Garrett1,Garrett2}. One should, however, remark that a comparison between our results and the experimental findings can only be indicative, because they refer to different spin regions.
In accordance with Ref.~\cite{Garrett3}, we consider here an ensemble of the spacings associated with the lowest and second lowest states at each $I^\pi$ ($n=1$ and 2). However, we deal with the spin interval $I_0=32-50$, where the pairing effects are expected to be weak. We remark that the pairing effects are removed to some extent from the experimental analysis by excluding the lowest (0,+) spacings \cite{Garrett3}. We adopt the normalization scheme introduced in Subsect.~\ref{sec:nounf}, which does not use the unfolding procedure, since the experimental analysis \cite{Garrett3} adopts a similar normalization. The obtained NND's, shown in Fig.~\ref{fig11}(a,b), are close to the Poisson distribution, having a Brody parameter $w \sim 0.25$, in agreement with the Poisson-like NND already seen for the near-yrast states. They also share some common features with the experimental analysis \cite{Garrett3}: a deviation from the Poisson distribution is seen for small spacings $s \lesssim 0.2$ and is most significant for the smallest spacings with $s\lesssim 0.1$ or $D \lesssim 25$ keV. In Fig.~\ref{fig11}(c,d), we also calculate the NND in the same way, except that the residual two-body interaction is neglected. Comparing Fig.~\ref{fig11}(a,b) and Fig.~\ref{fig11}(c,d), it is indicated that the deviation from the Poisson limit at small spacings, seen in Fig.~\ref{fig11}(a,b), is mostly caused by the residual two-body interaction. We remark, however, that the Wigner-like distribution associated with the very lowest spacing $N=1$, discussed in Subsect.~\ref{sec:yr}, should be present, but is not visible in either Fig.~\ref{fig11}(a) or (c). This is because the ensemble with $n=1$ also contains the other spacings with $N=2,3,\ldots$, which have a lower average spacing and mask the Wigner-like distribution associated with the $N=1$ spacings. This suggests that, in order to find the Wigner-like distribution caused by the mean-field effect in an experimental analysis, one should subdivide the ensemble of the spacings with respect to the strict level order $N$. \begin{figure} \centerline{\psfig{figure=fig11.eps,height=8cm,angle=-90}} \caption{\label{fig11} The NND associated with the lowest and second lowest states ($n=1,2$) at every $I^\pi$, for (a) and (b), respectively. Here the unfolding procedure is not applied, and the spacings are normalized to the average spacing defined in each ensemble. (c) and (d), the same as (a) and (b) except that the residual SDI interaction is neglected and pure mean-field many-particle many-hole configurations are considered. } \end{figure} \section{Conclusions}\label{sec:concl} We analysed the level statistics of the high spin states with $I \gtrsim 30$ in rare-earth deformed nuclei as a function of the intrinsic excitation energy of the rotating nuclei. We used a shell model approach which describes $n$p-$n$h excitations in the cranked Nilsson potential interacting through the surface-delta residual interaction. We put emphasis on the analysis of the near-yrast levels, which may be accessible by discrete $\gamma$-ray spectroscopy. The nearest neighbour level spacing distribution (NND) and the $\Delta_3$ statistics indicate that the level fluctuations in the near-yrast region follow a distribution close to the Poisson limit, with an extracted Brody parameter $w=0.2-0.3$.
This value of the Brody parameter implies a significant deviation from the Poisson distribution for spacings $s \lesssim 0.3$. The experimental analysis of the NND for low-spin states \cite{Shriner} and the one including high spin rotational states \cite{Garrett3,Garrett1,Garrett2} indicate a Poisson-like distribution for the near-yrast states. The present analysis suggests that this behaviour extends up to very high spins $I \sim 50-60$. The level statistics approaches the GOE limit as the intrinsic excitation energy $U$ increases, but this process proceeds very gradually, and the chaos limit is nearly attained only for $U \gtrsim 2$ MeV. This transition is caused by the residual two-body interaction. An interesting aspect of the NND emerges when we focus on the lowest levels near the yrast line. The level spacings between the yrast rotational band and the next excited band with the same spin, parity and signature favour a Wigner-like NND, rather than obeying the Poisson-like distribution associated with the other near-yrast levels. The distinctive property of the NND associated with the yrast rotational bands arises from the mean-field properties of the rotating nuclei, while the deviation from the Poisson limit seen for the other spacings among yrare rotational bands is caused by the residual two-body interaction. Since the lowest few levels near the yrast line are dominated by one-particle one-hole excitations, the spacing between the yrast rotational level and the next excited level thus reflects the single-particle routhian spectrum in the rotating mean field at the equilibrium normal deformation, which shows a Wigner-like distribution for the spacings between orbits with the same parity and signature around the Fermi surface. \vspace{10mm} \section*{Acknowledgments} Discussions with J.D. Garrett are acknowledged. One of the authors, M.M., thanks the Danish Research Council for supporting his stay at the Niels Bohr Institute, where part of the research reported here was carried out. \vspace{10mm}
\section{Introduction} \IEEEPARstart{S}{patial} multiplexing represents one of the most prominent techniques used for multiple-input multiple-output (MIMO) transmission systems \cite{ref34}. In general, both linear and non-linear (e.g., maximum likelihood) detectors have been adopted in these systems. For computational savings at the receiver side, there has been a prime interest in the class of linear detectors, such as zero-forcing (ZF) and linear minimum mean-square error (MMSE). It is widely known that MMSE outperforms ZF, especially in medium-to-high signal-to-noise ratio (SNR) regions, at the cost of a higher computational burden \cite{ref35}. This occurs because MMSE takes the noise variance into account along with the channel estimates, in contrast to ZF, which processes only the channel estimates. Thereby, MMSE appropriately mitigates interference and noise, while ZF cancels interference completely but enhances the noise power at the same time. On the other hand, a simplified non-linear yet capacity-efficient method is successive interference cancellation (SIC). It is usually combined with ZF or MMSE to appropriately counterbalance performance and computational complexity \cite{ref13}. Performance assessment of either ZF- or MMSE-SIC has been extensively reported in the technical literature to date (e.g., see \cite{ref35}-\cite{ref36} and references therein). Nevertheless, all the previous studies assumed perfect channel state information (CSI) at the receiver and/or non-impaired hardware at the transceiver; an ideal and rather overoptimistic scenario for practical applications. More specifically, the hardware gear of wireless transceivers may be subject to impairments, such as I/Q imbalance, phase noise, and high power amplifier non-linearities \cite{ref37,ref3777}. These impairments are typically mitigated with the aid of certain compensation algorithms at the transceiver. Nevertheless, inadequate compensation, mainly due to imperfect parameter estimation and/or time variation of the hardware characteristics, may result in residual impairments, which are added to the transmitted/received signal \cite{ref37}. Moreover, erroneous CSI may occur due to imperfect feedback signaling and/or rapid channel variations. It can cause crosstalk interference (see \cite{ref38} and \cite{refnnneeeww} for explicit details on this effect) within the SIC process, while it can also affect the detection ordering \cite{ref7}. It is noteworthy that an analytical performance assessment of ZF- and/or MMSE-SIC under the aforementioned \emph{non-ideal} communication setup (i.e., impaired hardware at the transceiver and imperfect CSI) has not been reported in the open technical literature so far. Capitalizing on the above observations, the current work presents a unified analytical performance study of ZF- and MMSE-SIC for non-ideal transmission systems. The ideal (traditional) scenario is also considered as a special case. Particularly, the ordered ZF-SIC scheme is considered, where the suboptimal yet computationally efficient Foschini ordering is adopted (i.e., the strongest stream is detected first and the weakest stream is detected last, upon ZF equalization). It should be mentioned that the norm-based Foschini ordering requires only $m(m+1)/2-1$ comparisons, with $m$ denoting the number of transmit antennas. This represents a remarkable computational gain over the optimal ordering, which operates via an exhaustive search over $m!$ combinations.
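To make the two orderings concrete, the norm-based (Foschini) ordering can be sketched as follows; this is an illustrative implementation of the well-known recipe (smallest row norm of the ZF pseudo-inverse, i.e., largest post-ZF SNR, detected first), not code from any particular library.

```python
# Minimal sketch: norm-based (Foschini) detection ordering for ZF-SIC.
# At each SIC step, the stream with the smallest row norm of the ZF
# pseudo-inverse (largest post-ZF SNR) is selected; its column is then removed.
import numpy as np

def foschini_order(H):
    remaining = list(range(H.shape[1]))
    order = []
    while remaining:
        G = np.linalg.pinv(H[:, remaining])            # ZF equalizer for remaining streams
        row_norms = np.sum(np.abs(G) ** 2, axis=1)     # one norm per remaining stream
        k = int(np.argmin(row_norms))                  # strongest stream detected first
        order.append(remaining.pop(k))
    return order

rng = np.random.default_rng(4)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print("detection order:", foschini_order(H))
```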
Interestingly, it was recently demonstrated that Foschini ordering coincides with the optimal one in the case when the transmission rate is uniformly allocated among the transmitters \cite{ref5}. Additionally, the scenario of MMSE-SIC with fixed ordering is analytically presented and studied, which can serve as a benchmark for the more sophisticated ordered MMSE-SIC scheme. The contributions of this work are summarized as follows: \begin{itemize} \item New closed-form expressions for the outage probability of both the ordered ZF-SIC and unordered MMSE-SIC schemes are derived. These expressions reduce to the corresponding conventional outage probabilities when ideal systems are assumed, with perfect CSI and without hardware impairments at the transceiver. \item The well-known error propagation effect between consecutive SIC steps is analyzed for the general scenario. Relevant closed-form expressions are provided for some special cases of interest. \item A new MMSE linear filter is presented when the variances of noise, hardware impairments and imperfect CSI are known. Based on this filter, a substantial performance gain of MMSE-SIC over ZF-SIC is observed. \item Simplified expressions in the asymptotically high SNR regime are obtained, revealing useful engineering insights, the achievable diversity order and the impact of non-ideal communication conditions on the overall system performance. \end{itemize} The rest of this paper is organized as follows: In Section II, the system model is presented in detail. New analytical performance results with respect to the outage probability of ZF- and MMSE-SIC are derived in Sections III and IV, respectively, while relevant asymptotic approximations are provided in Section V. The error propagation effect is analyzed in Section VI. Moreover, numerical results are presented in Section VII, while Section VIII concludes the paper. \emph{Notation}: Vectors and matrices are represented by lowercase bold typeface and uppercase bold typeface letters, respectively. Also, $[\textbf{X}]_{ij}$ denotes the element in the \textit{i}th row and \textit{j}th column of $\mathbf{X}$, $(\mathbf{X})^{-1}$ is the inverse of $\mathbf{X}$ and $\mathbf{x}_{i}$ denotes the $i$th coefficient of $\mathbf{x}$. The superscript $(.)^{\mathcal{H}}$ denotes Hermitian transposition and $|.|$ represents absolute (scalar) value. In addition, $\mathbf{I}_{v}$ stands for the $v\times v$ identity matrix, $\mathbb{E}[.]$ is the expectation operator, $\overset{\text{d}}=$ represents equality in probability distributions, $\text{Pr}[.]$ returns probability, while $o(.)$ is the Landau symbol (i.e., $f(x)=o(g(x))$, when $f(x)/g(x)\rightarrow 0$ as $x\rightarrow \infty$). Also, $f_{X}(.)$ and $F_{X}(.)$ represent the probability density function (PDF) and cumulative distribution function (CDF) of the random variable (RV) $X$, respectively. Complex-valued Gaussian RVs with mean $\mu$ and variance $\sigma^{2}$ and chi-squared RVs with $v$ degrees-of-freedom are denoted as $\mathcal{CN}(\mu,\sigma^{2})$ and $\mathcal{X}^{2}_{2v}$, respectively. Furthermore, $\Gamma(a)\triangleq (a-1)!$ (with $a\in \mathbb{N}^{+}$) denotes the Gamma function \cite[Eq. (8.310.1)]{ref1}, $B(a,b)\triangleq \Gamma(a)\Gamma(b)/\Gamma(a+b)$ is the Beta function \cite[Eq. (8.384.1)]{ref1} and $\mathcal{U}(a,b,x)\triangleq \int^{\infty}_{0}\exp(-xt)t^{a-1}(t+1)^{b-a-1}/\Gamma(a)dt$ (with $\{a,x\}>0$) corresponds to the Tricomi confluent hypergeometric function \cite[Eq. (9.211.4)]{ref1}. 
\section{System Model} Consider a point-to-point MIMO system where the transmitter and receiver sides are equipped with $m$ and $n\geq m$ antennas, respectively. The input-output relation of the received signal is given by \cite{ref20} \begin{equation} \mathbf{y}=\mathbf{H}\left(\textbf{s}+\textbf{n}_{T}\right)+\textbf{n}_{R}+\textbf{w}, \label{inouttt} \end{equation} where $\textbf{y} \in \mathbb{C}^{n \times 1}$, $\textbf{s} \in \mathbb{C}^{m \times 1}$ and $\textbf{w} \in \mathbb{C}^{n \times 1}$ denote the received, the transmit and the circularly symmetric Gaussian noise signal vectors, respectively. In addition, $\textbf{n}_{T} \in \mathbb{C}^{m \times 1}$ and $\textbf{n}_{R} \in \mathbb{C}^{n \times 1}$ correspond to the distortion noise due to residual hardware impairments at the transmitter and receiver, respectively.\footnote{This distortion noise denotes the \emph{aggregation} of many residual impairments when compensation algorithms are applied to mitigate the main hardware impairments \cite{newone}.} Moreover, $\mathbf{H} \in \mathbb{C}^{n \times m}$ is the channel matrix, whose coefficients are assumed to be $\overset{\text{d}}= \mathcal{CN}(0,1)$, i.e., a Rayleigh flat fading scenario. Also, $\mathbb{E}[\textbf{ww}^{\mathcal{H}}]=N_{0}\textbf{I}_{n}$, where $N_{0}$ is the noise power, while $\mathbb{E}[\textbf{ss}^{\mathcal{H}}]=p\textbf{I}_{m}$ is assumed, where $p$ denotes the transmitted power per antenna. Typically, $\textbf{n}_{T}$ and $\textbf{n}_{R}$ are Gaussian distributed (see, e.g., \cite{ref20} and references therein), i.e., $\textbf{n}_{T}\overset{\text{d}}= \mathcal{CN}(0,p\kappa^{2}_{T}\textbf{I}_{m})$ and $\textbf{n}_{R}\overset{\text{d}}= \mathcal{CN}(0,p\kappa^{2}_{R}m\textbf{I}_{n})$, where $\kappa_{T}$ and $\kappa_{R}$ denote the level of residual impairments\footnote{In practical systems, $\kappa_{T}$ is equivalent to the error vector magnitude \cite{ref21}, which is defined as the ratio of distortion-to-signal magnitude, and can be measured directly with the aid of \cite{refevm}. As an indicative example, typical values of $\kappa_{T}$ in LTE infrastructures \cite{ref21} lie in the range $[0.08, 0.175]$.} at the transmitter and receiver, respectively. It is noteworthy that the variance of residual impairments is proportional to the transmission power per antenna \cite[Eqs. (7) and (8)]{ref20}. Also, the last two terms of (\ref{inouttt}), i.e., $\textbf{n}_{R}+\textbf{w}$, denote the total post-noise added onto the received signal, which can be modeled as $\mathcal{CN}(0,(p\kappa^{2}_{R}m+N_{0})\textbf{I}_{n})$. In the ideal scenario where $\left\{\kappa_{T},\kappa_{R}\right\}=0$ (i.e., no hardware impairments), (\ref{inouttt}) reduces to the conventional MIMO signal relation, given by \begin{align*} \mathbf{y}=\mathbf{H}\textbf{s}+\textbf{w}. \end{align*} Further, in the realistic scenario where imperfect CSI occurs, the estimated channel at the receiver is given by \begin{equation} \mathbf{\hat{H}}\triangleq \mathbf{H}+\mathbf{\Delta H}, \label{inout} \end{equation} where $\mathbf{\hat{H}}$ is the estimated channel matrix, $\mathbf{\Delta H} \in \mathbb{C}^{n \times m}$ stands for the channel estimation error matrix, while the coefficients of $\mathbf{\Delta H}$ are $\overset{\text{d}}= \mathcal{CN}(0,\omega)$, with $\omega$ representing the channel estimation error variance \cite{ref7}. Also, $\mathbf{H}$ and $\mathbf{\Delta H}$ are statistically independent \cite{ref29}. 
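For reference, a short Monte-Carlo sketch of the above model is given below, drawing one realization of (\ref{inouttt}) together with the estimated channel of (\ref{inout}). The QPSK constellation and all function/variable names are our own illustrative assumptions; the only requirement from the model is $\mathbb{E}[\textbf{ss}^{\mathcal{H}}]=p\textbf{I}_{m}$.
\begin{verbatim}
import numpy as np

def draw_impaired_mimo(m, n, p, N0, kappa_T, kappa_R, omega, rng):
    """One realization of y = H(s + n_T) + n_R + w with residual hardware
    impairments and imperfect CSI (sketch; QPSK symbols are illustrative)."""
    cn = lambda size, var: np.sqrt(var / 2) * (
        rng.standard_normal(size) + 1j * rng.standard_normal(size))
    H  = cn((n, m), 1.0)                       # Rayleigh flat fading
    dH = cn((n, m), omega)                     # channel estimation error
    s  = np.sqrt(p / 2) * (rng.choice([-1.0, 1.0], m)
                           + 1j * rng.choice([-1.0, 1.0], m))
    n_T = cn(m, p * kappa_T**2)                # transmit distortion noise
    n_R = cn(n, p * kappa_R**2 * m)            # receive distortion noise
    w   = cn(n, N0)                            # thermal noise
    y = H @ (s + n_T) + n_R + w
    return y, s, H, H + dH                     # last entry: estimated channel
\end{verbatim}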
In the following, we turn our focus to two quite popular linear detection schemes, namely, ZF and MMSE. These schemes, combined with SIC, are extensively used in spatial multiplexing transmissions \cite{ref13}. \subsection{ZF-SIC} In principle, ZF-SIC enables spatial multiplexing transmission, i.e., it can distinguish the received streams from different users and/or antennas with the aid of spatial structures (individual spatial signatures) of the signals to be detected \cite{ref2}. It is performed in three main steps, namely, the \emph{symbol ordering} that aims to enhance the overall reception performance, the \emph{interference nulling} via ZF from the yet-to-be detected symbols, and the \emph{interference cancellation} from the already detected symbols. These steps are performed in a number of consecutive stages, until all given symbols are successfully decoded. The interference nulling can be efficiently implemented by applying the QR decomposition to a given channel matrix, which is widely adopted in ZF equalizers, since it provides computational complexity savings \cite{ref3}. Let $\mathbf{\hat{Q}}$ be an $n\times n$ unitary matrix (with its columns representing the orthonormal ZF nulling vectors) and $\mathbf{\hat{R}}$ an $n\times m$ upper triangular matrix, given $\mathbf{\hat{H}}$. Accordingly, $\mathbf{Q}$ and $\mathbf{R}$ correspond to the true channel matrix $\mathbf{H}$. It follows from (\ref{inout}) that \begin{align} \nonumber \mathbf{\hat{Q}}\mathbf{\hat{R}}&= \mathbf{Q}\mathbf{R}+\mathbf{\Delta H}\\ \Leftrightarrow\mathbf{\hat{Q}}^{\mathcal{H}}&=(\mathbf{Q}\mathbf{R}\mathbf{\hat{R}}^{-1})^{\mathcal{H}}+(\mathbf{\hat{R}}^{-1})^{\mathcal{H}}\mathbf{\Delta H}^{\mathcal{H}}. \label{inouttttt} \end{align} Hence, $\mathbf{\hat{Q}}^{\mathcal{H}}\mathbf{y}$ is performed at the receiver, yielding \begin{align} \nonumber \mathbf{\hat{Q}}^{\mathcal{H}}\mathbf{y}&=\mathbf{\hat{Q}}^{\mathcal{H}}\left(\mathbf{Q}\mathbf{R}\left(\textbf{s}+\textbf{n}_{T}\right)+\textbf{n}_{R}+\textbf{w}\right)\\ \nonumber &=\left((\mathbf{Q}\mathbf{R}\mathbf{\hat{R}}^{-1})^{\mathcal{H}}+(\mathbf{\hat{R}}^{-1})^{\mathcal{H}}\mathbf{\Delta H}^{\mathcal{H}}\right)\mathbf{Q}\mathbf{R}\left(\textbf{s}+\textbf{n}_{T}\right)\\ &\ \ \ +\mathbf{\hat{Q}}^{\mathcal{H}}\textbf{n}_{R}+\mathbf{\hat{Q}}^{\mathcal{H}}\textbf{w}. \label{referr} \end{align} Interestingly, it has been demonstrated in \cite[Eq. (30)]{ref7} and \cite[Eq. (16)]{ref38} that $\mathbf{\hat{R}}\approx \mathbf{R}$, where the resultant approximation error can be considered negligible in terms of distributions \cite{ref7}. Also, note that the latter approximations become exact equalities in the case when perfect CSI is available. Thereby, (\ref{referr}) can be reformulated as \begin{align} \nonumber &\mathbf{\hat{Q}}^{\mathcal{H}}\mathbf{y}\approx\\ &\left(\mathbf{I}_{n}+(\mathbf{R}^{-1})^{\mathcal{H}}\mathbf{\Delta H}^{\mathcal{H}}\mathbf{Q}\right)\mathbf{R}\left(\textbf{s}+\textbf{n}_{T}\right)+\mathbf{\hat{Q}}^{\mathcal{H}}\textbf{n}_{R}+\mathbf{\hat{Q}}^{\mathcal{H}}\textbf{w}. 
\label{refer} \end{align} Thus, the sequential signal decoding, which involves the decision feedback, is given by \begin{flalign*} &\mathop{\rm for}\:\:i=m:-1:1 &\\ &\ \ \ \ \:\hat{\mathbf{s}}_{i}=\mathcal{Q}\left[\frac{\left(\mathbf{\hat{Q}}^{\mathcal{H}}\mathbf{y}\right)_{i}-\sum^{m}_{j=i+1}\hat{r}_{ij}\hat{s}_{j}}{\hat{r}_{ii}}\right]& \\ &\ \ \ \ \ \ \:\approx \mathcal{Q}\left[\frac{\left(\mathbf{\hat{Q}}^{\mathcal{H}}\mathbf{y}\right)_{i}-\sum^{m}_{j=i+1}r_{ij}\hat{s}_{j}}{r_{ii}}\right]& \\ &\mathop{\rm end}& \end{flalign*} where $\hat{\mathbf{s}}_{i}$ is the estimated symbol of the $i$th detected stream, $\hat{r}_{ij}$ (or $r_{ij}$) is the coefficient at the $i$th row and $j$th column of $\mathbf{\hat{R}}$ (or $\mathbf{R}$) and $\mathcal{Q}[.]$ stands for the slicing operator mapping to the nearest point in the symbol constellation. Therefore, based on the unitarily invariant property of Gaussian vectors (i.e., isotropic distribution \cite[Theorem 1.5.5]{ref4}), the signal-to-interference-plus-noise-and-distortion ratio (SINDR) of the $i$th decoding layer\footnote{Forward decoding is adopted in this work and, therefore, the first SIC stage corresponds to the last decoding layer of the processing matrix (from left to right). Similarly, the \textit{i}th decoding layer corresponds to the ($m-i+1$)th SIC stage. Note that the terms \textit{decoding layer} and \textit{SIC stage} will be interchangeably used in the rest of this paper.} for ZF-SIC is expressed as \begin{align} \nonumber &\text{SINDR}_{i}\\ &\approx \textstyle \frac{p r^{2}_{ii}}{pr^{2}_{ii}\kappa^{2}_{T}+p\sum^{m}_{j=1}\left|\left((\mathbf{R}^{-1})^{\mathcal{H}}\mathbf{\Delta H}^{\mathcal{H}}\mathbf{Q}\mathbf{R}\right)_{ij}\right|^{2}(1+\kappa^{2}_{T})+p\kappa^{2}_{R}m+N_{0}}. \label{sindr} \end{align} Notice that in the ideal scenario of perfect CSI and no hardware impairments, (\ref{sindr}) becomes the classical SNR expression of the $i$th layer, since $\text{SNR}_{i}=pr^{2}_{ii}/N_{0}$ \cite{ref33}. \subsection{MMSE-SIC} Unlike ZF-SIC, the more sophisticated MMSE-SIC detector achieves an optimal balance between interference suppression and noise enhancement. To this end, it requires the knowledge (or estimation) of the noise variance and, thus, it represents the optimal linear detection scheme \cite[App. A]{ref23}. Since the main difference between ZF- and MMSE-SIC is in the equalization process, we focus on the typical MMSE here, while the description of the more advanced MMSE-SIC is provided subsequently. The conventional MMSE (non-SIC) detector strives to minimize the mean-square error (MSE) of the $j$th transmitted stream, i.e., $s^{(j)}$, as follows \begin{align} \text{MSE}^{(j)}=\mathbb{E}\left[\left|s^{(j)}-(\mathbf{g}^{(j)})^{\mathcal{H}}\mathbf{\hat{y}}\right|^{2}\right],\ \ 1\leq j\leq m, \label{mse} \end{align} where $\mathbf{g}^{(j)}$ is the optimal weight vector and $\mathbf{\hat{y}}$ denotes the post-detection received signal, subject to channel estimation imperfections and hardware impairments of the transceiver. To facilitate the analysis, we can formulate $\mathbf{\hat{y}}$ as the classical MIMO model \begin{align} \mathbf{\hat{y}}=\mathbf{H}\mathbf{s}+\mathbf{w}', \end{align} where $\mathbf{w}'\triangleq \left(\mathbf{H}+\mathbf{\Delta H}\right)\mathbf{n}_{T}+\mathbf{\Delta H}\mathbf{s}+\mathbf{n}_{R}+\mathbf{w}$ with a (colored) noise covariance matrix given by \cite[Eq. 
(9)]{ref20} \begin{align} \nonumber \mathbb{E}[\mathbf{w}'\mathbf{w}'^{\mathcal{H}}]&=p\kappa^{2}_{T}\left(\mathbf{H}+\mathbf{\Delta H}\right)\left(\mathbf{H}+\mathbf{\Delta H}\right)^{\mathcal{H}}\\ &+p\mathbf{\Delta H}\left(\mathbf{\Delta H}\right)^{\mathcal{H}}+(p\kappa^{2}_{R}m+N_{0})\mathbf{I}_{n}. \end{align} Due to the scaling property of Gaussian RVs \cite[Chapt. 3]{refscaleprop}, while keeping in mind the independence between $\mathbf{H}$ and $\mathbf{\Delta H}$, it holds that $\mathbb{E}[(\mathbf{\Delta H})(\mathbf{\Delta H})^{\mathcal{H}}]=\omega \mathbb{E}[\mathbf{H}\mathbf{H}^{\mathcal{H}}]$. Hence, after some simple manipulations, the noise covariance matrix can be expressed more concisely as \begin{align} \mathbb{E}[\mathbf{w}'\mathbf{w}'^{\mathcal{H}}]=(p\kappa^{2}_{T}(\omega+1)+p\omega)\mathbf{H}\mathbf{H}^{\mathcal{H}}+(p\kappa^{2}_{R}m+N_{0})\mathbf{I}_{n}. \end{align} Based on (\ref{mse}), it can be seen that (see Appendix \ref{app0} for details) \begin{align} \nonumber &\mathbf{g}^{(j)}=p\left(p\mathbf{H}\mathbf{H}^{\mathcal{H}}+\mathbb{E}[\mathbf{w}'\mathbf{w}'^{\mathcal{H}}]\right)^{-1}\mathbf{h}_{j}\\ &=\textstyle \left(\mathbf{H}\mathbf{H}^{\mathcal{H}}\left(\scriptstyle \kappa^{2}_{T}(\omega+1)+\omega+1\right)+\left(\scriptstyle \kappa^{2}_{R}m+\frac{N_{0}}{p}\right)\mathbf{I}_{n}\right)^{-1}\mathbf{h}_{j}, \label{filterg} \end{align} whereas, after some straightforward manipulations (see Appendix \ref{app0}), the total SINDR of the $j$th stream is obtained as \cite[Eq. (5)]{ref24} \begin{align} \nonumber &\text{SINDR}^{(j)}=\\ &\frac{\frac{1}{(\kappa^{2}_{R}m+N_{0}/p)}\mathbf{h}_{j}^{\mathcal{H}}\left(\mathbf{H}\mathbf{H}^{\mathcal{H}}\frac{(\kappa^{2}_{T}(\omega+1)+\omega+1)}{(\kappa^{2}_{R}m+N_{0}/p)}+\mathbf{I}_{n}\right)^{-1}\mathbf{h}_{j}}{1-\frac{(2\sqrt{\omega}+1)^{-1}}{(\kappa^{2}_{R}m+N_{0}/p)}\mathbf{h}_{j}^{\mathcal{H}}\left(\mathbf{H}\mathbf{H}^{\mathcal{H}}\frac{(\kappa^{2}_{T}(\omega+1)+\omega+1)}{(\kappa^{2}_{R}m+N_{0}/p)}+\mathbf{I}_{n}\right)^{-1}\mathbf{h}_{j}}. \label{sindrmmse} \end{align} Based on Woodbury's identity \cite[Eq. (2.1.4)]{ref28}, (\ref{sindrmmse}) also reads as \begin{align} \text{SINDR}^{(j)}=\frac{\mathcal{C}^{(j)}}{1-\frac{\mathcal{C}^{(j)}}{2\sqrt{\omega}+1}}, \label{sindrmmse1} \end{align} where \begin{align*} \nonumber \mathcal{C}^{(j)}&\triangleq \frac{1}{(\kappa^{2}_{T}(\omega+1)+\omega+1)}\\ &\times \frac{\mathbf{h}_{j}^{\mathcal{H}}\left(\mathbf{K}_{j}\mathbf{K}_{j}^{\mathcal{H}}+\frac{(\kappa^{2}_{R}m+N_{0}/p)}{(\kappa^{2}_{T}(\omega+1)+\omega+1)}\mathbf{I}_{n}\right)^{-1}\mathbf{h}_{j}}{1+\mathbf{h}_{j}^{\mathcal{H}}\left(\mathbf{K}_{j}\mathbf{K}_{j}^{\mathcal{H}}+\frac{(\kappa^{2}_{R}m+N_{0}/p)}{(\kappa^{2}_{T}(\omega+1)+\omega+1)}\mathbf{I}_{n}\right)^{-1}\mathbf{h}_{j}}, \end{align*} and $\mathbf{K}_{j}\triangleq [\mathbf{h}_{1} \cdots \mathbf{h}_{j-1}\:\: \mathbf{h}_{j+1}\cdots \mathbf{h}_{m}]$. The form of (\ref{sindrmmse1}) is preferable to (\ref{sindrmmse}) for further analysis, because $\mathbf{h}_{j}$ and $\mathbf{K}_{j}$ are statistically independent. Also, in ideal conditions of perfect CSI and no hardware impairments, (\ref{sindrmmse1}) is reduced to the classical signal-to-interference-plus-noise ratio (SINR) expression of MMSE detectors \cite[Eqs. (11) and (13)]{ref14} \begin{align} \text{SINR}^{(j)}=\mathbf{h}_{j}^{\mathcal{H}}\left(\mathbf{K}_{j}\mathbf{K}_{j}^{\mathcal{H}}+\frac{N_{0}}{p}\mathbf{I}_{n}\right)^{-1}\mathbf{h}_{j}. 
\end{align} On the other hand, when MMSE-SIC is applied at the receiver, the corresponding SINDR of the $i$th SIC step ($1\leq i< m$) can be expressed as \begin{align} \text{SINDR}_{i}=\frac{\hat{\mathcal{C}}^{(i)}}{1-\frac{\hat{\mathcal{C}}^{(i)}}{2\sqrt{\omega}+1}}, \label{sindrmmsesic} \end{align} where $\hat{\mathcal{C}}^{(i)}$ is the same as $\mathcal{C}^{(i)}$, but replacing $\mathbf{K}_{i}$ with $\hat{\mathbf{K}}_{i} \in \mathbb{C}^{n \times (m-i)}$, which is the remaining (deflated) version of $\mathbf{K}_{i}$ with the ($i-1$) columns of the already detected streams removed. This occurs because MMSE-SIC at the $i$th SIC stage is equivalent to the classical MMSE detector with the previous ($i-1$) symbols already detected. Further, in the last SIC stage where $i=m$, it can be seen that (see Appendix \ref{app0}) \begin{align} \nonumber \text{SINDR}_{m}&=\frac{1}{(\kappa^{2}_{R}m+N_{0}/p)}\\ &\times \mathbf{h}_{m}^{\mathcal{H}}\left(\mathbf{h}_{m}\mathbf{h}_{m}^{\mathcal{H}}\frac{(\kappa^{2}_{T}(\omega+1)+\omega)}{(\kappa^{2}_{R}m+N_{0}/p)}+\mathbf{I}_{n}\right)^{-1}\mathbf{h}_{m}, \label{sindrmmsesicm} \end{align} since no inter-stream interference is experienced at the last SIC stage.\footnote{In fact, (\ref{sindrmmsesicm}) represents the optimal combining scheme in interference-free environments. In other words, it coincides with the maximal ratio combining (MRC) scheme, when imperfect CSI and hardware-impaired transceivers are present. Notice that when $\{\omega,\kappa_{T},\kappa_{R}\}=0$, (\ref{sindrmmsesicm}) is reduced to the classical SNR expression of MRC.} \section{Performance Analysis of the Ordered ZF-SIC} In this section, closed-form formulae with regard to the outage performance of the ordered ZF-SIC for each transmitted stream are provided. We start from the general scenario, when both CSI errors and hardware impairments are present, followed by some simplified special cases of interest. \subsection{General Case} We commence by deriving the CDF of the SINDR for each transmitted stream, which represents the corresponding outage probability, as follows. \begin{align} \nonumber &\text{Pr}\left[\text{SINDR}_{i}\leq \gamma_{\text{th}}\right]\Leftrightarrow\\ &\text{Pr}\left[p r^{2}_{ii}\leq \frac{\left(p\left(\kappa^{2}_{T}+1\right)Y_{i}+p\kappa^{2}_{R}m+N_{0}\right)\gamma_{\text{th}}}{\left(1-\kappa^{2}_{T}\gamma_{\text{th}}\right)}\right], \label{cdf} \end{align} where $\gamma_{\text{th}}$ denotes the predetermined SINDR outage threshold, while the auxiliary variable $Y_{i}\triangleq \sum^{m}_{j=1}|((\mathbf{R}^{-1})^{\mathcal{H}}\mathbf{\Delta H}^{\mathcal{H}}\mathbf{Q}\mathbf{R})_{ij}|^{2}$ is introduced for notational convenience. Notice that the condition $\kappa^{2}_{T} < 1/\gamma_{\text{th}}$ should be satisfied, which is typically the case in most practical applications. Thus, it holds that \begin{align} \nonumber &P^{(i)}_{\text{out}}(\gamma_{\text{th}})\triangleq F_{\text{SINDR}_{i}}(\gamma_{\text{th}})\approx\\ &1-\text{Pr}\left[p r^{2}_{ii}\geq \frac{\left(p\left(\kappa^{2}_{T}+1\right)Y_{i}+p\kappa^{2}_{R}m+N_{0}\right)\gamma_{\text{th}}}{\left(1-\kappa^{2}_{T}\gamma_{\text{th}}\right)}\right], \label{cdf1} \end{align} where $P^{(i)}_{\text{out}}(.)$ denotes the outage probability for the $i$th stream. To proceed, we have to determine the distributions of the mutually independent RVs, namely, $Y_{i}$ and $p r^{2}_{ii}$. 
\begin{lem} The PDF of $Y_{i}$, $f_{Y_{i}}(.)$, is given by \begin{align} f_{Y_{i}}(x)=\frac{x^{m-1}\exp \left(-\frac{x}{\omega}\right)}{\Gamma(m)\omega^{m}},\ \forall i,\ 1\leq i\leq m. \label{pdferror} \end{align} \end{lem} \begin{proof} From \cite{ref7}, while conditioning on $\mathbf{R}$, in a similar manner as in \cite[Eq. (11)]{ref16neww}, $Y_{i}\overset{\text{d}}=\frac{\omega}{2}\mathcal{G}_{i}$, where $\mathcal{G}_{i}\overset{\text{d}}=\mathcal{X}^{2}_{2m}$. Based on the scaling property of RVs (i.e., $f_{Z=c X}(z)=f_{X}(z/c)/c$ for $c> 0$), the result in (\ref{pdferror}) is obtained. \end{proof} On the other hand, $f_{p r^{2}_{ii}}(.)$ depends on the precise ordering that is adopted. In the current study, the classical Foschini (norm-based) ordering is investigated, where the strongest stream is decoded first while the weakest stream is decoded last. As mentioned in Section I, the Foschini ordering coincides with the optimal ordering when the transmission rate is uniformly allocated among the transmitters \cite{ref5}. \begin{lem} In the case when Foschini ordering is applied, $f_{p r^{2}_{ii}}(.)$ is given by \begin{equation} f_{p r^{2}_{ii}}(x)=\Xi_{i}\:x^{\xi_{i}}\exp \left(-\frac{(m+l-i+1)x}{p}\right), \label{pdfrii} \end{equation} where \begin{align} \nonumber &\Xi_{i}\triangleq \sum^{i-2}_{j=0}\sum^{i-1}_{l=0}\sum^{m+l-i}_{\rho_{1}=0}\sum^{\rho_{1}}_{\rho_{2}=0}\cdots\sum^{\rho_{n-2}}_{\rho_{n-1}=0}\sum^{i+\phi-j-2}_{r=0}(i+\phi-j-2)!\\ \nonumber &\times \prod^{n-1}_{t=1}\left[\frac{(-1)^{j+l}\binom{i-2}{j}\binom{i-1}{l}}{(\rho_{t-1}-\rho_{t})!(t!)^{\rho_{t}-\rho_{t+1}}}\right]\frac{p^{i-j-r-n-1}}{r!\rho_{n-1}!(n-1)!}\\ &\times \frac{(m+l-i)!(m+l-i+1)^{-(i+\phi-j-r-1)}}{B(n-i+1,i-1)B(m-i+1,i)},\ \ i>1, \label{Xi1} \end{align} or \begin{align} \nonumber \Xi_{1}&\triangleq \sum^{m-1}_{\rho_{1}=0}\sum^{\rho_{1}}_{\rho_{2}=0}\cdots\sum^{\rho_{n-2}}_{\rho_{n-1}=0}\frac{m!}{\rho_{n-1}!p^{n+\phi}(n-1)!}\\ &\times \prod^{n-1}_{t=1}\left[\frac{1}{(\rho_{t-1}-\rho_{t})!(t!)^{\rho_{t}-\rho_{t+1}}}\right],\ \ i=1, \label{Xi2} \end{align} while $\xi_{i}\triangleq n+r+j-i$ (for $i>1$), $\xi_{1}\triangleq n+\phi-1$, $\rho_{0}\triangleq m+l-i$ for $i>1$ or $\rho_{0}\triangleq j$ for $i=1$, $\rho_{n}\triangleq 0$, and $\phi\triangleq \sum^{n-1}_{q=1}\rho_{q}$. In general, $m+l-i$ is substituted with $j$ in the case of $i=1$. \end{lem} \begin{proof} The detailed proof is relegated to \cite{ref6}. \end{proof} In the simplified scenario of fixed symbol ordering (i.e., no ordering), $2r^{2}_{ii}\overset{\text{d}}=\mathcal{X}^{2}_{2(n-i+1)}$ \cite{ref8}. Thereby, in this case, $\Xi_{i}\triangleq 1/(\Gamma(n-i+1)p^{n-i+1})$ and $\xi_{i}\triangleq n-i$ for $1\leq i\leq m$, while $\exp (-(m+l-i+1)x/p)$ in (\ref{pdfrii}) is replaced with $\exp (-x/p)$. 
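Lemma 1 can be cross-checked numerically: since $Y_{i}\overset{\text{d}}=\frac{\omega}{2}\mathcal{X}^{2}_{2m}$ is a Gamma law with shape $m$ and scale $\omega$, a histogram of simulated samples should match (\ref{pdferror}). A minimal sketch with illustrative parameter values follows.
\begin{verbatim}
import math
import numpy as np

# Y_i ~ (omega/2) * chi^2_{2m}, i.e., Gamma(shape=m, scale=omega).
m, omega = 4, 0.1
rng = np.random.default_rng(0)
samples = (omega / 2) * rng.chisquare(2 * m, size=500_000)
hist, edges = np.histogram(samples, bins=60, range=(0.0, 1.5), density=True)
x = 0.5 * (edges[:-1] + edges[1:])                 # bin centers
pdf = x**(m - 1) * np.exp(-x / omega) / (math.factorial(m - 1) * omega**m)
print(np.max(np.abs(hist - pdf)))                  # small -> matches (pdferror)
\end{verbatim}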
We are now in a position to formulate the outage probability for the ordered ZF-SIC as follows: \begin{thm} The outage probability for the $i$th decoding layer is obtained in closed form as \begin{align} \nonumber P^{(i)}_{\text{out}}(\gamma_{\text{th}})&\approx 1-\Psi_{i} \sum^{\mu}_{v=0}\frac{\binom{\mu}{v}\left(p\kappa^{2}_{R}m+N_{0}\right)^{\mu-v}\left(p\left(\kappa^{2}_{T}+1\right)\right)^{v}}{\Gamma(m)\omega^{m}(1-\kappa^{2}_{T}\gamma_{\text{th}})^{\mu}}\\ &\times \frac{\gamma_{\text{th}}^{\mu}\:\Gamma(v+m)\exp \left(-\frac{(m+l-i+1)\gamma_{\text{th}}(p\kappa^{2}_{R}m+N_{0})}{p(1-\kappa^{2}_{T}\gamma_{\text{th}})}\right)}{\left(\frac{(m+l-i+1)\gamma_{\text{th}}(\kappa^{2}_{T}+1)}{(1-\gamma_{\text{th}}\kappa^{2}_{T})}+\frac{1}{\omega}\right)^{v+m}}, \label{outclosed} \end{align} where \begin{align*} \Psi_{i}\triangleq \Xi_{i} \sum^{\xi_{i}}_{\mu=0}\frac{\xi_{i}!}{\mu!\left(\frac{m+l-i+1}{p}\right)^{\xi_{i}-\mu+1}}. \end{align*} \end{thm} \begin{proof} The proof is provided in Appendix \ref{appa}. \end{proof} It is noteworthy that the derived result includes finite sum series of simple elementary functions and factorials and, thus, can be efficiently and rapidly calculated.\footnote{At this point, it should be mentioned that the auxiliary parameters $\Xi_{i}$ and $\Psi_{i}$ include the required multiple nested sum series, while they are introduced for notational simplicity and presentation compactness.} \subsection{Imperfect CSI without hardware impairments} In this case, the system suffers from imperfect CSI, which in turn gives rise to channel estimation errors, but it is equipped with ideal hardware. The corresponding outage probability of each stream is directly obtained from (\ref{outclosed}), by setting $\kappa_{T}=\kappa_{R}=0$. \subsection{Perfect CSI with hardware impairments} This scenario corresponds to the case when the channel is correctly estimated (e.g., via pilot or feedback signaling), but the transmitted and/or received signal is impaired due to low-cost hardware equipment at the transceivers. \begin{prop} The exact closed-form outage probability of the $i$th stream under perfect CSI conditions with hardware impairments is expressed as \begin{align} \nonumber P^{(i)}_{\text{out}}(\gamma_{\text{th}})=&1-\Psi_{i} \left(\frac{\left(p\kappa^{2}_{R}m+N_{0}\right)\gamma_{\text{th}}}{\left(1-\kappa^{2}_{T}\gamma_{\text{th}}\right)}\right)^{\mu}\\ &\times \exp\left(-\frac{(m+l-i+1)\left(p\kappa^{2}_{R}m+N_{0}\right)\gamma_{\text{th}}}{p\left(1-\kappa^{2}_{T}\gamma_{\text{th}}\right)}\right). \label{outclosed1} \end{align} \end{prop} \begin{proof} The proof is given in Appendix \ref{appb}. \end{proof} \section{Performance Analysis of MMSE-SIC with Fixed Ordering} A closed-form expression for the PDF/CDF of the SINDR with regard to the ordered MMSE-SIC is not yet available. To this end, we focus on the unordered (fixed) MMSE-SIC scenario in this section, which can be used as a benchmark and/or as a lower performance bound for the more sophisticated ordered MMSE-SIC scheme. 
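Before presenting the analysis, we note that the impairment-aware filter of (\ref{filterg}), on which the MMSE-SIC receiver studied here is built, is straightforward to realize numerically. A short sketch follows; using the channel estimate in place of $\mathbf{H}$ and the helper name \texttt{mmse\_filter} are our own assumptions.
\begin{verbatim}
import numpy as np

def mmse_filter(H, j, p, N0, kappa_T, kappa_R, omega):
    """Impairment-aware MMSE weight vector g^(j) of the j-th stream,
    following the structure of (filterg) (sketch)."""
    n, m = H.shape
    a = kappa_T**2 * (omega + 1) + omega + 1   # signal/impairment scaling
    b = kappa_R**2 * m + N0 / p                # post-noise scaling
    A = a * (H @ H.conj().T) + b * np.eye(n)
    return np.linalg.solve(A, H[:, j])         # avoids explicit inversion
\end{verbatim}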
\begin{thm} The outage probability of the $i$th SIC stage, when $1\leq i < m$, is derived in closed form as given by (\ref{cdfsindrmmsesic}) \begin{figure*} \begin{align} \nonumber &P^{(i)}_{\text{out}}(\gamma_{\text{th}})=\\ \nonumber &1-\exp\left(-\frac{\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)\gamma_{\text{th}}}{1+\gamma_{\text{th}}\left(1-\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)\right)}\right)\bBigg@{6}[\sum^{n}_{k_{1}=1}\frac{1}{(k_{1}-1)!}\left(\frac{\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)\gamma_{\text{th}}}{1+\gamma_{\text{th}}\left(1-\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)\right)}\right)^{k_{1}-1}\\ &-\sum^{n}_{k_{2}=n-m+i+1}\:\:\sum^{m-i}_{j=n-k_{2}+1}\frac{\binom{m-i}{j}\left(\frac{\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}\right)^{k_{2}-1}\left(\frac{\gamma_{\text{th}}}{\gamma_{\text{th}}\left(\frac{(2\sqrt{\omega}+1)^{-1}}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}-1\right)+\frac{1}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}}\right)^{k_{2}+j-1}}{(k_{2}-1)!\left(1+\left(\frac{\gamma_{\text{th}}}{\gamma_{\text{th}}\left(\frac{(2\sqrt{\omega}+1)^{-1}}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}-1\right)+\frac{1}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}}\right)\right)^{m-i}}\bBigg@{6}] \label{cdfsindrmmsesic} \end{align} \hrulefill \end{figure*} and for the $m$th SIC stage as \begin{align} \nonumber P^{(m)}_{\text{out}}(\gamma_{\text{th}})&=1-\exp\left(-\frac{\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)\gamma_{\text{th}}}{\left(1-\left(\kappa^{2}_{T}(\omega+1)+\omega\right)\gamma_{\text{th}}\right)}\right)\\ &\times \sum^{n-1}_{k=0}\frac{\left(\frac{\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)\gamma_{\text{th}}}{\left(1-\left(\kappa^{2}_{T}(\omega+1)+\omega\right)\gamma_{\text{th}}\right)}\right)^{k}}{k!}. \label{cdfsindrmmsesicm} \end{align} \end{thm} \begin{proof} The proof is provided in Appendix \ref{appd}. \end{proof} In general, $\gamma_{\text{th}}<\frac{1}{(\kappa^{2}_{T}(\omega+1)+\omega)}$ should hold for the evaluation of every SIC stage (see Appendix \ref{appd} for details). When the latter condition is not satisfied, an outage occurs with probability one. As previously mentioned, typically $\kappa_{T}\leq 0.175$ \cite{ref21}. Moreover, practical values of $\omega$ should not exceed $30\%$ (i.e., $\omega\leq 0.3$), because higher values of channel estimation error correspond to a rather catastrophic reception \cite{ref7,ref29}. Thereby, based on the latter extreme values, $\gamma_{\text{th}}< 4.27$dB is required. Equivalently, since $\gamma_{\text{th}}\triangleq 2^{\mathcal{R}}-1$ (where $\mathcal{R}$ denotes a target transmission rate), $\mathcal{R}<1.88$bps/Hz is required for feasible communication. Nonetheless, higher $\gamma_{\text{th}}$ values can be admitted for more relaxed CSI imperfections and/or hardware impairments, while there is no constraint in the ideal scenario. Moreover, notice that the special cases of non-impaired hardware or perfect CSI are directly obtained by setting $\kappa_{T}=\kappa_{R}=0$ or $\omega=0$ in (\ref{cdfsindrmmsesic}) and (\ref{cdfsindrmmsesicm}), respectively. 
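For the last SIC stage, (\ref{cdfsindrmmsesicm}) is simply a regularized lower incomplete Gamma function: applying Woodbury's identity to (\ref{sindrmmsesicm}) gives $\text{SINDR}_{m}=\|\mathbf{h}_{m}\|^{2}/\big(\kappa^{2}_{R}m+N_{0}/p+(\kappa^{2}_{T}(\omega+1)+\omega)\|\mathbf{h}_{m}\|^{2}\big)$ with $2\|\mathbf{h}_{m}\|^{2}\overset{\text{d}}=\mathcal{X}^{2}_{2n}$. A sketch evaluating the closed form and cross-checking it by Monte-Carlo follows; all parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete Gamma

def pout_last_stage(n, m, p, N0, kappa_T, kappa_R, omega, g_th):
    """Closed-form outage of the m-th SIC stage, Eq. (cdfsindrmmsesicm)."""
    c = kappa_T**2 * (omega + 1) + omega
    if c * g_th >= 1.0:
        return 1.0                   # beyond the threshold: outage w.p. 1
    x = (kappa_R**2 * m + N0 / p) * g_th / (1.0 - c * g_th)
    return gammainc(n, x)            # = 1 - exp(-x) * sum_{k<n} x^k / k!

# Monte-Carlo cross-check (illustrative values)
n, m, p, N0, kT, kR, om, g = 4, 4, 10.0, 1.0, 0.08, 0.08, 0.01, 1.0
rng = np.random.default_rng(1)
h2 = 0.5 * rng.chisquare(2 * n, size=200_000)    # ||h_m||^2
sindr = h2 / (kR**2 * m + N0 / p + (kT**2 * (om + 1) + om) * h2)
print(np.mean(sindr <= g), pout_last_stage(n, m, p, N0, kT, kR, om, g))
\end{verbatim}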
\begin{cor} The ideal scenario of non-impaired hardware at the transceiver and perfect CSI conditions corresponds to the typical MMSE-SIC outage probability for the $i$th SIC stage (when $1\leq i<m$), given by \begin{align} \nonumber &P^{(i)}_{\text{out}}(\gamma_{\text{th}})=1-\exp\left(-\frac{N_{0}\gamma_{\text{th}}}{p}\right)\Bigg[\sum^{n}_{k_{1}=1}\frac{\left(\frac{N_{0}\gamma_{\text{th}}}{p}\right)^{k_{1}-1}}{(k_{1}-1)!}\\ &-\sum^{n}_{k_{2}=n-m+i+1}\:\:\sum^{m-i}_{j=n-k_{2}+1}\frac{\binom{m-i}{j}\left(\frac{N_{0}}{p}\right)^{k_{2}-1}\gamma_{\text{th}}^{k_{2}+j-1}}{(k_{2}-1)!\left(1+\gamma_{\text{th}}\right)^{m-i}}\Bigg], \label{cdfsindrmmsesic11} \end{align} while for the $m$th SIC stage it is expressed as \begin{align} P^{(m)}_{\text{out}}(\gamma_{\text{th}})=1-\exp\left(-\frac{N_{0}\gamma_{\text{th}}}{p}\right)\sum^{n-1}_{k=0}\frac{\left(\frac{N_{0}\gamma_{\text{th}}}{p}\right)^{k}}{k!}, \label{cdfsindrmmsesicm111} \end{align} which coincides with the outage probability of the conventional MRC, as it should. \end{cor} \section{Asymptotic Analysis} Although the previous formulae are given in closed form, it is rather difficult to extract useful insights from them directly. Therefore, in this section, the outage probability is analyzed in the asymptotically high SINDR regime. Thus, more amenable expressions are derived, while important conclusions regarding the influence of imperfect CSI and hardware impairments are obtained. \subsection{Ordered ZF-SIC} \subsubsection{General Case} The following proposition presents a sharp outage floor for the general scenario of erroneous CSI under hardware impairments. \begin{prop} When $\frac{p}{N_{0}}\rightarrow \infty$, the outage performance reaches a floor, given by \begin{align} \nonumber &P^{(i)}_{\text{out}|\frac{p}{N_{0}}\rightarrow \infty}(\gamma_{\text{th}})\\ \nonumber &=\frac{m(m-i+1)^{1-i}}{(i-1)!(m-i)!(n-i+1)!(1-\gamma_{\text{th}}\kappa^{2}_{T})^{n-i+1}}\\ \nonumber &\times \sum^{n-i+1}_{k=0}\binom{n-i+1}{k}(\kappa^{2}_{R}m)^{n-i-k+1}(\kappa^{2}_{T}+1)^{k}\omega^{k}\Gamma(m+k)\\ &\times (N_{0}\gamma_{\text{th}})^{n-i+1}+o\left(\left(\frac{p}{N_{0}}\right)^{-(n-i+1)}\right)\\ \nonumber \\ \nonumber &=\textstyle \frac{(m-i+1)^{1-i}(\kappa^{2}_{R}m)^{n+m-i+1}(N_{0}\gamma_{\text{th}})^{n-i+1}(\omega(\kappa^{2}_{T}+1))^{-m}m!}{(i-1)!(m-i)!(n-i+1)!(1-\gamma_{\text{th}}\kappa^{2}_{T})^{n-i+1}}\\ &\times \textstyle \mathcal{U}\left(m,n+m-i+2,\frac{\kappa^{2}_{R}m}{\omega(\kappa^{2}_{T}+1)}\right)+o\left(\left(\frac{p}{N_{0}}\right)^{-(n-i+1)}\right). \label{outasympt} \end{align} \end{prop} \begin{proof} The proof is provided in Appendix \ref{appc}. \end{proof} \subsubsection{Imperfect CSI without hardware impairments} The following corollary describes this simplified scenario. \begin{cor} The asymptotic outage floor in the case of imperfect CSI but with ideal transceiver equipment is expressed as \begin{align} \nonumber &P^{(i)}_{\text{out}|\frac{p}{N_{0}}\rightarrow \infty}(\gamma_{\text{th}})=(n+m-i)!\\ &\times \frac{m(m-i+1)^{1-i}(\gamma_{\text{th}}\omega)^{n-i+1}}{(i-1)!(m-i)!(n-i+1)!}+o\left(\left(\frac{p}{N_{0}}\right)^{-(n-i+1)}\right). \label{outasympt111} \end{align} \end{cor} \begin{proof} In the absence of hardware impairments, it holds that $P^{(i)}_{\text{out}|\frac{p}{N_{0}}\rightarrow \infty}(\gamma_{\text{th}})=\int^{\infty}_{0}F_{pr^{2}_{ii}}(p\gamma_{\text{th}}y)f_{Y_{i}}(y)dy$. Evaluating the latter integral with the aid of (\ref{Fprapprox}) yields (\ref{outasympt111}). 
\end{proof} \subsubsection{Perfect CSI with hardware impairments} In this case, the following corollary describes the corresponding asymptotic outage performance. \begin{cor} The asymptotic outage performance is derived as \begin{align} \nonumber &P^{(i)}_{\text{out}|\frac{p}{N_{0}}\rightarrow \infty}(\gamma_{\text{th}})=\frac{m!(m-i+1)^{1-i}}{(i-1)!(m-i)!(n-i+1)!}\\ &\times \left(\frac{\gamma_{\text{th}}\kappa^{2}_{R}m}{(1-\gamma_{\text{th}}\kappa^{2}_{T})}\right)^{n-i+1}+o\left(\left(\frac{p}{N_{0}}\right)^{-(n-i+1)}\right). \label{outasympt1} \end{align} \end{cor} \begin{proof} Utilizing (\ref{cdf2}) and (\ref{Fprapprox}), (\ref{outasympt1}) can be readily obtained. \end{proof} \subsection{MMSE-SIC with Fixed Ordering} \subsubsection{General Case} The following proposition presents an outage floor for the general scenario of erroneous CSI under hardware impairments. \begin{prop} When $\frac{p}{N_{0}}\rightarrow \infty$, the outage probability of the $i$th SIC stage reaches a floor, which is given by (\ref{cdfsindrmmsesic}) and (\ref{cdfsindrmmsesicm}), when $1\leq i<m$ and $i=m$, respectively, by neglecting the $N_{0}/p$ term. \end{prop} The special cases of channel estimation error without hardware impairments or vice versa are obtained by setting $\kappa_{T}=\kappa_{R}=0$ or $\omega=0$, respectively. Most importantly, the system scenario with ideal (non-impaired) hardware at the receiver provides full diversity order (i.e., $n-m+i$), regardless of the presence of imperfect CSI or the amount of hardware impairments at the transmitter. The following proposition explicitly describes this effect. \begin{prop} The asymptotic outage probability of the $i$th SIC stage in the presence of imperfect CSI and when hardware impairments occur only at the transmitter reads as \begin{align} \nonumber &P^{(i)}_{\text{out}|\frac{p}{N_{0}}\rightarrow \infty}(\gamma_{\text{th}})=\frac{\left(\frac{N_{0}}{p\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}\right)^{n-m+i}}{(n-m+i)!}\times\\ \nonumber &\frac{\left(\frac{\gamma_{\text{th}}}{\gamma_{\text{th}}\left(\frac{(2\sqrt{\omega}+1)^{-1}}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}-1\right)+\frac{1}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}}\right)^{n}}{\left(1+\left(\frac{\gamma_{\text{th}}}{\gamma_{\text{th}}\left(\frac{(2\sqrt{\omega}+1)^{-1}}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}-1\right)+\frac{1}{\left(\kappa^{2}_{T}(\omega+1)+\omega+1\right)}}\right)\right)^{m-i}}\\ &+o\left(\left(\frac{p}{N_{0}}\right)^{-(n-m+i)}\right), \label{cdfsindrmmsesicasym} \end{align} and for the $m$th SIC stage as \begin{align} P^{(m)}_{\text{out}|\frac{p}{N_{0}}\rightarrow \infty}(\gamma_{\text{th}})=\frac{\left(\frac{\left(\frac{N_{0}\gamma_{\text{th}}}{p}\right)}{\left(1-\left(\kappa^{2}_{T}(\omega+1)+\omega\right)\gamma_{\text{th}}\right)}\right)^{n}}{n!}+o\left(\left(\frac{p}{N_{0}}\right)^{-n}\right). \label{cdfsindrmmsesicmasym} \end{align} \end{prop} \begin{proof} The proof is provided in Appendix \ref{appf}. \end{proof} Notice that when $\{\omega,\kappa_{T}\}=0$, (\ref{cdfsindrmmsesicasym}) and (\ref{cdfsindrmmsesicmasym}) reflect the corresponding asymptotic outage expressions for the ideal MMSE-SIC receivers. 
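The floors above involve only factorials and powers and are trivially evaluated; as a representative example, a sketch for (\ref{outasympt1}) is given below (the helper name is our own).
\begin{verbatim}
from math import factorial

def zf_sic_floor(n, m, i, kappa_T, kappa_R, g_th):
    """Outage floor of the i-th decoding layer for ordered ZF-SIC with
    perfect CSI and hardware impairments, Eq. (outasympt1) (sketch)."""
    coef = (factorial(m) * (m - i + 1)**(1 - i)
            / (factorial(i - 1) * factorial(m - i) * factorial(n - i + 1)))
    return coef * (g_th * kappa_R**2 * m
                   / (1 - g_th * kappa_T**2))**(n - i + 1)
\end{verbatim}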
Collecting all the aforementioned asymptotic results, a number of conclusions can be drawn and, hence, the following remarks are outlined. \begin{rem} \label{rem1} When hardware impairments and/or imperfect CSI are present, the outage performance reaches an upper bound (i.e., an outage floor), regardless of the adopted equalization technique (ZF or MMSE). This is explicitly indicated in (\ref{outasympt}), (\ref{outasympt111}) and (\ref{outasympt1}) for ZF-SIC and in (\ref{cdfsindrmmsesic}) and (\ref{cdfsindrmmsesicm}) for MMSE-SIC. Therefore, there is no achievable diversity order in this case. \end{rem} \begin{rem} \label{rem2} Diversity order manifests itself only in the case of MMSE-SIC with a non-impaired receiver, regardless of the presence of hardware impairments at the transmitter and/or imperfect CSI at the receiver. This is indicated in (\ref{cdfsindrmmsesicasym}) and (\ref{cdfsindrmmsesicmasym}), where both expressions tend to zero as $p/N_{0}\rightarrow \infty$ (by noticing the existence of the $N_{0}/p$ term within these expressions). Particularly, the diversity order in this case is $n-i+1$ with respect to the $i$th decoding layer or $n-m+i$ with respect to the $i$th SIC stage. \end{rem} It can be easily seen that the latter remark indicates no difference in the diversity order between the considered MMSE-SIC and the classical MMSE-SIC in an ideal communication setup (see, e.g., \cite{ref16}). The performance difference between these two scenarios appears in the underlying coding (array) gains. Observe that ZF-SIC does not achieve diversity order, even when hardware impairments occur only at the transmitter. This effect occurs due to the fact that ZF, in principle, operates by fully eliminating interference but enhancing the noise at the same time. When the noise power is proportional to the transmission power, it unavoidably leads to the aforementioned outage floor. Such observations could be quite useful for system designers of various practical MIMO applications. As an indicative example, it is preferable to enable higher-quality hardware gear for the antennas of the receiver rather than the transmitter. When such a condition occurs, the performance difference of MMSE-SIC over ZF-SIC is emphatically increased for larger SINDR regions. Yet, in order to achieve this performance gain, the variances of channel estimation error and hardware impairments at the transceiver are required, i.e., see the linear filter in (\ref{filterg}). \section{Error Propagation Effect} One of the most important degradation factors of SIC-based reception is the well-known error propagation effect. To date, it has been studied mainly numerically (e.g., see \cite{ref13} and references therein) and semi-analytically \cite{ref7} in terms of integral or bound expressions. The limited scenario of $m=2$ was analytically studied in \cite{ref22}, but the derived expressions were in terms of infinite series representations. In this section, error propagation is analyzed with regards to the average symbol error probability (ASEP). A formula including numerical verifications is presented for the general case, while closed-form expressions are obtained for some special cases of interest. 
The ASEP of the $i$th decoding layer, namely ASEP$_{i}$, explicitly reads as \begin{align} \nonumber \text{ASEP}_{i}&\triangleq \text{Pr}\left[\epsilon_{i}|\epsilon_{m}\right]\text{Pr}\left[\epsilon_{m}\right]\\ \nonumber &\ \ \ \ +\text{Pr}\left[\epsilon_{i}|\epsilon_{m-1}\cap\epsilon^{c}_{m}\right]\text{Pr}\left[\epsilon_{m-1}\cap \epsilon^{c}_{m}\right]+\cdots\\ \nonumber &\ \ \ \ +\text{Pr}\left[\epsilon_{i}|\epsilon_{i+1}\cap\left(\bigcap^{m}_{l=i+2}\epsilon^{c}_{l}\right)\right]\\ \nonumber &\ \ \ \ \times \text{Pr}\left[\epsilon_{i+1}\cap\left(\bigcap^{m}_{l=i+2}\epsilon^{c}_{l}\right)\right]\\ \nonumber &\ \ \ \ +\text{Pr}\left[\epsilon_{i}|\bigcap^{m}_{l=i+1}\epsilon^{c}_{l}\right]\text{Pr}\left[\bigcap^{m}_{l=i+1}\epsilon^{c}_{l}\right]\\ &=\left(1-\frac{1}{\mathcal{M}}\right)\sum^{m}_{t=i}\text{Pr}\left[\epsilon_{t}|\bigcap^{m}_{l=t+1}\epsilon^{c}_{l}\right]\text{Pr}\left[\bigcap^{m}_{l=t+1}\epsilon^{c}_{l}\right], \label{asepi} \end{align} where $\epsilon_{i}$ denotes an error event at the $i$th decoding layer, $\epsilon^{c}_{i}$ is the complement of $\epsilon_{i}$, while $\mathcal{M}$ represents the number of modulation states. Also, the second equality of (\ref{asepi}) arises by assuming that an earlier error (with probability one) results in a uniform distribution over the constellation for a subsequent symbol decision (equal-power constellation). Hence, the ASEP describing the overall behavior of the system, namely $\overline{\text{ASEP}}$, is given by \begin{align} \nonumber \overline{\text{ASEP}}&=\frac{1}{m}\sum_{i}\text{ASEP}_{i}\\ &=\frac{\left(1-\frac{1}{\mathcal{M}}\right)}{m}\sum^{m}_{t=1}t\overline{P}_{s_{t}}\prod^{m}_{l=t+1}\left(1-\overline{P}_{s_{l}}\right), \label{asep} \end{align} where $\overline{P}_{s_{i}}\triangleq \text{Pr}\left[\epsilon_{i}|\bigcap^{m}_{l=i+1}\epsilon^{c}_{l}\right]$ is the conditional ASEP at the $i$th decoding layer given that there are no errors in prior layers. Thereby, finding $\overline{P}_{s_{i}}$ is the key step in determining the total ASEP. It holds that \cite{ref15} \begin{align} \overline{P}_{s_{i}}\triangleq \frac{\mathcal{A}\sqrt{\mathcal{B}}}{2\sqrt{\pi}}\int^{\mathcal{Z}}_{0}\frac{\exp(-\mathcal{B} x)}{\sqrt{x}}P^{(i)}_{\text{out}}(x)dx, \label{asepdef} \end{align} where $\mathcal{Z}=1/\kappa^{2}_{T}$ for ZF-SIC, while $\mathcal{Z}=1/(\kappa^{2}_{T}(\omega+1)+\omega)$ for MMSE-SIC. Note that $\mathcal{Z}\rightarrow +\infty$, in ideal conditions of both schemes. Also, $\mathcal{A}$ and $\mathcal{B}$ are specific constants that define the modulation type \cite{ref1555}. Unfortunately, to our knowledge, there is no straightforward closed-form solution for $\overline{P}_{s_{i}}$ in the general case of ZF-SIC and MMSE-SIC, based on (\ref{outclosed}), (\ref{cdfsindrmmsesic}) and (\ref{cdfsindrmmsesicm}). Thus, $\overline{P}_{s_{i}}$ and $\overline{\text{ASEP}}$ can be resolved only via numerical methods, as sketched below. Still, the involvement of a single numerical integration is much more efficient than classical simulation methods (e.g., Monte-Carlo). 
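A sketch of this numerical route follows: (\ref{asepdef}) is evaluated with a single adaptive quadrature per layer, and the per-layer results are combined through (\ref{asep}). The outage probability is passed in as a callable, and all names are our own illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def asep_layer(pout_i, A, B, Z):
    """Conditional ASEP of one decoding layer via Eq. (asepdef);
    pout_i(x) is the layer's outage probability at threshold x."""
    f = lambda x: np.exp(-B * x) / np.sqrt(x) * pout_i(x)
    val, _ = quad(f, 0.0, Z, limit=200)    # integrable 1/sqrt(x) singularity
    return A * np.sqrt(B) / (2.0 * np.sqrt(np.pi)) * val

def total_asep(p_bar, M):
    """Overall ASEP of Eq. (asep); p_bar[t-1] holds P_{s_t} and M is the
    constellation size."""
    m = len(p_bar)
    acc = 0.0
    for t in range(1, m + 1):
        prod = np.prod([1.0 - p_bar[l - 1] for l in range(t + 1, m + 1)])
        acc += t * p_bar[t - 1] * prod
    return (1.0 - 1.0 / M) / m * acc
\end{verbatim}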
In the following, some scenarios of special interest admit a closed formulation of $\overline{P}_{s_{i}}$, which in turn provides a corresponding solution to $\overline{\text{ASEP}}$. \subsection{Ordered ZF-SIC} \begin{prop} The closed-form expression for $\overline{P}_{s_{i}}$ in the presence of channel estimation errors, an impaired receiver and an ideal transmitter is derived as \begin{align} \nonumber &\overline{P}_{s_{i}}\approx \frac{\mathcal{A}}{2}\Bigg[1-\sqrt{\frac{\mathcal{B}}{\pi}}\Psi_{i}\sum^{\mu}_{v=0}\binom{\mu}{v}(v+m-1)!\\ \nonumber &\times \frac{p^{v}(p\kappa^{2}_{R}m+N_{0})^{\mu-v}\Gamma(\mu+\frac{1}{2})}{\Gamma(m)\omega^{\mu+\frac{1}{2}-v}(m+l-i+1)^{\mu+\frac{1}{2}}}\\ &\times \mathcal{U}\left(\textstyle \mu+\frac{1}{2},\mu+\frac{3}{2}-v-m,\frac{(p\kappa^{2}_{R}m+N_{0})}{p\omega}+\frac{\mathcal{B}}{\omega(m+l-i+1)}\right)\Bigg]. \label{aseppp} \end{align} \end{prop} \begin{proof} Plugging (\ref{outclosed}) in (\ref{asepdef}), setting $\kappa_{T}=0$, while utilizing \cite[Eq. (2.3.6.9)]{ref11}, gives (\ref{aseppp}). \end{proof} Notice that although (\ref{aseppp}) involves a special function (i.e., the Tricomi confluent hypergeometric function), it is in the form of finite sum series, while this function is included as a standard built-in function in several popular mathematical software packages. Hence, this expression can be easily and efficiently calculated.\footnote{The asymptotic ASEP expressions could be easily extracted by following the same methodology as in the previous section. Yet, they are omitted herein since they provide very similar insights to the previously derived asymptotic outage probabilities.} \subsection{MMSE-SIC with Fixed Ordering} \begin{prop} $\overline{P}_{s_{i}}$, for the $i$th SIC stage ($1\leq i< m$), in the presence of perfect CSI, a non-impaired transmitter, and an impaired receiver is expressed as \begin{align} \nonumber &\overline{P}_{s_{i}}=\frac{\mathcal{A}}{2}\bBigg@{4}\{1-\sqrt{\frac{\mathcal{B}}{\pi}}\Bigg[\sum^{n}_{k_{1}=1}\frac{\Gamma(k_{1}-\frac{1}{2})\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)^{k_{1}-1}}{(k_{1}-1)!\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}+\mathcal{B}\right)^{k_{1}-\frac{1}{2}}}\\ \nonumber &-\sum^{n}_{k_{2}=n-m+i+1}\:\:\sum^{m-i}_{j=n-k_{2}+1}\binom{m-i}{j}\left(\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)^{k_{2}-1}\\ &\times \frac{\Gamma\left(\scriptstyle k_{2}+j-\frac{1}{2}\right)}{(k_{2}-1)!}\mathcal{U}\left(\scriptstyle k_{2}+j-\frac{1}{2},k_{2}+j+i-m+\frac{1}{2},\mathcal{B}+\kappa^{2}_{R}m+\frac{N_{0}}{p}\right)\Bigg]\bBigg@{4}\}. \label{asepmmsei} \end{align} \end{prop} \begin{proof} By invoking (\ref{cdfsindrmmsesic}) in (\ref{asepdef}), setting $\{\kappa_{T},\omega\}=0$, while utilizing \cite[Eq. (2.3.6.9)]{ref11}, (\ref{asepmmsei}) is obtained. \end{proof} For $i=m$, in the last SIC stage, the expression of (\ref{cdfsindrmmsesicm}) does not admit a closed formulation of the ASEP. However, it can be numerically calculated quite easily by using (\ref{cdfsindrmmsesicm}) in (\ref{asepdef}) over the valid integration range $[0,\frac{1}{(\kappa^{2}_{T}(\omega+1)+\omega)}]$. \section{Numerical Results} \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=\columnwidth]{fig1} \caption{Outage performance of the 1st SIC stage (i.e., the $m$th decoding layer) of the ordered ZF-SIC and unordered (fixed) MMSE-SIC vs. various average input $p/N_{0}$ values, where $\left\{n,m\right\}=4$ and $\gamma_{\text{th}}=0$dB.} \label{fig1} \end{figure} \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=\columnwidth]{fig2} \caption{Outage performance of each SIC stage for the ordered and unordered ZF-SIC vs. 
various average input $p/N_{0}$ values, where $n=4$, $m=2$ and $\gamma_{\text{th}}=0$dB.} \label{fig2} \end{figure} In this section, analytical results are presented and cross-compared with Monte-Carlo simulations. There is a good match between all the analytical and the respective simulation results and, hence, the accuracy of the proposed approach is verified. Note that in Figs. \ref{fig1} and \ref{fig2}, for ease of tractability and without loss of generality, we assume symmetric levels of impairments at the transceiver, i.e., an equal hardware quality at the transmitter and receiver. To this end, let $\kappa_{T}=\kappa_{R}\triangleq \kappa$. In Fig. \ref{fig1}, the outage performance for the 1st stage of the ordered ZF- and unordered MMSE-SIC is presented for various system settings/conditions. There is an emphatic performance difference between the two schemes in all the considered cases, despite the fact that no optimal ordering is used in MMSE-SIC. This observation verifies the superiority of MMSE against ZF detectors in non-ideal communication setups. In addition, it is obvious that CSI imperfection impacts the performance of ZF-SIC to a greater extent than hardware impairments. When this imperfection is more relaxed, the performance gap between the two extreme hardware impairment scenarios starts to grow. This occurs because ZF, fundamentally, relies on channel estimation accuracy to achieve performance gains, counteracting the unavoidable noise enhancement. Thereby, CSI imperfection dramatically affects its performance in comparison to the (noise-oriented) hardware imperfection. Interestingly, this is not the case for MMSE-SIC, where quite the opposite behavior holds. This is consistent with Remark \ref{rem2}. Also, the traditional MMSE-SIC scheme (taking into consideration only the channel gains and $N_{0}$) is included for performance comparison reasons. The performance gain of the presented MMSE-SIC over its traditional counterpart is evident. Figure \ref{fig2} depicts the ordered and unordered outage performance of ZF-SIC in ideal and non-ideal communication setups. Obviously, diversity order is manifested only in the former case, while an outage floor appears in the latter case. This is consistent with Remark \ref{rem1}. It is also noteworthy that the diversity order remains unaffected by the ordering strategy, in accordance with \cite{ref31}. Moreover, the superiority of the ordered 1st SIC stage against the corresponding unordered stage can be clearly seen. This is the gain obtained by performing optimal detection ordering. Furthermore, an important observation from the non-ideal scenario is the fact that the 2nd stage performs worse than the 1st stage of the ordered ZF-SIC over the entire SNR region. This should not be confusing since the 2nd stage of the ordered SIC always has the worst SNR, whereas this is not the case for the unordered SIC (on average). It seems that the reduced interference (at the 2nd stage) is not enough to counteract the severity of channel imperfection and impaired hardware and, hence, to outperform the 1st stage. This is in contrast to the traditional (ideal) SIC receivers, where the 1st SIC stage influences the overall system performance more drastically, representing a lower outage performance bound \cite{ref22,ref32}. 
\begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=\columnwidth]{fig3} \caption{Outage performance of the 1st SIC stage of the ordered ZF-SIC, unordered ZF-SIC and unordered (fixed) MMSE-SIC vs. various average input $p/N_{0}$ values, where $n=6$, $\gamma_{\text{th}}=3$dB, $\omega=-10$dB, $\kappa_{T}=0.08$, and $\kappa_{R}=0$ (unless stated otherwise).} \label{fig3} \end{figure} Figure \ref{fig3} highlights the important outcome of Remark \ref{rem2} in non-ideal communication systems. Specifically, it can be seen that when hardware impairments occur only at the transmitter side, MMSE-SIC maintains its diversity order, while ZF-SIC introduces an outage floor, confirming the previous analysis. Also, in dense multi-stream transmissions (i.e., when $m=6$), the outage performance of ZF-SIC is rather inefficient in comparison to MMSE-SIC. The ASEP of the 1st MMSE-SIC stage is presented in Fig. \ref{fig4} for various settings, using (\ref{asepdef}). Again, it is verified that providing higher-cost/higher-quality hardware gear at the receiver side is a much more fruitful option. Finally, Fig. \ref{fig5} presents the overall ASEP using (\ref{asep}), for the two considered SIC schemes. All the results for the ZF-SIC are obtained using (\ref{aseppp}). In addition, the corresponding results of MMSE-SIC for the scenarios with imperfect and perfect CSI are obtained via numerical integration (as in Fig. \ref{fig4}) and using (\ref{asepmmsei}), respectively. \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=\columnwidth]{fig4} \caption{ASEP of the 1st SIC stage for MMSE-SIC with fixed ordering under a BPSK modulation scheme vs. various average input $p/N_{0}$ values, where $n=8$.} \label{fig4} \end{figure} \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=\columnwidth]{fig5} \caption{Total ASEP of the ordered ZF-SIC and unordered (fixed) MMSE-SIC under a BPSK modulation scheme vs. various average input $p/N_{0}$ values, where $\{n,m\}=4$ and $\kappa_{T}=0$.} \label{fig5} \end{figure} Considering all the above, both the outage and error rate numerical results confirm the theoretical framework, while the following important outcomes are summarized: a) In the case of ZF-SIC, hardware impairments at the transmitter are as crucial (proportionally) as the impairments at the receiver; b) in ZF-SIC schemes, CSI imperfection influences the performance more than hardware impairments; c) MMSE-SIC appropriately counterbalances the impact of CSI imperfection and the amount of hardware impairments; d) when $\kappa_{R}=0$, MMSE-SIC maintains diversity order and, thus, there is an emphatic performance gain over ZF-SIC, especially in medium-to-high SNR regions. \section{Conclusions} Successive decoding of multiple individual streams was thoroughly investigated under practical communication scenarios. Particularly, ZF-SIC detection/decoding with symbol ordering and MMSE-SIC with fixed ordering were studied for hardware-impaired transceivers and when CSI is imperfectly provided at the receiver side. The analysis considered i.i.d. Rayleigh multipath fading channels. New analytical and quite simple (in terms of computational complexity) expressions regarding the outage probability for each SIC stage were obtained. In addition, a general formula describing the error rate performance with regard to the error propagation effect was provided. Moreover, it was indicated that MMSE-SIC outperforms ZF-SIC in non-ideal communication systems despite utilizing no optimal ordering. 
In addition, an unavoidable performance floor was shown to arise in the general scenario for both schemes, while the diversity order is maintained in MMSE-SIC only when ideal hardware equipment is employed at the receiver.
\section{Introduction} In 1900, Poincar\'e~\cite{Poincare} published a fundamental result on Lie algebras that would prove a powerful tool in representation theory: A Lie algebra embeds into an associative algebra that behaves in many ways like a polynomial ring. In 1937, Birkhoff~\cite{Birkhoff} and Witt~\cite{Witt} independently formulated and proved versions of the theorem that we use today, although neither author cited Poincar\'e's earlier work. The result was called the Birkhoff-Witt Theorem for years and then later the Poincar\'e-Witt Theorem (see Cartan and Eilenberg~\cite{CartanEilenberg}) before Bourbaki~\cite{Bourbaki} prompted use of its current name, the {\em Poincar\'e-Birkhoff-Witt Theorem}. The original theorem on Lie algebras was greatly expanded over time by a number of authors to describe various algebras, especially those defined by quadratic-type relations (including Koszul rings over semisimple algebras). Poincar\'e-Birkhoff-Witt\ theorems are often used as a springboard for investigating the representation theory of algebras. These theorems are used to \begin{itemize} \item reveal an algebra as a deformation of another, well-behaved algebra, \item posit a convenient basis (of ``monomials'') for an algebra, and \item endow an algebra with a canonical homogeneous (or graded) version. \end{itemize} In this survey, we sample some of the various Poincar\'e-Birkhoff-Witt theorems, applications, and techniques used to date for proving these results. Our survey is not intended to be all-inclusive; we instead seek to highlight a few of the more recent contributions and provide a helpful resource for users of Poincar\'e-Birkhoff-Witt theorems, which we henceforth refer to as {\em PBW theorems}. We begin with a quick review in Section~\ref{classical} of the original PBW Theorem for enveloping algebras of Lie algebras. We next discuss PBW properties for quadratic algebras in Section~\ref{homogeneous}, and for Koszul algebras in particular, before turning to arbitrary finitely generated algebras in Section~\ref{sec:nonhomdef}. We recall needed facts on Hochschild cohomology and algebraic deformation theory in Section~\ref{defHH}, and more background on Koszul algebras is given in Section~\ref{Koszul}. Sections~\ref{BG}--\ref{diamond} outline techniques for proving PBW results recently used in more general settings, some by way of homological methods and others via the Composition-Diamond Lemma (and Gr\"obner basis theory). One inevitably is led to similar computations when applying any of these techniques to specific algebras, but with different points of view. Homological approaches can help to organize computations and may contain additional information, while approaches using Gr\"obner basis theory are particularly well-suited for computer computation. We focus on some classes of algebras in Sections~\ref{DJQG} and~\ref{SRA} of recent interest: Drinfeld-Jimbo quantum groups, Nichols algebras of diagonal type, symplectic reflection algebras, rational Cherednik algebras, and graded (Drinfeld) Hecke algebras. In Section~\ref{positivechar}, we mention applications in positive characteristic (including algebras built on group actions in the modular case) and other generalizations that mathematicians have only just begun to explore. We take all tensor products over an underlying field $k$ unless otherwise indicated and assume all algebras are associative $k$-algebras with unity. 
Note that although we limit discussions to finitely generated algebras over $k$ for simplicity, many remarks extend to more general settings. \section{Lie algebras and the classical PBW Theorem}\label{classical} All PBW theorems harken back to a classical theorem for universal enveloping algebras of Lie algebras established independently by Poincar\'e~\cite{Poincare}, Birkhoff~\cite{Birkhoff}, and Witt~\cite{Witt}. In this section, we recall this original PBW theorem in order to set the stage for other PBW theorems and properties; for comprehensive historical treatments, see \cite{Grivel,so-called}. A finite dimensional {\em Lie algebra} is a finite dimensional vector space ${\mathfrak{g}}$ over a field $k$ together with a binary operation $[ \ , \ ] : {\mathfrak{g}} \times{\mathfrak{g}} \rightarrow {\mathfrak{g}}$ satisfying (i)\ \ (antisymmetry) $\ \ \, [v,v]=0$ and (ii)\, (Jacobi identity) $ \ [u,[v,w]]+[v,[w,u]]+[w,[u,v]] =0$ \noindent for all $u,v,w\in {\mathfrak{g}}$. Condition (i) implies $[v,w]=-[w,v]$ for all $v,w$ in $\mathfrak g$ (and is equivalent to this condition in all characteristics other than 2). The {\em universal enveloping algebra} $U({\mathfrak{g}})$ of ${\mathfrak{g}}$ is the associative algebra generated by the vectors in ${\mathfrak{g}}$ with relations $vw-wv=[v,w]$ for all $v,w$ in ${\mathfrak{g}}$, i.e., $$ U({\mathfrak{g}}) = \quotient{T({\mathfrak{g}})}{(v\otimes w - w\otimes v - [v,w] : v,w\in {\mathfrak{g}}), } $$ where $T({\mathfrak{g}})$ is the tensor algebra of the vector space ${\mathfrak{g}}$ over $k$. It can be defined by a universal property: $U({\mathfrak{g}})$ is the (unique up to isomorphism) associative algebra such that any linear map $\phi$ from ${\mathfrak{g}}$ to an associative algebra $A$ satisfying $[\phi(v),\phi(w)]=\phi([v,w])$ for all $v,w\in {\mathfrak{g}}$ factors through $U({\mathfrak{g}})$. (The bracket operation on an associative algebra $A$ is given by $[a,b]:= ab-ba$ for all $a,b\in A$.) As an algebra, $U({\mathfrak{g}})$ is filtered, under the assignment of degree 1 to each vector in $\mathfrak{g}$. \begin{namedthm}[Original PBW Theorem] A Lie algebra ${\mathfrak{g}}$ embeds into its universal enveloping algebra $U({\mathfrak{g}})$, and the associated graded algebra of $U({\mathfrak{g}})$ is isomorphic to $S({\mathfrak{g}})$, the symmetric algebra on the vector space ${\mathfrak{g}}$. \end{namedthm} Thus the original PBW Theorem compares a universal enveloping algebra $U({\mathfrak{g}})$ to an algebra of (commutative) polynomials. Since monomials form a $k$-basis for a polynomial algebra, the original PBW theorem is often rephrased in terms of a {\em PBW basis} (with tensor signs between vectors dropped): \begin{namedthm}[PBW Basis Theorem] Let $v_1,\ldots,v_n$ be an ordered $k$-vector space basis of the Lie algebra ${\mathfrak{g}}$. Then $ \{ v_{1}^{a_1}\cdots v_{n}^{a_n} : \ a_i \in \mathbb N\} $ is a $k$-basis of the universal enveloping algebra $U({\mathfrak{g}})$. \end{namedthm} \vspace{2ex} \begin{ex} The Lie algebra ${\mathfrak{sl}}_2(\CC)$ consists of $2\times 2$ matrices of trace 0 with entries in $\CC$ under the bracket operation on the associative algebra of all $2\times 2$ matrices. The standard basis of ${\mathfrak{sl}}_2(\CC)$ is \[ e = \left(\begin{array}{cc} 0&1\\0&0\end{array}\right), \ \ f = \left(\begin{array}{cc}0&0\\1&0\end{array}\right), \ \ h = \left(\begin{array}{cc}1&0\\0&-1\end{array}\right), \] for which $[e,f]=h, \ [h,e]=2e,\ [h,f]=-2f$. 
Thus $U({\mathfrak{sl}}_2(\CC))$ is the associative $\CC$-algebra generated by three symbols that we also denote by $e,f,h$ (abusing notation) subject to the relations $ef-fe=h$, $\ he-eh=2e$, $\ hf-fh=-2f$. It has $\CC$-basis $ \{ e^{a}h^{b}f^{c} : \, a,b,c\in{\mathbb N}\}$. \end{ex} \vspace{2ex} Proofs of the original PBW Theorem vary (and by how much is open to interpretation). The interested reader may wish to consult, for example, the texts~\cite{CartanEilenberg}, \cite{Dixmier}, \cite{Humphreys}, \cite{Jacobson}, and~\cite{Varadarajan}. Jacobson~\cite{Jacobson41} proved a PBW theorem for restricted enveloping algebras in positive characteristic. Higgins~\cite{Higgins} gives references and a comprehensive PBW theorem over more general ground rings. A PBW theorem for Lie superalgebras goes back to Milnor and Moore \cite{MilnorMoore} (see also Kac~\cite{Kac}). Grivel's historical article~\cite{Grivel} includes further references on generalizations to other ground rings, to Leibniz algebras, and to Weyl algebras. In Sections~\ref{BG} and \ref{diamond} below, we discuss two proof techniques particularly well suited to generalization: a combinatorial approach through the Composition-Diamond Lemma and a homological approach through algebraic deformation theory. First we lay some groundwork on quadratic algebras. \section{Homogeneous quadratic algebras}\label{homogeneous} Many authors have defined the notions of PBW algebra, PBW basis, PBW deformation, or PBW property in order to establish theorems like the original PBW Theorem in more general settings. Let us compare a few of these concepts, beginning in this section with those defined for {\em homogeneous} quadratic algebras. \subsection*{Quadratic algebras} Consider a finite dimensional vector space $V$ over $k$ with basis $v_1,\ldots, v_n$. Let $T$ be its tensor algebra over $k$, i.e., the free $k$-algebra $k\langle v_1,\ldots, v_n\rangle $ generated by the $v_i$. Then $T$ is an $\mathbb N$-graded $k$-algebra with $$T^0=k,\ T^1=V,\ T^2=V\otimes V,\ T^3=V\otimes V\otimes V, \text{ etc.}$$ We often omit tensor signs in writing elements of $T$ as is customary in noncommutative algebra, e.g., writing $x^3$ for $x\otimes x\otimes x$ and $xy$ for $x\otimes y$. Suppose $P$ is a set of filtered (nonhomogeneous) relations in degree 2, $$P\subseteq T^0 \oplus T^1 \oplus T^2, $$ and let $I=(P)$ be the $2$-sided ideal in $T$ generated by $P$. The quotient $A=T/I$ is a {\em nonhomogeneous quadratic algebra}. If $P$ consists of elements of homogeneous degree 2, i.e., $P\subseteq T^2$, then $A$ is a {\em homogeneous quadratic algebra}. Thus a quadratic algebra is just an algebra whose relations are generated by (homogeneous or nonhomogeneous) quadratic expressions. We usually write each element of a finitely presented algebra $A=T/I$ as a coset representative in $T$, suppressing mention of the ideal $I$. Then a {\em $k$-basis} for $A$ is a subset of $T$ representing cosets modulo $I$ which form a basis for $A$ as a $k$-vector space. Some authors say a quadratic algebra has a {\em PBW basis} if it has the same $k$-basis as a universal enveloping algebra, i.e., if $\{v_1^{a_1}\cdots v_n^{a_n}: a_i\in\mathbb N\}$ is a basis for $A$ as a $k$-vector space. Such algebras include Weyl algebras, quantum/skew polynomial rings, some iterated Ore extensions, some quantum groups, etc.
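\vspace{2ex} \begin{ex} As a quick illustration of this usage (a standard example, recorded here only for concreteness), consider the quantum plane $$ A=\quotient{k\langle x,y\rangle}{(yx-qxy)} $$ for a fixed nonzero scalar $q$ in $k$. Rewriting $yx$ as $qxy$ repeatedly moves every $x$ to the left of every $y$, so the monomials $x^ay^b$ span $A$; they are moreover linearly independent, and hence $\{x^ay^b: a,b\in\mathbb N\}$ is a $k$-basis and $A$ has a PBW basis in the sense just described. \end{ex} \vspace{2ex}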
\subsection*{Priddy's PBW algebras} Priddy~\cite{Priddy} gave a broader definition of PBW basis for homogeneous quadratic algebras in terms of any ordered basis of $V$ (say, $v_1< v_2 < \cdots < v_n$) in establishing the notion of Koszul algebras. (A quadratic algebra is {\em Koszul} if the boundary maps in its minimal free resolution have matrix entries that are linear forms; see Section~\ref{Koszul}.) Priddy first extended the ordering degree-lexicographically to a monomial ordering on the tensor algebra $T$, where we regard pure tensors in $v_1, \ldots, v_n$ as monomials. He then called a $k$-vector space basis for $A=T/I$ a {\em PBW basis} (and the algebra $A$ a {\em PBW algebra}) if the product of any two basis elements either lay again in the basis or could be expressed modulo $I$ as a sum of larger elements in the basis. In doing so, Priddy~\cite[Theorem 5.3]{Priddy} gave a class of Koszul algebras which is easy to study: \begin{thm} If a homogeneous quadratic algebra has a PBW basis, then it is Koszul. \end{thm} Polishchuk and Positselski reframed Priddy's idea; we summarize their approach (see~\cite[Chapter~4, Section~1]{PP}) using the notion of leading monomial $\LM$ of any element of $T$ written in terms of the basis $v_1, \ldots, v_n$ of $V$. Suppose the set of generating relations $P$ is a subspace of $T^2$. Consider those monomials that are not divisible by the leading monomial of any generating quadratic relation: $$ {\mathcal{B}}_{P}=\{\text{monomials } m\in T: \LM(a) \nmid m,\ \forall a\in P \}\, . $$ Polishchuk and Positselski call $\mathcal{B}_P$ a {\em PBW basis} of the quadratic algebra $A$ (and $A$ a {\em PBW algebra}) whenever $\mathcal{B}_P$ is a $k$-basis of $A$. \subsection*{Gr\"obner bases} Priddy's definition and the reformulation of Polishchuk and Positselski immediately call to mind the theory of Gr\"obner bases. Recall that a set $\mathscr{G}$ of nonzero elements generating an ideal $I$ is called a (noncommutative) {\em Gr\"obner basis} if the leading monomial of each nonzero element of $I$ is divisible by the leading monomial of some element of $\mathscr{G}$ with respect to a fixed monomial (i.e., term) ordering (see~\cite{Mora} or~\cite{Li2012}). (Gr\"obner bases and Gr\"obner-Shirshov bases were developed independently in various contexts by Shirshov~\cite{ShirshovOn62} in 1962, Hironaka~\cite{Hironaka} in 1964, Buchberger~\cite{BuchbergerThesis} in 1965, Bokut'~\cite{Bokut} in 1976, and Bergman~\cite{Bergman} in 1978.) A Gr\"obner basis $\mathscr{G}$ is {\em quadratic} if it consists of homogeneous elements of degree 2 (i.e., lies in $T^2$) and it is {\em minimal} if no proper subset is also a Gr\"obner basis. A version of the Composition-Diamond Lemma for associative algebras (see Section~\ref{diamond}) implies that if $\mathscr{G}$ is a Gr\"obner basis for $I$, then $$\mathcal{B}_{\mathscr{G}}=\{\text{monomials } m \in T: \, \LM(a) \nmid m,\, \forall a \in \mathscr{G}\} $$ is a $k$-basis for $A=T(V)/I$. \vspace{2ex} \begin{ex} Let $A$ be the $\CC$-algebra generated by symbols $x,y$ with a single generating relation $xy=y^2$. Set $V=\CC\text{-span}\{x,y\}$ and $P=\{xy-y^2\}$ so that $A=T(V)/(P)$. 
A Gr\"obner basis $\mathscr{G}$ for the ideal $I=(P)$ with respect to the degree-lexicographical monomial ordering with $x<y$ is infinite: $$\begin{aligned} \mathscr{G}&=\{yx^ny-x^{n+1}y: n\in \mathbb N\},\\ \mathcal{B}_{P}&=\{\text{monomials } m\in T \text{ that are not divisible by } y^2\},\\ \mathcal{B}_{\mathscr{G}}&=\{\text{monomials } m\in T \text{ that are not divisible by } yx^ny \text{ for any } n\in \mathbb N\}. \end{aligned} $$ Hence, $A$ is not a PBW algebra using the ordering $x<y$ since $\mathcal{B}_{\mathscr{G}}$ is a $\CC$-basis for $A$ but $\mathcal{B}_{P}$ is not. If we instead take some monomial ordering with $x>y$, then $\mathscr{G}=P$ is a Gr\"obner basis for the ideal $I=(P)$ and $\mathcal{B}_{\mathscr{G}}=\mathcal{B}_{P}$ is a $\CC$-basis of $A$: $$ \begin{aligned} \mathcal{B}_{P}=\mathcal{B}_{\mathscr{G}}&= \{\text{monomials } m \in T \text{ that are not divisible by } xy\}\\ &=\{y^ax^b:a,b\in \mathbb N\}. \end{aligned} $$ Hence $A$ is a PBW algebra using the ordering $y<x$. \end{ex} \vspace{2ex} \subsection*{Quadratic Gr\"obner bases} How do the sets of monomials $\mathcal{B}_P$ and $\mathcal{B}_{\mathscr{G}}$ compare after fixing an appropriate monomial ordering? Suppose $\mathscr{G}$ is a minimal Gr\"obner basis for $I=(P)$ (which implies that no element of $\mathscr{G}$ has leading monomial dividing that of another). Then $\mathcal{B}_{\mathscr{G}}\subset \mathcal{B}_P$, and the reverse inclusion holds whenever $\mathscr{G}$ is quadratic (since then $\mathscr{G}$ must be a subset of the subspace $P$). Since each graded piece of $A$ is finite dimensional over $k$, a PBW basis thus corresponds to a quadratic Gr\"obner basis: $$\mathcal{B}_P \text{ is a PBW basis of }A \iff \mathcal{B}_{\mathscr{G}}= \mathcal{B}_P \iff \mathscr{G} \text{ is quadratic}. $$ Thus authors sometimes call any algebra defined by an ideal of relations with a quadratic Gr\"obner basis a PBW algebra. In any case (see~\cite{Anick},~\cite{BHV},~\cite{Froberg}): \begin{thm} Any quadratic algebra whose ideal of relations has a (noncommutative) quadratic Gr\"obner basis is Koszul. \end{thm} Backelin (see~\cite[Chapter 4, Section 3]{PP}) gave an example of a Koszul algebra defined by an ideal of relations with no quadratic Gr\"obner basis. Eisenbud, Reeves, and Totaro~\cite[p.~187]{ERT} gave an example of a commutative Koszul algebra whose ideal of relations does not have a quadratic Gr\"obner basis with respect to {\em any} ordering, even after a change of basis (see also~\cite{Froberg}). We relate Gr\"obner bases and PBW theorems for {\em nonhomogeneous} algebras in Section~\ref{diamond}. \section{Nonhomogeneous algebras: PBW deformations}\label{sec:nonhomdef} Algebras defined by generators and relations are not naturally graded, but merely filtered, and often one wants to pass to some graded or homogeneous version of the algebra for quick information. There is more than one way to do this in general. The original PBW Theorem shows that the universal enveloping algebra of a Lie algebra has one natural homogeneous version. Authors apply this idea to other algebras, saying that an algebra satisfies a {\em PBW property} when graded versions are isomorphic and call the original algebra a {\em PBW deformation} of this graded version. We make these notions precise in this section and relate them to the work of Braverman and Gaitsgory and of Polishchuk and Positselski on Koszul algebras in the next section. 
\subsection*{Filtered algebras} Again, consider an algebra $A$ generated by a finite dimensional vector space $V$ over a field $k$ with some defining set of relations $P$. (More generally, one might consider a module over a group algebra or some other $k$-algebra.) Let $T=\bigoplus_{i\geq 0} T^i$ be the tensor algebra over $V$ and let $I=( P)$ be the two-sided ideal of relations so that $$A={T}/ {I} \, .$$ If $I$ is homogeneous, then the quotient algebra $A$ is graded. In general, $I$ is nonhomogeneous and the quotient algebra is only filtered, with $i$-th filtered component $F^i(A)=F^i(T/I) = (F^i(T)+I)/I$ induced from the filtration on $T$ obtained by assigning degree one to each vector in $V$ (i.e., $F^i(T)= T^0\oplus T^1 \oplus \ldots \oplus T^i$). \subsection*{Homogeneous versions} One associates to the filtered algebra $A$ two possibly different graded versions. On one hand, we cross out lower order terms in the {\em generating} set $P$ of relations to obtain a homogeneous version of the original algebra. On the other hand, we cross out lower order terms in each element of the {\em entire} ideal of relations. Then {\em PBW conditions} are precisely those under which these two graded versions of the original algebra coincide, as we recall next. The {\em associated graded algebra} of $A$, $$\gr (A) = \ \bigoplus_{i\geq 0} \quotient{{F}^i(A)}{{F}^{i-1}(A)} \, , $$ is a graded version of $A$ which does not depend on the choice of generators $P$ of the ideal of relations $I$. (We set $F^{-1} = \{0\}$.) The associated graded algebra may be realized concretely by projecting each element in the ideal $I$ onto its leading homogeneous part (see Li~\cite[Theorem 3.2]{Li2012}): $$ \gr \big(\quotient{T}{I}\big) \ \cong\ \quotient{T}{ ( \text{LH}(I))}\, , $$ where $\text{LH}(S)=\{\text{LH}(f): f \in S\}$ for any $S\subseteq T$ and $\text{LH}(f)$ picks off the leading (or highest) homogeneous part of $f$ in the graded algebra $T$. (Formally, $\text{LH}(f)=f_d$ for $f=\sum_{i=1}^d f_i$ with each $f_i$ in $T^i$ and $f_d$ nonzero.) Those looking for a shortcut may be tempted instead simply to project elements of the generating set $P$ onto their leading homogeneous parts. A natural surjection (of graded algebras) always arises from this homogeneous version of $A$ determined by $P$ to the associated graded algebra of $A$: $$\quotient{T}{( \text{LH}(P)) } \twoheadrightarrow \gr \big(\quotient{T}{I}\big)\ .$$ \subsection*{PBW deformations} We say the algebra $T/I$ is a {\em PBW deformation} of its homogeneous version $T/ ( \text{LH}(P))$ (or satisfies the {\em PBW property} with respect to $P$) when the above surjection is also injective, i.e., when the associated graded algebra and the homogeneous algebra determined by $P$ coincide (see~\cite{BG}): $$ \quotient{T}{( \text{LH}(I) ) } \cong \gr \Big(\quotient{T}{I}\Big)\ \cong \quotient{T}{( \text{LH}(P))} \, .$$ In the next section, we explain the connections among PBW deformations, graded (and formal) deformations, and Hochschild cohomology. In this language, the original PBW Theorem for universal enveloping algebras asserts that the set $$ P=\{v\otimes w-w\otimes v-[v,w]:v,w\in V\} $$ gives rise to a quotient algebra $T/(P)$ that is a PBW deformation of the commutative polynomial ring $S(V)$, for $V$ the underlying vector space of a Lie algebra. Here, each element of $V$ has degree $1$ so that the relations are nonhomogeneous of degree $2$ and $T/(P)$ is a nonhomogeneous quadratic algebra.
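\vspace{2ex} \begin{ex} Another standard illustration (recorded here for concreteness rather than taken from the references above): the Weyl algebra $$ A_1=\quotient{k\langle x,y\rangle}{(xy-yx-1)}\, , $$ with $P=\{x\otimes y-y\otimes x-1\}$. Here $\text{LH}(P)=\{x\otimes y-y\otimes x\}$, so the homogeneous version $T/(\text{LH}(P))$ is the polynomial ring $k[x,y]$, and $\gr A_1\cong k[x,y]$ as well; thus $A_1$ is a PBW deformation of $k[x,y]$. \end{ex} \vspace{2ex}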
We include an example next to show how the PBW property depends on choice of generating relations $P$ defining the algebra $T/I$. (But note that if $A$ satisfies the PBW property with respect to some generating set $P$ of relations, then the subspace that $P$ generates is unique; see~\cite[Proposition 2.1]{PBWQuadratic}.) \vspace{2ex} \begin{ex}\label{cuteexample}{ We mention a filtered algebra that exhibits the PBW property with respect to one generating set of relations but not another. Consider the (noncommutative) algebra $A$ generated by symbols $x$ and $y$ with defining relations $xy=x$ and $yx=y$: $$ A=\quotient{k\langle x,y \rangle }{(xy-x, yx-y)}\, ,$$ where $k\langle x,y\rangle$ is the free $k$-algebra generated by $x$ and $y$. The algebra $A$ {\em does not} satisfy the PBW property with respect to the generating relations $xy-x$ and $yx-y$. Indeed, the relations imply that $x^2=x$ and $y^2=y$ in $A$ and thus the associated graded algebra $\gr(A)$ is trivial in degree two while the homogeneous version of $A$ is not (as $x^2$ and $y^2$ represent nonzero classes). The algebra $A$ {\em does} exhibit the PBW property with respect to the larger generating set $\{xy-x, yx-y, x^2-x, y^2-y\}$ since $$\gr A \cong \quotient{k\langle x,y\rangle}{(xy, yx, x^2, y^2)}\, . $$ Examples~\ref{cuteexampleGroebner} and~\ref{cuteexampleDiamond} explain this recovery of the PBW property in terms of Gr\"obner bases and the Composition-Diamond Lemma.}\end{ex} \section{Deformation theory and Hochschild cohomology}\label{defHH} In the last section, we saw that an algebra defined by nonhomogeneous relations is called a {\em PBW deformation} when the homogeneous version determined by generating relations coincides with its associated graded algebra. How may one formally view the original nonhomogeneous algebra as a {\em deformation} of its homogeneous version? In this section, we begin to fit PBW deformations into the theory of algebraic deformations. We recall the theory of deformations of algebras and Hochschild cohomology, a homological tool used to predict deformations and prove PBW properties. \subsection*{Graded deformations} Let $t$ be a formal parameter. A {\em graded deformation} of a graded $k$-algebra $A$ is a graded associative $k[t]$-algebra $A_t$ (for $t$ in degree 1) which is isomorphic to $A[t]=A\otimes_k k[t]$ as a $k[t]$-module with $$A_t|_{t=0} \cong A .$$ If we specialize $t$ to an element of $k$ in the algebra $A_t$, then we may no longer have a graded algebra, but a filtered algebra instead. PBW deformations may be viewed as graded deformations: Each PBW deformation is a graded deformation of its homogeneous version with parameter $t$ specialized to some element of $k$. Indeed, given a finitely generated algebra $A=T/(P)$, we may insert a formal parameter $t$ of degree 1 throughout the defining relations $P$ to make each relation homogeneous and extend scalars to $k[t]$; the result yields a graded algebra $B_t$ over $k[t]$ with $A=B_t|_{t=1}$ and $B= B_t|_{t=0}$, the homogeneous version of $A$. One may verify that if $A$ satisfies the PBW property, then this interpolating algebra $B_t$ also satisfies a PBW condition over $k[t]$ and that $B_t$ and $B[t]$ are isomorphic as $k[t]$-modules. Thus as $B_t$ is an associative graded algebra, it defines a graded deformation of $B$. Suppose $A_t$ is a graded deformation of a graded $k$-algebra $A$.
Then up to isomorphism, $A_t$ is just the vector space $A[t]$ together with some associative multiplication given by \begin{equation}\label{associativemultiplication} a * b = ab + \mu_1(a\otimes b)t + \mu_2(a\otimes b) t^2+\cdots, \end{equation} where $ab$ is the product of $a$ and $b$ in $A$ and, for each $i$, $\mu_i$ is a linear map from $A\otimes A$ to $A$ of degree $-i$, extended to be $k[t]$-linear. The degree condition on the maps $\mu_i$ is forced by the fact that $A_t$ is graded for $t$ in degree 1. (One sometimes considers a {\em formal} deformation, defined over formal power series $k[[t]]$ instead of polynomials $k[t]$.) The condition that the multiplication $*$ in $A[t]$ be associative imposes conditions on the functions $\mu_i$ which are often expressed using Hochschild cohomology. For example, comparing coefficients of $t$ in the equation $(a*b)*c = a*(b*c)$, we see that $\mu_1$ must satisfy \begin{equation}\label{cocyclecondition} a\mu_1(b\otimes c) + \mu_1(a\otimes bc) = \mu_1(ab\otimes c) + \mu_1(a\otimes b)c \end{equation} for all $a,b,c\in A$. We see below that this condition implies that $\mu_1$ is a Hochschild 2-cocycle. Comparing coefficients of $t^2$ yields a condition on $\mu_1,\mu_2$ called the {\em first obstruction}, comparing coefficients of $t^3$ yields a condition on $\mu_1, \mu_2,\mu_3$ called the {\em second obstruction}, and so on. (See~\cite{Gerstenhaber}.) \subsection*{Hochschild cohomology} Hochschild cohomology is a generalization of group cohomology well suited to noncommutative algebras. It gives information about an algebra $A$ viewed as a bimodule over itself, thus capturing right and left multiplication, and predicts possible multiplication maps $\mu_i$ that could be used to define a deformation of $A$. One may define the Hochschild cohomology of a $k$-algebra concretely as Hochschild cocycles modulo Hochschild coboundaries by setting $$ \text{Hochschild $i$-cochains} = \{ \text{linear functions } \phi: \underbrace{A\otimes \cdots \otimes A}_{i-\text{times}} \rightarrow A \} $$ (i.e., multilinear maps $A\times \cdots\times A \rightarrow A$) with linear boundary operator $$\delta_{i+1}^*: i\text{-cochains} \rightarrow (i+1)\text{-cochains}$$ given by $$ \begin{aligned} (\delta_{i+1}^*\phi)&(a_0\otimes \cdots\otimes a_{i}) =\\ & a_0\phi(a_1\otimes \cdots\otimes a_{i}) +\sum_{0\leq j\leq i-1} (-1)^{j+1} \phi(a_0\otimes\cdots\otimes a_{j-1}\otimes a_j a_{j+1}\otimes a_{j+2} \otimes\cdots\otimes a_{i})\\ &\hphantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx} + (-1)^{i+1}\phi(a_0\otimes\cdots\otimes a_{i-1}) a_{i}\, . \end{aligned} $$ We identify $A$ with $\{0\text{-cochains}\}$. Then $$ {\rm HH}^i(A):=\ker \delta_{i+1}^* / \text{Im } \delta_i^*\ . $$ We are interested in other concrete realizations of Hochschild cohomology giving isomorphic cohomology groups. Formally, we view any $k$-algebra $A$ as a bimodule over itself, i.e., a right $A^e$-module where $A^e$ is its enveloping algebra, $A\otimes A^{op}$, for $A^{\op}$ the opposite algebra of $A$. The Hochschild cohomology of $A$ is then just $$ {\rm HH}^{\DOT}(A)=\text{Ext}^{\DOT}_{A^e}(A,A). $$ This cohomology is often computed using the $A$-bimodule bar resolution of $A$: \begin{equation}\label{res-bar} \cdots \stackrel{}{\longrightarrow} A^{\otimes 4}\stackrel{\delta_2}{\longrightarrow} A^{\otimes 3} \stackrel{\delta_1}{\longrightarrow} A^{\otimes 2} \stackrel{\delta_0}{\longrightarrow} A \longrightarrow 0 , \end{equation} where $\delta_0$ is the multiplication in $A$, and, for each $i\geq 1$, $$ \delta_i(a_0\otimes\cdots\otimes a_{i+1}) = \sum_{j=0}^{i} (-1)^j a_0\otimes \cdots\otimes a_j a_{j+1}\otimes \cdots\otimes a_{i+1} $$ for $a_0,\ldots, a_{i+1}$ in $A$. We take the homology of this complex after dropping the initial term $A$ and applying $\mbox{\rm Hom\,}_{A\otimes A^{\text{op}}}(-,A)$ to obtain the above description of Hochschild cohomology in terms of Hochschild cocycles and coboundaries, using the identification $$ \mbox{\rm Hom\,}_{A\otimes A^{\text{op}}}(A\otimes A^{\otimes i}\otimes A, A) \cong \mbox{\rm Hom\,}_{k}(A^{\otimes i}, A). $$ \section{Koszul algebras}\label{Koszul} We wish to extend the original PBW Theorem for universal enveloping algebras to other nonhomogeneous quadratic algebras. When is a given algebra a PBW deformation of another well-understood and well-behaved algebra? Can we replace the polynomial algebra in the original PBW theorem by any homogeneous quadratic algebra, provided it is well-behaved in some way? We turn to Koszul algebras as a wide class of quadratic algebras generalizing the class of polynomial algebras. In this section, we briefly recall the definition of a Koszul algebra. \subsection*{Koszul complex} Throughout this section, let $S=T(V)/(R)$ be a homogeneous quadratic algebra determined by a subspace of quadratic relations $R\subseteq V\otimes V$. The algebra $S$ is a {\em Koszul algebra} if the underlying field $k$, viewed as an $S$-module via the augmentation below, admits a linear free resolution over $S$, i.e., one with boundary maps given by matrices whose entries are linear forms. Equivalently, $S$ is a Koszul algebra if the following complex of left $S$-modules is acyclic: \begin{equation}\label{res-koszul} \cdots\longrightarrow K_3(S)\longrightarrow K_2(S)\longrightarrow K_1(S)\longrightarrow K_0(S)\longrightarrow k\longrightarrow 0 \end{equation} where $K_0(S) = S$, $K_1(S)=S\otimes V$, $K_2(S)=S\otimes R$, and for $i\geq 3$, $$ K_i(S) = S\otimes \left(\,\,\bigcap_{j=0}^{i-2} V^{\otimes j}\otimes R\otimes V^{\otimes (i-2-j)} \right) . $$ The differential is that inherited from the bar resolution of $k$ over $S$, \begin{equation}\label{res-bar-k} \cdots \stackrel{\partial_4}{\longrightarrow} S^{\otimes 4}\stackrel{\partial_3}{\longrightarrow} S^{\otimes 3} \stackrel{\partial_2}{\longrightarrow} S^{\otimes 2} \stackrel{\partial_1}{\longrightarrow} S \stackrel{\varepsilon}{\longrightarrow} k \longrightarrow 0 , \end{equation} where $\varepsilon$ is the augmentation ($\varepsilon(v)=0$ for all $v$ in $V$) and for each $i\geq 1$, $$ \partial_i(s_0\otimes\cdots\otimes s_{i}) = (-1)^i \varepsilon(s_i) s_0\otimes \cdots \otimes s_{i-1} + \sum_{j=0}^{i-1} (-1)^j s_0\otimes \cdots\otimes s_j s_{j+1}\otimes \cdots\otimes s_{i} . $$ (Note that for each $i$, $K_i(S)$ is an $S$-submodule of $S^{\otimes (i+1)}$.) \subsection*{Bimodule Koszul complex} Braverman and Gaitsgory gave an equivalent definition of Koszul algebra via the bimodule Koszul complex: Let \begin{equation}\label{K-tilde} \widetilde{K}_i(S) = K_i(S)\otimes S , \end{equation} an $S^e$-module (equivalently $S$-bimodule) where $S^e=S\otimes S^{op}$.
Then $\widetilde{K}_{\DOT}(S)$ embeds into the bimodule bar resolution (\ref{res-bar}) whose $i$-th term is $S^{\otimes (i+2)}$, and $S$ is Koszul if and only if $\widetilde{K}_{\DOT}(S)$ is a bimodule resolution of $S$. Thus we may obtain the Hochschild cohomology ${\rm HH}^{\DOT}(S)$ of $S$ (which contains information about its deformations) by applying $\mbox{\rm Hom\,}_{S^e}( - , S)$ either to the Koszul resolution $\widetilde{K}_{\DOT}(S)$ or to the bar resolution (\ref{res-bar}) of $S$ as an $S^e$-module (after dropping the initial nonzero terms of each) and taking homology. We see in the next section how these resolutions and the resulting cohomology are used in homological proofs of a generalization of the PBW Theorem from~\cite{BG,PP,Positselski}. \section{Homological methods and deformations of Koszul algebras} \label{BG} Polishchuk and Positselski~\cite{PP,Positselski} and Braverman and Gaitsgory~\cite{BG} extended the idea of the original PBW Theorem for universal enveloping algebras to other nonhomogeneous quadratic algebras by replacing the polynomial algebra in the theorem by an arbitrary Koszul algebra. They stated conditions for a version of the original PBW Theorem to hold in this greater generality and gave homological proofs. (Polishchuk and Positselski~\cite{PP} in fact gave two proofs, one homological that goes back to Positselski~\cite{Positselski} and another using distributive lattices.) We briefly summarize these two homological approaches in this section and discuss generalizations. \subsection*{Theorem of Polishchuk and Positselski, Braverman and Gaitsgory} As in the last sections, let $V$ be a finite dimensional vector space over a field $k$ and let $T$ be its tensor algebra over $k$ with $i$-th filtered component $F^i(T)$. Consider a subspace $P$ of $F^2(T)$ defining a nonhomogeneous quadratic algebra $$A=\quotient{T}{(P)}\ .$$ Let $R=\text{LH}(P)\cap T^2$ be the projection of $P$ onto the homogeneous component of degree 2, and set $$S=\quotient{T}{(R)},$$ a homogeneous quadratic algebra (the homogeneous version of $A$ as in Section~\ref{sec:nonhomdef}). Then $A$ is a PBW deformation of $S$ when $\gr A$ and $S$ are isomorphic as graded algebras. Braverman and Gaitsgory and also Polishchuk and Positselski gave a generalization of the PBW Theorem~\cite{BG,PP,Positselski} as follows: \begin{thm}\label{IJ} Let $A$ be a nonhomogeneous quadratic algebra, $A = T/(P)$, and $S=T/(R)$ its corresponding homogeneous quadratic algebra. Suppose $S$ is a Koszul algebra. Then $A$ is a PBW deformation of $S$ if, and only if, the following two conditions hold: (I) $ \ P\cap F^1(T) = \{0\}$, and (J) $ \ (F^1(T)\cdot P \cdot F^1(T))\cap F^2(T) = P$. \end{thm} We have chosen the notation of Braverman and Gaitsgory. The necessity of conditions (I) and (J) can be seen by direct algebraic manipulations. Similarly, direct computation shows that if (I) holds, then (J) is equivalent to (i), (ii), and (iii) of Theorem~\ref{thm:BG} below. Braverman and Gaitsgory used algebraic deformation theory to show that these conditions are also sufficient. Polishchuk and Positselski used properties of an explicit complex defined using the Koszul dual of $S$. The conditions (i), (ii), (iii) facilitate these connections to homological algebra, and they are easier in practice to verify than checking (J) directly. 
But in order to state these conditions, we require a canonical decomposition for elements of $P$: Condition~(I) of Theorem~\ref{IJ} implies that every element of $P$ can be written as the sum of a nonzero element of $R$ (of degree $2$), a linear term, and a constant term, i.e., there exist linear functions $\alpha: R\rightarrow V$, $\beta: R\rightarrow k$ for which $$P=\{r-\alpha(r)-\beta(r)\mid r\in R\}.$$ One may then rewrite Condition~(J) and reformulate Theorem~\ref{IJ} as follows. \begin{thm}\label{thm:BG} Let $A$ be a nonhomogeneous quadratic algebra, $A=T/(P)$, and $S=T/(R)$ its corresponding homogeneous quadratic algebra. Suppose $S$ is a Koszul algebra. Then $A$ is a PBW deformation of $S$ if, and only if, the following conditions hold: (I) $ \ \ \ P\cap F^1(T) = \{0\}$, (i) $\ \ \ \Ima (\alpha\otimes \id - \id\otimes \alpha) \subseteq R$, (ii) $ \ \ \alpha\circ (\alpha\otimes\id - \id\otimes \alpha) = - (\beta\otimes\id -\id\otimes \beta)$, (iii) $ \ \beta\circ (\alpha\otimes\id - \id\otimes \alpha) = 0$, \noindent where the maps $\alpha\otimes\id -\id\otimes\alpha$ and $\beta\otimes\id - \id\otimes\beta$ are defined on the subspace $(R\otimes V)\cap (V\otimes R)$ of $T$. \end{thm} We explain next how the original PBW Theorem is a consequence of Theorem~\ref{thm:BG}. Indeed, Polishchuk and Positselski~\cite[Chapter 5, Sections 1 and 2]{PP} described the ``self-consistency conditions'' (i), (ii), and (iii) of the theorem as generalizing the Jacobi identity for Lie brackets. \vspace{2ex} \begin{ex} Let $\mathfrak g$ be a finite dimensional complex Lie algebra, $A=U({\mathfrak{g}})$ its universal enveloping algebra, and $S=S({\mathfrak{g}})$. Then $R$ is spanned by all $v\otimes w - w\otimes v$ for $v,w$ in $V$, and $\alpha(v\otimes w-w\otimes v) = [v,w]$, $\ \beta\equiv 0$. Condition (I) is equivalent to antisymmetry of the bracket. Condition (J) is equivalent to the Jacobi identity, with (i), (ii) expressing the condition separately in each degree in the tensor algebra ($\beta \equiv 0$ in this case). More generally, there are examples with $\beta\not\equiv 0$, for instance, the Sridharan enveloping algebras~\cite{Sridharan}. \end{ex} \vspace{2ex} \subsection*{Homological proofs} We now explain how Braverman and Gaitsgory and Polishchuk and Positselski used algebraic deformation theory and Hochschild cohomology to prove that the conditions of Theorem~\ref{thm:BG} are sufficient. Braverman and Gaitsgory constructed a graded deformation $S_t$ interpolating between $S$ and $A$ (i.e., with $S=S_t|_{t=0}$ and $A=S_t|_{t=1}$), implying that $\gr (A)\cong S$ as graded algebras. They constructed the deformation $S_t$ as follows. \begin{itemize} \item They identified $\alpha$ with a choice of first multiplication map $\mu_1$ and $\beta$ with a choice of second multiplication map $\mu_2$, via the canonical embedding of the bimodule Koszul resolution (\ref{K-tilde}) into the bar resolution (\ref{res-bar}) of $S$. (In order to do this, one must extend $\alpha,\beta$ (respectively, $\mu_1,\mu_2$) to maps on larger spaces via the isomorphisms $\mbox{\rm Hom\,}_k(R,S)\cong \mbox{\rm Hom\,}_{S^e}(S\otimes R\otimes S,S)$ and $\mbox{\rm Hom\,}_k(S\otimes S,S)\cong \mbox{\rm Hom\,}_{S^e}(S^{\otimes 4},S)$.) \item Condition (i) is then seen to be equivalent to $\mu_1$ being a Hochschild 2-cocycle (i.e., satisfying Equation~(\ref{cocyclecondition})). \item Condition (ii) is equivalent to the vanishing of the first obstruction.
\item Condition (iii) is equivalent to the vanishing of the second obstruction. \item All other obstructions vanish automatically for a Koszul algebra due to the structure of its Hochschild cohomology (see~\cite{BG}). \item Thus there exist maps $\mu_i$ for $i>2$ defining an associative multiplication $*$ (as in Equation~(\ref{associativemultiplication})) on $S[t]$. \end{itemize} Positselski~\cite[Theorem~3.3]{Positselski} (see also \cite[Proposition~5.7.2]{PP}) gave a different homological proof of Theorem~\ref{thm:BG}. Let $B$ be the Koszul dual $S^{!}:= \Ext^*_S(k,k)$ of $S$. Then $S\cong B^{!}:= \Ext^*_B(k,k)$. Positselski defined a complex whose terms are the same as those in the bar resolution of $B$ but with boundary maps modified using the functions $\alpha: R\rightarrow V$, $\beta: R\rightarrow k$ by first identifying $\beta$ with an element $h$ of $B^2$ and $\alpha$ with a dual to a derivation $d$ on $B$. The conditions (i), (ii), and (iii) on $\alpha,\beta$ correspond to conditions on $d,h$, under which Positselski called $B$ a CDG-algebra. The idea is that CDG-algebra structures on $B$ are dual to PBW deformations of $S$. Positselski's proof relies on the Koszul property of $S$ (equivalently of $B$) to imply collapsing of a spectral sequence with $E^2_{p,q} = \Ext^{-q,p}_B(k,k)$. The sequence converges to the homology of the original complex for $B$. Koszulness implies the only nonzero terms occur when $p+q=0$, and we are left with the homology of the total complex in degree 0. By its definition this is simply the algebra $A$, and it follows that $\gr A\cong B^{!}\cong S$. \subsection*{Generalizations and extensions} Theorem~\ref{thm:BG} describes nonhomogeneous quadratic algebras whose quadratic versions are Koszul. What if one replaces the underlying field by an arbitrary ring? Etingof and Ginzburg~\cite{EtingofGinzburg} noted that Braverman and Gaitsgory's proof of Theorem~\ref{thm:BG} is in fact valid more generally for Koszul rings over semisimple subrings as defined by Beilinson, Ginzburg, and Soergel~\cite{BGS}. They chose their semisimple subring to be the complex group algebra $\CC G$ of a finite group $G$ acting symplectically and their Koszul ring to be a polynomial algebra $S(V)$. They were interested in the case $\alpha\equiv 0$ for their applications to symplectic reflection algebras (outlined in Section~\ref{SRA} below). Halbout, Oudom, and Tang~\cite{HOT} state a generalization of Theorem~\ref{thm:BG} in this setting that allows nonzero $\alpha$ (i.e., allows relations defining the algebra $A$ to set commutators of vectors in $V$ to a combination of group algebra elements and vectors). A proof using the Koszul ring theory of Beilinson, Ginzburg, and Soergel and the results of Braverman and Gaitsgory is outlined in our paper~\cite{doa} for arbitrary group algebras over the complex numbers. We also included a second proof there for group algebras over arbitrary fields (of characteristic not 2) using the Composition-Diamond Lemma (described in the next section), which has the advantage that it is characteristic free. We adapted the program of Braverman and Gaitsgory to arbitrary nonhomogeneous quadratic algebras and Koszul rings defined over non-semisimple rings in~\cite{PBWQuadratic}, including group rings $kG$ where the characteristic of $k$ divides the order of the group $G$.
The theory of Braverman and Gaitsgory was further generalized to algebras that are $N$-Koszul (all relations homogeneous of degree $N$ plus a homological condition) over semisimple or von Neumann regular rings by a number of authors (see~\cite{BergerGinzburg,FloystadVatne,HSS}). Cassidy and Shelton~\cite{CassidyShelton} generalized the theory of Braverman and Gaitsgory in a different direction, to graded algebras over a field satisfying a particular homological finiteness condition (not necessarily having all relations in a single fixed degree). \section{The Composition-Diamond Lemma and Gr\"obner basis theory}\label{diamond} PBW theorems are often proven using diamond or composition lemmas and the theory of (noncommutative) Gr\"obner bases. Diamond lemmas predict existence of a canonical normal form in a mathematical system. Often one is presented with various ways of simplifying an element to obtain a normal form. If two different ways of rewriting the original element result in the same desired reduced expression, one is reminded of diverging paths meeting like the sides of the shape of a diamond. Diamond lemmas often originate from Newman's Lemma~\cite{Newman} for graph theory. Shirshov (see~\cite{ShirshovOn62} and~\cite{ShirshovSome62}) gave a general version for Lie polynomials in 1962 which Bokut' (see~\cite{Bokut} and~\cite{BokutChen}) extended to associative algebras in 1976, using the term ``Composition Lemma.'' Around the same time (Bokut' cites a preprint by Bergman), Bergman~\cite{Bergman} developed a similar result which he instead called the Diamond Lemma. Both the Diamond Lemma and Composition Lemma are easy to explain but difficult to state precisely without the formalism absorbed by Gr\"obner basis theory. In fact, the level of rigor necessary to carefully state and prove these results can be the subject of debate. Bergman himself writes that the lemma ``has been considered obvious and used freely by some ring-theorists... but others seem unaware of it and write out tortuous verifications.'' (Some authors are reminded of life in a lunatic asylum (see~\cite{HellstromSilvestrov}) when making the basic idea rigorous.) We leave careful definitions to any one of numerous texts (for example, see~\cite{BMM} or~\cite{AlgorithmicMethods}) and instead present the intuitive idea behind the result developed by Shirshov, Bokut', and Bergman. \subsection*{The Result of Bokut' (and Shirshov)} We first give the original result of Bokut' (see~\cite[Proposition 1 and Corollary 1]{Bokut}), who used a degree-lexicographical monomial ordering (also see~\cite{BokutKukin}). \begin{namedthm}[Original Composition Lemma] Suppose a set of relations $P$ defining a $k$-algebra $A$ is ``closed under composition.'' Then the set of monomials that do not contain the leading monomial of any element of $P$ as a subword is a $k$-basis of $A$. \end{namedthm} Before explaining the notion of ``closed under composition,'' we rephrase the results of Bokut' in modern language using Gr\"obner bases to give a PBW-like basis as in Section~\ref{homogeneous} (see~\cite{Green94}, or~\cite{Mora}, for example). Fix a monomial ordering on a free $k$-algebra $T$ and again write $\LM(p)$ for the leading monomial of any $p$ in $T$. We include the converse of the lemma which can be deduced from the work of Shirshov and Bokut' and was given explicitly by Bergman, who used arbitrary monomial orderings. 
\begin{namedthm}[Gr\"obner basis version of Composition Lemma] The set $P$ is a (noncommutative) Gr\"obner basis of the ideal $I$ it generates if and only if $$\mathcal{B}_{P}=\{\text{monomials } m \text{ in } T: \ m \text{ not divisible by any } \LM(p),\ p \in P\} $$ is a $k$-basis for the algebra $A=T/I$. \end{namedthm} \vspace{2ex} \begin{ex}\label{cuteexampleGroebner} Let $A$ be the $k$-algebra generated by symbols $x$ and $y$ and relations $xy=x$ and $yx=y$ (Example~\ref{cuteexample}): $$A=\quotient{k\langle x, y\rangle}{(xy-x, yx-y)}\, .$$ Let $P$ be the set of defining relations, $P=\{xy-x, yx-y\}$, and consider the degree-lexicographical monomial ordering with $x>y$. Then $P$ is {\em not} a Gr\"obner basis of the ideal it generates since $x^2-x=x(yx-y)-(xy-x)(x-1)$ lies in the ideal $(P)$ and has leading monomial $x^2$, which does not lie in the ideal generated by the leading monomials of the elements of $P$. Indeed, $\mathcal{B}_{P}$ contains both $x^2$ and $x$ and hence cannot be a basis for $A$. We set $P'=\{xy-x, yx-y, x^2-x, y^2-y\}$ to obtain a Gr\"obner basis of $(P)$. Then $\mathcal{B}_{P'}=\{\text{monomials } m : m \text{ not divisible by } xy, yx, x^2, y^2 \}$ is a $k$-basis for the algebra $A$. \end{ex} \vspace{2ex} \subsection*{Resolving ambiguities} Bergman focused on the problem of resolving ambiguities that arise when trying to rewrite elements of an algebra using different defining relations. Consider a $k$-algebra $A$ defined by a finite set of generators and a finite set of relations $$ m_1=f_1,\ m_2=f_2,\ \ldots,\ m_k=f_k\, , $$ where the $m_i$ are monomials (in the set of generators of $A$) and the $f_i$ are linear combinations of monomials. Suppose we prefer the right side of our relations and try to eradicate the $m_i$ whenever possible in writing the elements of $A$ in terms of its generators. Can we define the notion of a canonical form for every element of $A$ by deciding to replace each $m_i$ by $f_i$ whenever possible? We say an expression for an element of $A$ is {\em reduced} if no $m_i$ appears (as a subword anywhere), i.e., when no further replacements using the defining relations of $A$ are possible. The idea of a {\em canonical form} for $A$ then makes sense if the set of reduced expressions is a $k$-basis for $A$, i.e., if every element can be written uniquely in reduced form. A natural ambiguity arises: If a monomial $m$ contains both $m_1$ and $m_2$ as (overlapping) subwords, do we first replace $m_1$ by $f_1$, or first replace $m_2$ by $f_2$? (In the last example, the word $xyx$ contains overlapping subwords $xy$ and $yx$.) If the order of application of the two relations does not matter and we end up with the same reduced expression, then we say the {\em (overlap) ambiguity was resolvable}. The Composition-Diamond Lemma states that {\em knowing certain ambiguities resolve is enough to conclude that a canonical normal form exists for all elements in the algebra}. \vspace{2ex} \begin{ex}\label{cuteexampleDiamond} Again, let $A$ be the $k$-algebra generated by symbols $x$ and $y$ and relations $xy=x$ and $yx=y$ (Example~\ref{cuteexample}). We decide to eradicate $xy$ and $yx$ whenever possible in expressions for the elements of $A$ using just the defining relations. On one hand, we may reduce $xyx$ to $x^2$ (using the first relation); on the other hand, we may reduce $xyx$ to $xy$ (using the second relation) then to $x$ (using the first relation).
The words $x$ and $x^2$ cannot be reduced further using just the defining relations, so we consider them both ``reduced''. Yet they represent the same element $xyx$ of $A$. Thus, a canonical ``reduced'' form does not make sense given this choice of defining relations for the algebra $A$. \end{ex} \vspace{2ex} \subsection*{The result of Bergman} One makes the above notions precise by introducing a monomial ordering and giving formal definitions for ambiguities, reduction, rewriting procedures, resolving, etc. We consider the quotient algebra $A=T/(P)$ where $T$ (a tensor algebra) is the free $k$-algebra on the generators of $A$ and $P$ is a (say) finite set of generating relations. We single out a monomial $m_i$ in each generating relation, writing $$P=\{ m_i-f_i: 1\leq i\leq k\}\, ,$$ and choose a monomial ordering so that $m_i$ is the leading monomial of each $m_i-f_i$ (assuming such an ordering exists). Then the reduced elements are exactly those spanned by $\mathcal{B}_{P}$. If all the ambiguities among elements of $P$ are resolvable, we obtain a PBW-like basis, but Bokut' and Bergman give a condition that is easier to check. Instead of choosing to replace monomial $m_1$ by $f_1$ or monomial $m_2$ by $f_2$ when they both appear as subwords of a monomial $m$, we make {\em both} replacements separately and take the difference. If we can express this difference as a linear combination of elements $p$ in the ideal $(P)$ with $\LM(p)<m$, then we say the ambiguity was resolvable {\em relative to the ordering}. (Bokut' used ``closed under composition'' to describe this condition along with minimality of $P$.) See~\cite[Theorem~1.2]{Bergman}. \begin{namedthm}[Diamond Lemma idea] The following are equivalent: \begin{itemize} \item The set of reduced words is a $k$-basis of $T/(P)$. \item All ambiguities among elements of $P$ are resolvable. \item All ambiguities among elements of $P$ are resolvable relative to the ordering. \item Every element in $(P)$ can be reduced to zero by just using the relations in $P$. \end{itemize} \end{namedthm} In essence, the lemma says that if the generating set of relations $P$ is well-behaved with respect to some monomial ordering, then one can define a canonical form just by checking that nothing goes wrong with the set $P$ instead of checking for problems implied by the whole ideal $(P)$. Thus, resolving ambiguities is just another way of testing for a Gr\"obner basis (see~\cite{Green94}): The set $P$ is a Gr\"obner basis for the ideal $(P)$ if and only if all ambiguities among elements of $P$ are resolvable. \subsection*{Applications} Although the idea of the Composition-Diamond lemma can be phrased in many ways, the hypothesis to be checked in the various versions of the lemma requires very similar computations in application. One finds precursors of the ideas underlying the Composition-Diamond Lemma in the original proofs given by Poincar\'e, Birkhoff, and Witt of the PBW Theorem for universal enveloping algebras of Lie algebras. These techniques and computations have been used in a number of other settings. For example, explicit PBW conditions are given for Drinfeld Hecke algebras (which include symplectic reflection algebras) by Ram and Shepler~\cite{RamShepler}; see Section~\ref{SRA}. In~\cite{doa}, we studied the general case of algebras defined by relations which set commutators to lower order terms using both a homological approach and the Composition-Diamond Lemma (as it holds in arbitrary characteristic).
These algebras, called {\em Drinfeld orbifold algebras}, include Sridharan enveloping algebras, Drinfeld Hecke algebras, enveloping algebras of Lie algebras, Weyl algebras, and twists of these algebras with group actions. Gr\"obner bases were used explicitly by Levandovskyy and Shepler~\cite{LevandovskyyShepler} in replacing a commutative polynomial algebra by a skew (or quantum) polynomial algebra in the theory of Drinfeld Hecke algebras. Bergman's Diamond Lemma was used by Khare~\cite{Khare} to generalize the Drinfeld Hecke algebras of Section~\ref{SRA} from the setting of group actions to that of algebra actions. Of course the Composition-Diamond Lemma and Gr\"obner-Shirshov bases have been used to explore many different kinds of algebras (and in particular to find PBW-like bases) that we will not discuss here. See Bokut' and Kukin~\cite{BokutKukin} and Bokut' and Chen~\cite{BokutChen} for many such examples. Note that some authors prove PBW theorems by creating a space upon which the algebra in question acts (see, e.g., Humphreys~\cite{Humphreys} or Griffeth~\cite[first version]{Griffeth}). Showing that the given space is actually a module for the algebra requires checking certain relations that are similar to the conditions that one must check before invoking the Composition-Diamond Lemma. \section{Drinfeld-Jimbo quantum groups and related Hopf algebras}\label{DJQG} Quantized enveloping algebras (that is, Drinfeld-Jimbo quantum groups~\cite{Drinfeld87,Jimbo}) are deformations of universal enveloping algebras of Lie algebras. (Technically, they are bialgebra deformations rather than algebra deformations.) Many mathematicians discovered PBW bases for these algebras, in particular Lusztig~\cite{Lusztig1,Lusztig1.5,Lusztig2} in a very general setting and DeConcini and Kac~\cite{DeConciniKac} by defining a corresponding algebra filtration. Although there are many incarnations of these algebras, we restrict ourselves to the simply-laced case and to algebras over the complex numbers for ease of notation. We state a PBW theorem in this context and refer the reader to the literature for more general statements (see, e.g.,~\cite{Lusztig2}). \subsection*{Quantum groups} Let $\mathfrak g$ be a finite dimensional semisimple complex Lie algebra of rank $n$ with symmetric Cartan matrix $(a_{ij})$. Let $q$ be a nonzero complex number, $q\neq \pm 1$. (Often $q$ is taken to be an indeterminate instead.) The {\em quantum group} $U_q({\mathfrak g})$ is the associative $\CC$-algebra defined by generators $$E_1,\ldots, E_n, F_1,\ldots,F_n, K_1^{\pm 1} ,\ldots, K_n^{\pm 1}$$ and relations \begin{eqnarray*} K_i ^{\pm 1} K_j^{\pm 1} = K_j^{\pm 1}K_i^{\pm 1}, & & K_i K_i^{-1} = 1 = K_i^{-1}K_i,\\ K_i E_j = q^{ a_{ij}} E_j K_i , & & K_i F_j = q^{- a_{ij}} F_j K_i, \\ E_i F_j - F_j E_i \! & = & \! \delta_{ij} \, \frac{K_i - K_i^{-1}}{q-q^{-1}},\\ E_i^2 E_j - (q+q^{-1}) E_i E_j E_i + E_j E_i^2\! & = & \! 0 \ \ \mbox{ if }a_{ij}=-1, \ \ \ E_iE_j=E_jE_i \ \ \mbox{ if }a_{ij}=0,\\ F_i^2 F_j \, - (q+q^{-1})\, F_iF_jF_i \, + F_j F_i^2 \! & = &\! 0 \ \ \mbox{ if }a_{ij}=-1, \ \ \ F_iF_j\, = F_jF_i \ \ \ \mbox{ if }a_{ij}=0. \end{eqnarray*} The last two sets of relations are called the quantum Serre relations. Let $W$ be the Weyl group of $\mathfrak g$. Fix a reduced expression of the longest element $w_0$ of $W$. This choice yields a total order on the set $\Phi^+$ of positive roots, $ \ \beta_1,\ldots, \beta_m$.
To each $\alpha\in\Phi^+$, Lusztig~\cite{Lusztig1,Lusztig1.5,Lusztig2} assigned an element $E_{\alpha}$ (respectively, $F_{\alpha}$) in $U_q({\mathfrak {g}})$ determined by this ordering that is an iterated braided commutator of the generators $E_1,\ldots, E_n$ (respectively, $F_1,\ldots, F_n$). These ``root vectors'' then appear in a PBW basis: \begin{namedthm}[PBW Theorem for Quantum Groups] There is a basis of $U_q({\mathfrak{g}})$ given by $$ \{ E_{\beta_1}^{a_1}\cdots E_{\beta_m}^{a_m} K_1^{b_1}\cdots K_n^{b_n} F_{\beta_1}^{c_1}\cdots F_{\beta_m}^{c_m} : a_i, c_i \geq 0, \ b_i\in {\mathbb Z} \}. $$ Moreover, there is a filtration on the subalgebra $ U_q^{>0}({\mathfrak{g}})$ (respectively, $U_q^{<0}({\mathfrak{g}})$) generated by $E_1,\ldots,E_n$ (respectively, $F_1,\ldots,F_n$) for which the associated graded algebra is isomorphic to a skew polynomial ring. \end{namedthm} The skew polynomial ring to which the theorem refers is generated by the images of the $E_{\alpha}$ (respectively, $F_{\alpha}$), with relations $E_{\alpha}E_{\beta} = q_{\alpha\beta}E_{\beta}E_{\alpha}$ (respectively, $F_{\alpha}F_{\beta} = q_{\alpha\beta}F_{\beta}F_{\alpha}$) where each $q_{\alpha\beta}$ is a scalar determined by $q$ and by $\alpha,\beta$ in $\Phi^+$. \vspace{2ex} \begin{ex} The algebra $U^{>0}_q({\mathfrak{sl}}_3)$ is generated by $E_1,E_2$. Let $$E_{12}:= E_1E_2 - qE_2E_1.$$ Then, as a consequence of the quantum Serre relations, $E_1E_{12}= q^{-1}E_{12}E_1$ and $E_{12}E_2=q^{-1}E_2E_{12}$, and, by definition of $E_{12}$, we also have $E_1E_2= qE_2E_1 + E_{12}$. In the associated graded algebra, this last relation becomes $E_1E_2=qE_2E_1$. The algebras $U^{>0}_q({\mathfrak{sl}}_n)$ are similar; however, in general the filtration on $U^{>0}_q({\mathfrak{g}})$ stated in the theorem is more complicated. \end{ex} \vspace{2ex} \subsection*{Proofs and related results} There are several proofs in the literature of the first statement of the above theorem and related results, beginning with Khoroshkin and Tolstoy~\cite{KhoroshkinTolstoy}, Lusztig~\cite{Lusztig1,Lusztig1.5,Lusztig2}, Takeuchi~\cite{Takeuchi}, and Yamane~\cite{Yamane}. These generally involve explicit computations facilitated by representation theory. Specifically, one obtains representations of $U_q({\mathfrak{g}})$ from those of the corresponding Lie algebra $\mathfrak g$ by deformation, and one then uses what is known in the classical setting to obtain information about $U_q({\mathfrak{g}})$. Ringel~\cite{Ringel} gave a different approach via Hall algebras. The filtration and structure of the associated graded algebra of $U^{>0}_q(\mathfrak{g})$ were first given by DeConcini and Kac~\cite{DeConciniKac}. For a general ``quantum PBW Theorem'' that applies to some of these algebras, see Berger~\cite{Berger}. In case $q$ is a root of unity (of order $\ell$), there are finite dimensional versions of Drinfeld-Jimbo quantum groups. The {\em small quantum group} $u_q({\mathfrak{g}})$ may be defined as the quotient of $U_q({\mathfrak{g}})$ by the ideal generated by all $E^{\ell}_{\alpha}, F^{\ell}_{\alpha}, K^{\ell}_{\alpha} - 1$. This finite dimensional algebra has $k$-basis given by elements in the PBW basis of the above theorem for which $0\leq a_i,b_i,c_i <\ell$. The existence of PBW bases for $U_q({\mathfrak{g}})$ and $u_q({\mathfrak{g}})$ plays a crucial role in their representation theory, just as it does in the classical setting of Lie algebras.
Bases of finite dimensional simple modules and other modules are defined from weight vectors and PBW bases~\cite{Lusztig1.5}. R-matrices may be expressed in terms of PBW basis elements~\cite{Drinfeld87,Jimbo,Rosso}. Computations of cohomology take advantage of the structure provided by the PBW basis and filtration (e.g., see~\cite{GinzburgKumar}, based on techniques developed for restricted Lie algebras~\cite{FriedlanderParshall}). More generally, PBW bases and some Lie-theoretic structure appear in a much larger class of Hopf algebras. Efforts to understand finite dimensional Hopf algebras of various types led in particular to a study of those arising from underlying Nichols algebras. Consequently, a classification of some types of pointed Hopf algebras was completed by Andruskiewitsch and Schneider~\cite{AndruskiewitschSchneider}, Heckenberger~\cite{Heckenberger}, and Rosso~\cite{Rosso2}. A Nichols algebra is a ``braided'' graded Hopf algebra that is connected, generated by its degree~1 elements, and whose subspace of primitive elements is precisely its degree~1 component. The simplest Nichols algebras are those of ``diagonal type,'' and these underlie the Drinfeld-Jimbo quantum groups and the Hopf algebras in the above-mentioned classification. These algebras have PBW bases just as does $U^{>0}_q({\mathfrak{g}})$ or $u_q^{>0} ({\mathfrak{g}})$; a proof given by Kharchenko~\cite{Kharchenko} uses a combinatorial approach as in Section~\ref{diamond}. \section{Symplectic reflection algebras, rational Cherednik algebras, and graded (Drinfeld) Hecke algebras}\label{SRA} Drinfeld~\cite{Drinfeld86} and Lusztig~\cite{Lusztig88, Lusztig89} originally defined the algebras now variously called symplectic reflection algebras, rational Cherednik algebras, and graded (Drinfeld) Hecke algebras, depending on context. These are PBW deformations of group extensions of polynomial rings (skew group algebras) defined by relations that set commutators of vectors to elements of a group algebra. Lusztig explored the representation theory of these algebras when the acting group is a Weyl group. Crawley-Boevey and Holland~\cite{CBH} considered subgroups of ${\rm SL}_2(\CC)$ and studied subalgebras of these algebras in relation to corresponding orbifolds. Initial work on these types of PBW deformations for arbitrary groups began with Etingof and Ginzburg~\cite{EtingofGinzburg} and Ram and Shepler~\cite{RamShepler}. Gordon~\cite{GordonInvent} used the rational Cherednik algebra to prove a version of the $n !$-conjecture for Weyl groups, and the representation theory of these algebras remains an active area. (See~\cite{BrownSurvey},~\cite{GordonSurveyCherednik},~\cite{GordonSurveySymplectic}, and~\cite{RouquierSurvey}.) We briefly recall and compare these algebras. (See also ~\cite{Chlouveraki} for a survey of symplectic reflection algebras and rational Cherednik algebras in the context of Hecke algebras and representation theory.) Let $G$ be a group acting by automorphisms on a $k$-algebra $S$. The {\em skew group algebra} $S\# G$ (also written as a semidirect product $S\rtimes G$) is the $k$-vector space $S\otimes kG$ together with multiplication given by $(r \otimes g)(s\otimes h)=r ( \,^{g}s)\otimes g h$ for all $r,s$ in $S$ and $g,h$ in $G$, where $^gs$ is the image of $s$ under the automorphism $g$. \subsection*{Drinfeld's ``Hecke algebra''} Suppose $G$ is a finite group acting linearly on a finite dimensional vector space $V$ over $k=\CC$ with symmetric algebra $S(V)$.
Consider the quotient algebra $$ \mathcal{H}_{\kappa}=\quotient{T(V)\#G}{( v_1\otimes v_2-v_2\otimes v_1-\kappa(v_1,v_2): v_1,v_2\in V) } $$ defined by a bilinear parameter function $\kappa:V\times V\rightarrow \CC G$. We view $\mathcal{H}_{\kappa}$ as a filtered algebra by assigning degree one to vectors in $V$ and degree zero to group elements in $G$. Then the algebra $\mathcal{H}_{\kappa}$ is a PBW deformation of $S(V)\# G$ if its associated graded algebra is isomorphic to $S(V)\# G$. Drinfeld~\cite{Drinfeld86} originally defined these algebras for arbitrary groups, and he also counted the dimension of the parameter space of such PBW deformations for Coxeter groups (see also~\cite{RamShepler}). \vspace{2ex} \begin{ex} Let $V$ be a vector space of dimension 3 with basis $v_1,v_2,v_3$, and let $G$ be the symmetric group $S_3$ acting on $V$ by permuting the chosen basis elements. The following is a PBW deformation of $S(V)\# G$, where $(i\, j\, k)$ denotes a 3-cycle in $S_3$: $$\mathcal{H}_{\kappa} = \quotient{T(V)\# S_3} { ( v_i\otimes v_j - v_j\otimes v_i - (i\, j\, k) + (j\, i\, k) : \{i,j,k\}=\{1,2,3\} ) } . $$ \end{ex} \vspace{2ex} \subsection*{Lusztig's graded affine Hecke algebra} While exploring the representation theory of groups of Lie type, Lusztig~\cite{Lusztig88, Lusztig89} defined a variant of the affine Hecke algebra for Weyl groups which he called ``graded'' (as it was obtained from a particular filtration of the affine Hecke algebra). He gave a presentation for this algebra $\mathbb{H}_{\lambda}$ using the same generators as those for Drinfeld's Hecke algebra $\mathcal{H}_{\kappa}$, but he gave relations preserving the structure of the polynomial ring and altering the skew group algebra relation. (Drinfeld's relations do the reverse.) The {\em graded affine Hecke algebra} $\mathbb{H}_{\lambda}$ (or simply the ``graded Hecke algebra'') for a finite Coxeter group $G$ acting on a finite dimensional complex vector space $V$ (in its natural reflection representation) is the $\CC$-algebra generated by the polynomial algebra $S(V)$ together with the group algebra $\CC G$ with relations $$g v =\ ^g v g+ \lambda_g(v) g$$ for all $v$ in $V$ and $g$ in a set $\mathcal S$ of simple reflections (generating $G$) where $\lambda_g$ in $V^*$ defines the reflecting hyperplane ($\text{ker}\, \lambda_g\subseteq V$) of $g$ and $\lambda_g=\lambda_{hgh^{-1}}$ for all $h$ in $G$. (Recall that a {\em reflection} on a finite dimensional vector space is just a nonidentity linear transformation that fixes a hyperplane pointwise.) Note that for $g$ representing a fixed conjugacy class of reflections, the linear form $\lambda_g$ is only well-defined up to a nonzero scalar. Thus one often fixes once and for all a choice of linear forms $\lambda = \{ \lambda_g \}$ defining the orbits of reflecting hyperplanes (usually expressed using Demazure/BGG operators) and then introduces a formal parameter by which to rescale. This highlights the degree of freedom arising from each orbit; for example, one might replace $$ \lambda_g(v)\ \ \text{ by }\ \ c_g \ \langle v, \alpha_g^{\vee} \rangle = c_g\ \Big( \frac{v-\, ^gv}{\alpha_g}\Big) $$ for some conjugation invariant formal parameter $c_g$ after fixing a $G$-invariant inner product and root system $\{\alpha_g: g\in \mathcal S\} \subset V$ with coroot vectors $\alpha_g^{\vee}$. (Note that for any reflection $g$, the vector $(v-\, ^gv)$ is a nonzero scalar multiple of $\alpha_g$ and so the quotient of $v-\, ^gv$ by $\alpha_g$ is a scalar.) 
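\vspace{2ex} \begin{ex} As a minimal rank-one illustration (a standard special case, written out here using only the relations above): let $G=\{1,s\}$ be the Coxeter group of type $A_1$ acting on $V=\CC$ with $s$ acting by $-1$, let $v$ span $V$, and set $\lambda_s(v)=c$ for a scalar $c$. Then $\mathbb{H}_{\lambda}$ is generated by $v$ and $s$ subject to $s^2=1$ and $$ s v = -v s + c\, s , $$ and the monomials $\{v^a,\ v^a s: a\in\mathbb N\}$ form a $\CC$-basis, an instance of the PBW property recorded next. \end{ex} \vspace{2ex}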
Each graded affine Hecke algebra $\mathbb{H}_{\lambda}$ is filtered with vectors in degree one and group elements in degree zero and defines a PBW deformation of $S(V)\# G$. \vspace{2ex} \begin{center} \begin{fbox} {\begin{tabular}{|l | l | l |} \hline $\rule[-1ex]{0ex}{4ex}$ Finite Group & Any $G\leq \text{GL}(V)$ & Coxeter $G \leq \text{GL}(V)$\\ \hline $\rule[-1ex]{0ex}{4ex}$ Algebra & $\mathcal{H}_{\kappa}$ \text{(Drinfeld)} & $\mathbb{H}_{\lambda}$ \text{(Lusztig)}\\ \hline $\rule[-1ex]{0ex}{4ex}$ generated by & $V\text{ and } \CC G$ & $ V\text{ and } \CC G$ \\ \hline $\rule[-1ex]{0ex}{4ex}$ with relations & $ g v =\ ^g v g , $ & $g v =\, ^g vg + \lambda_g(v)g ,$\\ $\rule[-1ex]{0ex}{2ex}$ & $v w = w v + \kappa(v,w) $ & $v w = w v $\\ & $(\forall v,w\in V, \ \forall g\in G)$ & $(\forall v,w\in V, \ \forall g\in \mathcal S)$\\ \hline \end{tabular}} \end{fbox} \end{center} \vspace{2ex} \subsection*{Comparing algebras} Ram and Shepler~\cite{RamShepler} showed that Lusztig's graded affine Hecke algebras are a special case of Drinfeld's construction: For each parameter $\lambda$, there is a parameter $\kappa$ so that the filtered algebras $\mathbb{H}_{\lambda}$ and $\mathcal{H}_{\kappa}$ are isomorphic (see~\cite{RamShepler}). Etingof and Ginzburg~\cite{EtingofGinzburg} rediscovered Drinfeld's algebras with focus on groups $G$ acting symplectically (in the context of orbifold theory). They called algebras $\mathcal{H}_{\kappa}$ satisfying the PBW property {\em symplectic reflection algebras}, giving necessary and sufficient conditions on $\kappa$ for symplectic groups. They used the theory of Beilinson, Ginzburg, and Soergel~\cite{BGS} of Koszul rings to generalize Braverman and Gaitsgory's conditions to the setting where the ground field is replaced by the semisimple group ring $\CC G$. (The skew group algebra $S(V)\# G$ is Koszul as a ring over the semisimple subring $\CC G$.) Ram and Shepler~\cite{RamShepler} independently gave necessary and sufficient PBW conditions on $\kappa$ for arbitrary groups acting linearly over $\CC$ and classified all such quotient algebras for complex reflection groups. Their proof relies on the Composition-Diamond Lemma. (See Sections~\ref{BG} and~\ref{diamond} for a comparison of these two techniques for showing PBW properties.) Both approaches depend on the fact that the underlying field $k=\CC$ has characteristic zero (or, more generally, has characteristic that does not divide the order of the group $G$). See Section~\ref{positivechar} for a discussion of PBW theorems in the modular setting when $\CC$ is replaced by a field whose characteristic divides $|G|$. \subsection*{Rational Cherednik algebras} The rational Cherednik algebra is a special case of a quotient algebra $\mathcal{H}_{\kappa}$ satisfying the PBW property (in fact, a special case of a symplectic reflection algebra) for reflection groups acting diagonally on two copies of their reflection representations (or ``doubled up''). These algebras are regarded as ``doubly degenerate'' versions of the double affine Hecke algebra introduced by Cherednik~\cite{Cherednik} to solve the Macdonald (constant term) conjectures in combinatorics. We simply recall the definition here in terms of reflections and hyperplane arrangements. Suppose $G$ is a finite group generated by reflections on a finite dimensional complex vector space $V$. (If $G$ is a Coxeter group, then extend the action to one over the complex numbers.) 
Then the induced diagonal action of $G$ on $V\oplus V^*$ is generated by {\em bireflections} (linear transformations that each fix a subspace of codimension 2 pointwise), i.e., by {\em symplectic reflections} with respect to a natural symplectic form on $V\oplus V^*$. Let $\mathcal{R}$ be the set of all reflections in $G$ acting on $V$. For each reflection $s$ in $\mathcal{R}$, let $\alpha_s$ in $V$ and $\alpha_s^*$ in $V^*$ be eigenvectors (``root vectors'') each with nonidentity eigenvalue. We define an algebra generated by $\CC G$, $V$, and $V^*$ in which vectors in $V$ commute with each other and vectors in $V^*$ commute with each other, but passing a vector from $V$ over one from $V^*$ gives a linear combination of reflections (and the identity). As parameters, we take a scalar $t$ and a set of scalars ${\bf c}=\{c_s:s\in \mathcal{R}\}$ with $c_s=c_{hsh^{-1}}$ for all $h$ in $G$. The {\em rational Cherednik algebra} ${\text{\bf H}}_{t,{\bf c}}$ with parameters $t, {\bf c}$ is then the $\CC$-algebra generated by the vectors in $V$ and $V^*$ together with the group algebra $\CC G$ satisfying the relations $$ \begin{aligned} gu & =\ ^g u g, \quad u u' =u' u,\\ v v^*& = v^* v+t\, v^*(v) -\sum_{s\in \mathcal{R}} c_s\ \alpha_s^*(v)\ v^*(\alpha_s)\ s \end{aligned} $$ for any $g$ in $G$, $v$ in $V$, $v^*$ in $V^*$, and any $u,u'$ both in $V$ or both in $V^*$. Note that $\alpha_s$ and $\alpha_s^*$ are only well-defined up to a nonzero scalar, and we make some conjugation invariant choice of normalization in this definition, say, by assuming that $\alpha_s^*(\alpha_s)=1$. One often replaces $\CC$ by $\CC[t, {\bf c}]$ to work in a formal parameter space. The relations defining the rational Cherednik algebra are often given in terms of the arrangement of reflecting hyperplanes $\mathcal{A}$ for $G$ acting on $V$. For each hyperplane $H$ in $\mathcal{A}$, choose a linear form $\alpha_H^*$ in $V^*$ defining $H$ (so $H=\text{ker}\, \alpha_H^*$) and let $\alpha_H$ be a nonzero vector in $V$ perpendicular to $H$ with respect to some fixed $G$-invariant inner product. Then the third defining relation of ${\text{\bf H}}_{t,{\bf c}}$ can be rewritten (without a choice of normalization) as $$v v^*-v^* v= t v^*(v) - \sum_{H\in \mathcal{A}} \frac{\alpha_H^*(v)\ v^*(\alpha_H)} {\alpha_H^*(\alpha_H)} \big(c_{s_H} s_H + c_{s_H^2} s_H^2 + \ldots +c_{s_H^{a_H}} s_H^{a_H}\big) $$ where $s_H$ is the reflection in $G$ about the hyperplane $H$ of maximal order $a_H+1$. Again, this is usually expressed geometrically in terms of the inner product on $V$ and induced product on $V^*$: $$\frac{\alpha_H^*(v)\ v^*(\alpha_H)} {\alpha_H^*(\alpha_H)} = \frac{\langle v, \alpha_H^{\vee}\rangle \langle \alpha_H, v^*\rangle} {\langle \alpha_H, \alpha_H^{\vee}\rangle }\ . $$ The PBW theorem then holds for the algebra ${\text{\bf H}}_{t,{\bf c}}$ (see~\cite{EtingofGinzburg}): \begin{namedthm}[PBW Theorem for Rational Cherednik Algebras] The rational Cherednik algebra ${\text{\bf H}}_{t,{\bf c}}$ is isomorphic to $S(V)\otimes S(V^*)\otimes \CC G$ as a complex vector space for any choices of parameters $t$ and ${\bf c}$, and its associated graded algebra is isomorphic to $(S(V)\otimes S(V^*))\# G$. \end{namedthm} Connections between rational Cherednik algebras and other fields of mathematics are growing stronger. For example, Gordon and Griffeth~\cite{GordonGriffeth} link the Fuss-Catalan numbers in combinatorics to the representation theory of rational Cherednik algebras.
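The smallest case already illustrates the shape of these relations; the following example, included here for concreteness, follows directly from the definition above. \vspace{2ex} \begin{ex} Let $G=\{1,s\}\cong \mathbb{Z}/2\mathbb{Z}$ act on $V=\CC$ with $s$ acting by $-1$. Choose a basis $x$ of $V$ and the dual basis $y$ of $V^*$ with $y(x)=1$, and set $\alpha_s=x$ and $\alpha_s^*=y$, so that $\alpha_s^*(\alpha_s)=1$. The defining relations of ${\text{\bf H}}_{t,{\bf c}}$ become $$ s^2=1,\quad sx=-xs,\quad sy=-ys,\quad xy-yx = t - c_s\, s, $$ and the PBW theorem asserts that the monomials $x^a y^b s^{\epsilon}$ with $a,b\geq 0$ and $\epsilon\in\{0,1\}$ form a $\CC$-basis of ${\text{\bf H}}_{t,{\bf c}}$. \end{ex} \vspace{2ex}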
These investigations also bring insight to the classical theory of complex reflection groups, especially to the perplexing question of why some reflection groups acting on $n$-dimensional space can be generated by $n$ reflections (called ``well-generated'' or ``duality'' groups) and others not. (See~\cite{BerkeschGriffethSam, GorskyOblomkovRasmussenShende, ShanVaragnoloVasserot} for other recent applications.) \section{Positive characteristic and nonsemisimple ground rings}\label{positivechar} Algebras displaying PBW properties are quite common over ground fields of positive characteristic and nonsemisimple ground rings, but techniques for establishing PBW theorems are not all equally suited for work over arbitrary fields and rings. We briefly mention a few results of ongoing efforts to establish and apply PBW theorems in these settings. The algebras of Section~\ref{SRA} make sense in the {\em modular setting}, that is, when the characteristic of $k$ is a prime dividing the order of the finite group $G$. In this case, however, the group algebra $kG$ is not semisimple, and one must take more care in proofs. PBW conditions on $\kappa$ were examined by Griffeth~\cite{Griffeth} by construction of an explicit $\mathcal{H}_{\kappa}$-module, as is done in one standard proof of the PBW Theorem for universal enveloping algebras. (See also Bazlov and Berenstein~\cite{BazlovBerenstein} for a generalization.) The Composition-Diamond Lemma, being characteristic free, applies in the modular setting; see our paper~\cite{doa} for a proof of the PBW property using this lemma that applies to graded (Drinfeld) Hecke algebras over fields of arbitrary characteristic. (Gr\"obner bases are explicitly used in Levandovskyy and Shepler~\cite{LevandovskyyShepler}.) Several authors consider representations of rational Cherednik algebras in the modular setting, for example, Balagovic and Chen~\cite{BalagovicChen}, Griffeth~\cite{Griffeth}, and Norton~\cite{Norton}. The theory of Beilinson, Ginzburg, and Soergel of Koszul rings over semisimple subrings, used in Braverman-Gaitsgory style proofs of PBW theorems, does not apply directly to the modular setting. However, it may be adapted using a larger complex replacing the Koszul complex: In~\cite{PBWQuadratic}, we used this approach to generalize the Braverman-Gaitsgory argument to arbitrary Koszul algebras with finite group actions. This replacement complex has an advantage over the Composition-Diamond Lemma or Gr\"obner basis theory arguments in that it contains information about potentially new types of deformations that do not occur in the nonmodular setting. Other constructions generalize the algebras of Section~\ref{SRA} to algebras over ground rings that are not necessarily semisimple. Etingof, Gan, and Ginzburg~\cite{EGG} considered deformations of algebras that are extensions of polynomial rings by acting algebraic groups or Lie algebras. They used a Braverman-Gaitsgory approach to obtain a Jacobi condition by realizing the acting algebras as inverse limits of finite dimensional semisimple algebras. Gan and Khare~\cite{GanKhare} investigated actions of $U_q({\mathfrak{sl}}_2)$ on the quantum plane (a skew polynomial algebra), and Khare~\cite{Khare} looked at actions of arbitrary cocommutative algebras on polynomial rings. In both cases PBW theorems were proven using the Composition-Diamond Lemma.
A general result for actions of (not necessarily semisimple) Hopf algebras on Koszul algebras is contained in Walton and Witherspoon~\cite{WaltonWitherspoon} with a Braverman-Gaitsgory style proof. See also He, Van Oystaeyen, and Zhang~\cite{HVZ} for a PBW theorem using a somewhat different complex in a general setting of Koszul rings over not necessarily semisimple ground rings. One expects yet further generalizations and applications of the ubiquitous and potent PBW Theorem.
\section{Introduction} Machine learning is increasingly recognized as an effective approach for large-scale automated decisions in several domains. However, when an ML model is deployed in critical decision-making scenarios such as the medical and financial domains, many people are skeptical about its accountability and reliability. Hence, interpretable ML is vital to make machine learning models transparent and understandable by humans. Counterfactual explanation (CE) is a prominent example-based method in interpretable machine learning that generates counterfactual samples for interpreting machine learning model decisions. For example, consider a customer \texttt{A} whose loan application has been rejected by the ML model of a bank. Counterfactual explanations can generate a ``what-if'' scenario for this person, e.g., ``your loan would have been approved if your income was \$5,000 more''. Namely, the goal of counterfactual explanation is to generate perturbations of an input that lead to a different outcome from the ML model. By allowing users to explore such ``what-if'' scenarios, counterfactual examples are human-interpretable. Despite recent interest in counterfactual explanations, existing methods suffer from two main limitations. First, most counterfactual methods neglect the causal relationships among features, leading to counterfactual samples that are infeasible for end-users \cite{ustun2019actionable,poyiadzi2020face}. A counterfactual sample is feasible if the changes satisfy the constraints entailed by the causal relations. For example, since education influences the choice of occupation, changing the occupation without changing the education is infeasible for a loan applicant in the real world. Namely, the generated counterfactuals need to preserve the causal relations between features in order to be realistic and actionable. Second, on the algorithmic level, most counterfactual methods use gradient-free optimization algorithms to deal with various data and model types \cite{sharma2020certifai,poyiadzi2020face,dhurandhar2019model,grath2018interpretable,lash2017generalized}. These gradient-free optimizations rely on heuristic search, which suffers from inefficiency due to the large search space. In addition, optimizing the trade-off among the different loss terms in the objective function is difficult, which often leads to sub-optimal counterfactual samples \cite{mahajan2019preserving,mothilal2020explaining,grath2018interpretable}. To address the above limitations, we propose a prototype-based counterfactual explanation framework (ProCE) in this paper. ProCE is a model-agnostic method and is capable of explaining classifications in mixed feature spaces. Overall, our contributions are summarized as follows: \begin{itemize} \item By integrating a structural causal model and a causal loss function, our proposed method can produce counterfactual samples that satisfy the causal constraints among features. \item We utilize an auto-encoder model and class prototypes to guide the search progress and speed up the search for counterfactual samples. \item We design a novel multi-objective optimization that can find the optimal trade-off between the objectives while maintaining diversity in the feature space of counterfactual explanations. \end{itemize} \section{Related Work} Counterfactual explanation is an example-based approach within the family of model-agnostic techniques. Recently, there has been an increasing number of studies in this field.
On the one hand, \cite{wachter2017counterfactual} first proposed using counterfactual explanations to interpret machine learning models' decisions. In particular, they generate counterfactual samples by minimizing the loss between the desired class and the counterfactual instance's prediction. Extending this study, another framework called DiCE \cite{mothilal2020explaining} proposes using a diversity score to enhance the number of generated samples. It uses a weighted sum to combine the different loss functions and adopts a gradient-descent algorithm to approximately find the optimal solution. This method is however restricted to differentiable models, and struggles with the non-continuous values in tabular data. On the other hand, CERTIFAI \cite{sharma2020certifai} is a recent gradient-free approach that customizes a genetic algorithm for the counterfactual search. When dealing with categorical features, CERTIFAI adopts an indicator function (1 for different values, 0 otherwise). \cite{poyiadzi2020face} introduces a method called FACE that adopts Dijkstra's algorithm to generate counterfactual samples by finding the shortest path between the original input and the existing data points. The generated samples of this method are limited to the input space, without generating new data. Meanwhile, \cite{mahajan2019preserving} builds a generative model based on the variational auto-encoder (VAE) to generate multiple counterfactual samples for all input data points. The study \cite{van2019interpretable} utilizes the class prototype to guide the search progress toward the distribution of the expected class. This method however does not consider the causal relationships among features. Finally, there are also some other recent methods \cite{russell2019efficient,kanamori2020dace} that use linear programming, mixed-integer programming or solvers to deal with the objective optimization effectively. These approaches can be applied to linear models only. Our method extends the line of studies \cite{van2019interpretable,mahajan2019preserving} by integrating both a structural causal model and class prototypes. We also formulate the problem as a multi-objective optimization problem and propose an algorithm to find the counterfactual samples effectively. \section{Methodology} In this section, we first present the objective functions used to generate counterfactual samples. The structural causal model and a causal distance are introduced to exploit the underlying causal relationships among features. We then formulate counterfactual sample generation, via the loss functions defined below, as a multi-objective optimization problem, and propose an algorithm based on the non-dominated sorting genetic algorithm (NSGA-II) to effectively find the optimal solution. To begin with, we consider a classifier $h: \mathcal{X} \rightarrow \mathcal{Y}$, with input from the $D$-dimensional feature space $\mathcal{X} = \mathcal{X}^1 \times \cdots \times \mathcal{X}^D \subseteq \mathbb{R}^D$ and output $\mathcal{Y} = \{0, 1\}$. Let a vector $x = (x^1,\ldots,x^D) \in \mathcal{X}$ be an instance and $x^k$ be the $k$-th feature of $x$.
\begin{definition}[Counterfactual Explanation] Given the original instance $x_0 = (x_0^1, \cdots, x_0^D) \in \mathcal{X}$ and the original prediction $y_0 \in \mathcal{Y}$, counterfactual explanation aims to find the nearest counterfactual sample $x_{cf}$ such that the outcome of the classifier for $x_{cf}$ changes to the desired output class $y_{cf}$. In general, the counterfactual explanation $x_{cf}$ for the individual $x_0$ is the solution of the following optimization problem: \begin{equation} \label{eqn:original} x_{cf}^{*} = \argmin_{x_{cf} \in \mathcal{X}} f(x_{cf}) \quad\text{subject to}\quad h(x_{cf}^*) = y_{cf} \end{equation} \end{definition} where $f$ measures the distance between $x_0$ and $x_{cf}$. For such explanations to be plausible, they should only suggest small changes in a few features. \subsection{Prototype-based Causal Model} Counterfactuals provide these explanations in the form of ``if these features had different values, your credit application would have been accepted''. This indicates that counterfactual samples should be constrained. We first provide detailed definitions of each constraint and then tie them together as a multi-objective optimization problem. \label{sec:objective} \subsubsection{Prediction Loss} In order to achieve the desired outcome, the basic loss term measures the distance between the counterfactual prediction and the expected outcome. For the classification scenario, we use the cross-entropy loss to minimize the discrepancy between the counterfactual prediction and the desired label. Specifically, the prediction loss is: \begin{equation} \label{eqn:cross} \resizebox{1.0\hsize}{!}{$f_\text{pred}(x_{cf}) = -\big[y_{cf}\log(h(x_{cf})) + (1 - y_{cf})\log(1 - h(x_{cf}))\big]$} \end{equation} \subsubsection{Prototype-based Loss} The counterfactual search can be extremely slow due to the enormous number of candidate solutions in the search space. Inspired by the study \cite{van2019interpretable}, we utilize the class prototype to guide the search progress toward the counterfactual solution. Conceptually, a prototype is a representative of the whole dataset or of a subset of it. For each class $i$ in the dataset, we first compute the $K$ nearest neighbors of $x_0$ belonging to that class. To compute the distance, we resort to an encoder function parametrized by $\phi$, denoted $Q_{\phi}: \mathcal{X} \rightarrow \mathcal{Z}$. This encoder projects the input feature space $\mathcal{X}$ to the $E$-dimensional latent space $\mathcal{Z} \subseteq \mathbb{R}^E$. Then, the $K$ nearest neighbors of $x_0$ can be computed based on the latent distance in the projected space $\mathcal{Z}$, i.e., $||Q_{\phi}(x_k^i) - Q_{\phi}(x_0)||^2_2$. Finally, the prototype is computed as the mean of these neighbors: \begin{equation} \text{proto}_i = \frac{1}{K}\sum_{k=1}^K{Q_{\phi}(x_k^i)} \label{eq:proto_i} \end{equation} In the latent space $\mathcal{Z}$, we define the prototype loss function as \begin{equation} \label{eqn:protoloss} f_\text{proto}(x_{cf}) = \| Q_{\phi}(x_{cf}) - \text{proto}_j \|^2_2 \end{equation} Note that $\text{proto}_j$ is the prototype of the class $j$ that has the smallest distance to the encoding of $x_0$. With $y_0$ the label of the input sample $x_0$, we have \begin{equation} j = \argmin_{i \ne y_0} \| Q_{\phi}(x_0) - \text{proto}_i \|^2_2 \label{eq:j} \end{equation} \subsubsection{Proximity Loss} In general, the counterfactual samples should be as close to the original instance as possible to make them more useful and understandable by users.
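Before detailing the distance terms, we give a minimal sketch of the prototype machinery just defined (illustrative Python only; the encoder \texttt{encode} and the arrays \texttt{X}, \texttt{y} are placeholder names, not part of a released implementation):
\begin{verbatim}
import numpy as np

def class_prototypes(encode, X, y, x0, K=10):
    # For each class i, average the latent codes of the K training
    # points of that class nearest to x0 in latent space
    # (the prototype equation above).
    z0 = encode(x0)
    protos = {}
    for c in np.unique(y):
        Z = np.stack([encode(x) for x in X[y == c]])
        nearest = np.argsort(np.linalg.norm(Z - z0, axis=1))[:K]
        protos[c] = Z[nearest].mean(axis=0)
    return protos

def target_prototype(protos, z0, y0):
    # proto_j: the prototype of the class j != y0 closest to Q(x0).
    dists = {c: np.sum((z0 - p) ** 2)
             for c, p in protos.items() if c != y0}
    return protos[min(dists, key=dists.get)]
\end{verbatim}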
When it comes to mixed-type tabular data containing both categorical and continuous features, it is challenging to define the loss function. Previous studies \cite{sharma2020certifai,mothilal2020explaining,dandl2020multi} normally apply an indicator function that returns 1 when two categorical values match and 0 otherwise, and adopt the $L_2$-norm distance for continuous features. However, the indicator function fails to express degrees of distance between categorical values. In this study, we use the encoder model $Q_{\phi}$ to map the categorical features into the latent space before estimating the distance. The main advantage of this approach is that the encoder model has the capability to capture the underlying relationships and patterns between categorical values. This means that manual feature engineering, such as assigning a weight to each category, is not necessary, saving a great deal of time and effort. The per-feature distance between two instances is \begin{equation} \label{eqn:distance} \resizebox{1.0\hsize}{!}{$f_\text{dist}(x^k_{cf}, x^k_{0}) = \begin{cases} \norm{x^k_{cf} - x^k_{0}}^2_2,& \text{if $x^k$ is continuous} \\ \norm{Q_{\phi}(x^k_{cf}) - Q_{\phi}(x^k_0)}^2_2, & \text{if $x^k$ is categorical} \end{cases}$} \end{equation} \subsubsection{Causality-preserving Loss} Although the distance function in Eq.~\eqref{eqn:distance} measures the closeness of two instances, it fails to capture the causal relationships between features. To deal with this problem, we integrate a structural causal model and construct a causal loss function to enforce the features' causal relationships in the generated samples. In general, a structural causal model \cite{article_causal} consists of two main components: the causal graph and the structural equations. A causal graph is a probabilistic graphical model representing the assumptions about the data-generating mechanism; it is defined as $\mathcal{G} = \langle \mathcal{V}, \mathcal{E} \rangle$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. The structural equations are a set of equations representing the causal effects illustrated by the edges in the causal graph. We classify the variables into two groups of nodes: \begin{itemize} \item $U$, the set of exogenous variables, which are independent of the other variables in the model. \item $V$, the set of endogenous variables, whose values are determined by their relationships with other variables within the model. \end{itemize} We consider a setting in which the structural causal model is provided along with the observational data. For each endogenous node $v \in V$ with parent nodes (${v_{p1}}, {v_{p2}},\ldots, {v_{pk}}$), we construct the structural causal equation $v = g({v_{p1}}, {v_{p2}},\ldots, {v_{pk}})$ to represent their causal relationship.
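As a minimal illustration of how such an equation can be obtained in practice (our sketch; the linear regressor and the pandas-style \texttt{data[parents]} indexing are illustrative assumptions, not a fixed choice of the framework):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_structural_equation(data, node, parents):
    # Fit v = g(v_p1, ..., v_pk) for one endogenous node from
    # observational data; any regressor could play the role of g.
    reg = LinearRegression().fit(data[parents].values,
                                 data[node].values)
    return lambda pv: float(reg.predict(np.atleast_2d(pv))[0])

# Example: x3 endogenous with parents x1, x2 (as in the Simple-BN
# dataset used in the experiments below).
# g3 = fit_structural_equation(df, "x3", ["x1", "x2"])
# predicted_x3 = g3([0.2, -1.0])
\end{verbatim}
The fitted $g$ is exactly what enters the causal distance defined next.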
During the counterfactual generation process, we first produce the predicted value of each endogenous node $x^v$ from its parents before estimating the distance, which is measured as: \begin{equation} \label{eqn:causal_dist} \begin{split} f_\text{causal}(x_{cf}^v, x_{0}^v) &= \norm{x^v_{cf} - x_0^v}^2_2 \\ &= \norm{g(x^{vp1}_{cf}, x^{vp2}_{cf},\ldots, x^{vpk}_{cf}) - x^v_{0}}^2_2 \end{split} \end{equation} Based on Eq.~\eqref{eqn:distance} and Eq.~\eqref{eqn:causal_dist}, we define the distance between the original and counterfactual instances as the sum of the distances over exogenous and endogenous variables: \begin{equation} \label{eqn:finaldist} f_{\text{final\_dist}}(x_{cf}) = \sum_{u \in U} f_\text{dist}(x^u_{cf}, x^u_{0}) + \sum_{v \in V} f_\text{causal}(x^v_{cf}, x^v_{0}) \end{equation} \subsection{Multi-objective Optimization} Given the loss functions from Section~\ref{sec:objective}, namely $f_{\text{pred}}$, $f_{\text{proto}}$ and $f_{\text{final\_dist}}$, the majority of existing studies \cite{mahajan2019preserving,mothilal2020explaining,grath2018interpretable} use a weighted sum, assigning each loss function a trade-off weight and combining them. However, it is very challenging to balance the weights for each loss, requiring a great deal of effort and time for hyperparameter tuning. To address this issue, we propose to formulate the counterfactual explanation search as the multi-objective optimization problem (MOP) \begin{equation} x_{cf}^{*} = \argmin_{x_{cf} \in \mathcal{X}} \{f_\text{pred}(x_{cf}), f_\text{proto}(x_{cf}), f_\text{final\_dist}(x_{cf})\} \label{eq:obj} \end{equation} In this study, we modify the elitist non-dominated sorting genetic algorithm (NSGA-II) \cite{deb2000fast} to deal with this optimization problem. Its main advantage is that it optimizes each loss function simultaneously and provides solutions representing the trade-offs among the objective functions. We first present some definitions. \begin{definition}[Dominance in the Objective Space] In a multi-objective optimization problem, the goodness of a solution is evaluated by dominance \cite{deb2002fast}. Given two solutions $x$ and $\hat{x}$ along with a set of $m$ objective functions $f_i$ to be minimized, we have: \begin{itemize} \item $x$ weakly dominates $\hat{x}$ ($x \succeq \hat{x}$) iff $f_i(x) \le f_i(\hat{x})$ $\forall i \in \{1, \ldots , m\}$ \item $x$ dominates $\hat{x}$ ($x \succ \hat{x}$) iff $x \succeq \hat{x}$ and $x \ne \hat{x}$ \end{itemize} \end{definition} \begin{definition}[Pareto Front] The Pareto front \cite{ngatchou2005pareto} is the set of solutions that are non-dominated by each other but are superior to the other solutions in the objective space. The Pareto front is denoted ${\mathcal{F}}$. \end{definition} \begin{definition}[Crowding Distance] To maintain the diversity of the candidate solutions in the population, one of the simplest methods is to favor individuals in regions of low density. Therefore, the crowding distance \cite{raquel2005effective} is used to rank each candidate solution. The crowding distance between two individuals $x$ and $y$ is measured as: \begin{equation} \label{eqn:crowding} d_{xy} = \sqrt{\sum_{i=1}^M \left(\frac{f_i(x) - f_i(y)}{f_i^{min} - f_i^{max}}\right)^2} \end{equation} where $f_i$ is the $i$-th objective function and $f_i^{min}$, $f_i^{max}$ are its minimum and maximum values. \end{definition} The optimization process for objective function~\eqref{eq:obj} is given by Algorithm~\ref{alg:mulobj}.
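For reference, the two selection primitives just defined reduce to a few lines of Python (an illustrative sketch; in practice we rely on the NSGA-II machinery of the Pymoo library introduced in the experiments):
\begin{verbatim}
import numpy as np

def dominates(fx, fy):
    # x dominates y (minimization): no worse in every objective
    # and strictly better in at least one.
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def crowding_distance(fx, fy, f_min, f_max):
    # Distance between two candidates in objective space, each
    # objective normalized by its observed range (the crowding
    # distance equation above; the sign of the denominator cancels).
    fx, fy = np.asarray(fx), np.asarray(fy)
    return float(np.sqrt(np.sum(((fx - fy) / (f_max - f_min)) ** 2)))
\end{verbatim}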
The main idea behind this approach is that for each generation, the algorithm selects the non-dominated solutions with respect to the objective functions and evolves toward better ones. We first find the nearest class prototype of the original instance $x_0$, which is later used to measure the prototype loss function. To find the optimal counterfactual $x_{cf}^*$, each candidate solution is represented by its $D$-dimensional feature vector as its genes. A random candidate population is initialized from a Gaussian distribution. Thereafter, the objective functions $f_{\text{pred}}$, $f_{\text{proto}}$ and $f_{\text{final\_dist}}$ are calculated for each candidate. In the non-dominated sorting step, all the non-dominated solutions are selected from the population and assigned to the Pareto front $\mathcal{F}_1$. After that, the non-dominated solutions are chosen from the remaining population. The process is repeated until all the solutions are assigned to a front. The crowding distance function in Eq.~\eqref{eqn:crowding} is then adopted to select the individuals for the current population, with the purpose of maintaining the population diversity. The algorithm then keeps only the candidate solutions having the greatest ranking score. The cross-over and mutation procedures \cite{whitley1994genetic} are finally performed to generate the next population. In particular, the cross-over of two parents generates new candidate solutions by randomly swapping parts of their genes, while the mutation procedure randomly alters some genes in the candidate solutions to encourage diversity and avoid local minima. We repeat this process through many generations to find the optimal counterfactual solution. \begin{algorithm}[!htb] \small \caption{Multi-objective Optimization for Prototype-based Counterfactual Explanation (ProCE)} \label{alg:mulobj} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE An instance $x_0$ and label $y_0$, desired class $y_{cf}$, a provided machine learning classifier $h$, and an encoder model $Q_{\phi}$. \STATE Evaluate prototypes for each class by Eq.~\eqref{eq:proto_i}. \STATE Compute $\text{proto}_j$ by Eq.~\eqref{eq:j}. \STATE Initialize a batch of population $P =\{\boldsymbol{\Delta}_1,\cdots,\boldsymbol{\Delta}_m\}$ with $\boldsymbol{\Delta}_i\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\nu})$ \STATE $Q = \emptyset$ \FOR{$G$ generations} \STATE $ P = P \cup Q$ \FOR{$k=1,\cdots,m$} \STATE Compute $f_\text{pred}(\boldsymbol{\Delta}_k)$ based on Eq.~\eqref{eqn:cross}. \STATE Use $\text{proto}_j$ to compute $f_\text{proto}(\boldsymbol{\Delta}_k)$ based on Eq.~\eqref{eqn:protoloss}. \STATE Compute $f_\text{final\_dist}(\boldsymbol{\Delta}_k)$ based on Eq.~\eqref{eqn:finaldist}. \ENDFOR \STATE Compute $\mathcal{F} = \text{non-dominated-sorting}(P)$ \STATE $P = \emptyset$ \WHILE{$|P| + |\mathcal{F}_i| < m$} \STATE $P = P \cup \mathcal{F}_i$ \STATE $i = i + 1$ \ENDWHILE \STATE Compute ranking scores for $P$ based on Eq.~\eqref{eqn:crowding}. \STATE Keep $n$ individuals in $P$ based on ranking score.
\STATE Randomly pair $\lceil m/2\rceil$ $\{\Delta_1,\Delta_2\}\in P$ \FOR{each pair $\{\Delta_1,\Delta_2\}$} \STATE Perform $\text{crossover}(\Delta_1,\Delta_2)\rightarrow \Delta_1^{\prime},\Delta_2^{\prime}$ \STATE Perform mutation $\Delta_1^{\prime}\rightarrow \tilde{\Delta}_1,\Delta_2^{\prime}\rightarrow \tilde{\Delta}_2$ \STATE $Q = Q \cup \{\tilde{\Delta}_1,\tilde{\Delta}_2\}$ \ENDFOR \ENDFOR \ENSURE $x_{cf} = \Delta^*$ \end{algorithmic} \end{algorithm} \section{Experiments} We conduct experiments on three datasets to demonstrate the effectiveness of our proposed method by comparing it with existing methods. All implementations are run in Python 3.7.7 on 64-bit Red Hat with an Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz. For our proposed method, we construct the multi-objective optimization algorithm with the support of the library Pymoo\footnote{\url{https://pymoo.org/algorithms/nsga2.html}} \cite{pymoo}. \subsection{Datasets} This section provides information about the datasets on which we perform the experiments. To demonstrate our method's effectiveness in generating counterfactual samples that maintain the causal relationships, for each dataset we consider feature conditions that a generated sample has to satisfy. To simplify notation, we write $a \propto b$ for the condition that ($a$ \text{increases} $\Rightarrow$ $b$ \text{increases}) \text{AND} ($a$ \text{decreases} $\Rightarrow$ $b$ \text{decreases}). The datasets used include: \textbf{Simple-BN} \cite{mahajan2019preserving} is a synthetic dataset containing 10,000 records with three features ($x_1$,$x_2$,$x_3$) and a binary output (y). We consider the causal relationship $(x_1, x_2) \propto x_3$. \textbf{Sangiovese} \cite{magrini2017conditional}\footnote{\url{https://www.bnlearn.com/bnrepository/clgaussian-small.html}} evaluates the impact of several agronomic settings on the quality of Tuscan grapes. It has 14 continuous features along with a binary output representing the grapes' quality. A conditional linear Bayesian network is provided with the dataset. We consider the causal relationship $BunchN \propto SproutN$. \textbf{Adult} \cite{Dua:2019}\footnote{\url{https://archive.ics.uci.edu/ml/datasets/adult}} is a real-world census dataset providing demographic and employment information about individuals. It consists of both continuous and categorical features. The main task is to determine whether a person has an income exceeding \$50k annually. Similar to the study \cite{mahajan2019preserving}, we consider two conditions: $$x_{cf}^{\text{age}} \geq x_0^{\text{age}} \text{ and } age \propto education$$ \subsection{Evaluation Metrics} In this section, we briefly describe the six quantitative metrics used to evaluate the performance of our proposed method and the baselines. \textbf{Target-class validity} measures the percentage of counterfactual samples belonging to the desired class, evaluating how well an algorithm produces valid samples. \textbf{Causal-constraint validity} measures the percentage of counterfactual samples satisfying the pre-defined causal conditions. With this metric, the main aim is to evaluate how well an algorithm generates feasible counterfactual samples that do not violate the causal relationships among features \cite{mahajan2019preserving}. \textbf{Categorical proximity} measures the proximity of categorical features, defined as the total number of matching categorical values between $x_{cf}$ and $x_0$.
Higher categorical proximity is better, implying that the counterfactual sample preserves minimal changes from the original \cite{mothilal2020explaining}. \textbf{Continuous proximity} illustrates the proximity of the continuous features, calculated as the $l2$-distance between the continuous features of $x_{cf}$ and $x_{0}$. Lower continuous proximity is preferable, implying that the distance between the continuous features of $x_0$ and $x_{cf}$ is as small as possible \cite{mothilal2020explaining}. \textbf{IM1 and IM2} are two interpretability metrics (IM) proposed in \cite{van2019interpretable}. Let the original class be $y_0$ and the counterfactual class be $y_{cf}$. $AE_{0}$, $AE_{cf}$ and $AE$ are auto-encoder models trained specifically on instances of class $y_0$, instances of class $y_{cf}$, and the full dataset, respectively. IM1 measures the ratio of reconstruction errors of $x_{cf}$ using $AE_{cf}$ and $AE_0$, while IM2 evaluates the similarity between the reconstructed instances using $AE_{cf}$ and $AE$. Lower values for both IM1 and IM2 are preferable, implying that the generated counterfactual is more interpretable. \subsection{Baseline Methods} We compare our proposed method (ProCE) with two baselines, namely CERTIFAI and DiCE. To the best of our knowledge, there is not much similar work in this area, and CERTIFAI and DiCE are two state-of-the-art and prominent approaches to counterfactual explanation. \begin{itemize} \item \textbf{DiCE} \cite{mothilal2020explaining}. DiCE is a popular counterfactual explanation framework. It computes a weighted sum of different loss functions, including proximity, diversity and sparsity, and approximately finds the optimal solution via a gradient-descent algorithm. For the implementation, we utilize the source code \footnote{\url{https://github.com/divyat09/cf-feasibility}} provided by the authors with their default settings. \item \textbf{CERTIFAI} \cite{sharma2020certifai}. CERTIFAI is a recent study that builds the counterfactual search on a genetic algorithm. Since there is no available source code for this method, we implemented the algorithm in Python with the support of the library PyGAD \footnote{\url{https://github.com/ahmedfgad/GeneticAlgorithmPython}}. \end{itemize} For all the experiments, we build the machine learning classifier $h$ as a neural network with three hidden layers and a sigmoid function on the last layer. For feature engineering, we normalize the continuous features to the range (0,1) and transform the categorical features using a label encoder. \subsection{Results and Discussions} In this section, we first report the experimental results of the different methods across the three datasets to demonstrate our proposed method's effectiveness. Then, we present the variation of the proposed method's performance with different auto-encoder embedding sizes and different numbers of $k$-nearest instances in the class prototype computation. Table~\ref{tab:target} illustrates the target-class validity and causal-constraint validity of all methods across the three datasets. In terms of target-class validity, all three methods perform well, except CERTIFAI on the Adult dataset with only 60\%. Regarding the percentage of samples satisfying the causal constraints, by far the best performance is achieved by ProCE with 86.67\%, 80\% and 93.33\% for the Simple-BN, Sangiovese and Adult datasets, respectively.
CERTIFAI ranks second across the three datasets on this metric, while the majority of generated samples from DiCE violate the causal constraints. These results suggest that by integrating the structural causal model, our proposed method can effectively produce counterfactual samples preserving the features' causal relationships. Meanwhile, the interpretability scores (IM1 and IM2) are shown in Table~\ref{tab:im}. In general, our proposed method achieves the best IM1 and IM2 on all three datasets. DiCE also produces a very competitive result on the Adult dataset, while CERTIFAI performs well on Simple-BN and Sangiovese. \begin{table}[!htb] \centering \caption{Baseline results in terms of \textbf{Target-class validity} (Tcv\%) and \textbf{Causal-constraint validity} (Ccv\%)} \begin{tabular}{cccc} \hline Method & Dataset & \%Tcv & \%Ccv\\ \hline CERTIFAI & Simple-BN & 100.00 & 43.33 \\ DiCE & Simple-BN & 100.00 & 36.67\\ ProCE& Simple-BN & \textbf{100.00} & \textbf{86.67} \\ \hline CERTIFAI & Sangiovese & 100.00 & 50.00\\ DiCE & Sangiovese & 100.00 & 36.67\\ ProCE& Sangiovese & \textbf{100.00} & \textbf{80.00} \\ \hline CERTIFAI & Adult & 60.00 & 85.70\\ DiCE & Adult & 100.00 & 75.00 \\ ProCE& Adult & \textbf{100.00} & \textbf{93.33} \\ \hline \end{tabular} \label{tab:target} \end{table} \begin{table}[!htb] \centering \caption{Baseline results in terms of \textbf{IM1} and \textbf{IM2} with 95\% confidence bound} \resizebox{0.96 \textwidth}{!}{\begin{minipage}{\textwidth} \begin{tabular}{cccc} \hline Method & Dataset & IM1 & IM2 (x10) \\ \hline CERTIFAI & Simple-BN & 0.045 $\pm$ 0.003 & 0.040 $\pm$ 0.014 \\ DiCE & Simple-BN & 0.066 $\pm$ 0.004 & 0.070 $\pm$ 0.030 \\ ProCE& Simple-BN & \textbf{0.024 $\pm$ 0.002} & \textbf{0.020 $\pm$ 0.031} \\ \hline CERTIFAI & Sangiovese & 0.217 $\pm$ 0.002 & 0.090 $\pm$ 0.012 \\ DiCE & Sangiovese & 0.200 $\pm$ 0.003 & 0.090 $\pm$ 0.032 \\ ProCE& Sangiovese & \textbf{0.189 $\pm$ 0.000} & \textbf{0.040 $\pm$ 0.022} \\ \hline CERTIFAI & Adult & 0.600 $\pm$ 0.030& 0.510 $\pm$ 0.021\\ DiCE & Adult & 0.365 $\pm$ 0.050& 0.160 $\pm$ 0.025\\ ProCE& Adult & \textbf{0.099 $\pm$ 0.040}& \textbf{0.070 $\pm$ 0.015}\\ \hline \end{tabular} \end{minipage}} \label{tab:im} \end{table} Figure~\ref{fig:proximity} provides information about the categorical proximity on the Adult dataset and the continuous proximity on the three datasets. For categorical proximity, ProCE achieves an average of 5 matches out of the total 6 categorical features in the dataset, whereas the lowest result is recorded for the CERTIFAI algorithm. These results illustrate that, with a gradient-free approach, we can achieve outstanding performance when handling the non-continuous features in tabular data. When it comes to continuous proximity, ProCE produces the counterfactual samples with the smallest distance on continuous features. The most significant variation is also seen for CERTIFAI, whereas our proposed method produces the least variation in continuous proximity. \begin{figure}[!htb] \centerline{\includegraphics[width=0.5\textwidth]{figure/proximity.png}} \caption{Baseline results in terms of \textbf{continuous proximity} and \textbf{categorical proximity}.
Lower continuous proximity is better and higher categorical proximity is better.} \label{fig:proximity} \end{figure} Figures~\ref{fig:instance} and~\ref{fig:size} show the variation of our method's performance with different numbers of nearest neighbors for the class prototype and different embedding sizes of the auto-encoder model, respectively. It is clear from Figure~\ref{fig:size} that although there are some fluctuations in all four metrics, the performance is nearly stable once the embedding size exceeds 32. On the other hand, as can be seen from Figure~\ref{fig:instance}, IM1 and IM2 show the worst performance when the number of instances is 15, followed by a plateau from 25 to 45 instances. Meanwhile, there is no significant fluctuation in the continuous and categorical proximity across the three datasets. These results suggest that the performance of our proposed method on all evaluation metrics is nearly stable under different embedding sizes and numbers of nearest neighbors, implying the robustness of our method. \begin{figure}[!htb] \centerline{\includegraphics[width=0.5\textwidth]{figure/number_instance.png}} \caption{Our performance under different numbers of $k$-nearest neighbors for class prototype ($\text{proto}_j$ with $j=1$).} \label{fig:instance} \end{figure} \begin{figure}[!htb] \centerline{\includegraphics[width=0.5\textwidth]{figure/different_size.png}} \caption{Our performance under different sizes of $E$-dimensional embedding for encoder function $Q_{\phi}$.} \label{fig:size} \end{figure} \section{Conclusion} This paper introduced a novel counterfactual explanation algorithm integrating a structural causal model and class prototypes. We also proposed formulating counterfactual generation as a multi-objective problem and constructed an optimization algorithm to find the optimal solution effectively. Our experiments showed that our method surpasses existing methods on many evaluation metrics. For future work, we plan to consider imperfect structural causal models, which are commonplace in real-world scenarios. Other optimization approaches such as reinforcement learning and multi-task learning are also worth investigating. \bibliographystyle{named}
\section{Introduction} \label{sec:intro} Supernova remnants (SNRs) are important for understanding the Galaxy. They heat up the interstellar medium, enrich the environment with heavy elements and accelerate cosmic rays. Almost 300 SNRs are listed in recent catalogs \citep[e.g.,][]{2014BASI...42...47G,2019JApA...40...36G} and more objects and candidates are being found, for example in deeper radio observations \citep[e.g.,][]{2017A&A...605A..58A,2019PASA...36...48H}. However, several thousand SNRs are expected to be found in the Galaxy based on the supernova rate. The discrepancy might be due to selection effects preventing the discovery of very dim SNRs or sources located outside the Galactic plane. SNRs are known to accelerate particles to relativistic energies. The radiation that these particles produce shows characteristic non-thermal features that can help identify an object as an SNR. Historically, the detection of a synchrotron spectrum in the radio emission of an SNR has been the natural way to confirm the presence of such an object \citep[for a review see, e.g.,][]{2015A&ARv..23....3D}. High-energy emission from SNRs from X-rays to gamma rays has also been detected and studied in detail \citep{2008ARA&A..46...89R,2016ApJS..224....8A}. Sometimes the pulsar wind nebula (PWN) accompanying an SNR also produces non-thermal emission up to gamma-ray energies. Deep, unbiased gamma-ray surveys could be useful to find previously unknown SNRs, PWNe or candidates \citep[see, e.g.,][]{2018A&A...612A...1H,Ackermann_2018,2020ApJ...903L..14A,2020ApJ...905...76A}. The gamma-ray source FHES~J\,$1723.5-0501$ was discovered outside the Galactic plane by \cite{Ackermann_2018} with data from the \emph{Fermi} Large Area Telescope (LAT). They searched for extended GeV sources in high Galactic latitude regions. They noted the presence of an unclassified radio shell in the 1.4 GHz continuum emission data from the NVSS \citep{1998AJ....115.1693C} along the southwestern edge of the somewhat larger source FHES~J\,$1723.5-0501$. This gamma-ray source was therefore proposed to be potentially associated with an SNR or a PWN. In the LAT 4FGL catalog \citep{2020ApJS..247...33A} two gamma-ray sources are found in the region: the extended source 4FGL~J\,$1723.5-0501$e (associated with FHES~J\,$1723.5-0501$), whose energy ($E$) spectrum is described by a simple power-law function ($\frac{dN}{dE}\propto E^{-\gamma}$, with $\gamma$ the spectral index), and the point source 4FGL~J$1722.8-0418$, having a curved spectrum described by a log-parabola ($\frac{dN}{dE}\propto E^{-\alpha - \beta \log{E}}$, with $\alpha$ and $\beta$ some constants). The morphology of this extended source is described by a 2D Gaussian with a 68\%-containment radius of $0.73\degr$. In this paper we present an analysis of radio observations in the region of FHES~J\,$1723.5-0501$ that confirm the existence of a shell located within the extent of the gamma-ray source. Our analysis reveals a non-thermal spectrum for the radio emission. The shell-like appearance and spectrum establish this object as a new SNR, labeled G\,\ensuremath{17.8+16.7}{}. We also analyzed gamma-ray data from the LAT which confirm that the emission in the region is best described by an extended source with a hard GeV spectrum. In Section~\ref{sec:data} we describe the radio and GeV data analyses, and in Section~\ref{discussion} we use the properties of the emission to constrain some parameters such as the source distance and age.
\section{Data analysis}\label{sec:data} \subsection{Radio observations and data analysis}\label{sec:radio} Given the high Galactic latitude of G\,\ensuremath{17.8+16.7}{}, it can only be detected in relatively shallow wide-area sky surveys, rather than the deeper images often available from targeted Galactic plane surveys. Of the archives we searched, it was most clearly detected in the NRAO Very Large Array Sky Survey \citep[NVSS;][]{1998AJ....115.1693C} as a partially-resolved crescent of diameter $\sim0.95\degr$ (top-left panel of Fig.~\ref{fig:all_radio}, which also shows other data used in this section). This is larger than the maximum angular scale to which NVSS is sensitive, so a total flux density cannot be measured from these data alone. To fill in these larger angular scales, we used the Continuum map of the HI Parkes All-Sky Survey \citep[CHIPASS;][]{2014PASA...31....7C}, a radio sky survey\footnote{\href{https://www.atnf.csiro.au/people/mcalabre/CHIPASS/index.html}{https://www.atnf.csiro.au/people/mcalabre/CHIPASS/index.html}} at 1.4\,GHz covering Dec~$<+25\degr$ at $14.4'$ resolution. Following the method of \cite{2021A&A...648A..30B}, we selected all compact sources detected in NVSS in the region, convolved them to match the CHIPASS resolution, produced an output FITS image in the same sky frame as the CHIPASS data, and subtracted the NVSS model from the CHIPASS image (in Jy\,beam$^{-1}$). We used the software \textsc{poly\_flux} \citep{2019PASA...36...48H} to measure the total flux densities of G\,\ensuremath{17.8+16.7}{} in each band; the tool estimates and subtracts a mean background level. Since the selection of the boundaries of the SNR is somewhat subjective, we used the tool ten times and recorded the average result, finding that the total flux density at 1.4\,GHz is $2.1\pm0.1$\,Jy. We performed a similar process on the S-Band Polarization All Sky Survey \citep[S-PASS;][]{2019MNRAS.489.2330C}, a 2.3-GHz survey of polarized radio emission covering the Southern sky (Dec~$<-1\degr$) at $8.9'$ resolution. To scale the flux densities of the NVSS sources from 1.4\,GHz to 2.3\,GHz, we used the spectral indices from the catalogue produced by \cite{2018MNRAS.474.5008D}. Subtracting this model from the S-PASS Stokes~I image yields the right-hand panel of Fig.~\ref{fig:radio}. Running \textsc{poly\_flux} repeatedly we find that the total flux density at 2.3\,GHz is $1.45\pm0.05$\,Jy; the uncertainty is dominated by the less clean source subtraction, particularly for two sources on the edge of the shell. Since CHIPASS and NVSS sample different but complementary angular scales, it is instructive to combine the images in the Fourier domain, a process known as ``feathering'' \citep{2017PASP..129i4501C}. We used a custom \textsc{python} command\footnote{\href{https://github.com/nhurleywalker/feather}{https://github.com/nhurleywalker/feather}} derived from the implementation in the Common Astronomy Software Applications (\textsc{CASA}) to perform this operation, and the result is shown in the left panel of Fig.~\ref{fig:radio}. G\,\ensuremath{17.8+16.7}{} is clearly visible as a sharp-edged elliptical shell which is quite filled, and brighter and more defined toward the Eastern side.
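The essence of the operation can be sketched in a few lines of \textsc{numpy} (a deliberately simplified illustration: it assumes both images share a pixel grid and brightness units, whereas the \textsc{CASA} implementation also handles beam-area scaling between the inputs):
\begin{verbatim}
import numpy as np

def feather(high_res, low_res, low_beam_fwhm_pix):
    # Combine a high-resolution image (missing extended emission)
    # with a low-resolution image (missing fine detail) in the
    # Fourier domain.
    H = np.fft.fft2(high_res)
    L = np.fft.fft2(low_res)
    # Weight = Fourier transform of the low-resolution Gaussian
    # beam: ~1 at the largest angular scales, -> 0 at high spatial
    # frequencies, so each image contributes where it is reliable.
    sigma = low_beam_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ny, nx = high_res.shape
    v, u = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx),
                       indexing="ij")
    w = np.exp(-2.0 * (np.pi * sigma) ** 2 * (u ** 2 + v ** 2))
    return np.fft.ifft2(w * L + (1.0 - w) * H).real
\end{verbatim}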
\begin{figure} \includegraphics[width=\textwidth]{CHIPASS_SPASS} \caption{Four\,deg$^2$ of the region surrounding G\,\ensuremath{17.8+16.7}{} as seen at 1.4\,GHz (left) and 2.3\,GHz (right) after source subtraction as described in Section~\ref{sec:radio}; the subtracted CHIPASS and NVSS data (left) have been feathered together to produce an image to properly capture angular scales $>45''$. Global background levels of 31\,mJy\,beam$^{-1}$ and 640\,mJy\,beam$^{-1}$ have been subtracted from the left and right images, respectively. An ellipse centred on $17^\mathrm{h}24^\mathrm{m}10.5^\mathrm{s} -5\degr10'52.5''$, size $51'\times45'$, and orientation due North, is shown by a dashed black semi-transparent line on both panels. The full-width-half-maximum of the point spread function is shown as a solid black ellipse in the lower-left of each panel.} \label{fig:radio} \end{figure} No detections with signal-to-noise sufficient to determine a radio flux density were made in any other sky surveys. From the 1.4 and 2.3\,GHz flux density measurements we calculated a two-point spectral index, finding $\alpha=-0.75\pm 0.15$ for $S\propto\nu^\alpha$. This is consistent with non-thermal emission from a synchrotron-emitting shell supernova remnant. \subsection{Archival X-ray observations}\label{sec:xray} We searched the archives of the X-ray observatories but found no pointed (deep) observations that covered this field. On initial inspection, G\,\ensuremath{17.8+16.7}{} is not visible in the ROSAT All-Sky Survey \citep[RASS;][]{1999A&A...349..389V}. However, after convolution with an $\sim11'$ Gaussian kernel, some emission is visible in the RASS hard-energy band (0.5 -- 2.0\,keV); there is no detection in the soft-energy band (0.1 -- 0.4\,keV; see Fig.~\ref{fig:xray}). The hard X-ray emission correlates well with the radio shell on the Eastern side, and correlates more clearly with the gamma-ray emission (Section~\ref{sec:gamma}) on the Western side. The counts within the radio shell are about 50\,\% higher than the background in this area. \begin{figure} \includegraphics[width=\textwidth]{ROSAT} \caption{13\,deg$^2$ of the region surrounding G\,\ensuremath{17.8+16.7}{} as seen by the RASS (Section~\ref{sec:xray}); the left panel shows the soft band (0.1 -- 0.4\,keV) and the right panel shows the hard band (0.5 -- 2.0\,keV). Both images have been convolved by a Gaussian kernel with full-width-half-maximum $675''$ (shown as a solid black ellipse in the lower-left of each panel). The radio shell is indicated by the same ellipse as in Fig.~\ref{fig:radio}, and the best-fitting spatial model to the gamma-ray data (Gaussian + PS; see Section~\ref{sec:gamma}) is shown as a white dashed ellipse.} \label{fig:xray} \end{figure} \subsection{Gamma-ray observations}\label{sec:gamma} The \emph{Fermi}-LAT is a converter/tracker telescope detecting gamma rays in the energy range between 20 MeV and $\ga$1 TeV \citep{2009ApJ...697.1071A}. We gathered LAT data taken from August~2008 to July~2021 in the energy range 0.5--500\,GeV for the analysis. We used the software {\tt fermitools} version~2.0.8 by means of the {\tt fermipy} package version~1.0.0 to perform the analysis. 
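The overall {\tt fermipy} workflow used below can be sketched as follows (a schematic session only; the configuration file name and the source name are placeholders, and the selection cuts are those described in this section):
\begin{verbatim}
from fermipy.gtanalysis import GTAnalysis

# config.yaml encodes the cuts in the text: 0.5-500 GeV, zmax=90,
# evclass=128, evtype=3, 0.05 deg pixels, ten bins per decade.
gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()       # bin the events and compute the exposure
gta.optimize()    # initial fit of all model components in the RoI
gta.fit()         # full maximum-likelihood fit
ext = gta.extension('FHES J1723.5-0501e',
                    spatial_model='RadialGaussian')
print(ext['ts_ext'])  # significance of the measured extension
\end{verbatim}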
We applied the recommended cuts for the analysis\footnote{See https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone\_Data\_Exploration/Data\_preparation.html}, selecting good quality front and back-converted events in the {\tt SOURCE} class ({\tt evclass=128}, {\tt evtype=3}), and with zenith angles lower than $90\degr$ to avoid contamination from Earth's limb. We used the corresponding response functions for {\tt Pass 8} analysis, {\tt P8R3\_SOURCE\_V3}, and binned events with a spatial scale of $0.05\degr$ and with ten bins per decade in energy for exposure calculations. We included events reconstructed within $15\degr$ of the coordinates RA (J2000) $=17^\mathrm{h}24^\mathrm{m}00^\mathrm{s}$, Dec (J2000) $=-5\degr12'00''$, defined as the region of interest (RoI). We included the sources in the 4FGL-DR2 catalog \citep{2020ApJS..247...33A,2020arXiv200511208B} that are located within $20\degr$ of the center of the RoI, except for 4FGL~J\,$1723.5-0501$e and 4FGL~J\,$1722.8-0418$, since we carried out a more detailed analysis of their emission. We modeled the diffuse Galactic emission and the isotropic emission (including the residual cosmic-ray background) with the standard files {\tt gll\_iem\_v07.fits} and {\tt iso\_P8R3\_SOURCE\_V3\_v1.txt}, respectively, provided by the LAT team\footnote{See \url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}}. We applied the energy dispersion correction to all sources except for the isotropic diffuse emission component, as recommended by the LAT team\footnote{See https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8\_edisp\_usage.html}. We applied the maximum likelihood technique \citep{1996ApJ...461..396M} and fitted any free morphological or spectral parameters of the sources in order to maximize the probability for the model to explain the data. The detection significance of a source can be calculated using the test statistic (TS), defined as $-2\log(\mathcal{L}_0/\mathcal{L})$, with $\mathcal{L}$ and $\mathcal{L}_0$ being the maximized likelihood values for a model containing the source and for the model without the additional source (the null hypothesis), respectively. Taking advantage of the improved point-spread function of the LAT at higher energies, we performed a morphological analysis of the emission in the region of G\,\ensuremath{17.8+16.7}{} using events with reconstructed energies above 5\,GeV. We left free the spectral normalizations of the 4FGL sources located within $10\degr$ of the RoI center, while for the sources located within $5\degr$ of the RoI center we left all spectral parameters free. We used the Akaike Information Criterion \citep{1974ITAC...19..716A}, defined as AIC = $2k - 2\ln(\mathcal{L})$, where $k$ is the number of free parameters in the model, to compare the relative quality of the morphological models. We compared the following hypotheses for the emission: a symmetric 2D Gaussian, a uniform disk, a symmetric 2D Gaussian with a point source and a uniform disk with a point source \citep[for the definition of the extended models, see][]{2012ApJ...756....5L}. For each case we left the location and extension of the sources free to vary and performed a likelihood profile scan to find the maximum model likelihood. We also assumed simple power-laws for the spectra of the sources, which is justified below. The results are shown in Table \ref{table:LAT}.
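Once the maximized log-likelihoods are in hand, the model comparison above reduces to a few lines (an illustrative sketch using the definitions of TS and AIC just given):
\begin{verbatim}
def test_statistic(loglike, loglike_null):
    # TS = -2 log(L0/L) = 2 (log L - log L0)
    return 2.0 * (loglike - loglike_null)

def aic(loglike, n_free):
    # AIC = 2k - 2 ln(L); the lowest value marks the preferred model.
    return 2.0 * n_free - 2.0 * loglike

# Delta-AIC relative to the best model, as reported in the table:
# models = {'disk': (lnL1, k1), 'gauss': (lnL2, k2), ...}
# best = min(aic(l, k) for l, k in models.values())
# delta = {n: aic(l, k) - best for n, (l, k) in models.items()}
\end{verbatim}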
A model consisting of a Gaussian and a point source results in the lowest AIC value and is therefore the best description among the tested models. The point source was found with the tool {\tt find\_sources} and its location is consistent with that of 4FGL J1722.8--0418, seen outside the radio shell of G\,\ensuremath{17.8+16.7}{} (see Fig.~\ref{fig:tsmap}). Including the point source 4FGL~J\,$1722.8-0418$ in the model improves the quality of the fit and is important to correctly estimate the size of the extended source, as seen in Table \ref{table:LAT}. We quantified the significance of extension by calculating twice the difference between the $\log \mathcal{L}$ for an extended source model and that obtained with a point-like source model at its best-fit position. In all cases the extended source model is preferred over a point source for the emission in the region of G\,\ensuremath{17.8+16.7}{}. For the last two models in Table \ref{table:LAT} we found TS$_{\mbox{\tiny ext}} = 55.3$ for the disk model and TS$_{\mbox{\tiny ext}} = 65.6$ for the Gaussian. The 68\%-containment extension of the Gaussian in our best-fit model is consistent with the extension reported for 4FGL~J\,$1723.5-0501$e in the 4FGL catalog \citep{2020ApJS..247...33A}. We estimated the effect of the systematic uncertainty in the model of the diffuse Galactic emission on the measurement of the source extension. The uncertainties in this model could be important for the treatment of extended sources, even though the effect is expected to be larger at the lowest energies where this emission dominates. We used the eight alternative model files developed originally by \cite{2016ApJS..224....8A}, scaled appropriately to account for differences in energy dispersion between {\tt Pass 7} and {\tt Pass 8} reprocessed data\footnote{See https://fermi.gsfc.nasa.gov/ssc/data/access/lat/Model\_details/Pass8\_rescaled\_model.html}. We fitted the source extension using both the uniform disk and the 2D Gaussian for each alternative Galactic diffuse emission model and estimated the systematic uncertainty as in \cite{2016ApJS..224....8A}. The systematic uncertainty for the 68\%-containment radius of the Gaussian is $0.15\degr$, and that of the disk radius $0.02\degr$. We also searched for hints of energy dependent morphology, as expected for example from electron cooling and transport in PWNe. We divided the data into two energy intervals, 5--40\,GeV and 40--500\,GeV. These energy intervals were chosen so as to contain enough statistics to confirm a significant extension of the source with TS$_{\mbox{\tiny ext}} > 20$. We fitted the source extension in each interval and the results for the low and high-energy data sets were $0.54^{+0.14}_{-0.10}\degr$ and $0.92^{+0.22}_{-0.17}\degr$ for the 68\%-containment radii of the 2D Gaussian, and $0.54^{+0.04}_{-0.03}\degr$ and $0.66^{+0.05}_{-0.04}\degr$ for the disk radii, respectively ($1\sigma$ statistical errors given). The discrepancy between the measured extensions at the low and high energies, considering statistical uncertainties only, is then at the $1.5-2\sigma$ level, depending on the morphological model adopted. Slightly higher extensions are seen at higher energies, which is the opposite behaviour to that expected in a PWN.
The centroid locations found for the disks at the low and high energies are incompatible at the $3\sigma$ level, considering their statistical uncertainties only, while the $1\sigma$ error ellipses on the location of the centroids found with the Gaussian templates do overlap. The centroids of the emission at the highest energies are shifted towards the south-east with respect to those found in the low-energy interval. In order to further study this slight tension, we performed a new fit in the 5--500\,GeV energy range to probe the existence of two different extended sources. We modeled the GeV emission from the SNR with a uniform disk with a fixed radius of $0.4\degr$, centred at the position described in Fig.~\ref{fig:radio}, and searched for an additional source, fitting its location and extension by means of a likelihood profile scan under the uniform disk hypothesis. The best-fit radius of the new disk and its $1\sigma$ statistical uncertainty is $1.32^{+0.05}_{-0.06}\degr$, with its centre located within the radio shell of the SNR. This model shows a $\Delta$AIC$=1.6$ with respect to the best-fit model found before, which includes only one extended source, and therefore yields no statistical improvement. Future studies with more statistics will be important to better understand the morphology of the emission. A TS map obtained from LAT data in a broader energy range, 1--500\,GeV, to improve the statistics, is shown in Fig. \ref{fig:tsmap}. The emission in the region of G\,\ensuremath{17.8+16.7}{}, as well as that of the point source 4FGL~J\,$1722.8-0418$, is clearly visible. The GeV emission is more significant within the shell boundary of G\,\ensuremath{17.8+16.7}{}, as seen in the figure. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{tsmap} \caption{TS map obtained with LAT data in the energy range 1--500\,GeV. The image is obtained by moving a putative point source through each pixel in the map and calculating its TS value. The solid cyan ellipse corresponds to the radio shell ellipse shown in Fig.~\ref{fig:radio}. The circles represent the 68\%-containment radius of the 2D Gaussian found in our analysis (thick dashed line) and its $1\sigma$ uncertainty region (thin dashed lines). The location of the point source 4FGL~J\,$1722.8-0418$ is indicated by the cross.} \label{fig:tsmap} \end{figure} Using events in the entire energy range, 0.5--500\,GeV, we tested for spectral curvature with a log-parabola function. The TS values only marginally improved with respect to the fits using the simple power-law as the spectral shape for 4FGL~J\,$1722.8-0418$ ($\Delta$TS$=2.4$) and for the extended emission ($\Delta$TS$=2.7$), indicating that in this energy range a simple power-law shape is preferred for both sources. The resulting spectral indices (and their $1\sigma$ uncertainties) are $2.44\pm 0.02_{\mbox{\tiny stat}} \pm 0.07_{\mbox{\tiny sys}}$ for 4FGL~J\,$1722.8-0418$ (which has an overall TS$=231.6$), in agreement with the studies of this point source by \cite{2016RAA....16...97D}, and $1.83\pm0.02_{\mbox{\tiny stat}} \pm 0.05_{\mbox{\tiny sys}}$ for the extended source (with TS$=153.2$). The systematic uncertainty in the spectral index for the extended source includes the effects of using the alternative diffuse Galactic background models described earlier, as well as the effect of replacing the Gaussian morphology with the best-fit disk morphology. Changing the morphology to describe the emission is the dominant source of systematic uncertainty. 
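The two spectral shapes compared in the curvature test can be sketched as follows; the normalization and curvature values are arbitrary examples, and the test itself simply compares the TS (twice the log-likelihood difference) of the two nested models.
\begin{verbatim}
# Sketch of the spectral shapes used in the curvature test
# (arbitrary example parameters, not the fitted values).
import numpy as np

def power_law(E, N0, gamma, E0=1.0):
    # dN/dE = N0 (E/E0)^(-gamma)
    return N0 * (E / E0) ** (-gamma)

def log_parabola(E, N0, alpha, beta, E0=1.0):
    # dN/dE = N0 (E/E0)^(-(alpha + beta log(E/E0)))
    return N0 * (E / E0) ** (-(alpha + beta * np.log(E / E0)))

E = np.logspace(np.log10(0.5), np.log10(500.0), 10)  # GeV
pl = power_law(E, 1e-12, 1.83)
lp = log_parabola(E, 1e-12, 1.83, 0.05)  # beta = 0.05 is arbitrary
print(np.round(lp / pl, 3))  # curvature relative to the power law
\end{verbatim}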
The GeV spectrum of 4FGL~J\,$1722.8-0418$ is clearly softer than that of the extended source. The spectral energy distribution (SED) fluxes of G\,\ensuremath{17.8+16.7}{} were obtained by dividing the data into ten energy intervals and fitting the normalization in each bin. They are shown below in Fig. \ref{fig:SED}. \begin{table*} \caption{Results of the morphological analysis of \emph{Fermi}-LAT data.} \label{table:LAT} \begin{center} \begin{tabular}{lcc} \hline \hline Spatial model & Fitted size$^{a}$ ($\degr$) & $\Delta$AIC$^b$ \\ \hline Disk & $1.09^{+0.04}_{-0.05}$ & 54.3\\ Gaussian & $0.83^{+0.08}_{-0.07}$ & 36.2 \\ Disk$+$PS & $0.55^{+0.05}_{-0.03}$ & 10.3\\ Gaussian$+$PS & $0.68^{+0.07}_{-0.16}$ & 0 \\ \hline \end{tabular}\\ \textsuperscript{$a$}\footnotesize{Radius for the disk and 68\%-containment radius for the Gaussian and their $1\sigma$ statistical uncertainties.}\\ \textsuperscript{$b$}\footnotesize{$\Delta$AIC is equal to the value of AIC for each model minus the AIC value for the best-fit model.}\\ \end{center} \end{table*} \section{Discussion}\label{discussion} \subsection{Limits on age and distance} In the absence of a measured distance, we can use the morphological and brightness properties of the SNR to infer limits on its physical characteristics. Studies of the Local Group galaxies and Magellanic Clouds have demonstrated that SNR 1.4-GHz luminosities typically have values in the range $5\times10^{14} < L_\mathrm{1.4GHz} < 10^{17}$\,W\,Hz$^{-1}$ \citep[e.g.][]{1998ApJ...504..761C}. Assuming that G\,\ensuremath{17.8+16.7}{} is more luminous than $5\times10^{14}$\,W\,Hz$^{-1}$, we can obtain a limit on its distance from Earth from $d=\sqrt{\frac{L_\mathrm{1.4GHz}}{4\pi S_\mathrm{1.4GHz}}}$, i.e. $d>1.4$\,kpc (and diameter $D>20$\,pc). Assuming a low ISM density of 0.1\,cm$^{-3}$ and using otherwise standard values in the SNR evolutionary model calculator provided by \cite{2017AJ....153..239L}, we find that for $D>20$\,pc, the source must be $>10$\,kyr old. Given its clearly defined edges and filled disk, we suggest it is still in the Sedov-Taylor phase, and not likely to be more than an order of magnitude older than this, which yields $D<50$\,pc and $d<3.5$\,kpc. However, since there are few known high-latitude SNRs, these values are only estimates, and a direct measurement may be difficult for faint SNRs without obvious molecular cloud or pulsar associations. \subsection{Gamma-ray properties} A hard GeV spectrum such as that observed for G\,\ensuremath{17.8+16.7} is expected under an inverse Compton (IC) scenario, where high-energy electrons accelerated in the shock of the SNR interact with soft ambient photon fields, such as the cosmic microwave background, to produce the gamma rays. This mechanism for the production of high-energy emission would be more natural in a region located outside the Galactic plane, where G\,\ensuremath{17.8+16.7}{} is found and the matter density is expected to be low. This leptonic-IC scenario for the gamma rays is consistent with our radio observations. Under this model, and given the measured GeV spectral index $\gamma \sim 1.83$ ($\frac{dN}{dE}\propto E^{-\gamma}$), the predicted radio spectral index for the synchrotron emission ($S \propto \nu^{\alpha}$) from the same uncooled population of electrons is $\alpha = 1-\gamma = -0.83$, fully consistent with our measured value of $-0.75\pm 0.15$. 
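The distance lower limit can be reproduced with a short calculation; the 1.4-GHz flux density below is an assumed round number of the order implied by the quoted limit (a flux of about 2\,Jy reproduces $d\approx1.4$\,kpc) and is not quoted from our radio measurement.
\begin{verbatim}
# Back-of-the-envelope check of d = sqrt(L_1.4GHz / (4 pi S_1.4GHz)).
# S_14 is an assumed value of the right order, not our measured flux.
import numpy as np
from astropy import units as u

L_min = 5e14 * u.W / u.Hz  # faint end of the SNR luminosity range
S_14 = 2.0 * u.Jy          # assumed 1.4-GHz flux density
d_min = np.sqrt(L_min / (4 * np.pi * S_14)).to(u.kpc)
print(d_min)               # ~1.4 kpc
\end{verbatim}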
We used the {\tt naima} package \citep{naima} to fit the radio and GeV fluxes with a one-zone leptonic model using a particle distribution that is a power-law with an exponential cutoff. The radio fluxes result from synchrotron emission from electrons in a magnetic field, while the gamma rays are from IC scattering of cosmic microwave background (CMB) photons by the same electrons. The results are shown in Fig. \ref{fig:SED}. The required magnetic field is $B=1.06^{+0.33}_{-0.23}\,\mu$G, and the spectral index and cutoff energy of the lepton distribution are $2.5\pm0.1$ and $57^{+61}_{-30}$\,TeV, respectively. The total energy content in the relativistic electrons (integrated above a particle energy of 1\,GeV) amounts to $(1.4^{+0.9}_{-0.6})\times 10^{49}\,\left( \frac{d}{3\, \mbox{\tiny kpc}} \right)^2$\,erg, normalised to an arbitrary distance of 3\,kpc. This is only between 0.3\% and 1.8\% of the typical kinetic energy available in an SNR shock ($10^{51}\,$erg) for a distance range of 1.4--3.5\,kpc. Given that the GeV spectrum of G\,\ensuremath{17.8+16.7}{} is described by a power-law with no apparent cutoff, the resulting cutoff in the particle distribution is not well constrained. Observations in the TeV energy range should be carried out for this purpose. \begin{figure} \includegraphics[width=\textwidth]{ic_exp} \caption{SED of G\,\ensuremath{17.8+16.7}{} and the resulting fit to the radio and GeV data under a one-zone IC-CMB scenario. The shaded region represents the propagated $1\sigma$ statistical uncertainty on the spectral fit in the 0.5--500\,GeV energy range.} \label{fig:SED} \end{figure} Another mechanism for the origin of gamma-ray emission is inelastic collisions of relativistic protons, accelerated by the SNR, with ambient protons. In this case a high density of matter is required to enhance the flux of gamma rays. Since the gamma-ray distribution approximately follows the parent proton distribution in this hadronic scenario \citep[see, e.g.,][for a review]{2013A&ARv..21...70B}, a proton distribution harder than predicted by linear diffusive shock acceleration theory would be required in order to explain the GeV spectrum of G\,\ensuremath{17.8+16.7}. However, if dense clumps of gas exist within the shell of an SNR and the highest energy protons are able to interact with this gas, it has been shown that a hard GeV spectrum could be produced \citep{2014MNRAS.445L..70G}. Our estimated gamma-ray luminosity of G\,\ensuremath{17.8+16.7}{} in the 1--100\,GeV energy interval is $\sim9\times 10^{33}$\,erg\,s$^{-1}$ for an arbitrary source distance of 3\,kpc chosen within our estimated range. This luminosity is between one and two orders of magnitude lower than the typical luminosities of evolved SNRs interacting with dense gas clouds \citep{2016ApJS..224....8A}. Our measurements are therefore more consistent with a low-density environment around the SNR and a leptonic scenario for the gamma rays. The spectral index that we measured for the high-energy emission ($\sim 1.83$) could hint at the SNR age, for example following the trend seen by \cite{2016ApJS..224....8A}, where sources with harder GeV spectra tend to be younger SNRs. However, we note that there is a large scatter in the age of their sample SNRs even for hard GeV spectra ($\sim\,$1--60\,kyr). This could be due to the complexity of the environments of SNRs. 
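For readers who wish to reproduce a model of this kind, the following {\tt naima} sketch evaluates a one-zone synchrotron plus IC-CMB model with parameters set near our best-fit values; the amplitude is an illustrative placeholder, and the actual fit was performed by sampling the flux points with MCMC rather than by this direct evaluation.
\begin{verbatim}
# Minimal forward model of the one-zone leptonic scenario with naima.
# The amplitude is illustrative; B, alpha and e_cutoff are set near
# the best-fit values quoted in the text.
import astropy.units as u
import numpy as np
from naima.models import (ExponentialCutoffPowerLaw, InverseCompton,
                          Synchrotron)

electrons = ExponentialCutoffPowerLaw(amplitude=1e36 / u.eV,
                                      e_0=1 * u.TeV, alpha=2.5,
                                      e_cutoff=57 * u.TeV)
ic = InverseCompton(electrons, seed_photon_fields=["CMB"])
syn = Synchrotron(electrons, B=1.06 * u.uG)

E = np.logspace(-7, 14, 100) * u.eV    # radio through gamma rays
sed_ic = ic.sed(E, distance=3 * u.kpc)
sed_syn = syn.sed(E, distance=3 * u.kpc)
print(ic.compute_We(Eemin=1 * u.GeV))  # electron energy above 1 GeV
\end{verbatim}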
SNRs with an age $\sim0.5\times 10^4$ yr in low-density environments (with number density $\lesssim 0.1$~cm$^{-3}$) and low magnetic fields are predicted to show gamma-ray fluxes dominated by IC emission with a hard GeV spectrum \citep{2012MNRAS.427...91O,2019ApJ...876...27Y}. Other SNRs with a very similar gamma-ray spectrum are the faint radio source G\,$279.0+1.1$ \citep{2020MNRAS.492.5980A}, whose age is unknown but which shows features of an evolved SNR, and the recently discovered dim SNR G\,$150.3+4.5$ \citep{2014A&A...567A..59G,2020A&A...643A..28D}. Other extended sources with similar GeV spectra, although with no known counterparts at other wavelengths, are G\,350.6--4.7 \citep{2018MNRAS.474..102A} and 2HWC\,J2006+341 \citep{2020ApJ...903L..14A}. Their GeV spectra are similar to those of some young SNRs \citep[SN 1006, RX J1713.7--3946, RCW 86, with ages of 1--2\,kyr,][]{2019PASJ...71...77X,2011ApJ...734...28A,2016ApJ...819...98A}. Another feature of G\,\ensuremath{17.8+16.7}{} that resembles a young SNR is its steep radio spectrum \citep[see][for examples of radio spectra]{2014Ap&SS.354..541U}. However, G\,\ensuremath{17.8+16.7}{} has fainter X-ray emission than young SNRs (Section~\ref{sec:xray}); deeper X-ray observations in this band would yield useful results, e.g. emission lines to help discriminate the progenitor, the properties of the plasma, the distance to the source and the possible presence of non-thermal emission from the highest-energy electrons. As pointed out before, recent simulations predict the SED of SNRs with ages of up to at least $5\,$kyr to be dominated by IC emission at gamma-ray energies with low synchrotron fluxes \citep{2019ApJ...876...27Y}. Such an age is more compatible with the predicted value ($10^4$\,yr) for a low-density environment mentioned earlier. It is interesting to compare the extension of the radio shell with that of the GeV emission. As seen in Fig. \ref{fig:tsmap}, the gamma-ray source is more extended than the radio source. Gamma-ray emitting SNRs typically show very similar radio and GeV angular diameters: GeV diameters of LAT-detected SNRs are found within $\sim0.3\degr$ and $\sim$20\% of their radio diameters \citep{2016ApJS..224....8A}. Taking the radius of the GeV disk in our fit as a measure of the radius of the gamma-ray source and its corresponding uncertainty, which results from combining the statistical uncertainty (reported in Table \ref{table:LAT}) with the systematic uncertainty in quadrature, the diameter of the gamma-ray source is in the range 1.03--1.21$\degr$, which is greater than the $\sim$0.8$\degr$ radio diameter by 0.23--0.41$\degr$ (29--51\% larger). Future observations with more statistics might reveal that the discrepancy could be similar to that observed for other SNRs, or that the sizes could indeed be considerably different. There could be several reasons for this. First, deeper radio observations could reveal a larger shell. This possibility has been proposed by \cite{2020MNRAS.492.5980A} to explain the discrepancy between the GeV and radio extensions of the SNR G\,$279.0+1.1$. However, G\,$279.0+1.1$ shows an incomplete shell, while that of G\,\ensuremath{17.8+16.7}{} seems well-defined. Another possibility is the diffusion of relativistic particles that produce gamma rays in the interstellar medium around the SNR. 
Recent simulations show that electrons could produce a halo of hard GeV emission around an SNR, although with a flux of only about 20 to 30\% of the SNR flux \citep{2021arXiv210810773B}, which would make a detection difficult. This scenario should be studied in the future with more data. Another possibility that would account for a discrepancy between the GeV and radio sizes is that part or all of the gamma-ray emission comes from a PWN that is perhaps associated with G\,\ensuremath{17.8+16.7}{}, while the radio emission is from the SNR. Gamma-ray emission from PWNe can be quite extended. However, no pulsars are reported in the Australia Telescope National Facility Pulsar Catalogue \citep{2005AJ....129.1993M} within $2.3\degr$ of the center of the SNR, and no composite SNRs are known for which the PWN is larger than the radio shell. Finally, we cannot rule out that a different gamma-ray source lies along the same line of sight as G\,\ensuremath{17.8+16.7}{}. We have found a slight discrepancy between the fitted GeV sizes at low and high energies (considering only their statistical uncertainties), which could point towards this latter scenario. Future studies with more statistics and additional multi-wavelength observations will be important to address this issue. The gamma-ray point source 4FGL~J\,$1722.8-0418$ shows a soft GeV spectrum. The spectrum of LAT pulsars is also usually soft, described by a power-law with an exponential cut-off. The spectral indices are typically found in the range 1--2 and the cut-off energies are of a few GeV \citep{Abdo_2013}. This is not the case for 4FGL~J\,$1722.8-0418$, for which we find a simple power-law spectrum in the 0.5--500 GeV energy range with an index of $\sim 2.44$. However, using classification algorithms, \cite{2016ApJ...820....8S} found that this source could be a pulsar (either a young or a millisecond pulsar) and that it is likely not associated with an active galactic nucleus. No radio emission was found by \cite{2021ApJ...914...42B} at the location of the point source. Assuming 4FGL~J\,$1722.8-0418$ is a pulsar associated with G\,\ensuremath{17.8+16.7}{} that has travelled away from the center of the SNR to its current location $0.94\degr$ away, the required pulsar transverse velocity is $2200$\,km\,s$^{-1}\,\left(\frac{d}{1.4\mbox{\tiny\,kpc}} \right)\,\left(\frac{t}{10\mbox{\tiny\,kyr}} \right)^{-1}$, with $t$ the time since the explosion. The mean transverse velocity of pulsars is $\sim 500\,$km\,s$^{-1}$ \citep{1997MNRAS.289..592L}, rarely exceeding $1000\,$km\,s$^{-1}$, although such a high velocity is not impossible. Therefore, for a high transverse velocity of $1000\,$km\,s$^{-1}$ and our distance range of 1.4--3.5\,kpc estimated earlier, an age for the SNR in the range 22--55\,kyr is required. If, on the other hand, no pulsar association is found for G\,\ensuremath{17.8+16.7}, it would be consistent with the source being the remnant of a type Ia supernova, while most pulsars are expected to be born in the Galactic plane. \section{Summary} We have found a non-thermal radio source at the location of the extended gamma-ray source 4FGL~J\,$1723.5-0501$e (FHES~J\,$1723.5-0501$), which we identify as a new SNR, G\,\ensuremath{17.8+16.7}. The size of its radio shell is $51'\times45'$. The gamma rays show a hard spectrum consistent with a leptonic (IC) model. This model predicts synchrotron emission from the corresponding particle distribution, whose spectrum is consistent with our measured radio spectrum. 
Such a scenario for the origin of the GeV emission is also expected in a low-density environment. This would be consistent with the location of the SNR, which is found outside the Galactic plane, where low ambient densities are expected. Comparing the radio and gamma-ray features of G\,\ensuremath{17.8+16.7} with those of the known SNR populations, we obtained a distance to the object in the range 1.4--3.5\,kpc. An SNR age of the order of 10\,kyr is compatible with the radio and GeV features, but an older or younger SNR cannot be ruled out. More multiwavelength observations of this object should be carried out. \section*{Acknowledgements} We thank the anonymous referee for useful comments that helped improve this work. MA and SQ received funding from Universidad de Costa Rica grant B8267 and acknowledge the use of computational resources from CICIMA-UCR. NHW is supported by an Australian Research Council Future Fellowship (project number FT190100231) funded by the Australian Government. We have made use of the ROSAT Data Archive of the Max-Planck-Institut f{\"u}r extraterrestrische Physik (MPE) at Garching, Germany. \section*{Data availability} The derived data generated in this research will be shared on request to the corresponding author. \section{Appendix} \begin{figure} \includegraphics[width=\textwidth]{all_radio.pdf} \caption{The radio data of this region as processed in Section~\ref{sec:radio}. From left to right, the top row shows raw data from NVSS, CHIPASS, and SPASS. The second row shows the NVSS (left) and CHIPASS (right) data after source-subtraction. The bottom row shows the result of feathering the NVSS and CHIPASS data together (left) and the source-subtracted SPASS data (right).} \label{fig:all_radio} \end{figure} \bibliographystyle{mnras}
\section{Introduction} \IEEEPARstart{V}{isual} object detection is one of the fundamental tasks in computer vision, and various general-purpose detectors~\cite{girshick2014rich,he2014spatial,girshick2015fast,liu2016ssd,redmon2016you,dai2016r,ren2017faster} based on convolutional neural networks (CNNs) have been devised. Promising results have been achieved on public benchmarks including MS COCO \cite{lin2014microsoft} and VOC2007 \cite{everingham2010pascal}. However, most existing detectors do not pay particular attention to some common aspects for robust object detection in the wild: small size, cluttered arrangement and arbitrary orientations. These challenges are especially pronounced for aerial images~\cite{xia2018dota,li2020object, cheng2016learning, liu2017high}, which have become an important area for detection in practice, given their various civil applications, e.g. resource detection, environmental monitoring, and urban planning. \begin{figure}[!tb] \centering \subfigure[Horizontal detection.]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=1.0\textwidth]{ship_h.png} \centering \label{fig:ship_h} \end{minipage}} \subfigure[Rotation detection.]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.0\textwidth]{ship_r.png} \centering \label{fig:ship_r} \end{minipage}} \caption{Small, cluttered and rotated objects in a complex scene, where rotation detection plays an important role. Red boxes indicate missed detections, which are suppressed by non-maximum suppression (NMS).} \label{fig:ship} \end{figure} In the context of remote sensing, we further present some specific discussion to motivate this paper, as shown in Fig. \ref{fig:ship}. It shall be noted that the following three aspects also prevail in other sources, e.g. natural images and scene texts. 1) \textbf{Small objects.} Aerial images often contain small objects overwhelmed by complex surrounding scenes. 2) \textbf{Cluttered arrangement.} Objects, e.g. vehicles and ships, in aerial images are often densely arranged, leading to inter-class feature coupling and intra-class feature boundary blur. 3) \textbf{Arbitrary orientations.} Objects in aerial images can appear in various orientations. Rotation detection is necessary, especially considering the high-aspect-ratio issue: the horizontal bounding box for a rotated object is looser than an aligned rotated one, such that the box contains a large portion of background or nearby cluttered objects as disturbance. Moreover, it will be greatly affected by non-maximum suppression, see Fig. \ref{fig:ship_h}. As described above, the small/cluttered objects problem can be interleaved with the rotation variance. In this paper, we aim to address the first challenge by seeking a new way of dismissing the noisy interference from both background and other foreground objects. For rotation alignment, a new rotation loss is devised accordingly. Both techniques can serve as plug-ins for existing detectors~\cite{ren2017faster, lin2017feature, lin2017focal, ma2018arbitrary, jiang2017r2cnn, yang2019r3det}, in an out-of-the-box manner. We give further description as follows. For small and cluttered object detection, we devise a denoising module; in fact, denoising has not previously been studied for object detection. 
We observe two common types of noise that are orthogonal to each other: i) image-level noise, which is object-agnostic, and ii) instance-level noise, often in the form of mutual interference between objects, as well as background interference. Such noises are ubiquitous and pronounced in aerial images, which are remotely sensed. In fact, denoising has been a long-standing task~\cite{tian2019deep,xie2019feature,milani2012adaptive,cho2019dapas} in image processing, yet existing methods are rarely designed for object detection, and the denoising is typically performed on the raw image for the purpose of image enhancement rather than for downstream semantic tasks, especially in an end-to-end manner. In this paper, we explore performing instance-level denoising (InLD), particularly in the feature map (i.e. the latent layers' outputs of CNNs), for robust detection. The hope is to reduce the inter-class feature coupling and intra-class interference, while blocking background interference. To this end, a novel InLD component is designed to approximately decouple the features of different object categories into their respective channels. Meanwhile, in the spatial domain, the features of the object and background are enhanced and weakened, respectively. It is worth noting that the above idea is conceptually similar to but inherently different from the recent efforts~\cite{xie2019feature, cho2019dapas} on image-level feature map denoising (ImLD), which is used as a way of enhancing an image recognition model's robustness against attack, rather than for location-sensitive object detection. Readers are referred to Table \ref{table:ImLD_and_InLD} for a quick verification that our InLD can more effectively improve detection than ImLD for both horizontal and rotation cases. On the other hand, as a problem closely interleaved with small/cluttered object detection, accurate rotation estimation is addressed by devising a novel IoU-Smooth L1 loss. It is motivated by the fact that existing state-of-the-art regression-based rotation detection methods, e.g. five-parameter regression~\cite{azimi2018towards, ding2018learning, yang2019r3det, zhang2019cad}, suffer from the issue of discontinuous boundaries, which is inherently caused by the periodicity of angular (PoA) and exchangeability of edges (EoE) \cite{yang2020arbitrary} (see details in Sec.~\ref{sec:rod}). We conduct extensive ablation studies and experiments on multiple datasets, including aerial images from DOTA \cite{xia2018dota}, DIOR \cite{li2020object}, UCAS-AOD \cite{li2019feature}, as well as the natural image dataset COCO \cite{lin2014microsoft}, the scene text dataset ICDAR2015 \cite{karatzas2015icdar}, the small traffic light dataset BSTLD \cite{behrendt2017deep} and our newly released S$^2$TLD, to illustrate the promising effects of our techniques. \begin{figure*}[!tb] \begin{center} \includegraphics[width=0.85\linewidth]{pipeline.png} \end{center} \caption{The pipeline of our method (using RetinaNet~\cite{lin2017focal} as an embodiment). Our SCRDet++ mainly consists of four modules: a basic embodiment for feature extraction, an image-level denoising module for removing common image noise, an instance-level denoising module for suppressing instance noise (i.e., inter-class feature coupling and distraction between intra-class objects and background), and the `class+box' branch for predicting the classification score and bounding box position. 
`C' and `A' represent the number of object categories and the number of anchors at each feature point, respectively.} \label{fig:pipeline} \end{figure*} The preliminary content of this paper has partially appeared in the conference version~\cite{yang2019scrdet}\footnote{Compared with the conference version, this journal version has made the following extensions: i) we take a novel feature map denoising perspective on the small and cluttered object detection problem, and specifically devise a new instance-level feature denoising technique for detecting small and cluttered objects with little additional computation and parameter overhead; ii) we conduct a comprehensive ablation study across datasets of our instance-level feature denoising component, which can be easily plugged into existing detectors. Our new method significantly outperforms our previous detector in the conference version (e.g. overall detection accuracy 72.61\% versus 76.81\%, and 75.35\% versus 79.35\% on the OBB and HBB task of the DOTA dataset, respectively); iii) we collect, annotate and release a new small traffic light dataset (5,786 images with 14,130 traffic light instances across five categories) to further verify the versatility and generalization performance of the instance-level denoising module; iv) last but not least, the paper has been largely rephrased and expanded to cover the discussion of up-to-date works including those on image denoising and small object detection. Finally, the source code is released.}, with the detector named SCRDet (Small, Cluttered, and Rotated Object Detector). In this journal version, we present our extended and improved detector, called SCRDet++. The overall contributions are: 1) To our best knowledge, we are the first to develop the concept of instance-level noise (at least in the context of object detection), and design a novel Instance-Level Denoising (InLD) module in the feature map. This is realized by supervised segmentation, whose ground truth is approximately obtained from the bounding boxes in object detection. The proposed module effectively addresses the challenges of detecting objects of small size, arbitrary orientation, and dense distribution, with little increase in computation and parameters. 2) Towards more robust handling of arbitrarily-rotated objects, an improved smooth L1 loss is devised by adding an IoU constant factor, which is tailored to solve the boundary problem of rotated bounding box regression. 3) We create and release a real-world traffic light dataset: S$^2$TLD. It consists of 5,786 images with 14,130 traffic light instances across five categories: red, green, yellow, off and wait on. It further verifies the effectiveness of InLD, and it is available at \url{https://github.com/Thinklab-SJTU/S2TLD}. 4) Our method achieves state-of-the-art performance on public datasets for rotation detection in complex scenes like aerial images. Experiments also show that our InLD module, which can be easily plugged into existing architectures, can notably improve detection on different tasks. \section{Related Work}\label{sec:related} We first discuss existing detectors for both horizontal bounding box based detection and rotation detection. Then some representative works on image denoising and small object detection are also introduced. \subsection{Horizontal Region Object Detection} There is an emerging line of deep network based object detectors. R-CNN \cite{girshick2014rich} pioneers the CNN-based detection pipeline. 
Subsequently, region-based models such as Fast R-CNN \cite{girshick2015fast}, Faster R-CNN \cite{ren2017faster}, and R-FCN \cite{dai2016r} are proposed, which achieve more cost-effective detection. SSD \cite{liu2016ssd}, YOLO \cite{redmon2016you} and RetinaNet \cite{lin2017focal} are representative single-stage methods, and their single-stage structure further improves detection speed. In addition to anchor-based methods, many anchor-free methods have also become popular in recent years. FCOS \cite{tian2019fcos}, CornerNet \cite{law2018cornernet}, CenterNet \cite{duan2019centernet} and ExtremeNet \cite{zhou2019bottom} attempt to predict some keypoints of objects, such as corners or extreme points, which are then grouped into bounding boxes, and these detectors have also been applied to the field of remote sensing~\cite{wei2019oriented,xiao2020axis}. R-P-Faster R-CNN \cite{han2017efficient} achieves satisfactory performance on small datasets. The method of \cite{xu2017deformable} combines deformable convolution layers \cite{dai2017deformable} and region-based fully convolutional networks (R-FCN) to further improve detection accuracy. The work \cite{ren2018deformable} adopts top-down and skip connections to produce a single high-level feature map of a fine resolution, improving the performance of the deformable Faster R-CNN model. IoU-Adaptive R-CNN \cite{yan2019iou} reduces the loss of small object information with a new IoU-guided detection network. FMSSD \cite{wang2019fmssd} aggregates context information from feature maps both across multiple scales and within the same scale. However, objects in aerial images with small size, cluttered distribution and arbitrary rotation are still challenging, especially for horizontal region detection methods. \subsection{Arbitrary-Oriented Object Detection} The demand for rotation detection has been increasing recently, e.g. for aerial images and scene texts. Recent advances are mainly driven by the adoption of rotated bounding boxes or quadrangles to represent multi-oriented objects. For scene text detection, RRPN \cite{ma2018arbitrary} employs a rotated RPN to generate rotated proposals and further performs rotated bounding box regression. TextBoxes++ \cite{liao2018textboxes++} adopts vertex regression on SSD. RRD \cite{liao2018rotation} further improves TextBoxes++ by decoupling classification and bounding box regression on rotation-invariant and rotation-sensitive features, respectively. EAST \cite{zhou2017east} directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps with a single neural network. Recent text spotting methods like FOTS \cite{liu2018fots} show that training text detection and recognition simultaneously can greatly boost detection performance. In contrast, aerial image object detection is more challenging: first, multi-category object detection requires strong generalization of the detector; second, small objects in aerial images are usually densely arranged on a large scale; third, aerial image detection requires a more robust algorithm due to the variety of noise. Many aerial image rotation detection algorithms are designed for these different problems. ICN \cite{azimi2018towards}, ROI Transformer~\cite{ding2018learning}, and SCRDet~\cite{yang2019scrdet} are representative two-stage aerial image rotation detectors, which are mainly designed from the perspective of feature extraction. 
From the results, they have achieved good performance in small or dense object detection. Compared to the previous methods, R$^3$Det~\cite{yang2019r3det} and RSDet~\cite{qian2019learning} are based on single-stage detection methods, which pay more attention to the trade-off between accuracy and speed. Gliding Vertex \cite{xu2020gliding} and RSDet~\cite{qian2019learning} achieve more accurate object detection via quadrilateral regression prediction. Axis Learning \cite{xiao2020axis} and O$^2$-DNet \cite{wei2019oriented} build on the recently popular anchor-free ideas to overcome the problem of too many anchors in anchor-based detection methods. \subsection{Image Denoising} Deep learning has attracted much attention in image denoising. The survey \cite{tian2019deep} divides image denoising using CNNs into four types (see the references therein): 1) additive white noisy images; 2) real noisy images; 3) blind denoising; and 4) hybrid noisy images, i.e. the combination of noisy, blurred and low-resolution images. In addition, image denoising also helps to improve the performance of other computer vision tasks, such as image classification \cite{xie2019feature}, object detection \cite{milani2012adaptive}, semantic segmentation \cite{cho2019dapas}, etc. Beyond image noise, we find that there is also instance noise in the field of object detection. Instance noise describes object-aware noise, which is more widespread in object detection than object-agnostic image noise. In this paper, we will explore the application of image-level denoising and instance-level denoising techniques to object detection in complex scenes. \subsection{Small Object Detection} Small object detection remains an unsolved challenge. Common solutions for small objects include data augmentation \cite{kisantal2019augmentation}, multi-scale feature fusion \cite{lin2017feature, deng2020extended}, tailored sampling strategies \cite{zhu2018seeing, liu2019hambox, yang2019scrdet}, generative adversarial networks \cite{li2017perceptual}, and multi-scale training \cite{singh2018sniper}, etc. In this paper, we show that denoising is also an effective means to improve the detection performance of small objects. In complex scenes, the feature information of small objects is often overwhelmed by the background area, which often contains a large number of similar objects. Unlike ordinary image-level denoising, we will use instance-level denoising to improve the detection capabilities of small objects, which is a new perspective. This paper mainly considers designing a general-purpose instance-level feature denoising module to boost the performance of horizontal detection and rotation detection in challenging aerial imagery, as well as natural images and scene texts. Besides, we also design an IoU-Smooth L1 loss to solve the boundary problem of arbitrary-oriented object detection for more accurate rotation estimation. \section{The Proposed Method}\label{sec:method} \subsection{Approach Overview} Figure \ref{fig:pipeline} illustrates the pipeline of the proposed SCRDet++. 
It mainly consists of four modules: i) feature extraction via CNNs, which can take different forms from existing detectors, e.g.~\cite{girshick2014rich,liu2016ssd}; ii) an image-level denoising (ImLD) module for removing common image noise, which is optional as its effect can be well covered by the subsequent InLD devised in this paper; iii) an instance-level denoising (InLD) module for suppressing instance noise (i.e., inter-class feature coupling and distraction between intra-class objects and background); and iv) the class and box branch for predicting the score and (rotated) bounding box. Specifically, we first describe our main technique, i.e. the instance-level denoising (InLD) module, in Sec. \ref{subsec:InLD}, which also contains a comparison with the image-level denoising (ImLD) module. Finally, we detail the network learning, which involves a specially designed smooth loss for rotation estimation, in Sec. \ref{subsec:learning}. Note that in experiments we show that InLD can replace ImLD and play a more effective role for detection, making ImLD a dispensable component in our pipeline. \subsection{Instance-level Feature Map Denoising}\label{subsec:InLD} In this subsection, we present our devised instance-level feature map denoising approach. To emphasize the importance of our instance-level operation, we further compare it with image-level denoising in the feature map, which has also been adopted for robust image recognition model learning in \cite{xie2019feature}. To our best knowledge, our approach is the first to use (instance-level) feature map denoising for object detection. The denoising module can be learned in an end-to-end manner together with the other modules, and is optimized for the object detection task. \subsubsection{Instance-Level Noise}\label{subsec:inld_noise} Instance-level noise generally refers to the mutual interference among objects, as well as interference from the background. We discuss its properties in the following aspects. In particular, as shown in Fig. \ref{fig:denoise_visualize}, the adverse effect on object detection is especially pronounced in the feature map, which calls for denoising in feature space rather than on the raw input image. 1) Non-objects with object-like shapes have high responses in the feature map, especially for small objects (see the top row of Fig. \ref{fig:denoise_visualize}). 2) Cluttered objects that are densely arranged tend to suffer from inter-class feature coupling and intra-class feature boundary blurring (see the middle row of Fig. \ref{fig:denoise_visualize}). 3) The response of an object is not prominent enough when surrounded by background (see the bottom row of Fig. \ref{fig:denoise_visualize}). \subsubsection{Mathematical Modeling of Instance-Level Denoising} \label{sec:inld_fundamental} To dismiss instance-level noise, one can generally refer to the idea of the attention mechanism, a common way of re-weighting the convolutional response maps to highlight the important parts and suppress the uninformative ones, such as spatial attention \cite{wang2018non} and channel-wise attention \cite{hu2018squeeze}. 
We show that existing aerial image rotation detectors, including FADet \cite{li2019feature}, SCRDet \cite{yang2019scrdet} and CAD-Det \cite{zhang2019cad}, often use the simple attention mechanism to re-weight the output, which can be reduced to the following general form: \begin{equation} \begin{aligned} \mathbf{Y} &= \mathcal{A}(\mathbf{X})\odot\mathbf{X} \\ &= \mathbf{W}_{s} \odot \mathbf{X} \odot \mathbf{W}_{c} \\ &= \mathbf{W}_{s} \odot \bigcup_{i=1}^{C}\mathbf{x}_{i} \cdot w^{i}_{c} \label{eq:attention} \end{aligned} \end{equation} where $\mathbf{X}, \mathbf{Y} \in \mathbb{R}^{C \times H \times W}$ represent two feature maps of the input image. The attention function $\mathcal{A}(\mathbf{X})$ refers to the proposal output by a certain attention module, e.g. \cite{wang2018non, hu2018squeeze}. Note $\odot$ is the element-wise product. $\mathbf{W}_{s} \in \mathbb{R}^{ H \times W}$ and $\mathbf{W}_{c} \in \mathbb{R}^{C}$ denote the spatial weight and channel weight, respectively. $w^{i}_{c}$ indicates the weight of the $i$-th channel. Throughout the paper, $\bigcup$ means the concatenation operation for connecting tensors along the feature map's channels. \begin{figure}[!tb] \centering \subfigure{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{scr_a.jpg}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{scr_b.jpg}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{scr_c.jpg} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{false_alarm.png}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{blurred_borders.png}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{low_response.png} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{denoise_a.jpg}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{denoise_b.jpg}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{denoise_c.jpg} \end{minipage}} \caption{Images (left) and their feature maps before (middle) and after (right) the instance-level denoising operation. First row: non-object with object-like shape. Second row: inter-class feature coupling and intra-class feature boundary blurring. Third row: weak feature response.} \label{fig:denoise_visualize} \end{figure} \begin{figure}[!t] \centering \subfigure{ \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth]{noise_a.png}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{noise_b.png} \end{minipage} \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth]{noise_e.png}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{noise_f.png} \end{minipage} \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth]{noise_c.png}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{noise_d.png} \end{minipage} \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth]{noise_g.png}\vspace{0.04cm} \centering \includegraphics[width=1.0\textwidth]{noise_h.png} \end{minipage}} \caption{Feature maps corresponding to clean images (top) and to their noisy versions (bottom). The noise is randomly generated by a Gaussian function with a mean value of 0 and a variance of 0.005. The first and third columns: images; the remaining columns: feature maps. 
The contrast between foreground and background in the feature map of the clean image is more obvious (second column), and the boundaries between dense objects are clearer (fourth column).} \label{fig:noise_visualize} \end{figure} However, Eq.~\ref{eq:attention} simply distinguishes the feature response between objects and background in the spatial domain, and $w^{i}_{c}$ is only used to measure the importance of each channel. In other words, the interaction between intra-class objects and inter-class objects is not considered, which is important for detection in complex scenes. We aim to devise a new network that can not only distinguish objects from background, but also weaken the mutual interference among objects. Specifically, we propose adding an instance-level denoising (InLD) module at intermediate layers of convolutional networks. The key is to decouple the features of different object categories into their respective channels, while the features of objects and background are enhanced and weakened in the spatial domain, respectively. As a result, our new formulation is as follows, which considers in total $I$ object categories with one additional category for the background: \begin{equation} \begin{aligned} \mathbf{Y} &= \mathcal{D}_{InLD}(\mathbf{X})\odot\mathbf{X} \\ & = \mathbf{W}_{InLD} \odot \mathbf{X} \\ &= \bigcup_{i=1}^{I+1}\mathbf{W}_{InLD}^{i} \odot \mathbf{X}^{i} \\ &= \bigcup_{i=1}^{I+1}\bigcup_{j=1}^{C_{i}} \mathbf{w}^{i}_{j} \odot \mathbf{x}^{i}_{j} \label{eq:denoising} \end{aligned} \end{equation} where $\mathbf{W}_{InLD} \in \mathbb{R}^{C \times H \times W}$ is a hierarchical weight. $\mathbf{W}_{InLD}^{i} \in \mathbb{R}^{C_{i} \times H \times W}$ and $\mathbf{X}^{i} \in \mathbb{R}^{C_{i} \times H \times W}$ represent the weight and feature response corresponding to the $i$-th category, whose channel number is denoted by $C_{i}$, where $C=\sum_{i=1}^{I}C_{i} + C_{bg}$. $\mathbf{w}^{i}_{j}$ and $\mathbf{x}^{i}_{j}$ represent the weight and feature of the $i$-th category along the $j$-th channel, respectively. As can be seen from Eq.~\ref{eq:attention} and Eq.~\ref{eq:denoising}, $\mathcal{D}_{InLD}(\mathbf{X})$ can be approximated as a combination of multiple $\mathcal{A}^{i}(\mathbf{X}^{i})$, which denotes the attention function of category $i$. Thus we have: \begin{equation} \begin{aligned} \mathbf{Y} & = \mathcal{D}_{InLD}(\mathbf{X})\odot\mathbf{X} \\ &= \bigcup_{i=1}^{I+1}\mathcal{A}^{i}(\mathbf{X}^{i}) \odot \mathbf{X}^{i} \label{eq:DandA} \end{aligned} \end{equation} Without loss of generality, consider an image containing objects belonging to the first $I_0$ ($I_0\leq I$) categories. In this paper, we aim to decouple the above formula into three concatenated parts (see Fig.~\ref{fig:feature_denoising}): \begin{equation} \mathbf{Y} = \underbrace{\bigcup_{i=1}^{I_0}\bigcup_{p=1}^{C_{i}} \mathbf{w}^{i}_{p} \odot \mathbf{x}^{i}_{p}}_{\text{categories in image}} \cup \underbrace{\bigcup_{j=I_0+1}^{I}\bigcup_{q=1}^{C_{j}} \mathbf{w}^{j}_{q} \odot \mathbf{x}^{j}_{q}}_{\text{categories not in image}} \cup \underbrace{\bigcup_{k=1}^{C_{bg}} \mathbf{w}^{bg}_{k} \odot \mathbf{x}^{bg}_{k}}_{\text{background}} \label{eq:denoising_} \end{equation} For the background and the categories not present in the image, ideally the response is filtered by our devised denoising module to be as small as possible. 
From this perspective, Eq.~\ref{eq:denoising_} can be further interpreted as: \begin{equation} \begin{aligned} \mathbf{Y} &= \underbrace{\bigcup_{i=1}^{I_0}\bigcup_{p=1}^{C_{i}} \mathbf{w}^{i}_{p} \odot \mathbf{x}^{i}_{p}}_{\text{categories in image}} \cup \underbrace{\bigcup_{j=I_0+1}^{I}\mathcal{O}_{j}}_{\text{categories not in image}} \cup \underbrace{\mathcal{O}_{bg}}_{\text{background}} \label{eq:denoising_final} \end{aligned} \end{equation} where $\mathcal{O}$ denotes a tensor with the small feature response one aims to achieve, for each absent category ($\mathcal{O}_{j}$) and the background ($\mathcal{O}_{bg}$). In the following subsection, we show how to achieve the above decoupled feature learning among categories. \begin{figure}[!tb] \begin{center} \includegraphics[width=1.0\linewidth]{feature_denoising.png} \end{center} \caption{Feature map with decoupled category-specific feature signals along channels. The abbreviations `HA', `SP', `SH', and `SV' indicate `Harbor', `Swimming pool', `Ship', and `Small vehicle', respectively. `Others' include background and unseen categories that do not appear in the image. Features of different categories are decoupled into their respective channels (top and middle), while the features of object and background are enhanced and suppressed in the spatial domain, respectively (bottom).} \label{fig:feature_denoising} \end{figure} \begin{table}[tb!] \centering \caption{Ablative study of five image-level denoising settings as used in \cite{xie2019feature} on the OBB task of the DOTA dataset.} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c} \hline Base Model & Image-Level Denoising & mAP (\%)\\ \hline \multirow{6}{*}{R$^3$Det \cite{yang2019r3det}} & none & 65.73\\\cline{2-3} & bilateral, dot prod & 66.94\\ & bilateral, gaussian & 67.03\\ & nonlocal, dot prod & 66.82\\ & nonlocal, gaussian & \bf{67.68}\\ & nonlocal, gaussian, 3x3 mean & 66.88\\ \hline \end{tabular}} \label{table:ImLD_Ablation_Study} \end{table} \subsubsection{Implementation of Instance-Level Denoising} \label{sec:inld_implement} Based on the above derivations, we devise a practical neural network based implementation. Our analysis starts with the simplest case with a single channel for each category's weight $\mathbf{W}_{InLD}^{i}$ in Eq.~\ref{eq:denoising}, namely $C_{i}=1$. In this setting, the learned weight $\mathbf{W}_{InLD}$ can be regarded as the result of semantic segmentation of the image for specific categories (a three-dimensional one-hot vector). Then more channels of the weight $\mathbf{W}_{InLD}$ in $\mathcal{D}_{InLD}$ can be guided by semantic segmentation, as illustrated in Fig. \ref{fig:pipeline} and Fig. \ref{fig:feature_denoising}. In the semantic segmentation task, the feature responses of each category in the layers preceding the output layer tend to be separated in the channel dimension, and the feature responses of the foreground and background are also polarized in the spatial dimension. Hence one can adopt a semantic segmentation network for the operations in Eq.~\ref{eq:denoising_final}. Another advantage of holding this semantic segmentation view is that it can be conducted in an end-to-end supervised fashion, whose learned denoising weights can be more reliable and effective than self-attention based alternatives \cite{wang2018non,hu2018squeeze}. In Fig. \ref{fig:pipeline}, we give a specific implementation as follows. The input feature map expands the receptive field by $N$ dilated convolutions \cite{yu2015multi} and a $1\times1$ convolutional layer at first. 
For instance, $N$ takes the values $\{1, 1, 1, 1, 1\}$ on pyramid levels P3 to P7, respectively, as set in our experiments. The feature map is then processed by two parallel $1\times1$ convolutional layers to obtain two important outputs. One output (a three-dimensional one-hot feature map) is used to perform coarse multi-class segmentation, and the annotated bounding boxes in detection tasks can be used as the approximate ground truth. The hope is that this output will guide the other output into a denoising feature map. As shown in Fig. \ref{fig:feature_denoising}, this denoising feature map and the original feature map are combined (by element-wise multiplication) to obtain the final decoupled feature map. The purpose is two-fold: along the channel dimension, the inter-class feature responses of different object categories (excluding the background) are basically decoupled into their respective channels; in the spatial dimension, intra-class feature boundaries are sharpened because the feature response of the object area is enhanced while that of the background is weakened. As such, the three issues raised at the beginning of this subsection are alleviated. As shown in the upper right corner of Fig. \ref{fig:pipeline}, the classification model is decomposed into two terms: objectness and category classification, written as: \begin{equation} P(class_{i}, object) = \underbrace{P(class_{i}|object)}_{\text{category classification}} * \underbrace{P(object)}_{\text{objectness}} \label{eq:objectness} \end{equation} The probability map $P(object)$ indicates whether the anchor at each feature point corresponds to an object. The above decoupled features are directly used for object classification $P(class_{i}|object)$ (as well as rotation regression, which will be discussed in Sec.~\ref{subsec:learning}). During training, the probability map $P(object)$ will be used as a weight for the regression loss (see Eq.~\ref{eq:multitask_loss_h}), so that ambiguous positive samples receive smaller weights and higher-quality positive samples receive more attention. We find in experiments that the introduction of the probability map can speed up the convergence of the model and improve the detection results, as shown in Table \ref{table:InLD_Ablative_Study}. \subsubsection{Comparison with Image-Level Denoising}\label{subsec:ImLD} Image denoising is a fundamental task in image processing, which may have a notable impact on image recognition, as recently studied and verified in~\cite{xie2019feature}. Specifically, the work~\cite{xie2019feature} shows that the transformations performed by the network layers exacerbate the perturbation, and the hallucinated activations can overwhelm the activations due to the true signal, which leads to worse predictions. Here we also study this issue in the context of aerial images, though we directly borrow the image-level denoising model~\cite{xie2019feature}. As shown in Fig. \ref{fig:noise_visualize}, we add Gaussian noise to the raw aerial images and compare them with the clean ones. The same feature maps on clean and noisy images, extracted from the same channel of a res3 block in the same detection network trained on clean images, are visualized. Though the noise has little visible effect and is difficult to distinguish with the naked eye, it becomes more obvious in the feature map, such that the objects are gradually submerged in the background or the boundaries between the objects tend to be blurred. 
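As an aside, we provide a minimal PyTorch-style sketch of the InLD implementation described in Sec.~\ref{sec:inld_implement}; the module and variable names are ours for illustration, and this simplified sketch is not the released implementation.
\begin{verbatim}
# Simplified sketch of the InLD module (names are ours; illustrative,
# not the released implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class InLD(nn.Module):
    def __init__(self, in_ch, num_classes, dilation=1):
        super().__init__()
        # dilated 3x3 conv + 1x1 conv to enlarge the receptive field
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                      dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch, 1), nn.ReLU(inplace=True))
        # two parallel 1x1 heads: coarse segmentation and weights
        self.seg_head = nn.Conv2d(in_ch, num_classes + 1, 1)  # +1 bg
        self.weight_head = nn.Conv2d(in_ch, in_ch, 1)

    def forward(self, x):
        h = self.context(x)
        seg_logits = self.seg_head(h)           # supervised by masks
        w = torch.sigmoid(self.weight_head(h))  # W_InLD weights
        return x * w, seg_logits                # decoupled features

# training-time usage, with a coarse mask rasterized from the boxes:
# feats_dn, seg_logits = inld(feats)
# loss_inld = F.cross_entropy(seg_logits, box_mask)  # pixel-wise CE
\end{verbatim}
The segmentation head is supervised with the box-derived masks, while the sigmoid-gated weights play the role of $\mathbf{W}_{InLD}$ multiplied onto the input features; the exact gating and channel grouping in the released code may differ from this sketch.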
Since the convolution operation and traditional denoising filters are highly correlated, we resort to a potential solution \cite{xie2019feature}, which employs convolutional layers to simulate different types of denoising filters, such as non-local means, bilateral filtering, mean filtering, and median filtering. Inspired by the success of these operations in defending against adversarial attacks \cite{xie2019feature}, in this paper we migrate and extend these denoising operations to object detection. We show the generic form of ImLD in Fig. \ref{fig:pipeline}. It processes the input features by a denoising operation, such as non-local means or other variants. The denoised representation is first processed by a $1\times1$ convolutional layer, and then added to the module's input via a residual connection. The ImLD operation is expressed as follows: \begin{equation} \begin{aligned} \mathbf{Y} &= \mathcal{F}(\mathbf{X}) + \mathbf{X} \label{eq:ImLD_EQ} \end{aligned} \end{equation} where $\mathcal{F}(\mathbf{X})$ is the output of a certain filter. $\mathbf{X}$, $\mathbf{Y} \in \mathbb{R}^{C \times H \times W}$ represent the whole feature maps of the input image. The effect of the imposed denoising module is shown in Table~\ref{table:ImLD_Ablation_Study}. In the following, we further show that the more notable detection improvement comes from the InLD module, whose effect can well cover that of the image-level one. \subsection{Loss Function Design and Learning}\label{subsec:learning} \subsubsection{Horizontal Object Detection} Horizontal and rotation detection settings are both considered. For rotation detection, we need to redefine the representation of the bounding box. Fig. \ref{fig:90} shows the rectangle definition with a 90-degree angle representation range \cite{yang2018automatic, yang2019scrdet,yang2019r3det,qian2019learning,yang2018position}. $\theta$ denotes the acute angle to the x-axis, and we refer to the corresponding side as $w$. Note this definition is also officially adopted by OpenCV (\url{https://opencv.org/}). The regression of the bounding box is given by: \begin{equation} \begin{aligned} t_{x}&=(x-x_{a})/w_{a}, t_{y}=(y-y_{a})/h_{a} \\ t_{w}&=\log(w/w_{a}), t_{h}=\log(h/h_{a}), \\ t_{\theta}&=\theta-\theta_{a} \quad (only\;for\;rotation\;detection) \label{eq:regression1} \end{aligned} \end{equation} \begin{equation} \begin{aligned} t_{x}^{'}&=(x_{}^{'}-x_{a})/w_{a}, t_{y}^{'}=(y_{}^{'}-y_{a})/h_{a} \\ t_{w}^{'}&=\log(w_{}^{'}/w_{a}), t_{h}^{'}=\log(h_{}^{'}/h_{a}),\\ t_{\theta}^{'}&=\theta_{}^{'}-\theta_{a} \quad (only\;for\;rotation\;detection) \label{eq:regression2} \end{aligned} \end{equation} where $x$, $y$, $w$, $h$, $\theta$ denote the box's center coordinates, width, height and angle, respectively. Variables $x, x_{a}, x^{'}$ are for the ground-truth box, anchor box, and predicted box, respectively (likewise for $y,w,h,\theta$). For horizontal detection, the multi-task loss is used, which is defined as follows: \begin{equation} \begin{aligned} L_{h} = &\frac{\lambda_{reg}}{N}\sum_{n=1}^{N}t_{n}^{'} \cdot p(object_{n})\sum_{j\in\{x,y,w,h\}}L_{reg}(v_{nj}^{'},v_{nj}) \\ & + \frac{\lambda_{cls}}{N}\sum_{n=1}^{N}L_{cls}(p_{n},t_{n}) \\ & + \frac{\lambda_{InLD}}{h\times w}\sum_{i}^{h}\sum_{j}^{w}L_{InLD}(u_{ij}^{'},u_{ij}) \label{eq:multitask_loss_h} \end{aligned} \end{equation} where $N$ indicates the number of anchors, and $t_{n}^{'}$ is a binary value ($t_{n}^{'}=1$ for foreground and $t_{n}^{'}=0$ for background, with no regression for background). 
$p(object_{n})$ indicates the probability that the current anchor is an object. $v_{nj}^{'}$ denotes the predicted offset vector of the $n$-th anchor, and $v_{nj}$ is the target vector between the $n$-th anchor and the ground truth it matches. $t_{n}$ represents the label of the object, and $p_{n}$ is the probability distribution over the classes calculated by the sigmoid function. $u_{ij}$ and $u_{ij}^{'}$ denote the label and prediction of the mask's pixel, respectively. The hyper-parameters $\lambda_{reg}$, $\lambda_{cls}$ and $\lambda_{InLD}$ control the trade-off and are set to 1 by default. The classification loss $L_{cls}$ is the focal loss \cite{lin2017focal}. The regression loss $L_{reg}$ is the smooth L1 loss as defined in \cite{girshick2015fast}, and the InLD loss $L_{InLD}$ is the pixel-wise softmax cross-entropy. \begin{figure}[!tb] \centering \subfigure{ \begin{minipage}[t]{0.9\linewidth} \centering \includegraphics[width=1.0\textwidth]{90_.png} \centering \label{fig:90} \end{minipage}} \\ \vspace{-15pt} \caption{Rotation box definition (OpenCV definition). $\theta$ denotes the acute angle to the x-axis, and we refer to the other side as $w$. The range of angle representation is $[-90,0)$.} \label{fig:definition} \vspace{-10pt} \end{figure} \begin{figure}[!tb] \centering \subfigure[Ideal case.]{ \begin{minipage}[t]{0.58\linewidth} \centering \includegraphics[width=1.0\textwidth]{example1.png} \centering \label{fig:example1} \end{minipage}} \subfigure[Actual case.]{ \begin{minipage}[t]{0.35\linewidth} \centering \includegraphics[width=1.0\textwidth]{example2.png} \centering \label{fig:example2} \end{minipage}} \caption{Boundary discontinuity of angle regression. Blue, green, and red bounding boxes denote the anchor/proposal, ground-truth, and prediction box, respectively.} \label{fig:example} \end{figure} \begin{figure}[!tb] \centering \subfigure[Smooth L1 loss.]{ \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=1.0\textwidth]{bad_case.png} \centering \label{fig:bad_case} \end{minipage}} \subfigure[IoU-smooth L1 loss.]{ \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=1.0\textwidth]{good_case.png} \centering \label{fig:good_case} \end{minipage}} \caption{Detection results by two losses. For this dense arrangement case, the angle estimation error also makes classification even harder.} \label{fig:cases} \end{figure} \begin{table}[tb!] \centering \caption{Ablative study of the speed and accuracy of InLD on the OBB task of DOTA. Binary-Mask and Multi-Mask refer to binary and multi-class semantic segmentation, respectively. Coproduct denotes whether the objectness term $P(object)$ in Eq.~\ref{eq:objectness} is multiplied.} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c} \hline Base Model & Mask Type & Coproduct & FPS & mAP (\%)\\ \hline \multirow{4}{*}{R$^3$Det \cite{yang2019r3det}} & null & $\times$ & 14 & 65.73\\ & Binary-Mask & $\times$ & 13.5 & 68.12\\ & Multi-Mask & $\times$ & 13 & 69.43\\ & Multi-Mask & $\surd$ & 13 & \textbf{69.81}\\ \hline \end{tabular}} \label{table:InLD_Ablative_Study} \end{table} \subsubsection{Rotation Object Detection} \label{sec:rod} In contrast, rotation detection needs to carefully address the boundary problem. In particular, there exists a boundary problem for the angle regression, as shown in Fig.~\ref{fig:example1}. It shows an ideal form of regression (the blue box rotates counterclockwise to the red box), but the loss in this situation is very large due to the periodicity of angular (PoA) and the exchangeability of edges (EoE). 
Therefore, the model has to regress in other, more complex forms as in Fig.~\ref{fig:example2} (e.g. the blue box rotating clockwise while scaling $w$ and $h$), which increases the difficulty of regression, as shown in Fig.~\ref{fig:bad_case}. We introduce the IoU constant factor $\frac{|-\log(IoU)|}{|L_{reg}(v_{j}^{'},v_{j})|}$ into the traditional smooth L1 loss to effectively solve this problem, as shown in Eq. \ref{eq:multitask_loss_r}. This new loss function is named the IoU-smooth L1 loss. It can be seen that in the boundary case, the loss function is approximately equal to $|-\log(IoU)|\approx0$, eliminating the sudden increase in loss caused by $|L_{reg}(v_{j}^{'},v_{j})|$, as shown in Fig.~\ref{fig:good_case}. The new regression loss can be divided into two parts: $\frac{L_{reg}(v_{j}^{'},v_{j})}{|L_{reg}(v_{j}^{'},v_{j})|}$ determines the direction of gradient propagation, and $|-\log(IoU)|$ determines the magnitude of the gradient. In addition, using IoU to optimize location accuracy is consistent with the IoU-dominated metric, which is more straightforward and effective than coordinate regression. (A minimal code sketch of this loss is given before the corresponding ablation in Section~\ref{sec:experiment}.) \begin{equation} \begin{aligned} L_{r} =& \frac{\lambda_{reg}}{N}\sum_{n=1}^{N}t_{n}^{'} \cdot p(object_{n}) \\ & \cdot \sum_{j\in\{x,y,w,h,\theta\}}\frac{L_{reg}(v_{nj}^{'},v_{nj})}{|L_{reg}(v_{nj}^{'},v_{nj})|}\left|-\log(IoU)\right| \\ & + \frac{\lambda_{cls}}{N}\sum_{n=1}^{N}L_{cls}(p_{n},t_{n}) \\ & + \frac{\lambda_{InLD}}{h\times w}\sum_{i}^{h}\sum_{j}^{w}L_{InLD}(u_{ij}^{'},u_{ij}) \label{eq:multitask_loss_r} \end{aligned} \end{equation} where $IoU$ denotes the overlap of the prediction box and the ground-truth. \begin{table}[tb!] \centering \caption{Ablative study by accuracy (\%) of the number of dilated convolutions on pyramid levels and the InLD loss $L_{InLD}$ on the OBB task of DOTA. It can be seen that the supervision is the main contribution of InLD rather than additional convolution layers.} \resizebox{0.45\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{cccc} \hline \multicolumn{2}{c}{InLD} & \multirow{2}{*}{RetinaNet-H \cite{yang2019r3det}}& \multirow{2}{*}{R$^3$Det \cite{yang2019r3det}}\\ \cline{1-2} dilated convolution \cite{yu2015multi} & $L_{InLD}$ & &\\ \hline -- & -- & 62.21 & 65.73 \\ \{4,4,3,2,2\} & $\times$ & 62.36 & 66.62\\ \{1,1,1,1,1\} & $\surd$ & 65.40 & \textbf{69.81}\\ \{4,4,3,2,2\} & $\surd$ & \textbf{65.52} & 69.07\\ \hline \end{tabular} \end{threeparttable}} \label{table:InLD} \end{table} \begin{table}[tb!] \centering \caption{Detailed ablative study by accuracy (\%) of the effect of InLD on two traffic light datasets.
Note the category `wait on' is only available in our collected S$^2$TLD dataset as released by this paper.} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline Dataset & Base Model & InLD & red & yellow & green & off & wait on & mAP \\ \hline \multirow{4}{*}{S$^2$TLD} & \multirow{2}{*}{RetinaNet \cite{lin2017focal}} & $\times$ & 97.94 & 88.63 & 97.17 & 90.13 & 92.40 & 93.25 \\ & & $\surd$ & 98.15 & 87.66 & 97.12 & 93.88 & 93.75 & \textbf{94.11} \\ \cline{2-9} & \multirow{2}{*}{FPN \cite{lin2017feature}} & $\times$ & 97.98 & 87.55 & 97.42 & 93.42 & 98.31 & 94.93 \\ & & $\surd$ & 98.04 & 92.84 & 97.69 & 92.06 & 99.08 & \textbf{95.94}\\ \hline \multirow{4}{*}{BSTLD \cite{behrendt2017deep}} & \multirow{2}{*}{RetinaNet \cite{lin2017focal}} & $\times$ & 69.91 & 19.71 & 77.11 & 22.33 & -- & 47.26 \\ & & $\surd$ & 70.50 & 24.05 & 77.16 & 22.51 & -- & \textbf{48.56}\\ \cline{2-9} & \multirow{2}{*}{FPN \cite{lin2017feature}} & $\times$ & 89.27 & 47.82 & 92.01 & 40.73 & -- & 67.46 \\ & & $\surd$ & 89.88 & 49.93 & 92.42 & 42.45 & -- & \textbf{68.67} \\ \hline \end{tabular}} \label{table:STLD} \end{table} \begin{figure*}[!tb] \centering \subfigure[red, green, off]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-1.jpg} \centering \end{minipage}} \subfigure[red, green, wait on]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-2.jpg} \end{minipage}} \subfigure[red]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-3.jpg} \end{minipage}} \subfigure[red, green, off]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-4.jpg} \end{minipage}} \subfigure[red, yellow]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-5.jpg} \end{minipage}}\\ \subfigure[red]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-6.jpg} \centering \end{minipage}} \subfigure[red, green, wait on]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-7.jpg} \end{minipage}} \subfigure[red]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-8.jpg} \end{minipage}} \subfigure[red]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-9.jpg} \end{minipage}} \subfigure[green]{ \begin{minipage}[t]{0.19\linewidth} \centering \includegraphics[width=1.0\textwidth]{s2tld-10.jpg} \end{minipage}} \caption{Illustrations of the five categories and different lighting and weather conditions in our collected S$^2$TLD dataset as released in the paper.} \label{fig:S2TLD_VIS} \end{figure*} \begin{table}[tb!] 
\centering \caption{Ablative study by accuracy (\%) of ImLD, InLD, and their combination (numbers in brackets denote the relative improvement against using InLD alone) on different datasets and different detection tasks.} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c} \hline Dataset and task & Base Model & Baseline & ImLD & InLD & ImLD + InLD \\ \hline \multirow{3}{*}{DOTA OBB\cite{xia2018dota}} & RetinaNet-H \cite{yang2019r3det} & 62.21 & 62.39 & 65.40 & \textbf{65.62 (+0.22)} \\ & RetinaNet-R \cite{yang2019r3det} & 61.94 & 63.96 & 64.52 & \textbf{64.60 (+0.08)} \\ & R$^3$Det \cite{yang2019r3det} & 65.73 & 67.68 & 69.81 & \textbf{69.95 (+0.14)}\\ \hline \multirow{1}{*}{DOTA HBB \cite{xia2018dota}} & RetinaNet \cite{lin2017focal} & 67.76 & 68.05 & 68.33 & \textbf{68.50 (+0.17)} \\ \hline \multirow{2}{*}{DIOR \cite{li2020object}} & RetinaNet \cite{lin2017focal} & 68.05 & 68.42 & \textbf{69.36} & 69.35 (-0.01)\\ & FPN \cite{lin2017feature} & 71.74 & 71.83 & 73.21 & \textbf{73.25 (+0.04)}\\ \hline \multirow{1}{*}{ICDAR2015 \cite{karatzas2015icdar}} & RetinaNet-H \cite{yang2019r3det} & 77.13 & -- & \textbf{78.68} & --\\ \hline \multirow{2}{*}{COCO \cite{lin2014microsoft}} & FPN \cite{lin2017feature} & 36.1 & -- & \textbf{37.2} & --\\ & RetinaNet \cite{lin2017focal} & 34.4 & -- & \textbf{35.8} & --\\ \hline \multirow{1}{*}{S$^2$TLD} & RetinaNet \cite{lin2017focal} & 93.25 & -- & \textbf{94.11} & --\\ \hline \end{tabular}} \label{table:ImLD_and_InLD} \end{table} \section{Experiments} \label{sec:experiment} Experiments are performed on a server with a GeForce RTX 2080 Ti GPU with 11 GB memory. We first describe the datasets, and then use them to verify the advantages of the proposed method. Source code is available at \url{https://github.com/SJTU-Thinklab-Det/DOTA-DOAI}. \begin{table*}[tb!] \centering \caption{Ablative study by accuracy (\%) of each component in our method on the OBB task of the DOTA dataset. For RetinaNet, `H' and `R' represent the horizontal and rotation anchors, respectively.} \resizebox{1.0\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline Base Method & Backbone & InLD & Data Aug.
& PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC & mAP\\ \hline \multirow{2}{*}{RetinaNet-H \cite{yang2019r3det}} & ResNet50 & $\times$ & $\times$ & 88.87 & 74.46 & 40.11 & 58.03 & 63.10 & 50.61 & 63.63 & \textbf{90.89} & 77.91 & 76.38 & 48.26 & 55.85 & 50.67 & 60.23 & 34.23 & 62.22 \\ & ResNet50 & $\surd$ & $\times$ & 88.83 & 74.70 & 40.80 & 65.85 & 59.76 & 53.51 & 67.38 & 90.82 & 78.49 & 80.52 & 52.02 & 59.77 & 53.56 & 66.80 & 48.24 & 65.40 \\ \hline \multirow{2}{*}{RetinaNet-R \cite{yang2019r3det}} & ResNet50 & $\times$ & $\times$ & 88.92 & 67.67 & 33.55 & 56.83 & 66.11 & 73.28 & 75.24 & 90.87 & 73.95 & 75.07 & 43.77 & 56.72 & 51.05 & 55.86 & 21.46 & 62.02 \\ & ResNet50 & $\surd$ & $\times$ & 88.96 & 70.77 & 33.30 & 62.02 & 66.35 & 75.69 & 73.49 & 90.84 & 78.73 & 77.21 & 47.54 & 55.59 & 51.52 & 58.06 & 37.65 & 64.52 \\ \hline \multirow{5}{*}{R$^3$Det \cite{yang2019r3det}} & ResNet50 & $\times$ & $\times$ & 88.78 & 74.69 & 41.94 & 59.88 & 68.90 & 69.77 & 69.82 & 90.81 & 77.71 & 80.40 & 50.98 & 58.34 & 52.10 & 58.30 & 43.52 & 65.73 \\ & ResNet152 & $\times$ & $\surd$ & 89.24 & 80.81 & \textbf{51.11} & 65.62 & 70.67 & 76.03 & 78.32 & 90.83 & 84.89 & 84.42 & 65.10 & 57.18 & 68.10 & 68.98 & 60.88 & 72.81\\ & ResNet50 & $\surd$ & $\times$ & 88.63 & 75.98 & 45.88 & 65.45 & 69.74 & 74.09 & 78.30 & 90.78 & 78.96 & 81.28 & 56.28 & 63.01 & 57.40 & 68.45 & 52.93 & 69.81 \\ & ResNet101 & $\surd$ & $\surd$ & \textbf{89.25} & 83.30 & 49.94 & 66.20 & \textbf{71.82} & 77.12 & \textbf{79.53} & 90.65 & 82.14 & \textbf{84.57} & 65.33 & \textbf{63.89} & 67.56 & 68.48 & 54.89 & 72.98 \\ & ResNet152 & $\surd$ & $\surd$ & 89.20 & \bf{83.36} & 50.92 & \textbf{68.17} & 71.61 & {\bf80.23} & 78.53 & 90.83 & \textbf{86.09} & 84.04 & {\bf65.93} & 60.80 & {\bf68.83} & {\bf71.31} & {\bf66.24} & {\bf74.41} \\ \hline \end{tabular}} \label{table:dota_ablation_study} \end{table*} \begin{table*}[!tb] \centering \caption{Ablative study by accuracy (\%) of IoU-Smooth L1 loss by using it or not in the three methods on the OBB task of DOTA dataset. 
Numbers in bracket denote the relative improvement by using the proposed IoU-Smooth L1 loss.} \resizebox{0.98\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline Method & Backbone & IoU-Smooth L1 & InLD & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC & mAP \\ \hline \multirow{2}{*}{RetinaNet-R \cite{yang2019r3det}} & ResNet50 & $\times$ & $\times$ & 88.92 & 67.67 & 33.55 & 56.83 & 66.11 & 73.28 & 75.24 & 90.87 & 73.95 & 75.07 & 43.77 & 56.72 & 51.05 & 55.86 & 21.46 & 62.02 \\ & ResNet50 & $\surd$ & $\times$ & 89.27 & 74.93 & 37.01 & 64.49 & 66.00 & 75.87 & 77.75 & 90.76 & 80.35 & 80.31 & 54.75 & 61.17 & 61.07 & 64.78 & 51.24 & 68.65 \textbf{(+6.63)} \\ \hline \multirow{2}{*}{SCRDet \cite{yang2019scrdet}} & ResNet101 & $\times$ & $\times$ & 89.65 & 79.51 & 43.86 & 67.69 & 67.41 & 55.93 & 64.86 & 90.71 & 77.77 & 84.42 & 57.67 & 61.38 & 64.29 & 66.12 & 62.04 & 68.89 \\ & ResNet101 & $\surd$ & $\times$ & 89.41 & 78.83 & 50.02 & 65.59 & 69.96 & 57.63 & 72.26 & 90.73 & 81.41 & 84.39 & 52.76 & 63.62 & 62.01 & 67.62 & 61.16 & 69.83 \textbf{(+0.94)} \\ \hline \multirow{2}{*}{FPN \cite{lin2017feature}} & ResNet101 &$\times$ & $\surd$ & 90.25 & 85.24 & 55.18 & 73.24 & 70.38 & 73.77 & 77.00 & 90.77 & 87.74 & 86.63 & 68.89 & 63.45 & 72.73 & 67.96 & 60.23 & 74.90 \\ & ResNet101 & $\surd$ & $\surd$ & 89.77 & 83.90 & 56.30 & 73.98 & 72.60 & 75.63 & 82.82 & 90.76 & 87.89 & 86.14 & 65.24 & 63.17 & 76.05 & 68.06 & 70.24 & 76.20 \textbf{(+1.30)} \\ \hline \end{tabular}} \label{table:iou-smooth-l1} \end{table*} \begin{table*} \centering \caption{AP and mAP (\%) across categories of OBB and HBB task on DOTA. MS indicates multi-scale training and testing.} \resizebox{0.98\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \textbf{OBB (oriented bounding boxes)} & Backbone & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC & mAP\\ \hline \textbf{Two-stage methods} & \multicolumn{16}{|c}{} \\ \hline FR-O \cite{xia2018dota} & ResNet101 \cite{he2016deep} & 79.09 & 69.12 & 17.17 & 63.49 & 34.20 & 37.16 & 36.20 & 89.19 & 69.60 & 58.96 & 49.4 & 52.52 & 46.69 & 44.80 & 46.30 & 52.93 \\ R-DFPN \cite{yang2018automatic} & ResNet101 & 80.92 & 65.82 & 33.77 & 58.94 & 55.77 & 50.94 & 54.78 & 90.33 & 66.34 & 68.66 & 48.73 & 51.76 & 55.10 & 51.32 & 35.88 & 57.94 \\ R$^2$CNN \cite{jiang2017r2cnn} & ResNet101 & 80.94 & 65.67 & 35.34 & 67.44 & 59.92 & 50.91 & 55.81 & 90.67 & 66.92 & 72.39 & 55.06 & 52.23 & 55.14 & 53.35 & 48.22 & 60.67 \\ RRPN \cite{ma2018arbitrary} & ResNet101 & 88.52 & 71.20 & 31.66 & 59.30 & 51.85 & 56.19 & 57.25 & 90.81 & 72.84 & 67.38 & 56.69 & 52.84 & 53.08 & 51.94 & 53.58 & 61.01 \\ ICN \cite{azimi2018towards} & ResNet101 & 81.40 & 74.30 & 47.70 & 70.30 & 64.90 & 67.80 & 70.00 & 90.80 & 79.10 & 78.20 & 53.60 & 62.90 & 67.00 & 64.20 & 50.20 & 68.20 \\ RADet \cite{li2020radet} & ResNeXt101 \cite{xie2017aggregated} & 79.45 & 76.99 & 48.05 & 65.83 & 65.46 & 74.40 & 68.86 & 89.70 & 78.14 & 74.97 & 49.92 & 64.63 & 66.14 & 71.58 & 62.16 & 69.09 \\ RoI-Transformer \cite{ding2018learning} & ResNet101 & 88.64 & 78.52 & 43.44 & 75.92 & 68.81 & 73.68 & 83.59 & 90.74 & 77.27 & 81.46 & 58.39 & 53.54 & 62.83 & 58.93 & 47.67 & 69.56 \\ CAD-Net \cite{zhang2019cad} & ResNet101 & 87.8 & 82.4 & 49.4 & 73.5 & 71.1 & 63.5 & 76.7 & \textbf{90.9} & 79.2 & 73.3 & 48.4 & 60.9 & 62.0 & 67.0 & 62.2 & 69.9 \\ SCRDet \cite{yang2019scrdet} & ResNet101 & 89.98 & 80.65 & 52.09 & 68.36 & 68.36 & 60.32 & 72.41 & 90.85 & {\bf87.94} & 86.86 
& 65.02 & 66.68 & 66.25 & 68.24 & 65.21 & 72.61\\ SARD \cite{wang2019sard} & ResNet101 & 89.93 & 84.11 & 54.19 & 72.04 & 68.41 & 61.18 & 66.00 & 90.82 & 87.79 & 86.59 & 65.65 & 64.04 & 66.68 & 68.84 & 68.03 & 72.95 \\ FADet \cite{li2019feature} & ResNet101 & 90.21 & 79.58 & 45.49 & 76.41 & 73.18 & 68.27 & 79.56 & 90.83 & 83.40 & 84.68 & 53.40 & 65.42 & 74.17 & 69.69 & 64.86 & 73.28\\ MFIAR-Net \cite{peng2020multi} & ResNet152 \cite{he2016deep} & 89.62 & 84.03 & 52.41 & 70.30 & 70.13 & 67.64 & 77.81 & 90.85 & 85.40 & 86.22 & 63.21 & 64.14 & 68.31 & 70.21 & 62.11 & 73.49 \\ Gliding Vertex \cite{xu2020gliding} & ResNet101 & 89.64 & 85.00 & 52.26 & \textbf{77.34} & 73.01 & 73.14 & \textbf{86.82} & 90.74 & 79.02 & 86.81 & 59.55 & \textbf{70.91} & 72.94 & 70.86 & 57.32 & 75.02 \\ Mask OBB \cite{wang2019mask} & ResNeXt101 & 89.56 & \textbf{85.95} & 54.21 & 72.90 & 76.52 & 74.16 & 85.63 & 89.85 & 83.81 & 86.48 & 54.89 & 69.64 & 73.94 & 69.06 & 63.32 & 75.33 \\ FFA \cite{fu2020rotation} & ResNet101 & 90.1 & 82.7 & 54.2 & 75.2 & 71.0 & 79.9 & 83.5 & 90.7 & 83.9 & 84.6 & 61.2 & 68.0 & 70.7 & 76.0 & 63.7 & 75.7 \\ APE \cite{zhu2019adaptive} & ResNeXt-101 & 89.96 & 83.62 & 53.42 & 76.03 & 74.01 & 77.16 & 79.45 & 90.83 & 87.15 & 84.51 & 67.72 & 60.33 & 74.61 & \textbf{71.84} & 65.55 & 75.75 \\ CSL \cite{yang2020arbitrary} & ResNet152 & \textbf{90.25} & 85.53 & 54.64 & 75.31 & 70.44 & 73.51 & 77.62 & 90.84 & 86.15 & 86.69 & 69.60 & 68.04 & 73.83 & 71.10 & 68.93 & 76.17\\ SCRDet++ (FPN-based) & ResNet101 & 89.77 & 83.90 & \textbf{56.30} & 73.98 & 72.60 & 75.63 & 82.82 & 90.76 & 87.89 & 86.14 & 65.24 & 63.17 & \textbf{76.05} & 68.06 & 70.24 & 76.20 \\ SCRDet++ MS (FPN-based) & ResNet101 & 90.05 & 84.39 & 55.44 & 73.99 & \textbf{77.54} & 71.11 & 86.05 & 90.67 & 87.32 & \textbf{87.08} & \textbf{69.62} & 68.90 & 73.74 & 71.29 & 65.08 & \textbf{76.81} \\ \hline \textbf{Single-stage methods} & \multicolumn{16}{|c}{} \\ \hline IENet \cite{lin2019ienet} & ResNet101 & 80.20 & 64.54 & 39.82 & 32.07 & 49.71 & 65.01 & 52.58 & 81.45 & 44.66 & 78.51 & 46.54 & 56.73 & 64.40 & 64.24 & 36.75 & 57.14 \\ Axis Learning \cite{xiao2020axis} & ResNet101 & 79.53 & 77.15 & 38.59 & 61.15 & 67.53 & 70.49 & 76.30 & 89.66 & 79.07 & 83.53 & 47.27 & 61.01 & 56.28 & 66.06 & 36.05 & 65.98\\ P-RSDet \cite{zhou2020objects} & ResNet101 & 89.02 & 73.65 & 47.33 & 72.03 & 70.58 & 73.71 & 72.76 & 90.82 & 80.12 & 81.32 & 59.45 & 57.87 & 60.79 & 65.21 & 52.59 & 69.82 \\ O$^2$-DNet \cite{wei2019oriented} & Hourglass104 \cite{newell2016stacked} & 89.31 & 82.14 & 47.33 & 61.21 & 71.32 & 74.03 & 78.62 & 90.76 & 82.23 & 81.36 & 60.93 & 60.17 & 58.21 & 66.98 & 61.03 & 71.04 \\ R$^3$Det \cite{yang2019r3det} & ResNet152 & 89.24 & 80.81 & 51.11 & 65.62 & 70.67 & 76.03 & 78.32 & 90.83 & 84.89 & 84.42 & 65.10 & 57.18 & 68.10 & 68.98 & 60.88 & 72.81\\ RSDet \cite{qian2019learning} & ResNet152 & 90.1 & 82.0 & 53.8 & 68.5 & 70.2 & 78.7 & 73.6 & 91.2 & 87.1 & 84.7 & 64.3 & 68.2 & 66.1 & 69.3 & 63.7 & 74.1 \\ SCRDet++ (R$^3$Det-based) & ResNet152 & 89.20 & 83.36 & 50.92 & 68.17 & 71.61 & 80.23 & 78.53 & 90.83 & 86.09 & 84.04 & 65.93 & 60.8 & 68.83 & 71.31 & 66.24 & 74.41 \\ SCRDet++ MS (R$^3$Det-based) & ResNet152 & 88.68 & 85.22 & 54.70 & 73.71 & 71.92 & \textbf{84.14} & 79.39 & 90.82 & 87.04 & 86.02 & 67.90 & 60.86 & 74.52 & 70.76 & \textbf{72.66} & 76.56\\ \hline ~&\multicolumn{16}{c}{}\\ \hline \textbf{HBB (horizontal bounding boxes)} & Backbone & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC & mAP\\ \hline \textbf{Two-stage 
methods} & \multicolumn{16}{|c}{} \\ \hline FR-H \cite{ren2017faster} & ResNet101 & 80.32 & 77.55 & 32.86 & 68.13 & 53.66 & 52.49 & 50.04 & 90.41 & 75.05 & 59.59 & 57.00 & 49.81 & 61.69 & 56.46 & 41.85 & 60.46 \\ ICN \cite{azimi2018towards} & ResNet101 & 90.00 & 77.70 & 53.40 & 73.30 & 73.50 & 65.00 & 78.20 & 90.80 & 79.10 & 84.80 & 57.20 & 62.10 & 73.50 & 70.20 & 58.10 & 72.50 \\ IoU-Adaptive R-CNN \cite{yan2019iou} & ResNet101 & 88.62 & 80.22 & 53.18 & 66.97 & 76.30 & 72.59 & 84.07 & 90.66 & 80.95 & 76.24 & 57.12 & 66.65 & 84.08 & 66.36 & 56.85 & 72.72 \\ SCRDet \cite{yang2019scrdet} & ResNet101 & {\bf 90.18} & 81.88 & 55.30 & 73.29 & 72.09 & 77.65 & 78.06 & {\bf 90.91} & 82.44 & 86.39 & 64.53 & 63.45 & 75.77 & 78.21 & 60.11 & 75.35 \\ FADet \cite{li2019feature} & ResNet101 & 90.15 & 78.60 & 51.92 & \textbf{75.23} & 73.60 & 71.27 & 81.41 & 90.85 & 83.94 & 84.77 & 58.91 & 65.65 & 76.92 & 79.36 & 68.17 & 75.38\\ Mask OBB \cite{wang2019mask} & ResNeXt-101 & 89.69 & \textbf{87.07} & 58.51 & 72.04 & 78.21 & 71.47 & 85.20 & 89.55 & 84.71 & 86.76 & 54.38 & 70.21 & 78.98 & 77.46 & 70.40 & 76.98 \\ A$^2$RMNet \cite{qiu2019a2rmnet} & ResNet101 & 89.84 & 83.39 & 60.06 & 73.46 & \textbf{79.25} & 83.07 & \textbf{87.88} & 90.90 & 87.02 & \textbf{87.35} & 60.74 & 69.05 & 79.88 & 79.74 & 65.17 & 78.45\\ SCRDet++ (FPN-based) & ResNet101 & 90.01 & 82.32 & 61.94 & 68.62 & 69.62 & 81.17 & 78.83 & 90.86 & 86.32 & 85.10 & 65.10 & 61.12 & 77.69 & \textbf{80.68} & 64.25 & 76.24 \\ SCRDet++ MS (FPN-based) & ResNet101 & 90.00 & 86.25 & \textbf{65.04} & 74.52 & 72.93 & \textbf{84.17} & 79.05 & 90.72 & \textbf{87.37} & 87.06 & \textbf{72.10} & 66.72 & \textbf{82.64} & 80.57 & \textbf{71.07} & \textbf{79.35} \\ \hline \textbf{Single-stage methods} & \multicolumn{16}{|c}{} \\ \hline SBL \cite{sun2018salience} & ResNet50 & 89.15 & 66.04 & 46.79 & 52.56 & 73.06 & 66.13 & 78.66 & 90.85 & 67.40 & 72.22 & 39.88 & 56.89 & 69.58 & 67.73 & 34.74 & 64.77 \\ FMSSD \cite{wang2019fmssd} & VGG16 \cite{simonyan2014very} & 89.11 & 81.51 & 48.22 & 67.94 & 69.23 & 73.56 & 76.87 & 90.71 & 82.67 & 73.33 & 52.65 & \textbf{67.52} & 72.37 & 80.57 & 60.15 & 72.43 \\ EFR \cite{fu2019enhanced} & VGG16 & 88.36 & 83.90 & 45.78 & 67.24 & 76.80 & 77.15 & 85.35 & 90.77 & 85.55 & 75.77 & 54.64 & 60.76 & 71.40 & 77.90 & 60.94 & 73.49 \\ SCRDet++ (RetinaNet-based) & ResNet152 & 87.89 & 84.64 & 56.94 & 68.03 & 74.67 & 78.75 & 78.50 & 90.80 & 85.60 & 84.98 & 53.56 & 56.75 & 76.66 & 75.08 & 62.75 & 74.37 \\ \hline \end{tabular}} \label{table:dota_sota} \end{table*} \begin{table*} \centering \caption{Accuracy (\%) on DIOR. $^*$ indicates our own implementation, higher than the official baseline. 
$^\dagger$ indicates data augmentation is used.} \resizebox{1.0\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline & Backbone & c1 & c2 & c3 & c4 & c5 & f6 & c7 & c8 & c9 & c10 & c11 & c12 & c13 & c14 & c15 & c16 & c17 & c18 & c19 & c20 & mAP\\ \hline \textbf{Two-stage methods} & \multicolumn{21}{|c}{} \\ \hline Faster-RCNN \cite{ren2017faster} & VGG16 & 53.6 & 49.3 & 78.8 & 66.2 & 28.0 & 70.9 & 62.3 & 69.0 & 55.2 & 68.0 & 56.9 & 50.2 & 50.1 & 27.7 & 73.0 & 39.8 & 75.2 & 38.6 & 23.6 & 45.4 & 54.1 \\ \hline \multirow{2}{*}{Mask‐RCNN \cite{he2017mask} } & ResNet‐50 & 53.8 & 72.3 & 63.2 & 81.0 & 38.7 & 72.6 & 55.9 & 71.6 & 67.0 & 73.0 & 75.8 & 44.2 & 56.5 & 71.9 & 58.6 & 53.6 & 81.1 & 54.0 & 43.1 & 81.1 & 63.5\\ & ResNet‐101 & 53.9 & 76.6 & 63.2 & 80.9 & 40.2 & 72.5 & 60.4 & 76.3 & 62.5 & 76.0 & 75.9 & 46.5 & 57.4 & 71.8 & 68.3 & 53.7 & 81.0 & 62.3 & 43.0 & 81.0 & 65.2 \\ \hline \multirow{2}{*}{PANet \cite{liu2018path} } & ResNet‐50 & 61.9 & 70.4 & 71.0 & 80.4 & 38.9 & 72.5 & 56.6 & 68.4 & 60.0 & 69.0 & 74.6 & 41.6 & 55.8 & 71.7 & 72.9 & 62.3 & 81.2 & 54.6 & 48.2 & 86.7 & 63.8\\ & ResNet‐101 & 60.2 & 72.0 & 70.6 & 80.5 & 43.6 & 72.3 & 61.4 & 72.1 & 66.7 & 72.0 & 73.4 & 45.3 & 56.9 & 71.7 & 70.4 & 62.0 & 80.9 & 57.0 & 47.2 & 84.5 & 66.1 \\ \hline CornerNet \cite{law2018cornernet} & Hourglass104 & 58.8 & 84.2 & 72.0 & 80.8 & 46.4 & 75.3 & 64.3 & 81.6 & 76.3 & 79.5 & 79.5 & 26.1 & 60.6 & 37.6 & 70.7 & 45.2 & 84.0 & 57.1 & 43.0 & 75.9 & 64.9 \\ \hline \multirow{2}{*}{FPN \cite{lin2017feature}} & ResNet‐50 & 54.1 & 71.4 & 63.3 & 81.0 & 42.6 & 72.5 & 57.5 & 68.7 & 62.1 & 73.1 & 76.5 & 42.8 & 56.0 & 71.8 & 57.0 & 53.5 & 81.2 & 53.0 & 43.1 & 80.9 & 63.1\\ & ResNet‐101 & 54.0 & 74.5 & 63.3 & 80.7 & 44.8 & 72.5 & 60.0 & 75.6 & 62.3 & 76.0 & 76.8 & 46.4 & 57.2 & 71.8 & 68.3 & 53.8 & 81.1 & 59.5 & 43.1 & 81.2 & 65.1 \\ \hline CSFF \cite{cheng2020cross} & ResNet-101 & 57.2 & 79.6 & 70.1 & 87.4 & 46.1 & 76.6 & 62.7 & 82.6 & 73.2 & 78.2 & 81.6 & 50.7 & 59.5 & \textbf{73.3} & 63.4 & 58.5 & 85.9 & 61.9 & 42.9 & 86.9 & 68.0\\ \hline FPN$^*$ & ResNet‐50 & 66.57 & 83.00 & 71.89 & 83.02 & 50.41 & 75.74 & 70.23 & 81.08 & 74.83 & 79.03 & 77.74 & 55.29 & 62.06 & 72.26 & 72.10 & 68.64 & 81.20 & 66.07 & 54.56 & 89.09 & 71.74 \\ SCRDet++ (FPN$^*$-based) & ResNet‐50 & 66.35 & 83.36 &74.34 & 87.33 & 52.45 & 77.98 & 70.06 & 84.22 & 77.95 & 80.73 & 81.26 & 56.77 & 63.70 & 73.29 & 71.94 & \textbf{71.24} & 83.40 & 62.28 & 55.63 & 90.00 & 73.21 \\ SCRDet++ (FPN$^*$-based)$^\dagger$ & ResNet‐101 & \textbf{80.79} & \textbf{87.67} & \textbf{80.46} & \textbf{89.76} & \textbf{57.83} & \textbf{80.90} & 75.23 & \textbf{90.01} & \textbf{82.93} & \textbf{84.51} & \textbf{83.55} & 63.19 & \textbf{67.25} & 72.59 & \textbf{79.20} & 70.44 & \textbf{89.97} & 70.71 & \textbf{58.82} & \textbf{90.25} & \textbf{77.80} \\ \hline \textbf{Single-stage methods} & \multicolumn{21}{|c}{} \\ \hline SSD \cite{fu2017dssd} & VGG16 & 59.5 & 72.7 & 72.4 & 75.7 & 29.7 & 65.8 & 56.6 & 63.5 & 53.1 & 65.3 & 68.6 & 49.4 & 48.1 & 59.2 & 61.0 & 46.6 & 76.3 & 55.1 & 27.4 & 65.7 & 58.6 \\ YOLOv3 \cite{redmon2018yolov3} & Darknet‐53 & 72.2 & 29.2 & 74.0 & 78.6 & 31.2 & 69.7 & 26.9 & 48.6 & 54.4 & 31.1 & 61.1 & 44.9 & 49.7 & 87.4 & 70.6 & 68.7 & 87.3 & 29.4 & 48. 
3 & 78.7 & 57.1 \\ \hline \multirow{2}{*}{RetinaNet \cite{lin2017focal}} & ResNet‐50 & 53.7 & 77.3 & 69.0 & 81.3 & 44.1 & 72.3 & 62.5 & 76.2 & 66.0 & 77.7 & 74.2 & 50.7 & 59.6 & 71.2 & 69.3 & 44.8 & 81.3 & 54.2 & 45.1 & 83.4 & 65.7\\ & ResNet‐101 & 53.3 & 77.0 & 69.3 & 85.0 & 44.1 & 73.2 & 62.4 & 78.6 & 62.8 & 78.6 & 76.6 & 49.9 & 59.6 & 71.1 & 68.4 & 45.8 & 81.3 & 55.2 & 44.4 & 85.5 & 66.1 \\ \hline RetinaNet$^*$ & ResNet‐50 & 59.98 & 79.02 & 70.85 & 83.37 & 45.25 & 75.93 & 64.53 & 76.87 & 66.63 & 80.25 & 76.75 & 55.94 & 60.70 & 70.38 & 61.45 & 60.15 & 81.13 & 62.76 & 44.52 & 84.46 & 68.05 \\ SCRDet++ (RetinaNet$^*$-based) & ResNet‐50 & 64.33 & 78.99 & 73.24 & 85.72 & 45.83 & 75.99 & 68.41 & 79.28 & 68.93 & 77.68 & 77.87 & 56.70 & 62.15 & 70.38 & 67.66 & 60.42 & 80.93 & 63.74 & 44.44 & 84.56 & 69.36 \\ SCRDet++ (RetinaNet$^*$-based)$^\dagger$ & ResNet‐101 & 71.94 & 84.99 & 79.48 & 88.86 & 52.27 & 79.12 & \textbf{77.63} & 89.52 & 77.79 & 84.24 & 83.07 & \textbf{64.22} & 65.57 & 71.25 & 76.51 & 64.54 & 88.02 & \textbf{70.91} & 47.12 & 85.10 & 75.11 \\ \hline \end{tabular}} \label{table:dior_sota} \end{table*} \subsection{Datasets and Protocols} We choose a wide variety of public datasets for evaluation, covering aerial images as well as natural images and scene text. The details are as follows. \textbf{DOTA} \cite{xia2018dota}: DOTA is a complex aerial image dataset for object detection, which contains objects exhibiting a wide variety of scales, orientations, and shapes. DOTA contains 2,806 aerial images and 15 common object categories from different sensors and platforms. The fully annotated DOTA benchmark contains 188,282 instances, each of which is labeled by an arbitrary quadrilateral. There are two detection tasks for DOTA: horizontal bounding boxes (HBB) and oriented bounding boxes (OBB). The training set, validation set, and test set account for 1/2, 1/6, and 1/3 of the entire dataset, respectively. Since the image sizes range from around $ 800 \times 800 $ to $ 4,000 \times 4,000 $ pixels, we divide the images into $ 600 \times 600 $ subimages with an overlap of 150 pixels and scale them to $ 800 \times 800 $. With all these processes, we obtain about 27,000 patches. The model is trained for 135k iterations in total, and the learning rate is reduced at the 81k-th and 108k-th iterations, from 5e-4 to 5e-6. The short names for categories are defined as (abbreviation-full name): PL-Plane, BD-Baseball diamond, BR-Bridge, GTF-Ground field track, SV-Small vehicle, LV-Large vehicle, SH-Ship, TC-Tennis court, BC-Basketball court, ST-Storage tank, SBF-Soccer-ball field, RA-Roundabout, HA-Harbor, SP-Swimming pool, and HC-Helicopter.
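As a side illustration of the cropping scheme just described, a minimal Python sketch is given below; the function name and the border-clamping strategy are our own assumptions for illustration, not the exact logic of our released code.
\begin{verbatim}
def tile_image(img, tile=600, overlap=150):
    # img: HxWxC image array (e.g. a NumPy array).
    # Step between tile origins: 600 - 150 = 450 pixels.
    stride = tile - overlap
    h, w = img.shape[:2]
    # Candidate origins, clamped so border tiles stay inside the image
    # (duplicates removed); images smaller than `tile` give one patch.
    ys = sorted({min(y, max(h - tile, 0)) for y in range(0, h, stride)})
    xs = sorted({min(x, max(w - tile, 0)) for x in range(0, w, stride)})
    return [(y, x, img[y:y + tile, x:x + tile]) for y in ys for x in xs]
\end{verbatim}
Each returned patch would then be resized to $800 \times 800$ (e.g. with cv2.resize) before being fed to the detector.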
\begin{figure}[!tb] \centering \subfigure[COCO: the red boxes represent missed objects and the orange boxes represent false alarms.]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.0\textwidth]{COCO_val2014_000000224664_.jpg} \centering \label{fig:COCO_val2014_000000224664_} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.0\textwidth]{COCO_val2014_000000224664_bs_.jpg} \centering \label{fig:COCO_val2014_000000224664_bs_} \end{minipage}}\\ \subfigure[ICDAR2015: red arrows denote missed objects.]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.0\textwidth]{img_51_R2CNN++_.jpg} \centering \label{fig:img_108_R2CNN++} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.0\textwidth]{img_51_R2CNN_.jpg} \centering \label{fig:img_108_R2CNN_} \end{minipage}}\\ \subfigure[S$^2$TLD: the red box represents a missed object.]{ \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.0\textwidth]{001961_InLD.jpg} \centering \label{fig:001961_InLD} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.0\textwidth]{001961.jpg} \centering \label{fig:001961} \end{minipage}}\\ \caption{Detection illustration on the datasets of COCO, ICDAR2015, and S$^2$TLD before and after using InLD.} \label{fig:InLD_vis} \end{figure} \textbf{DIOR} \cite{li2020object}: DIOR is another large aerial image dataset labeled with horizontal bounding boxes. It consists of 23,463 images and 190,288 instances, covering 20 object classes. DIOR exhibits large variations in object size, not only in spatial resolution, but also in inter-class and intra-class size variability across objects. The complexity of DIOR is also reflected in different imaging conditions, weather, seasons, and image qualities, and it has high inter-class similarity and intra-class diversity. The training protocol of DIOR is basically consistent with that of DOTA. The short names c1-c20 for categories in our experiment are defined as: Airplane, Airport, Baseball field, Basketball court, Bridge, Chimney, Dam, Expressway service area, Expressway toll station, Golf course, Ground track field, Harbor, Overpass, Ship, Stadium, Storage tank, Tennis court, Train station, Vehicle, and Wind mill. \textbf{UCAS-AOD} \cite{zhu2015orientation}: UCAS-AOD contains 1,510 aerial images of approximately $ 659 \times 1280 $ pixels, with 14,596 instances in two categories. In line with \cite{xia2018dota, azimi2018towards}, we randomly select 1,110 for training and 400 for testing. \textbf{BSTLD} \cite{behrendt2017deep}: BSTLD contains 13,427 camera images at a resolution of $ 720 \times 1,280 $ pixels and about 24,000 annotated small traffic lights. Specifically, 5,093 training images are annotated with 15 labels at intervals of about 2 seconds, but only 3,153 images contain instances (about 10,756 in total). There are very few instances of many categories, so we reclassify them into 4 categories (red, yellow, green, off). In contrast, 8,334 consecutive test images are annotated with 4 labels at about 15 fps. In this paper, we only use the training set of BSTLD, whose median traffic light width is 8.6 pixels. In the experiment, we divide the BSTLD training set into a training set and a test set according to the ratio of $6:4$. Note that we use RetinaNet with the P2 feature level and FPN to verify InLD, and scale the input images to $ 720 \times 1,280$.
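The BSTLD preprocessing described above can be sketched as follows; the fine-grained label names in the mapping are illustrative placeholders (the actual BSTLD annotation files should be consulted), and the helper names are ours.
\begin{verbatim}
import random

# Collapse the fine-grained BSTLD states into the 4 kept classes;
# arrow-specific states fold into their base color (illustrative).
LABEL_MAP = {'Red': 'red', 'RedLeft': 'red',
             'Yellow': 'yellow',
             'Green': 'green', 'GreenLeft': 'green',
             'off': 'off'}

def split_6_4(image_ids, seed=0):
    # Deterministic random 6:4 train/test split of the 3,153
    # annotated training images.
    ids = sorted(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(0.6 * len(ids))
    return ids[:cut], ids[cut:]
\end{verbatim}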
\textbf{S$^2$TLD}: S$^2$TLD\footnote{S$^2$TLD is available at \url{https://github.com/Thinklab-SJTU/S2TLD}} is our newly collected and annotated traffic light dataset released with this paper, which contains 5,786 images of approximately $ 1,080 \times 1,920 $ pixels (1,222 images) and $ 720 \times 1,280 $ pixels (4,564 images). It contains 14,130 instances in 5 categories (namely red, yellow, green, off, and wait on). The scenes cover a variety of lighting, weather, and traffic conditions, including busy inner-city street scenes, dense stop-and-go traffic, strong changes in illumination/exposure, flickering/fluctuating traffic lights, multiple visible traffic lights, and image parts that can be confused with traffic lights (e.g. large round tail lights), as shown in Fig. \ref{fig:S2TLD_VIS}. The training strategy is consistent with that of BSTLD. In addition to the above datasets, we also use the natural image dataset COCO \cite{lin2014microsoft} and the scene text dataset ICDAR2015 \cite{karatzas2015icdar} for further evaluation. The experiments are initialized by ResNet50 \cite{he2016deep} by default unless otherwise specified. The weight decay and momentum for all experiments are set to 0.0001 and 0.9, respectively. We employ MomentumOptimizer over 8 GPUs with a total of 8 images per minibatch. We follow the standard evaluation protocol of COCO, while for the other datasets, the anchors of the RetinaNet-based methods have areas of $32^{2}$ to $512^2$ on pyramid levels from P3 to P7, respectively. At each pyramid level we use anchors at seven aspect ratios $\{1,1/2,2,1/3,3,5,1/5\}$ and three scales $\{2^{0}, 2^{1/3}, 2^{2/3}\}$. For the rotating-anchor-based method (RetinaNet-R), the angle is set by an arithmetic progression from $-90^\circ$ to $-15^\circ$ with an interval of $15$ degrees. \subsection{Ablation Study} The ablation study covers the detailed evaluation of the effect of image-level denoising (ImLD) and instance-level denoising (InLD), as well as their combination. \textbf{Effect of Image-Level Denoising.} We have experimented with the five denoising modules introduced in \cite{xie2019feature} on the DOTA dataset. We use our previous work R$^3$Det \cite{yang2019r3det}, one of the state-of-the-art methods on DOTA, as the baseline. From Table \ref{table:ImLD_Ablation_Study}, one can observe that most methods are effective except mean filtering. Among them, the non-local module with Gaussian weighting is the most effective (1.95\% higher). \textbf{Effect of Instance-Level Denoising.} The purpose of designing InLD is to decouple the features of different categories in the channel dimension, while the features of object and non-object regions are enhanced and weakened in the spatial dimension, respectively. We have designed some verification tests and obtained positive results, as shown in Table \ref{table:InLD_Ablative_Study}. We first explore the utility of weakening the non-object noise by binary semantic segmentation, and the detection mAP increases from 65.73\% to 68.12\%. The result of multi-category semantic segmentation further proves that there is indeed interference between objects, which is reflected by the $1.31\%$ increase of detection mAP (reaching 69.43\%). From the above two experiments, we can preliminarily speculate that the interference in the non-object area is the main factor limiting the performance of the detector.
It is surprising to find that multiplying in the objectness prediction score (the coproduct; see $P(object)$ in Eq.~\ref{eq:objectness}) can further improve performance and speed up training, with a final accuracy of 69.81\%. Experiments in Table \ref{table:dota_ablation_study} show that InLD greatly improves R$^3$Det's performance on small objects, such as BR, SV, LV, SH, SP, and HC, which increase by 3.94\%, 0.84\%, 4.32\%, 8.48\%, 10.15\%, and 9.41\%, respectively. While the accuracy is greatly improved, the detection speed of the model is only reduced by 1 fps (to 13 fps). In addition to the DOTA dataset, we have used more datasets to verify the general applicability, namely DIOR, ICDAR2015, COCO, and S$^2$TLD. InLD obtains 1.44\%, 1.55\%, 1.4\%, and 0.86\% improvements on these four datasets according to Table \ref{table:ImLD_and_InLD}, and Fig.~\ref{fig:InLD_vis} shows the visualization results before and after using InLD. In order to investigate whether the performance improvement brought by InLD is due to the extra computation (dilated convolutions) or the supervised learning ($L_{InLD}$), we perform ablation experiments by controlling the number of dilated convolutions and the supervision signal. Table \ref{table:InLD} shows that supervised learning, rather than more convolution layers, is the main contribution of InLD. In particular, we conduct a detailed study on the SJTU Small Traffic Light Dataset (S$^2$TLD), which is our newly released traffic light detection dataset. Compared with BSTLD, S$^2$TLD has more available categories. In addition, S$^2$TLD contains images at two different resolutions taken by two different cameras, which can be used for more challenging detection tasks. Table \ref{table:STLD} shows the effectiveness of InLD on these two traffic light datasets.
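To make the above design concrete, we give a minimal sketch of an InLD-like branch below. PyTorch is used purely for illustration (our released implementation is in TensorFlow), a single dilated convolution stands in for the per-level stacks of Table \ref{table:InLD}, and the exact way the segmentation output reweights the feature in our model may differ from this simplification.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class InLDBranch(nn.Module):
    # Predict per-pixel logits over K foreground classes plus one
    # background class with a dilated convolution, supervise them
    # with a pixel-wise cross-entropy (L_InLD), and reweight the
    # input feature by P(object) = 1 - P(background).
    def __init__(self, channels, num_classes, dilation=2):
        super().__init__()
        self.seg = nn.Sequential(
            nn.Conv2d(channels, channels, 3,
                      padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, num_classes + 1, 1))

    def forward(self, x, mask_labels=None):
        logits = self.seg(x)                        # (N, K+1, H, W)
        p_object = 1.0 - logits.softmax(1)[:, -1:]  # background last
        loss = None
        if mask_labels is not None:                 # (N, H, W) ints
            loss = F.cross_entropy(logits, mask_labels)
        return x * p_object, loss
\end{verbatim}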
\begin{figure}[!tb] \centering \subfigure[BC and TC]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth, height=3.5cm]{BC_TC.jpg} \centering \end{minipage}} \subfigure[SBF and GTF]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth, height=3.5cm]{SBF_GTF_TC_SP.jpg} \end{minipage}} \subfigure[HA and SH]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth, height=3.5cm]{HA_SH.jpg} \end{minipage}}\\ \subfigure[SP]{ \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth, height=2.5cm]{SP.jpg} \centering \end{minipage}} \subfigure[RA and SV]{ \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth, height=2.5cm]{RA_SV.jpg} \centering \end{minipage}} \subfigure[ST]{ \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth, height=2.5cm]{ST.jpg} \centering \end{minipage}} \subfigure[BD and RA]{ \begin{minipage}[t]{0.22\linewidth} \centering \includegraphics[width=1.0\textwidth, height=2.5cm]{BD_RA.jpg} \centering \end{minipage}}\\ \subfigure[SV and LV]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth, height=3cm]{SV_LV.jpg} \centering \end{minipage}} \subfigure[PL and HC]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth, height=3cm]{PL_HC.jpg} \end{minipage}} \subfigure[BR]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth, height=3cm]{BR.jpg} \end{minipage}} \caption{Detection illustration of different objects on the OBB task of DOTA by the proposed method.} \label{fig:DOTA} \end{figure} \textbf{Effect of combining ImLD and InLD.} A natural idea is whether we can combine these two denoising structures, as shown in Fig. \ref{fig:pipeline}. For a more comprehensive study, we perform detailed ablation experiments on different datasets and different detection tasks. The experimental results are listed in Table \ref{table:ImLD_and_InLD}, and we make the following remarks: 1) Most of the datasets are relatively clean, so ImLD does not yield a significant gain on all datasets. 2) The performance improvement of detectors with InLD is very significant and stable, and is superior to that of ImLD. 3) The gain from the combination of ImLD and InLD is not large, mainly because their effects are somewhat overlapping: InLD weakens the feature response of the non-object region while also weakening the image-noise interference. \begin{figure*}[!tb] \centering \subfigure[Small vehicle and large vehicle (HBB task).]{ \begin{minipage}[t]{0.76\linewidth} \centering \includegraphics[width=1.0\textwidth]{large_sence1.jpg} \centering \end{minipage}}\\ \subfigure[Plane (OBB task).]{ \begin{minipage}[t]{0.76\linewidth} \centering \includegraphics[width=1.0\textwidth]{large_sence2.jpg} \end{minipage}} \caption{Detection examples of our proposed method in large scenarios on the DOTA dataset. Our method can effectively handle both the dense (top plot with white bounding boxes) and rotating (bottom plot with red bounding boxes) cases.} \label{fig:LS} \end{figure*} Therefore, ImLD is an optional module depending on the dataset and computing environment. We will not use ImLD in subsequent experiments unless otherwise stated.
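Before the ablation of the IoU-smooth L1 loss, we sketch how its regression term (Eq. \ref{eq:multitask_loss_r}) can be realized in an autograd framework. Detaching the denominator and the magnitude is one possible way, not necessarily the one in our released code, to keep the gradient direction of the smooth L1 loss while letting $|-\log(IoU)|$ set the magnitude; a rotated-IoU routine is assumed to be available. PyTorch is again used only for illustration.
\begin{verbatim}
import torch

def iou_smooth_l1_term(reg_loss, iou, eps=1e-6):
    # reg_loss: summed smooth-L1 over (x, y, w, h, theta), shape (N,)
    # iou     : rotated IoU of each predicted box with its target, (N,)
    # The value equals |-log(IoU)| (close to 0 in boundary cases),
    # while gradients follow the ordinary smooth-L1 direction.
    direction = reg_loss / reg_loss.detach().clamp(min=eps)
    magnitude = (-torch.log(iou.clamp(min=eps, max=1.0))).abs().detach()
    return direction * magnitude
\end{verbatim}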
\textbf{Effect of IoU-Smooth L1 Loss.} The IoU-smooth L1 loss\footnote{Source code of the IoU-smooth L1 loss is separately available at \url{https://github.com/DetectionTeamUCAS/RetinaNet_Tensorflow_Rotation}} eliminates the boundary effects of the angle, making it easier for the model to regress to the object coordinates. Table \ref{table:iou-smooth-l1} shows that the new loss improves the accuracy of the three detectors to 68.65\%, 69.83\%, and 76.20\%, respectively. \textbf{Effect of Data Augmentation and Backbone.} Using ResNet101 as the backbone and data augmentation (random horizontal and vertical flipping, random graying, and random rotation), we observe a reasonable improvement, as shown in Table \ref{table:dota_ablation_study} (69.81\% $\rightarrow$ 72.98\%). We further improve the final performance of the model from 72.98\% to 74.41\% by using ResNet152 as the backbone. Due to the extreme imbalance of categories in the dataset, data augmentation provides a huge advantage, but we have found that this does not affect the functioning of InLD under these heavier settings, from 72.81\% to 74.41\%. All experiments are performed on the OBB task of DOTA, and the final model based on R$^3$Det is also named R$^3$Det++\footnote{Code of R$^3$Det and R$^3$Det++ is available at \url{https://github.com/Thinklab-SJTU/R3Det_Tensorflow}.}. \subsection{Comparison with the State-of-the-Art Methods} We compare our proposed InLD with the state-of-the-art algorithms on the two datasets DOTA \cite{xia2018dota} and DIOR \cite{li2020object}. Our model outperforms all other models. \textbf{Results on DOTA.} We compare our results with the state-of-the-art results on DOTA, as depicted in Table \ref{table:dota_sota}. The results of DOTA reported here are obtained by submitting our predictions to the official DOTA evaluation server\footnote{\url{https://captain-whu.github.io/DOTA/}}. In the OBB task, we add the proposed InLD module to a single-stage detection method (R$^3$Det++) and a two-stage detection method (FPN-InLD). Our methods achieve the best performance, 76.56\% and 76.81\%, respectively. To make a fair comparison, we do not use stacks of various tricks, oversized backbones, or model ensembles, which are often used by the methods on DOTA's leaderboard. In the HBB task, we also conduct the same experiments and obtain competitive detection mAP, about 74.37\% and 76.24\%. Model performance can be further improved to 79.35\% if multi-scale training and testing are used. It is worth noting that FADet \cite{li2019feature}, SCRDet \cite{yang2019scrdet}, and CAD-Net \cite{zhang2019cad} use the simple attention mechanism as described in Eq. \ref{eq:attention}, but our performance is far better than all of them. Fig.~\ref{fig:DOTA} shows some aerial subimages, and Fig.~\ref{fig:LS} shows aerial images of large scenes. \textbf{Results on DIOR and UCAS-AOD.} DIOR is a new large-scale aerial image dataset and has more categories than DOTA. In addition to the official baselines, we also give our final detection results in Table \ref{table:dior_sota}. It should be noted that the baseline we reproduce is higher than the official one. In the end, we obtain 77.80\% and 75.11\% mAP with the FPN- and RetinaNet-based methods, respectively. Table \ref{table:UCAS-AOD} shows the comparison of performance on the UCAS-AOD dataset. As we can see, our method achieves 96.95\% on the OBB task, the best among all existing published methods. \begin{table}[tb!]
\centering \caption{Performance by accuracy (\%) on the UCAS-AOD dataset.} \resizebox{0.35\textwidth}{!}{ \begin{tabular}{l|c|c|c} \hline Method & mAP & Plane & Car \\ \hline YOLOv2 \cite{redmon2017yolo9000} & 87.90 & 96.60 & 79.20 \\ R-DFPN \cite{yang2018automatic} & 89.20 & 95.90 & 82.50 \\ DRBox \cite{liu2017learning} & 89.95 & 94.90 & 85.00 \\ S$^2$ARN \cite{bao2019single} & 94.90 & 97.60 & 92.20 \\ RetinaNet-H \cite{yang2019r3det} & 95.47 & 97.34 & 93.60 \\ ICN \cite{azimi2018towards} & 95.67 & -- & -- \\ FADet \cite{li2019feature} & 95.71 & 98.69 & 92.72 \\ R$^3$Det \cite{yang2019r3det} & 96.17 & 98.20 & 94.14 \\ \hline SCRDet++ (R$^3$Det-based) & \textbf{96.95} & \textbf{98.93} & \textbf{94.97} \\ \hline \end{tabular}} \label{table:UCAS-AOD} \end{table} \section{Conclusion}\label{sec:conclusion} We have presented an instance-level denoising technique in the feature map for improving detection, especially for small and densely arranged objects, e.g. in aerial images. The core idea of InLD is to decouple the features of different categories over different channels, while the features of object and non-object regions are enhanced and weakened in the spatial dimension, respectively. Meanwhile, the IoU constant factor is added to the smooth L1 loss to address the boundary problem in rotation detection for more accurate rotation estimation. We perform extensive ablation studies and comparative experiments on multiple aerial image datasets such as DOTA, DIOR, and UCAS-AOD, the small traffic light dataset BSTLD, and our released S$^2$TLD, and demonstrate that our method achieves state-of-the-art detection accuracy. We also use the natural image dataset COCO and the scene text dataset ICDAR2015 to verify the effectiveness of our approach. \section*{Acknowledgment} This research was supported by the National Key Research and Development Program of China (2018AAA0100704, 2016YFB1001003), NSFC (61972250, U19B2035), and STCSM (18DZ1112300). The author Xue Yang is supported by the Wu Wen Jun Honorary Doctoral Scholarship, AI Institute, Shanghai Jiao Tong University. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Photospheric and coronal magnetic field evolution is essential to the energy build-up and release process of solar eruptions. The field evolution is closely related to changes of sunspot structures. It is thus of vital importance to establish the physical connection between the photospheric magnetic field, the sunspot evolution, and the coronal dynamics. Many previous studies have focused on flare-induced changes of the photospheric magnetic field and sunspot structures (e.g., Severny 1964; Wang et al., 1994; Spirock et al., 2002; Yurchyshyn et al., 2004; Wang \& Liu 2010; Wang et al., 2012; Liu et al., 2014). Over the past five decades, with growing observational capabilities, it has been established that the sunspot structure and the corresponding photospheric magnetic field may change suddenly and irreversibly after flares. The flare-induced changes are manifested in the rapid change of the transverse photospheric magnetic field ($B_h$), and in the darkening as well as the decay or even disappearance of sunspot structures. In a recent study, Liu et al. (2014) presented a comprehensive comparison of two major events released from the same NOAA active region (AR 11283). Both events are characterized by X-class flares (an X2.1 one on 2011 September 6 and an X1.8 one on September 7), with fast filament eruptions and coronal mass ejections (CMEs). The authors found that both flares result in rapid increases of $B_h$ around the flaring polarity inversion line (PIL) and decreases in the surrounding peripheral penumbral region, corresponding to the darkening or decay in white-light (WL) intensities, respectively. This is interpreted as the result of the inward collapse of the central magnetic field (also see Hudson, 2000) and the radial outward stretching of the peripheral magnetic field during the flare-CME eruption. From the space weather forecasting perspective, it is more important to figure out whether there exist some general trends in the pre-flare photospheric magnetic field and WL evolution of the involved sunspots. In Ruan et al. (2014), a study focusing on the 6-hour-long pre-flare sunspot activities and photospheric magnetic field evolution of the X2.1 event from AR 11283 was presented. It was concluded that the persistent sunspot rotation plays an important role in twisting, energizing, and destabilizing the coronal filament-flux rope system. During the period of apparent sunspot rotation, it was found that both the horizontal field strength ($B_h$) and the inclination angle ($\theta_B$, the angle between the vector magnetic field and the local radial direction) decline gradually. They found that the variation of the surface field and the inclination angle is associated with the overall ascending motion of the coronal filament-flux rope structure. Ruan et al. (2014) proposed that the long-term pre-flare evolution of the photospheric $B_h$ can be taken as a possible precursor of an eruption. In addition, the photospheric field evolution carries information about the energy storage and triggering process of the event, and can be used to discern different eruption mechanisms. It was suggested that a gradual decrease of $B_h$ may be a precursor for an eruption in terms of the flux rope instability (see, e.g., Lin et al., 2003), while a persistent increase of this quantity may be a precursor of the tether-cutting reconnection scenario (Moore 2001). In this study, we investigate the X1.8 flare on 2011 September 7, which occurred in the same AR as that studied by Ruan et al. (2014).
As mentioned, this event was associated with a fast CME-filament eruption, very similar to the preceding X2.1 event according to the Solar Dynamics Observatory (SDO; Pesnell et al., 2012) data. Our focus is the long-term pre-flare evolution of the photospheric magnetic field and the sunspot structure, and their correlation with the pre-eruption dynamics in the upper layers of the solar atmosphere. \section{Observations and the overall profile of the event} For this study, we mainly analyzed the multi-wavelength imaging data provided by the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) and the vector magnetic field and continuum intensity data by the Helioseismic and Magnetic Imager (HMI; Schou et al. 2012) on board SDO. The AIA data at the passbands of 304 \,\AA{} (HeII, T$\sim$0.05 MK), 171 \,\AA{} (FeIX, T$\sim$0.6 MK), and 94 \,\AA{} (FeXVIII, T$\sim$6.3 MK) are examined in order to reveal the filament and coronal dynamics at different temperatures. The processed disambiguated HMI vector magnetic field data are of 12-minute cadence at a 0.5$^{\prime}$$^{\prime}$ pixel resolution, provided by the HMI team (see ftp://pail.stanford.edu/pub/HMIvector2/movie/ar1283.mov for the movie). We also used the HMI continuum intensity observation at 6173\,\AA{} to investigate the evolution of the sunspot structure. These intensity data, with a cadence of 12 minutes and a 0.5$^{\prime}$$^{\prime}$ pixel resolution, have been normalized and corrected for the limb-darkening effect. We also examined the TiO image (a proxy for the continuum at 7057 \,\AA{}) taken by the New Solar Telescope (NST; Goode et al. 2010; Cao et al. 2010) at Big Bear Solar Observatory (BBSO) with a high spatial resolution of about 0.04$^{\prime}$$^{\prime}$/pixel. AR 11283 was located at N14W30 at 19:55 UT on September 7, close to the disk center. As mentioned, it released two X-class flares on September 6 and 7, respectively. The eruption processes of both events have been well studied by many authors (Feng et al., 2013; Jiang et al., 2013; Wang et al., 2012; Zharkov et al., 2013; Ruan et al., 2014; Liu et al. 2014; Shen et al. 2014), so details of the overall evolution of this AR will not be repeated here. In Figure 1, we show the GOES SXR (1-8\,\AA{}) light curve from 20:00 UT Sept. 6 to 24:00 UT Sept. 7. The X1.8 flare started at 22:32 UT, peaked at 22:38 UT, and ended at 22:44 UT according to the GOES data. A C3 flare took place from 19:55 UT to 20:19 UT, peaking at 20:06 UT, which is also of interest to this study. The peak times of these three flares have been labeled with blue vertical dotted lines. According to the CDAW (Coordinated Data Analysis Workshops) catalog of the LASCO data (Brueckner et al. 1995), the X1.8 flare was accompanied by a CME travelling at a linear speed of 792 km s$^{-1}$. In Figure 2, we present the overall structure of this active region at several observing wavelengths. Panel (a) is the BBSO high-resolution image taken at 18:15 UT to show details of the sunspot structure. The image has been rotated by -22.6$^{\circ}$ to facilitate comparison with the SDO data. Panel (b) is for the HMI 6173 \,\AA{} intensity data in a similar field of view (FOV) as that of panel (a). Panel (c) is taken from the HMI vector magnetic field data of this AR, with the color map representing the vertical field component ($B_z$) and arrows for the $B_h$ component. Panels (d) to (f) present the 304, 171, and 94 \,\AA{} images recorded by SDO around the same time.
The contours are given by $\pm$300~G of the HMI vertical field component ($B_z$) on the solar surface. This figure shows the pre-eruption condition of the AR. The large-scale magnetic field is quadrupolar, with the two spots on the west giving a $\delta$ configuration, in which the negative one is the leading spot and the positive one is the following spot. Both X-class flares occurred in similar areas in this AR (Liu et al., 2014). The yellow line in panel (c) delineates an ``L''-shaped PIL. The HMI movie given above provides the long-term evolution of the sunspot and the magnetic field, from which we observe that the following $\delta$ spot continues to move eastward after the X2.1 flare (peaking at 22:20 UT, September 6). The magnetic field around the PIL is almost parallel to the PIL, indicating a severely sheared state of the magnetic field. This is consistent with the transverse alignment of the surrounding filamentary penumbral structures as seen in panel (a) of Figure 2. Liu et al. (2014) measured the shearing speed of the two opposite-polarity spots in this $\delta$ configuration, and detected a weaker converging motion along the north-south direction. In addition, after a careful inspection of the HMI data, we find some signatures of rotation of the positive $\delta$ spot, although no clear features like a well-developed magnetic tongue can be used to undoubtedly trace the rotation (see, e.g., Ruan et al., 2014). No apparent flux emergence is observed during the period between the X2.1 and the X1.8 flare. Generally speaking, the above existing sunspot motions continuously transport energy to the corona through their magnetic connection, and play a fundamental role in pushing the coronal state to the eruptive point. The 304-171-94 \,\AA{} images presented in panels (d)-(f) and the accompanying animation show the pre-eruption structures and dynamics of the filament and the corona. We can see that the filament lies above the PIL, with its northern foot rooted in the positive spot umbra. It has a sigmoidal shape, corresponding to the hot sigmoidal structure observed at 94 \,\AA{}. The filament-sigmoid structure has often been taken as the signature of a twisted flux rope structure (e.g., Rust \& Kumar 1994; Titov \& Demoulin 1999; McKenzie \& Canfield 2008), which likely carries a major part of the free magnetic energy to be released. Now let us inspect the dynamic evolution of the relevant filament-corona structure. Initially, the filament is buried underneath the overlying coronal arcades (171\,\AA{}). As the region evolves, the coronal arcades expand gradually and continuously, especially those arcades atop the filament. These arcades seem to be consistently removed from the filament top. This allows a larger part of the filament to be exposed. Before the eruption, the filament becomes thicker, darker, and more bulging than before, as seen from both the 304 and 171 \,\AA{} images. Brightening of loops atop the filament can be seen several times in the 171 and 94 \,\AA{} passbands, indicating ongoing reconnection there. The post-flare loops of the C3 flare are representative of these loops. Later, we will discuss the possible role of this C3 flare and the following reconnections in triggering the subsequent major eruption.
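Since the timing of the C3 flare relative to the two X-class flares matters for the discussion below, we note that the light curve of Figure 1 with the three peak times marked can be reproduced with a few lines of Python; loading the GOES 1-8 \,\AA{} flux into the times and flux arrays is assumed to be done beforehand, and only the peak times are taken from the text.
\begin{verbatim}
import datetime as dt
import matplotlib.pyplot as plt

def plot_goes(times, flux):
    # times: list of datetime objects; flux: GOES 1-8 A flux (W/m^2).
    fig, ax = plt.subplots()
    ax.plot(times, flux, 'k-')
    ax.set_yscale('log')
    ax.set_ylabel('GOES 1-8 A flux (W m^-2)')
    for peak in (dt.datetime(2011, 9, 6, 22, 20),   # X2.1
                 dt.datetime(2011, 9, 7, 20, 6),    # C3.0
                 dt.datetime(2011, 9, 7, 22, 38)):  # X1.8
        ax.axvline(peak, color='b', linestyle=':')
    plt.show()
\end{verbatim}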
\section{Correlation of the photospheric transverse field and the penumbral decay} In Figure 3 we present the long-term temporal evolution of the 6173 \,\AA{} intensity map (upper panels) and the photospheric transverse field ($B_h$, lower panels) of the AR derived from the HMI data. $B_h$ is illustrated using a color map, with red representing stronger and blue representing weaker magnetic field strength. This figure and the accompanying animation exhibit a very striking feature, namely the close correlation between the gradual changes of $B_h$ and the WL intensity ($I_c$). This feature is the focus of our study. Note that, for completeness, the data before and during the previous X2.1 flare have been included in the animation. Here we only focus on the data since 00:00 UT of September 7. Initially, we see that the region with enhanced $B_h$ is co-located with the PIL. Then the red color of this region gradually gives way to yellow, green, and blue colors. This indicates that $B_h$ there becomes weaker with time. After the peak of the X1.8 flare, $B_h$ is suddenly enhanced. On the other hand, the sunspot penumbrae around the PIL brighten persistently over almost the same period ($>$10 hours before the X1.8 flare), and darken suddenly during the flare. The changes of $B_h$ and the penumbral structures during the flare are consistent with previous studies mentioned in our introduction. Of particular interest here are the long-term pre-flare evolution and the correlation of the two quantities. In the earlier papers that reported the flare-induced penumbral intensity changes (Wang et al. 2004; Deng et al., 2005), the authors adopted the wording ``penumbral decay'' to describe the process. Here we follow them on the terminology, although the wording ``penumbral fade'' might be a better choice since during the process only the penumbral intensity changes considerably while the area remains largely unchanged. To further examine their evolution and correlation, we select a trapezoid to include the main area of the pre-flare enhanced $B_h$ region, which is around the PIL across the $\delta$ spot. We then calculate the averages of the magnetic field components ($B_h$ and $B_z$), the inclination angle ($\theta_B$), the total magnetic flux ($\Phi$), and the WL intensity ($I_c$) within the trapezoid (a code sketch of this averaging is given at the end of this section). The obtained profiles are plotted in the two panels of Figure 4. In the upper panel, we see that after $\sim$7:00 UT, $B_h$ starts to decline gradually until the X1.8 flare, from an initial magnitude of $\sim$1226 G to a final value of $\sim$679 G. The overall declining percentage is $\sim$45$\%$. During the X1.8 flare, $B_h$ jumps to $\sim$1053 G. Similarly, $\theta_B$ decreases from $\sim$68$^{\circ}$ (10:00 UT) to $\sim$55$^{\circ}$ before the flare, and jumps back to 64$^{\circ}$ after the flare (24:00 UT). On the other hand, the average intensity starts to increase also from $\sim$10:00 UT, from a normalized value of $\sim$0.70 to $\sim$0.89 before the flare, and drops to 0.78 after the flare (24:00 UT). The intimate anti-correlation between $I_c$ and $B_h$ (or $\theta_B$) is self-evident. In the lower panel of Figure 4, we plot the profiles of the average positive and negative components of $B_z$ as well as the total flux ($\Phi$). We see that during the whole pre-flare stage, none of the three quantities shows any considerable systematic change.
The value of the positive (negative) $B_z$ lies in a narrow range of [251, 280] G ([-215, -174] G), and $\Phi$ lies in a range of [0.58$\times10^{18}$, 0.65$\times10^{18}$] Mx, from 05:00 UT to 21:00 UT. This indicates that the PIL region is dominated by the transverse component of the photospheric field, and there is no significant flux emergence or cancellation in the period of study. A straightforward explanation of the results shown in Figures 3 and 4 is that the magnetic field lines on the photosphere become more vertical with time during the pre-flare phase. To find further observational support for this explanation, we examine the AIA data taken in the 304 and 171 \,\AA{} passbands. Following Ruan et al. (2014), we plot the height-time map along a slice of the filament. The slice location and the map are shown in the upper panels of Figure 5, with an accompanying animation available online. A dashed white line is plotted as a visual guide. We see that, along the slice, at about 08:00 UT the filament becomes thick enough to be observable in the map. After that, it shows a gradual ascending motion, very similar to the filament motion leading to the previous X2.1 flare (Ruan et al., 2014). The 171 \,\AA{} data present the filament and its overlying arcades. As described in the previous section, initially the filament seems to be buried underneath these arcades. Along with the continuous apparent expansion of the arcades, a larger part of the filament comes into view. To show this continuous expansion, we make an angular-time map along a circular slice centered on one foot of the arcades. The northward direction is taken to be 0 degrees, and the angle increases clockwise. The slice and the map are shown in the lower panels of Figure 5, where the expansion is manifested as a gradual rotating motion, which is obvious from 05:00 UT on. The apparent expansion is initially very fast, then gets slower with time. The outer edge of the loops moves from $\sim$50$^{\circ}$ at 05:00 UT to $\sim$100$^{\circ}$ at 12:00 UT and reaches $\sim$120$^{\circ}$ from 12:00 UT to 22:00 UT, along the circular slice. Note that thermal effects, which can make the coronal arcades visible or invisible, can result in some pseudo-expansion. Nevertheless, with a careful inspection of the animation, we believe that the genuine physical expansion of the arcades is important here. Assuming the pre-eruption coronal field evolves slowly enough that it can be represented by a series of magnetic equilibria, Liu et al. (2014) deduced this slow evolution with a Nonlinear Force-Free Field (NLFFF) extrapolation technique (Wiegelmann, 2004; Wiegelmann et al., 2006). In their Figure 4 and the accompanying animation, they showed the long-term evolution of the extrapolated electric current density distribution and magnetic field lines across the central part of the filament. It is clear from their study that there exists a flux rope aligned with the filament, and the flux rope rises gradually during the period from after the X2.1 flare to before the X1.8 flare. This result is consistent with the observation that a significant part of the well-developed filament exists after the X2.1 flare (see also Liu et al. 2014), and supports our argument deduced above. In summary, we found a persistent, well-correlated, long-term pre-flare evolution of the penumbral structure and the photospheric transverse field around the PIL. The observed penumbral decay and $B_h$ decline are likely caused by the ascending motion of the filament-flux rope system. This picture is supported by the simultaneous imaging data, with the 304 \,\AA{} passband for the filament and the 171 \,\AA{} passband for the overlying arcades. Nevertheless, the possibility cannot be ruled out that the pre-flare sunspot decay and the $B_h$ decline are simply the result of the overall sunspot evolution and not a direct response to the filament-corona dynamics.
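For reference, the quantities plotted in Figure 4 can be computed from the disambiguated HMI arrays as sketched below; the array and mask names are our own assumptions (the trapezoid mask has to be built from the chosen vertices), and the inclination angle follows $\tan\theta_B = B_h/|B_z|$. The sketch returns the total unsigned flux, whereas Figure 4 additionally separates the positive and negative $B_z$ averages.
\begin{verbatim}
import numpy as np

def trapezoid_averages(bx, by, bz, ic, mask, pix_area_cm2):
    # bx, by, bz: HMI vector-field components (G), 2-D arrays
    # ic        : normalized 6173 A continuum intensity
    # mask      : boolean array selecting the trapezoid pixels
    bh = np.hypot(bx, by)                           # transverse field
    theta = np.degrees(np.arctan2(bh, np.abs(bz)))  # angle from radial
    phi = np.sum(np.abs(bz[mask])) * pix_area_cm2   # G cm^2 = Mx
    return bh[mask].mean(), theta[mask].mean(), phi, ic[mask].mean()
\end{verbatim}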
This picture is supported by the simultaneous imaging data, with the 304 \,\AA{} passband for the filament and the 171 \,\AA{} passband for the overlying arcades. Nevertheless, we cannot rule out the possibility that the pre-flare penumbral decay and the $B_h$ decline are simply the result of the overall sunspot evolution and not the direct response to the filament-corona dynamics. \section{Possible triggering process of the major eruption} Another interesting problem to discuss here is the possible role played by the C3 flare in triggering the following X1.8 eruption. To show this, in Figure 6 we present four AIA 171 \,\AA{} images from 20:39 UT to 22:27 UT. The first panel presents the post-flare loops of the C3 flare, stretching over the filament. A nearby bright loop on the northern side is also shown. In the second panel both loop systems darken and expand. In the third panel (21:54 UT) a new loop connecting the southern foot of the post-flare loops and the eastern foot of the nearby loop appears. This is very likely a result of the reconnection between the two loop systems, as will be explained in the following paragraph. At 22:27 UT, in the fourth panel, the post-reconnection loops brighten considerably and the post-C3 loops almost disappear. This evolution of the coronal loops can also be clearly seen in the accompanying AIA 171 \,\AA{} animation. To support this reconnection scenario, we show the contours of the vertical magnetic field component of HMI in Figure 6(b). It is seen that the magnetic field polarities are in the order of positive, negative, positive, and negative for the four footpoints of the two sets of pre-reconnection loops, thus favoring the reconnection process described above. The post-reconnection bright loops (see Figure 6d) likely still lie on top of the filament, higher in the corona than the pre-reconnection overlying loops. Shortly after the reconnection, the filament starts to rise rapidly and the X1.8 flare takes place. To illustrate the reconnection process more clearly, we show schematics in the left two panels at the bottom of Figure 6, where yellow lines represent pre-reconnection loops and red lines represent post-reconnection loops. It can be seen that the expansion of the coronal arcades overlying the filament (in cyan) drives their reconnection with longer magnetic loops rooted in a nearby pair of opposite polarities. This reconnection can remove some of the arcades overlying the filament and reduce the confining force (or the strapping effect; see the recent paper of Wang et al., 2015) acting on the filament. This will at least speed up the evolution process leading to the following major eruption, and may possibly act as a trigger of the filament-flux rope instability. \textbf{For completeness, a possible configuration during the impulsive stage of the X1.8 flare is also shown in the last panel of Figure 6, from which we see that the filament erupts with a flux rope and part of its overlying arcades.} The picture involves a quadrupolar topology and a triggering reconnection in the corona, both of which are essential features of the breakout model (Antiochos et al., 1999) for solar eruptions. We therefore suggest that the breakout process may be important in triggering the X1.8 flare and the associated CME. \section{Conclusions and discussion} Some recent studies on the magnetic field evolution have focused on rapid changes of the photospheric transverse field and sunspot structures induced by solar flares.
In this study we investigate the correlation between the long-term ($\sim$10-20 hours) pre-flare evolution of the sunspot penumbrae and the photospheric transverse field component around the PIL, as well as their relation with the filament-corona dynamics. It is found that the penumbrae decayed gradually and the strength of the transverse field (and the inclination angle of the magnetic field) on the solar surface declined correspondingly, indicating that the pre-flare magnetic structure from the photosphere to the corona became more vertical with time. This indication is corroborated by the SDO imaging observations of the rising filament and the apparent expansion of its overlying arcades, and is consistent with the NLFFF extrapolation results for the pre-eruption state of the local corona. Intensity changes of the sunspot penumbra induced by solar flares have been reported in several studies (e.g., Wang et al., 2004; Deng et al., 2005; Liu et al., 2014). Here we report the long-term pre-flare evolution of the penumbral intensity as well as its correlation with the photospheric magnetic field evolution and the relevant coronal dynamics. In Deng et al. (2005), the flare-induced penumbral decay was explained by the change of the inclination angle of the magnetic field in the corresponding penumbral region. According to Leka {\&} Skumanich (1998), when the magnetic field in the peripheral penumbrae turns from more inclined to more vertical, tilting toward the umbra, it can directly suppress the penumbral Evershed flow, resulting in an increase of the continuum intensity. Here we propose a very similar scenario to interpret the observational result, although the trend of the long-term change is opposite to that of the flare-induced rapid changes. We suggest that the observed persistent pre-flare penumbral decay (or fade) is a result of the gradual change of the direction of the magnetic field in the penumbral region from more horizontal to more vertical, which is likely caused by the gradual rising of the filament-flux rope system in the upper solar atmosphere. The long-term pre-flare behaviors of the photospheric magnetic field, sunspot, filament, and coronal arcades are of vital importance to space weather studies. These behaviors could provide clues about how the solar magnetic field evolves from a pre-eruption state to the eruption, including the energy transport, storage, and release processes. The result of our study is based on eruptions from only one AR. More events should be investigated to decide which signatures or processes could be used as possible eruption precursors or triggers. \begin{figure*}[!htbp] \vspace{12.3mm} \centering \includegraphics[width=120mm]{f1.ps} \caption{The GOES 1-8 \,\AA{} SXR flux profile from 20:00 UT September 6 to 24:00 UT September 7. The peak times of the three flares, X2.1 (22:20 UT), C3.0 (20:06 UT), and X1.8 (22:38 UT), are labelled with blue vertical dotted lines. An animation and a color version of this figure are available online.} \label{fig:goes} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=140mm]{f2.ps} \caption{Pre-eruption observations of the AR 11283: (a) the BBSO high-resolution image taken at 18:15 UT, (b) the HMI 6173 \,\AA{} intensity image, (c) the HMI vector magnetic field with the color map representing the vertical field component ($B_z$) and arrows for the $B_h$ component, (d) to (f) the 304, 171, and 94 \,\AA{} images recorded by SDO.
The contours in panel (d) are given by $\pm$300 G of the HMI vertical field component ($B_z$) on the solar surface. The FOV of panels (a) and (b) is marked in panel (c) with a square. A color version of this figure is available online.} \label{fig:overview} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=140mm]{f3.ps} \vspace{12.3mm} \caption{Temporal evolution of the 6173 \,\AA{} intensity map (upper panels) and the photospheric transverse field strength ($B_h$, lower panels) of the AR given by HMI. An animation and a color version of this figure are available online.} \label{fig:evolution} \end{figure*} \begin{figure*}[!htbp] \includegraphics[width=140mm]{f4a.ps} \includegraphics[width=140mm]{f4b.ps} \caption{(a) Temporal profiles of the average transverse field strength $B_h$, the average inclination angle $\theta_B$, and the normalized white-light intensity $I_c$; (b) temporal profiles of the average positive and negative components of $B_z$ as well as the total flux ($\Phi$). All parameters are calculated in the area defined by the trapezoid shown in Figure 3. A color version of this figure is available online.} \label{fig:profiles} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=140mm]{f5.ps} \caption{Time-stacked images along specified slices of the filament and the overlying arcades. Panels (a) and (c) show the direct AIA/SDO images at 18:00 UT on September 7, 2011 in the 304\,\AA{} and 171\,\AA{} passbands, with the locations of the slices. The green vertical lines in (b) and (d) represent the observation times of panels (a) and (c). An animation and a color version of this figure are available online.} \label{fig:slices} \end{figure*} \begin{figure*}[!htbp] \begin{minipage}{\textwidth} \includegraphics[width=140mm]{f6a_6d.ps} \end{minipage} \vspace{-7.0mm} \begin{minipage}{\textwidth} \hspace{2.7mm} \includegraphics[width=130mm]{f6e_6g.ps} \end{minipage} \caption{Panels (a-d): the AIA 171 \,\AA{} images from 20:39 UT to 22:27 UT. The contours in panel (b) are given by 300 G for the positive component (in blue) and 200 G for the negative component (in green) of $B_z$ on the solar surface. Red arrows point to the loops of interest, and the black arrow in panel (a) points to the filament of study. Panels (e-g): schematics showing the proposed reconnection-triggering process \textbf{and the possible magnetic configuration during the impulsive stage of the major eruption.} The pre-reconnection magnetic field lines are in yellow, the post-reconnection lines are in red, and the filament of study is depicted with a cyan dashed line. Arrows in (e) indicate the magnetic field direction. An animation and a color version of this figure are available online.} \label{fig:reconnect} \end{figure*} \newpage \acknowledgements We thank the SDO/HMI and SDO/AIA science teams for free access to the data. We are grateful to the referee for valuable comments, and to Dr. Guohui Du, Bing Wang, and Di Zhao for their help in preparing the figures. This work was supported by the 973 program NSBRSF 2012CB825601, U1331104 and NNSFC grants 41331068, 41274175 to SDUWH, and by NSF grants AGS-1348513 and AGS-1408703 to NJIT.
\section{Introduction} In recent years, much effort has been devoted toward formulating supersymmetric (SUSY) gauge theories on the lattice. This has been motivated partly by the theoretical as well as technical challenges associated with the problem, and partly by the obvious potential role of SUSY in beyond the standard model physics. Of crucial importance pertaining to the latter point is an understanding of dynamical symmetry breaking--something which may in principle be achieved with numerical simulations provided an appropriate lattice discretization of the theory is found. Since na\'ive discretizations typically break SUSY explicitly, lattice simulations require fine-tuning in order to cancel off the undesirable SUSY breaking operators which may arise through radiative corrections. However, it has been realized for some time that one of the simplest SUSY theories, ${\cal N}=1$ supersymmetric Yang-Mills (SYM), may be simulated with conventional lattice discretizations and yet requires only a minimal degree of fine-tuning. Although this theory does not exhibit spontaneous SUSY breaking (as implied by its nonvanishing Witten index \cite{Witten:1982df}), it is believed to exhibit a variety of other interesting features such as discrete chiral symmetry breaking. The field content of ${\cal N}=1$ SYM consists of a vector field and a single adjoint representation Majorana fermion, and in conventional lattice discretizations of this theory, the only relevant SUSY violating operator which may arise radiatively is a gluino mass term. As a result, in the chiral and continuum limits SUSY is restored {\it accidentally} at infinite space-time volume. In the past, a variety of numerical studies have employed Wilson fermions in order to simulate ${\cal N}=1$ SYM (for a review, see \cite{Feo:2002yi}); however, these were subject to both fine-tuning difficulties and the sign problem. In contrast, it was observed in \cite{Kaplan:1999jn} that domain-wall fermions (DWFs) are an ideal fermion discretization for simulating ${\cal N}=1$ SYM because of their good chiral properties \cite{Kaplan:1992bt,Narayanan:1992wx,Shamir:1993zy} and the positivity of the fermion Pfaffian obtained from ``integrating out'' the fermion degrees of freedom. In this formulation, the chiral limit may be achieved without any fine-tuning of operators. The first and until recently the only study to use DWFs to investigate ${\cal N}=1$ SYM focused on the chiral limit of the gluino condensate \cite{Fleming:2000fa}. In our work, we expand on this early study in several important respects: we 1) establish the lattice scale by measuring the static quark potential and provide evidence for confinement which is consistent with expectations, 2) determine the size of the residual mass in order to ascertain the proximity to the SUSY point, 3) extrapolate the chiral condensate to the chiral limit using a recent, theoretically motivated fit formula for its $L_s$ dependence, and 4) study the spectrum of the theory. With the exception of the third point, questions such as these could not easily be addressed in \cite{Fleming:2000fa} because the space-time volumes were too small. Finally, we note that a similar but independent study of ${\cal N}=1$ SYM was presented by J. Giedt at this conference \cite{Giedt:2008aa}. \section{Simulation and measurement details} Numerical simulations of ${\cal N}=1$ SYM were performed using an appropriately modified version of the Columbia Physics System (CPS).
We use a Wilson gauge action with Majorana DWFs in the adjoint representation of the gauge group $SU(2)$. The specific details of this lattice action may be found in \cite{Fleming:2000fa}. Rational Hybrid Monte Carlo simulations were performed on a $16^3\times32\times L_s$ lattice with $L_s=16,20,24$ and $28$, gluino masses $m_f = 0.01, 0.02$ and $0.04$, a domain-wall height of $M=1.9$, and coupling $\beta=2.3$. Several additional simulations were performed at the weaker couplings $\beta = 2.3533$ and $\beta=2.4$ as well. For each ensemble, a total of 2500 to 3000 trajectories were generated starting from an ``ordered'' configuration, and equilibrium was achieved within the first $500$ trajectories. Measurements were made using uncorrelated configurations generated thereafter. A plot of the gluino condensate time history is shown in \Fig{thermalization}. \begin{figure} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{pbp_16x16x16x32_3.45_0.02_thermalization.ps} \caption{Monte Carlo time history for the gluino condensate for $\beta=2.3$ and $m_f=0.02$.} \label{fig:thermalization} \end{minipage} \hspace{8pt} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{hqpot_16x16x16x32x16_0.02.ps} \caption{% Heavy quark potential as a function of distance for $m_f=0.02$ and $L_s=16$. Sommer scale error bars are indicated by dashed lines.} \label{fig:hqpot} \end{minipage} \end{figure} \section{Results} \subsection{Static quark potential} The static quark potential was extracted from Wilson loop measurements for a range of couplings. Wilson loops were measured in the fundamental representation of the gauge group and on Coulomb gauge fixed configurations. For fixed distances $r=|{\bf x}|$, the Wilson loops were fit as a function of time to the formula: \beq \langle W({\bf x},t) \rangle = C({\bf x}) e^{-V({\bf x}) t}\ , \qquad V({\bf x}) = V_0 - \frac{\alpha}{|{\bf x}|} + \sigma |{\bf x}| \eeq within an interval where excited state contamination appeared to be negligible. The extracted values of $V({\bf x})$ were then fit to the Cornell potential. The constant term ($V_0$), L\"uscher term ($\alpha$), string tension ($\sigma$), and Sommer scale ($r_0$) defined by: \beq \left. |{\bf x}|^2 \frac{\partial V({\bf x})}{\partial |{\bf x}|} \right|_{|{\bf x}|=r_0} = 1.65 \eeq were determined by double jackknife fits to the data. Note that for the Cornell form this condition simply gives $r_0 = \sqrt{(1.65-\alpha)/\sigma}$, which may be checked directly against the fit parameters below. \Tab{hqpot} and \Fig{hqpot} summarize the fit results. The decrease in the string tension with increasing coupling supports the conclusion that we are in a confining phase of the theory. Taking the Sommer scale to be $r_0 = 0.5$ fm, we find that the inverse lattice spacing ranges between 1.3 GeV at $\beta=2.3$ and 2.1 GeV at $\beta=2.4$. \begin{table}[t] \caption{Static quark potential fit parameters for $m_f=0.02$ and $L_s=16$.} \centering \begin{tabular}{l l l c c c c} \hline\hline $\beta$ & t range & r range & $r_0$ & $V_0$ & $\alpha$ & $\sigma$ \\ \hline 2.3 & 4-8 & $\sqrt{3}$-6 & 3.339(23) & 0.501(18) & 0.161(23) & 0.134(4) \\ 2.3533 & 5-9 & $\sqrt{3}$-6 & 4.379(80) & 0.569(23) & 0.240(23) & 0.074(4) \\ 2.4 & 5-10 & $\sqrt{3}$-6 & 5.306(68) & 0.539(11) & 0.205(15) & 0.051(2) \\ \end{tabular} \label{tab:hqpot} \end{table} \subsection{Residual mass} It is of crucial importance that we have an understanding of the residual mass ($m_{res}$), since its magnitude determines our proximity to the SUSY point in this lattice formulation.
The $L_s$ dependence of the residual mass may be parameterized by the theoretically motivated formula \cite{Antonio:2008zz}: \beq m_{res} \sim a_0 \frac{e^{- a_1 L_s}}{L_s} + a_2 \frac{\rho(0)}{L_s}\ , \label{eq:mres} \eeq where $\rho(0)$ appearing in the second term (i.e. the dislocation term) represents the density of zero eigenvalues of the fifth dimension transfer matrix Hamiltonian. The residual mass was determined from a ratio $R(t)$, given by the coupling of the pion \footnote{Here, we refer to the flavor non-singlet pseudo-scalar associated with a partially quenched theory.} to the mid-point pseudo-scalar density divided by its coupling to the boundary (see \cite{Blum:2000kn} for details). At large times this ratio of correlation functions is expected to tend asymptotically toward $m_{res}$; plots of this ratio for $L_s=16$ lattices are shown in \Fig{R}. We determine the residual mass by fitting $R(t)$ with a constant over the plateau region. \Fig{mres} shows a plot of the extracted values of $m_{res}$ as a function of the coupling. We find that the residual mass is roughly 5-10 times that of the input gluino mass. Furthermore, the strong dependence of $m_{res}$ on the coupling suggests that the dislocation term appearing in \Eq{mres} dominates the residual mass. In order to reduce the residual mass, simulations at weaker couplings and larger $L_s$ values are currently underway. \begin{figure} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{R_16x16x16x32x16_0.02.ps} \caption{R as a function of time for $m_f=0.02$ and $L_s=16$.} \label{fig:R} \end{minipage} \hspace{8pt} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{mres_vs_beta_16x16x16x32x16_0.02.ps} \caption{Residual mass as a function of coupling for $m_f=0.02$ and $L_s=16$. } \label{fig:mres} \end{minipage} \end{figure} \subsection{Gluino condensate} The gluino condensate was measured using a stochastic estimator with a single hit. We perform chiral limit extrapolations of the gluino condensate using two different limit orders following \cite{Fleming:2000fa}. First, we perform a linear $m_f \to0$ extrapolation of the gluino condensate at fixed $L_s$, followed by an $L_s \to \infty$ extrapolation of the $m_f=0$ result using the best available, theoretically motivated fit formula: \beq c_0 + c_1 \frac{e^{-c_2 L_s}}{L_s}\ , \label{eq:Ls_fit} \eeq which may be derived from the fifth dimension transfer matrix formalism. Note that the dislocation contribution to $m_{res}$ which appears in \Eq{mres} is absent in \Eq{Ls_fit}. This may be understood by observing that the chiral condensate is dominated by contributions from UV modes, whereas the dislocation term appearing in $m_{res}$ may be attributed to purely low energy phenomena \cite{Cheng:2008aa,RBC:2008aa}. Following the double extrapolation procedure outlined above we obtain an unrenormalized value of 0.003087(159) ($\chi^2/d.o.f. = 3.3$) for the gluino condensate in the chiral limit at finite lattice spacing. Chiral extrapolations have been performed by reversing the order of limits and yield consistent results with comparable error bars. Plots of the $m_f$ and $L_s$ fits for these extrapolations are shown in \Fig{condensate}. We have performed additional fits using other, phenomenologically motivated fit formulae to describe the $L_s$ dependence of the gluino condensate (e.g. \Eq{Ls_fit}, without the $L_s^{-1}$ prefactor in the second term).
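For concreteness, the $L_s\to\infty$ step of this double extrapolation could be carried out along the following lines. This is an illustrative sketch only, with placeholder condensate values and errors; it is not the analysis code used to obtain the numbers quoted above.

\begin{verbatim}
# Fit m_f -> 0 condensate values to  c0 + c1*exp(-c2*Ls)/Ls  (the fit
# formula above) and read off the Ls -> infinity limit c0.
# The condensate values and errors below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def ls_form(Ls, c0, c1, c2):
    return c0 + c1 * np.exp(-c2 * Ls) / Ls

Ls  = np.array([16.0, 20.0, 24.0, 28.0])
pbp = np.array([0.0042, 0.0037, 0.0034, 0.0032])  # placeholder values
err = np.array([0.0002, 0.0002, 0.0002, 0.0002])  # placeholder errors

popt, pcov = curve_fit(ls_form, Ls, pbp, sigma=err,
                       absolute_sigma=True, p0=[0.003, 0.1, 0.1])
print("Ls -> infinity condensate:", popt[0], "+/-", np.sqrt(pcov[0, 0]))
\end{verbatim}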
These alternative, phenomenologically motivated fits yield a 20\% shift in the gluino condensate as compared to the value obtained with \Eq{Ls_fit}. \begin{figure} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{pbp_16x16x16x32_3.45_mf_extrapolation.ps} \end{minipage} \hspace{8pt} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{pbp_16x16x16x32_3.45_Ls_extrapolation.ps} \end{minipage} \caption{Fits of the gluino condensate as a function of $m_f$ (left; extrapolated values are indicated by $\times$) and as a function of $L_s$ (right; extrapolation error bars are indicated by dashed lines) for $\beta = 2.3$.} \label{fig:condensate} \end{figure} \subsection{Spectrum} The low energy spectrum of ${\cal N}=1$ SYM is believed to consist of supermultiplets which may involve glue-glue, glue-gluino as well as gluino-gluino bound states. Although the states within a given multiplet are degenerate, at finite gluino mass (e.g. $L_s\neq\infty$, $m_f\neq0$) one expects mass splittings which are to leading order linear in the gluino mass. The mass splittings have been calculated using a variety of effective theories \cite{Veneziano:1982ah,Farrar:1997fn}; however, such calculations are unreliable since there is no separation of scales and therefore no small expansion parameter. For the scalar and pseudo-scalar gluino-gluino and the glue-gluino composite states, we consider the interpolating fields \beq \Omega_i({\bf x},t) = \Tr \bar \lambda({\bf x},t) \Gamma_i \lambda({\bf x},t)\ , \qquad \Omega({\bf x},t) = \Tr F_{\mu\nu}({\bf x},t) \Sigma_{\mu\nu} \lambda({\bf x},t)\ , \eeq where $\Gamma_i = \{1, \gamma_5 \}$ for the scalar (s) and pseudo-scalar (ps), $\lambda({\bf x},t)$ is an appropriate interpolating field for the gluino, $\Sigma_{\mu\nu} = \frac{-i}{2} [ \gamma_\mu, \gamma_\nu ]$ and $F_{\mu\nu}$ represents an interpolating field for the field strength tensor (e.g. a clover-leaf shaped product of link matrices). The lowest energy states created by the former operator correspond to the $f_0$ and $\eta^\prime$ respectively in QCD, whereas there is no QCD analogue for the latter operator. Upon ``integrating out'' the fermion degrees of freedom, the correlation functions for the scalar and pseudo-scalar operators involve a difference between two contributions: a ``connected'' part $C_i(t)$ and a ``disconnected'' part $D_i(t)$. As is the case with the $\eta^\prime$ in QCD, the disconnected contribution is numerically extremely difficult to evaluate exactly. We choose to instead use a stochastic estimator to approximate the correlator following the techniques of \cite{Hashimoto:2008xg}. \begin{table}[t] \caption{Pseudo-scalar fit parameters for $m_f=0.02$ and $L_s=16$.} \centering \begin{tabular}{l|cc|cc} \hline\hline $\beta$ & t range & $m_{ps}^{connected}$ & t range & $\Delta m_{ps}$ \\ \hline 2.3 & 9-23 & 0.8701(3) & 2-4 & 0.0180(37) \\ 2.3533 & 9-23 & 0.8144(6) & 2-4 & 0.0176(52) \\ 2.4 & 9-23 & 0.7367(9) & 2-4 & 0.0230(51) \\ \end{tabular} \label{tab:spectrum} \end{table} The connected and disconnected correlators were measured using random $Z_2$ volume and wall sources respectively (a single hit for the connected part and five hits for the disconnected part); to improve statistics we project onto the zero momentum state by averaging the result over all of space.
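As an aside, the stochastic estimate of the disconnected piece can be sketched in a few lines. The code below is our illustration of the generic $Z_2$-noise trace estimator, with \texttt{solve\_dirac} standing in for the actual CPS inverter (it is not a real API), and array shapes chosen purely for clarity.

\begin{verbatim}
import numpy as np

def z2_source(shape, rng):
    # Z2 noise: each component is +1 or -1 with equal probability.
    return rng.choice([-1.0, 1.0], size=shape)

def disconnected_loop(gamma, solve_dirac, shape, n_hits, rng):
    # Estimate Tr[Gamma D^{-1}](t) per timeslice;
    # shape = (T, V3, n_spin_color), gamma = (n_spin_color, n_spin_color).
    est = np.zeros(shape[0], dtype=complex)
    for _ in range(n_hits):
        eta = z2_source(shape, rng)   # noise on all sites (volume source)
        psi = solve_dirac(eta)        # psi = D^{-1} eta
        # eta^dag Gamma psi summed over space: zero-momentum projection
        est += np.einsum('txa,ab,txb->t', eta.conj(), gamma, psi)
    return est / n_hits
\end{verbatim}

The disconnected correlator $D_i(t)$ is then built from products of such loops at separated timeslices; in the measurements above, five hits play the role of \texttt{n\_hits}.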
In order to extract the mass of the pseudo-scalar, we study the ratio of disconnected and connected correlation functions \beq \frac{D_{ps}(t) }{C_{ps}(t) } = 2 - d e^{-\Delta m_{ps} t}\ , \label{eq:ratio} \eeq where $\Delta m_{ps} = m_{ps} - m_{ps}^{connected}$. Plots of this ratio at several different couplings are shown in \Fig{ratio}. The linearity of these plots appears consistent with $\Delta m_{ps} t \ll 1$, presumably due to the presence of a large residual mass. Assuming that this is the case, we may expand \Eq{ratio} to leading order in $\Delta m_{ps} t$ and then perform a linear fit to obtain a value for the mass difference $\Delta m_{ps}$. \Fig{meff} shows effective mass plots for the connected part of the pseudo-scalar correlation function at several different couplings from which $m_{ps}^{connected}$ may be extracted. With $\Delta m_{ps}$ and $m_{ps}^{connected}$ determined, we may finally extract the pseudo-scalar mass $m_{ps}$. The results of these fits are provided in \Tab{spectrum}. While at present there are insufficient statistics to differentiate $\Delta m_{ps}$ between $\beta$ runs, it is nonetheless evident that for each coupling $m_{ps}$ is dominated by its contribution from $m_{ps}^{connected}$. A complete analysis of the pseudo-scalar, scalar, and their fermionic superpartner at smaller residual masses is currently underway and results will appear in a forthcoming publication. \begin{figure} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{pscalar_ratio_16x16x16x32x16_0.02.ps} \caption{Ratio of connected and disconnected pseudo-scalar correlators as a function of time for $m_f = 0.02$ and $L_s = 16$.} \label{fig:ratio} \end{minipage} \hspace{8pt} \begin{minipage}[t]{0.486\textwidth} \centering \includegraphics[angle=270,width=2.9in]{connected_pscalar_meff_16x16x16x32x16_0.02.ps} \caption{Effective mass plot of the connected pseudo-scalar correlator as a function of time for $m_f = 0.02$ and $L_s = 16$.} \label{fig:meff} \end{minipage} \end{figure} \section{Acknowledgments} M. G. E. would like to thank N. Christ, C. Kim and R. Mawhinney for numerous helpful discussions, I. Mihailescu for fitting the static quark potential data presented in this work, and C. Jung for technical assistance with compiling and running CPS on BlueGene/L. This research utilized resources at the New York Center for Computational Sciences at Stony Brook University/Brookhaven National Laboratory which is supported by the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 and by the State of New York. This work was supported by the U.S. Department of Energy under grant number DE-FG02-92ER40699.
\section{Introduction} \label{sec:intro} \begin{figure}[t] \centering \vspace{-0.35cm} \subfigtopskip=2pt \subfigbottomskip=2pt \subfigcapskip=-5pt \subfigure[Atrous Conv\cite{dilation}]{ \label{fig:sub_astous} \includegraphics[width=0.4\linewidth]{fig/astrous.pdf}} \subfigure[FPN\cite{fpn}]{ \label{fig:sub_fpn} \includegraphics[width=0.4\linewidth]{fig/unet.pdf}} \subfigure[BiSeNet\cite{bisenet}]{ \label{fig:sub_bisenet} \includegraphics[width=0.35\linewidth]{fig/bisenet.pdf}} \subfigure[Proposed BiAlignNet]{ \label{fig:sub_bialign} \includegraphics[width=0.35\linewidth]{fig/bialign.pdf}} \caption{\textbf{Comparison of different segmentation architectures.} \subref{fig:sub_astous} uses atrous convolution layers to obtain a larger receptive field and a high-resolution feature map, but introduces heavy computational complexity. \subref{fig:sub_fpn} is an FPN-like model, which obtains a high-resolution feature map by adding top-down and lateral fusions. \subref{fig:sub_bisenet} shows the structure of BiSeNet\cite{bisenet}. We propose \subref{fig:sub_bialign} to maximize the mutual utilization of the two paths and to add different supervision according to their priorities. Best viewed in color. } \label{fig:teaser} \end{figure} Semantic segmentation is a fundamental vision task that aims to classify each pixel in an image correctly. Some earlier approaches~\cite{deeplabv1, li2011superpixel} use structured prediction operators such as conditional random fields (CRFs) to refine segmentation results. Recent methods for semantic segmentation are predominantly based on FCNs~\cite{fcn}. Current state-of-the-art methods~\cite{pspnet,DAnet,nvidia_seg_video} apply atrous convolutions~\cite{dilation} at the last several stages of their networks to yield feature maps with strong semantic representation while at the same time maintaining high resolution, as shown in Fig.~\ref{fig:teaser}(a). Moreover, there are also several methods based on Feature Pyramid Network (FPN)-like~\cite{fpn,PanopticFPN,unet} models which leverage the lateral path to fuse feature maps in a top-down manner. In this way, the deep features of the last several layers strengthen the shallow, high-resolution features. Therefore, the refined features can keep high resolution while capturing semantic representation, which benefits accuracy, as shown in Fig.~\ref{fig:teaser}(b). However, neither design is practical for real-time settings. The former methods~\cite{pspnet,DAnet} require extra computation since the feature maps in the last stages can be up to 64 times larger than those in standard FCNs. Meanwhile, the latter~\cite{PanopticFPN} has a heavier fusion operation in its decoder. For example, on a single GTX 1080Ti GPU, the previous model PSPNet~\cite{pspnet} has a frame rate of only 1.6 FPS for $1024 \times 2048$ input images. As a consequence, this is very problematic for many time-critical applications, such as autonomous driving and robot navigation, which desperately demand real-time online data processing. There are several specifically designed real-time semantic segmentation models~\cite{ICnet,dfanet,bisenet,bisenetv2} that address the above issues. However, these methods cannot achieve segmentation results as satisfactory as those of accurate models. The representative works, the BiSeNets~\cite{bisenet,bisenetv2}, propose to use two different paths for learning spatial details and coarse context information respectively, as shown in Fig.~\ref{fig:teaser}(c).
However, they have not explored the interaction between the two data flows explicitly. We believe these two data flows carry complementary content that can benefit each other. In this paper, we propose a new network architecture for real-time scene parsing settings. As shown in Fig.~\ref{fig:teaser}(d), the two paths interact with each other through specifically designed modules before the fusion. Motivated by a recent alignment module~\cite{sfnet}, which deforms the entire feature map using a learned flow field, we propose a Gated Flow Alignment Module to suppress noise during fusion, since the two paths contain diverse information. The proposed module is lightweight and can be inserted into each path before fusion. The features are aligned to each other through the learned flow fields. Moreover, to make the spatial path learn detailed information, we supervise it using an edge-guided hard pixel mining loss~\cite{ohem} to further improve the performance. We term our network BiAlignNet for short. Finally, we evaluate BiAlignNet on two datasets, i.e., Cityscapes~\cite{Cityscapes} and CamVid~\cite{CamVid}. The results demonstrate the effectiveness of the proposed components. Specifically, our method improves the original BiSeNet baseline by about 2\% mIoU on the test set of Cityscapes with a drop of only 3 FPS. Our method achieves 78.5\% mIoU while running at 32 FPS on a single 1080Ti without acceleration. \section{Method} \label{Method} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig/method.pdf} \caption{\textbf{Overview of the BiAlignNet.} The context path is in the \textcolor{blue}{blue} box. The spatial path is in the \textcolor{YellowGreen}{green} box. The \textcolor{Dandelion}{orange} part represents the bidirectional alignment. Best viewed in color.} \label{fig:bialgin} \end{figure} We present the overall network architecture in Fig.~\ref{fig:bialgin}. BiAlignNet includes the following three parts: two pathways, namely the Spatial Path and the Context Path, and a Bidirectional Alignment stage that uses the Gated Flow Alignment Module to align features in both directions. We also specially design the loss functions explained in Sec.~\ref{sec:loss} to supervise the different kinds of information in the two paths. \subsection{Spatial Path and Context Path} We briefly review the spatial and context paths of BiSeNet~\cite{bisenet}. The spatial path is designed to capture low-level information from the input image. We only use shallow layers to preserve spatial details: the path consists of just three convolution layers with batch normalization and ReLU. Each layer has a stride of 2, so the final feature map of the spatial path is $\frac{1}{8}$ of the input size. The context path is responsible for extracting high-level information using a deeper network with more downsampling operations. For the implementation, we employ the lightweight DFNet~\cite{DF-seg-net} backbone series for the context path. The Pyramid Pooling Module (PPM)~\cite{pspnet}, which has shown a strong ability to capture contextual information, is also added to our model. All backbones have four stages of residual blocks, and the first layer of each stage has a stride of 2. Thus, the final output of the context path is $\frac{1}{32}$ of the input size. \subsection{Bidirectional Alignment} \label{sec:gfam} In this section, we present a Gated Flow Alignment Module (GFAM) to align the features with each other. The original FAM~\cite{sfnet} was proposed to align adjacent features in the decoder.
However, directly using such a module may lead to inferior results because of the huge semantic gap between the two paths. Thus, we plug a gate into the FAM to filter out noise and highlight the important information. Suppose $\mathbf{F}_s$ is the source feature and we want to align the information from $\mathbf{F}_s$ to the target feature $\mathbf{F}_t$. Inspired by the original FAM~\cite{sfnet}, we first generate a flow field grid $G$: \begin{equation} G = conv(cat(\mathbf{F}_s || \mathbf{F}_t)), \end{equation} where $\mathbf{F}_s$ and $\mathbf{F}_t$ can be features from the spatial path and the context path respectively, and vice versa. The feature map with the smaller size is bilinearly upsampled to reach the same size as the larger one. After the flow field grid generation, we adopt a pixel-wise gate to emphasize the important part in the current data flow: \begin{equation} \hat{G} = \sigma(conv(\mathbf{F}_t)) \odot G, \end{equation} where $\hat{G}$ is the gated flow field grid, $\sigma$ denotes the sigmoid layer, and $\odot$ represents the element-wise product. Each position $p$ in the target feature $\mathbf{F}_t$ can be mapped to a position $p^\prime$ according to the values in the gated flow field grid $\hat{G}$. Since the warped position $p^\prime$ generally has non-integer coordinates, the value at $p^\prime$ is interpolated from the values at the 4-neighbors $\mathcal{N}\left(p^\prime\right)$ (top-left, top-right, bottom-left, and bottom-right): \begin{equation} \hat{\mathbf{F}}_t\left(p\right)=\sum_{i \in \mathcal{N}\left(p^\prime\right)} w_{i}\, \mathbf{F}_t(i), \end{equation} where the $w_{i}$ are the bilinear kernel weights estimated from the distances between $p^\prime$ and its neighbors, and $\hat{\mathbf{F}}_t$ is the target feature aligned with information from the source feature $\mathbf{F}_s$. In BiAlignNet, we take both the spatial feature and the context feature as source features to align with each other bidirectionally. In this way, different pieces of information can complement each other, as shown in the orange box of Fig.~\ref{fig:bialgin} (a minimal code sketch of this module is given below). \subsection{Loss Function} \label{sec:loss} The spatial path gives priority to spatial details, while the context path focuses on high-level semantic context. To force the spatial path to focus on detailed information, we introduce an edge-guided hard pixel indicator map $d$ to supervise the learning. $d$ is predicted from the spatial path feature and normalized by a sigmoid layer. Since most of the fine detail is concentrated at the boundaries, the edge map $b$ is derived from the segmentation labels through the algorithm of~\cite{findCountour}, which retrieves contours from the binary image. We utilize the edge map $b$ to guide the prediction of the indicator $d$. As for the context path, we use the cross-entropy loss with online hard example mining (OHEM)~\cite{ohem,bisenet}. We jointly supervise the two paths with a loss function $L$: \begin{equation} L = L_{spatial}(d, b, s, g) + L_{context}(s, g), \end{equation} where $s$ is the predicted segmentation output of the model, $g$ is the ground truth segmentation labels, and $L_{context}$ is the OHEM loss.
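As promised in Sec.~\ref{sec:gfam}, we give a minimal PyTorch-style sketch of the gated flow alignment. This is an illustrative reconstruction from the equations above, not the released implementation; the layer widths, the normalization of the flow offsets, and the use of \texttt{grid\_sample} for the bilinear interpolation are our assumptions.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class GFAM(nn.Module):
    # Gated Flow Alignment Module (illustrative sketch).
    def __init__(self, channels):
        super().__init__()
        self.flow = nn.Conv2d(2 * channels, 2, 3, padding=1)  # predicts G
        self.gate = nn.Conv2d(channels, 1, 3, padding=1)      # pixel-wise gate

    def forward(self, f_s, f_t):
        # Bring the smaller (source) map to the target resolution.
        f_s = F.interpolate(f_s, size=f_t.shape[-2:],
                            mode='bilinear', align_corners=False)
        g = self.flow(torch.cat([f_s, f_t], dim=1))   # flow field G
        g = torch.sigmoid(self.gate(f_t)) * g         # gated flow field
        n, _, h, w = f_t.shape
        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=g.device),
            torch.linspace(-1, 1, w, device=g.device), indexing='ij')
        base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
        # Scale pixel offsets to the normalized grid convention and warp F_t.
        offset = 2.0 * g.permute(0, 2, 3, 1) / torch.tensor(
            [w, h], dtype=g.dtype, device=g.device)
        return F.grid_sample(f_t, base + offset,
                             mode='bilinear', align_corners=False)
\end{verbatim}

In BiAlignNet two such modules would run in parallel, one per direction, before the final fusion. $L_{spatial}$ is calculated from the following equation.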
\begin{equation} L_{spatial} =\lambda L_{bce}(d, b) + L_{hard}(s, g, d), \end{equation} \begin{equation} L_{hard} = -\frac{1}{K} \sum_{i=1}^{N} \mathbbm{1}\left[s_{i, g_{i}}<t_{K} \& d_{i}>t_{b}\right] \cdot \log s_{i, g_{i}}, \label{eq:hard} \end{equation} where $L_{bce}$ is the binary cross-entropy loss for the edge-guided hard pixel indicator $d$, and $L_{hard}$ mines the hard pixels with high probability in $d$ and calculates the cross-entropy loss on them. $N$ is the total number of pixels, and $\mathbbm{1}[x]=1$ if the condition $x$ holds and $0$ otherwise. Eq.~\ref{eq:hard} first selects the positions whose indicator value in $d$ exceeds the threshold $t_b=0.8$, and then keeps the positions within the top $K$ losses, where $t_K$ is the corresponding probability threshold. Empirically, we set $\lambda= 25$ to balance the losses in all experiments. In this way, the spatial path learns more detailed information during training. \section{Experiment} \label{exp} \subsection{Datasets} We carry out experiments on the Cityscapes and CamVid datasets. Cityscapes~\cite{Cityscapes} is a large street scene dataset which contains 2,975 finely annotated images for training, 500 images for validation, and a test set of 1,525 images without annotations. All images in this dataset have a high resolution of 1,024$\times$2,048. CamVid~\cite{CamVid} is another road scene dataset. It contains 367 training images, 101 validation images, and 233 testing images with a resolution of $720 \times 960$. \subsection{Speed and Accuracy Analysis} \textbf{Implementation Details.} Our experiments are done with the PyTorch framework. We use stochastic gradient descent (SGD) with a batch size of 16, a momentum of 0.9, and a weight decay of 5e-4. The initial learning rate is 0.01 with a ``poly'' learning rate strategy, in which the initial rate is multiplied by $\left(1-\frac{\text{ iter }}{\text{total\_iter}}\right)^{0.9}$. As for data augmentation, we randomly horizontally flip the images, randomly resize them with a scale in [0.5, 2.0], and crop them to a size of 1024$\times$1024 (720$\times$720 for CamVid). We use single-scale inference and report the speed with one 1080Ti GPU. \begin{table}[!t]\setlength{\tabcolsep}{6pt} \caption{\textbf{Comparison on Cityscapes {\it val} and {\it test} set with state-of-the-art real-time models.} Notation: $\gamma$ is the downsampling ratio relative to the original $1024\times 2048$ resolution; for example, $\gamma=0.75$ means the model's input size is $768 \times 1536$.
"*" noted methods and ours are tested on single 1080Ti GPU.} \centering \label{table:cityscapes_sota_speed_acc2} \begin{threeparttable} \scalebox{0.70}{ \begin{tabular}{lcccccc} \toprule[0.2em] \multirow{2}{*}{Method} & \multirow{2}{*}{$\gamma$} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{mIoU ($\%$) } & \multirow{2}{*}{\#FPS} & \multirow{2}{*}{\#Params} \\ \cline{4-5} & & & val & test & & \\ \toprule[0.2em] ENet~\cite{ENnet} & 0.5 & - & - &58.3 & 60 & 0.4M\\ ESPNet~\cite{ESPNet} & 0.5 & ESPNet & - &60.3 & 132 & 0.4M \\ ESPNetv2~\cite{ESPNetv2} & 0.5 & ESPNetv2 &66.4 & 66.2 & 80 & 0.8M \\ ERFNet~\cite{ERFNet} & 0.5 & - &70.0 &68.0 & 41.9 & - \\ BiSeNetv1~\cite{bisenet}$^*$ & 0.75 & Xception39 &69.0 &68.4 & 175 & 5.8M \\ ICNet~\cite{ICnet} & 1.0 & PSPNet50 &- &69.5 & 34 & 26.5M \\ CellNet~\cite{custom_search_seg}&0.75& - &- & 70.5 &108 & - \\ DFANet~\cite{dfanet} &1.0 & Xception A &- &71.3 &100 &7.8M \\ BiSeNetv2~\cite{bisenetv2}$^*$ & 0.5 & - &73.4 &72.6 & 28 & - \\ DF1-Seg~\cite{DF-seg-net}$^*$&1.0 & DFNet1 & - &73.0 & 100 & 8.55M \\ BiSeNetv1~\cite{bisenet}$^*$ & 0.75 & ResNet18 &74.8 &74.7 & 35 & 12.9M \\ DF2-Seg~\cite{DF-seg-net}$^*$& 1.0 &DFNet2 & - &74.8 & 68 & 18.88M \\ SwiftNet~\cite{swiftnet}$^*$ & 1.0 &ResNet18 &75.4 &75.8 &39.9&11.8M \\ FC-HarDNet~\cite{chao2019hardnet}$^*$ & 1.0 & HarDNet & 77.4 &76.0 & 35 & 4.1M \\ SwiftNet-ens~\cite{swiftnet}$^*$&1.0 & - &- &76.5 &18.4 &24.7M \\ \midrule BiAlignNet & 0.75 & DFNet2 & 76.8 & 75.4 & 50 & 19.2M \\ BiAlignNet & 1.0 & DFNet2 & 78.7 & 77.1 & 32 & 19.2M \\ BiAlignNet\textdagger & 0.75 & DFNet2 & 79.0 & 76.9 & 50 & 19.2M \\ BiAlignNet\textdagger & 1.0 & DFNet2 & \textbf{80.1} & \textbf{78.5} & 32 & 19.2M \\ \bottomrule[0.1em] \end{tabular} } \begin{tablenotes} \item {\scriptsize \textdagger Mapillary dataset used for pretraining. } \end{tablenotes} \end{threeparttable} \end{table} \noindent \textbf{Result Comparison.} Table~\ref{table:cityscapes_sota_speed_acc2} shows the results of our method compared to other state-of-the-art real-time methods. Our method with an input size of $768\times1536$ can get the best trade-off between accuracy and speed. When input with the whole image, BiAlignNet still runs in real time and gets 78.7\% mIoU and 77.1\% mIoU on val and test, which outperforms all the methods listed above. After pre-training on Mapillary~\cite{mapillary} dataset, our BiAlignNet gains 1.4\% improvement. We also apply our method with different light-weight backbones on CamVid dataset and report comparison results in Table~\ref{table:camvid_res}. BiAlignNet also achieves state-of-the-art performance on the CamVid. \\ \textbf{Visualization.} In Fig.~\ref{fig:vis}, we visualize flow fields from two directions. Flow from the spatial path to the context path (Column b) contains more detailed information and Column c that is from the context path, includes more high-level information. Thus, different features are aligned to each other under the guidance of learned flow field. Fig.~\ref{fig:vis}(d) shows that BiAlignNet outperforms BiSeNet (Column e) on boundaries and details. Fig.~\ref{fig:gate} gives more insights into the proposed GFAM module and the hard pixel mining supervision. As shown in Column b, gates from the spatial path assign higher scores on image details. It confirms that the gate in GFAM can filter the noise and highlight the significant part in the flow field. Fig.~\ref{fig:gate}(c) and (d) visualize hard pixels used in $L_{hard}$ and the predicted indicator map by the spatial path. 
These visualizations are consistent with the fact that edge-guided hard pixel mining pays more attention to fine-grained objects and to edges that are difficult to separate. \begin{table}[!t]\setlength{\tabcolsep}{10pt} \caption{\textbf{Comparison on the CamVid {\it test} set with previous state-of-the-art real-time models.}} \label{table:camvid_res} \centering \begin{threeparttable} \scalebox{0.70}{ \begin{tabular}{lccc} \toprule[0.2em] Method & Backbone & mIoU ($\%$) & \#FPS \\ \toprule[0.2em] DFANet B~\cite{dfanet} & - & 59.3 & 160 \\ SwiftNet~\cite{swiftnet} & ResNet18 & 63.33 & - \\ DFANet A~\cite{dfanet} & - & 64.7 & 120 \\ ICNet~\cite{ICnet}& ResNet-50 & 67.1 & 34.5 \\ BiSeNetv1~\cite{bisenet} & ResNet18 & 68.7 & 60 \\ BiSeNetv2~\cite{bisenetv2} & - & 72.4 & 60 \\ BiSeNetv2$^*$~\cite{bisenetv2} & - & 76.7 & 60 \\ \midrule BiAlignNet & DFNet1 & 68.9 & 85\\ BiAlignNet & DFNet2 & 72.3 & 65\\ BiAlignNet$^*$ & DFNet2 & \textbf{77.1} & 65\\ \bottomrule[0.1em] \end{tabular} } \begin{tablenotes} \item {\scriptsize * Cityscapes dataset used for pretraining. } \end{tablenotes} \end{threeparttable} \end{table} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig/vis2.pdf} \caption{\textbf{Visualization of the learned flow fields and segmentation outputs.} Column (a) lists three exemplary images. Columns (b) and (c) show the flow fields in the two directions, spatial-to-context and context-to-spatial respectively. Columns (d) and (e) show the comparison between BiAlignNet and BiSeNet. Best viewed on screen and zoomed in.} \label{fig:vis} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig/gate.pdf} \caption{\textbf{Visualization of the flow gate, the hard examples in the spatial loss, and the predicted edges.} Column (a) lists the input images. Column (b) shows the gate map from the spatial path to the context path. Column (c) shows the hard examples in $L_{hard}$. Column (d) illustrates the predicted hard pixel indicator map from the spatial path. Best viewed on screen and zoomed in.} \label{fig:gate} \end{figure} \subsection{Ablation Study} We carry out ablation studies on each component of BiAlignNet in this section. As shown in Table~\ref{table:ablation}, our proposed modules introduce only a very small amount of extra computation. \noindent \textbf{Ablation for bidirectional alignment.} We argue that insufficient feature fusion leads to the lower performance of the previous BiSeNet. As we can see in Table~\ref{table:ablation}, compared to the baseline that simply concatenates the two feature maps, bidirectional alignment with GFAM improves performance by 2.4\%. Moreover, the alignments in the two directions act synergistically: the performance increase brought by bidirectional alignment exceeds the combined gains of the two one-way models. Also, the simple gate mechanism in GFAM results in a 0.8\% performance increase. \noindent \textbf{Ablation for the spatial loss.} We expect the two paths to learn different content from the input, especially the spatial path. Thus, we enhance the detail supervision in the spatial path through the specially designed spatial loss with a hard pixel mining indicator. After adding the spatial loss, the performance improves by 0.9\%. This demonstrates the effectiveness of the designed spatial loss function. \begin{table}[!t]\setlength{\tabcolsep}{6pt} \caption{\textbf{Ablation Study.} We show the effectiveness of each component in BiAlignNet with DFNet2 on the validation set of Cityscapes.
\textbf{CP}: Context Path; \textbf{SP}: Spatial Path; \textbf{GFAM}: Gated Flow Alignment Module; \textbf{FAM}: original Flow Alignment Module; $\xrightarrow{}$: alignment direction; \textbf{SL}: Spatial Loss.} \label{table:ablation} \centering \begin{threeparttable} \scalebox{0.65}{ \begin{tabular}{lccc} \toprule[0.2em] Method & mIoU ($\%$) & $\Delta$ ($\%$) & \#GFLOPs\\ \toprule[0.2em] CP\,+\,SP (baseline) & 75.4 & - & 108 \\ CP\,+\,SP\,+\,GFAM (CP$\xrightarrow{}$SP)& 76.5 & 1.1$\uparrow$& 108.37\\ CP\,+\,SP\,+\,GFAM (SP$\xrightarrow{}$CP)& 76.6 &1.2$\uparrow$ & 108.36\\ CP\,+\,SP\,+\,FAM (bidirection)& 77.0 & 1.6$\uparrow$ & 108.72 \\ CP\,+\,SP\,+\,GFAM (bidirection)& 77.8 & 2.4$\uparrow$& 108.73 \\ \midrule CP\,+\,SP\,+\,GFAM (bidirection)\,+\,SL & \textbf{78.7} & 3.3$\uparrow$& 108.73\\ \bottomrule[0.1em] \end{tabular} } \end{threeparttable} \end{table} \section{Conclusion} \label{conclusion} In this paper, we propose a Bidirectional Alignment Network (BiAlignNet) for fast and accurate scene parsing. With the bidirectional alignment and the specific supervision on each pathway, the low-level spatial features can be deeply fused with the high-level context features. Comparative experiments show the effectiveness of the proposed components over the baseline models. BiAlignNet also achieves a favorable trade-off between segmentation accuracy and inference speed.
\section{Introduction} Diffeomorphism of the spacetime manifold is in itself not a physical symmetry; the physics is determined by the spacetime symmetry in the locally inertial manifold \cite{W}. In this sense we talk of relativistic or nonrelativistic diffeomorphism invariance. Nonrelativistic diffeomorphism invariance (NRDI) has recently gained considerable interest in the literature \cite{SW,Bekaert:2011qd,Hoyos:2011ez,Schaefer:2013oba,Hoyos:2013qna} due to its diverse applications in condensed matter physics (specifically in the theory of the fractional quantum Hall effect (FQHE)), holographic models \cite{Janiszewski:2016zrm}, Newtonian gravity, and others. It was none other than Cartan \cite{Cartan-1923,Cartan-1924} who formulated a geometric theory of Newtonian gravity way back in 1923. Much work was subsequently done \cite{Havas, ANDE, EHL, MALA, MTW} on the geometric properties of the corresponding Newton-Cartan (NC) spacetime. However, during the resurgence of NRDI the chief issue was the coupling of nonrelativistic field theories with background curved spacetime \cite{SW}, which had not been much discussed in the earlier literature. A host of applications of the NRDI model of \cite{SW} appeared in the literature \cite{Bekaert:2011qd,Hoyos:2011ez,Schaefer:2013oba}. However, certain problems appeared in the formulation of \cite{SW}. These are: \begin{enumerate} \item the transformation of the metric becomes noncanonical, and \item Galilean symmetry could not be retrieved in the flat limit. \end{enumerate} The problems were tackled by considering a gauge field and relating the Galilean boost parameter to the gauge parameter. Assuming a $U(1)$ gauge field in the context of the FQHE is only natural, but trading off Galilean boost symmetry for $U(1)$ gauge symmetry is not very appetizing. Moreover, the fact that this procedure decreases the number of symmetry elements was overlooked. Following this line of research, a $U(1)$ gauge field was later introduced as an element of NC geometry \cite{J}. The geometric structure erected by the long work of many stalwarts in the field was thus required to be modified. Different approaches to the problem, namely the algebraic method \cite{abpr}, the coset construction \cite{Karananas:2016hrm}, nonrelativistic limit procedures \cite{J}, and others, evolved to investigate NRDI, but it can be asserted that a general procedure for coupling nonrelativistic field theories with gravity was not available. In this scenario Galilean gauge theory (GGT) \cite{BMM1,BMM2,BMM3,BM4} was formulated, based on the gauging-of-symmetry approach introduced by Utiyama \cite{U} for relativistic theories, tailored appropriately for nonrelativistic theories. Spatial diffeomorphism can easily be obtained from GGT \cite{BM4}. However, there are significant differences on some issues between the results from GGT and those of other approaches. This is most prominent in the coupling of the Schrodinger field theory with curved space \cite{SW}, where Galilean symmetry can only be retrieved in the flat limit if there is a gauge field (see above). On the other hand, the spatially diffeomorphic theory obtained from GGT smoothly recovers the flat Galilean limit and does not require any additional gauge interaction. Following the GGT approach one can consistently tackle the issue of torsion in Newton-Cartan spacetime \cite{BM5} or provide the basis for the Milne boost symmetry of metric NC theory \cite{BM6}, to name a few examples, within the purview of NC geometry. However, the dynamical consistency of GGT is yet to be examined.
Naturally, Hamiltonian analysis is an important tool for understanding the consistency of a field theoretic model. The objective of this work is to formulate, in phase space, the Schrodinger field coupled with gravity as obtained from GGT. Note that there are very few examples of such an analysis available in the literature, and still fewer with the motivation of the present work. The Hamiltonian structure of the nonrelativistic Schrodinger model coupled with curved spacetime, as obtained from GGT, will be analysed here. Observe that so far we have considered theories coupled with background gravity. Interestingly, symmetries of a model with background interaction which are evident from the action cannot be reproduced by the Hamiltonian method. For the latter, the dynamics of the gravitational interaction is required to be included. This is not surprising, because Hamiltonian analysis is performed in the phase space, where the variables are the coordinates and their conjugate momenta. The latter are derived by differentiating the Lagrangian with respect to the generalised velocities. The momenta conjugate to the background fields weakly vanish; in the Hamiltonian framework these are constraints. Conservation of these constraints is the step where dynamics comes into play. However, when the fields do not have any dynamics, such an analysis is bound to be trivial. Consequently, for a useful Hamiltonian analysis, we will have to supplement the action obtained from GGT with a dynamical term for gravity. Now, in $2+1$ dimensions the Chern-Simons term provides an interesting dynamical term for both relativistic and nonrelativistic models. Thus Chern-Simons gravity \cite{Witten} will be a suitable choice. The fields appearing in our model have their origin in the localisation process; the model thus necessarily contains Hamiltonian constraints. A comprehensive method of Hamiltonian analysis for such singular systems was introduced by Dirac \cite{D}. Our aim is to analyse the nonrelativistic Schrodinger field model coupled to Chern-Simons gravity by Dirac's method and to discuss the consistency of the model. This will also enable us to compare different spatially diffeomorphic models, as we will see. We will provide a comprehensive account of the constraint structure of the model in question, which is a novel calculation. The inclusion of Chern-Simons gravity in the context of spatial diffeomorphism is once again a unique feature. No doubt the problem investigated in this paper is quite interesting on its own merit. Before finishing the introductory section, an account of the organisation of the paper will be appropriate. In the next section the nonrelativistic Schrodinger field theory coupled with background gravity is written down from GGT. As we have learnt, the dynamics of gravity must be included in our model to carry out a meaningful Hamiltonian analysis. In $(2+1)$ dimensions the Chern-Simons gravity action is a simple and very important candidate for the dynamics. The Chern-Simons gravity action is introduced and its reduction in the adapted coordinates is discussed. Adding this piece to the part obtained from GGT, the complete action is obtained. The Hamiltonian analysis is presented in section 3. This Hamiltonian analysis is repeated in the next section with a truncated action, which dramatically changes the results: we see that the truncation leads to unphysical degrees of freedom of the system. In the next section the results are discussed in the context of the present state of the art. Section 6 contains the concluding remarks.
\section{The model} The Galilean gauge theory (GGT) enables us to couple a nonrelativistic field theory with background gravity \cite{BMM1}, \cite{BMM2}. The free Schrodinger field theory in Galilean coordinates is given by \begin{eqnarray} S = \int d^3x\left[ \frac{i}{2}\left(\psi^*\partial_0\psi - \psi\partial_0\psi^* \right) -\frac{1}{2m}\partial_k\psi^* \partial_k\psi\right]\label{fs} \end{eqnarray} where $\psi$ and $\psi^ *$ are the complex Schrodinger fields. According to GGT, to derive the corresponding coupled action we have to replace the partial derivatives $\partial_\mu\psi$ by the corresponding covariant derivatives $\nabla_\mu\psi$, where \begin{eqnarray} \nabla_0\psi &=& \Sigma_0{}^\sigma \left(\partial_\sigma + i B_\sigma \right)\psi\nonumber\\ \nabla_a\psi &=&\Sigma_a{}^l \left(\partial_l + i B_l \right)\psi\label{kd} \end{eqnarray} The $\Sigma$ and $B$ fields, originally introduced as compensating (gauge) fields, are identified with the vierbein and the spin connection of the Newton-Cartan spacetime \cite{BMM1,BMM2}. With $\sigma_{ab}$ and $mx_{a}$ the generators of spatial rotations and Galilean boosts, $B_\mu$ has the structure \begin{equation} B_{\mu}=\frac{1}{2}B^{ab}_{\mu}\sigma_{ab}+B^{a0}_{\mu}mx_{a} \label{bstructure} \end{equation} The last equation introduces the independent fields $B^{a0}_{\mu}$ and $B^{ab}_{\mu}$ which, along with $\Sigma_\alpha{}^\mu$, constitute the configuration space of the theory. Note that there is an asymmetry in the expression of the covariant derivative: $\Sigma_a{}^0 =0 $ but $\Sigma_0{}^k\ne 0 $. Also $B_\mu ^{0a} =0 $ while $B_\mu ^{a0} \ne 0 $. These are reflections of the fact that time and space are treated in different ways in nonrelativistic physics. From (\ref{fs}), following the procedure detailed above and correcting for the measure, we get the action of the Schrodinger field coupled with background Newtonian gravity \cite{BMM1, BM4}, \begin{eqnarray} S = \int d^3x \det{\Sigma_\alpha{}^\mu}\left[ \frac{i}{2}\left(\psi^*\nabla_0\psi - \psi\nabla_0\psi^* \right) -\frac{1}{2m}\nabla_a\psi^* \nabla_a\psi\right]\label{slcompact} \end{eqnarray} Expanding, we get the Lagrangian density \begin{multline} \label{ssg} \mathcal{L}=\frac{M}{\Sigma^{0}_{0}}\Bigl[\frac{i}{2}\Sigma^{0}_{0}\left(\psi^{*}\partial_{0}\psi-\psi\partial_{0}\psi^{*}\right)+\frac{i}{2}\Sigma^{k}_{0}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)\\ -\Sigma^{0}_{0}B_{0}\psi^{*}\psi-\Sigma^{k}_{0}B_{k}\psi^{*}\psi-\frac{1}{2m}\Sigma^{k}_{a}\Sigma^{l}_{a}\left(\partial_{k}\psi^{*}-iB_{k}\psi^{*}\right)\left(\partial_{l}\psi+iB_{l}\psi\right)\Bigr] \end{multline} An important point may be emphasised about the Hamiltonian analysis of (\ref{ssg}). In this theory $\Sigma$ and $B$ are background fields, introduced originally as compensating gauge fields and later identified as the vielbeins and spin connections respectively. From the Hamiltonian point of view these fields act like Lagrange multipliers and not as dynamical fields. They are thus not included among the phase space variables. As a result the symmetries exhibited by the action do not show up in the Hamiltonian analysis. A meaningful Hamiltonian analysis is possible when an appropriate kinetic term is provided to define the dynamics. We choose the $(2+1)$-dimensional Chern-Simons term to make the fields dynamical. The Chern-Simons term, being a topological term, does not have an independent dynamics; thus it may be coupled both with relativistic and nonrelativistic theories. Also, Chern-Simons gravity is a very important part of $(2+1)$-dimensional gravity.
So, the Hamiltonian analysis presented here has genuine intrinsic appeal. The Lagrangian for Chern-Simons gravity is \begin{equation} {\mathcal{L}_{cs}}=\epsilon^{\gamma\lambda\rho}\Lambda^{\alpha}_{\gamma}R_{\alpha\lambda\rho} \end{equation} where \begin{eqnarray} R_{\alpha\lambda\rho}=\partial_{\lambda}\omega_{\alpha\rho}-\partial_{\rho}\omega_{\alpha\lambda}+\epsilon_{\alpha\beta\gamma}\omega^{\beta}_{\lambda}\omega^{\gamma}_{\rho}\label{R} \end{eqnarray} and \begin{eqnarray} \omega_{\alpha\rho}=-\frac{1}{2}\epsilon_{\alpha\beta\gamma} B^{\beta\gamma}_{\rho}\label{W} \end{eqnarray} In order to write the appropriate action in the Galilean frame in Newton-Cartan spacetime, we have to substitute $\Sigma_a{}^0 = 0$ and $ B_\mu{}^{0a}= 0 $ \cite{BMN}. From (\ref{R}) and (\ref{W}), we have \begin{align*} R_{0\lambda\rho}&=-\frac{1}{2}\epsilon_{ab}\left(\partial_{\lambda}B^{ab}_{\rho}-\partial_{\rho}B^{ab}_{\lambda}+\frac{1}{2}B^{a0}_{\rho}B^{b0}_{\lambda}\right)\\ R_{a\lambda\rho}&=-\frac{1}{2}\epsilon_{ab}\partial_{\lambda}B^{b0}_{\rho}+\frac{1}{2}\epsilon_{ab}\partial_{\rho}B^{b0}_{\lambda}-\frac{1}{4}\epsilon_{cd}\left(B^{a0}_{\lambda}B^{cd}_{\rho}-B^{a0}_{\rho}B^{cd}_{\lambda}\right) \end{align*} Using the expressions of $R_{0kl}$, $R_{akl}$ and $R_{a0l}$ we can write the Chern-Simons piece as \begin{multline*} \mathcal{L}_{cs}=-\frac{1}{2}\epsilon^{kl}\epsilon_{ab}\Lambda^{0}_{0}\left(\partial_{k}B^{ab}_{l}-\partial_{l}B^{ab}_{k}+\frac{1}{2}B^{a0}_{k}B^{b0}_{l}\right)\\ +\epsilon^{kl}\Lambda^{a}_{0}\Bigl[-\frac{1}{2}\epsilon_{ab}\partial_{k}B^{b0}_{l}+\frac{1}{2}\epsilon_{ab}\partial_{l}B^{b0}_{k}-\frac{1}{4}\epsilon_{cd}\left(B^{a0}_{k}B^{cd}_{l}-B^{a0}_{l}B^{cd}_{k}\right)\Bigr]\\ -2\epsilon^{kl}\Lambda^{a}_{k}\Bigl[-\frac{1}{2}\epsilon_{ab}\partial_{0}B^{b0}_{l}+\frac{1}{2}\epsilon_{ab}\partial_{l}B^{b0}_{0}-\frac{1}{4}\epsilon_{cd}\left(B^{a0}_{0}B^{cd}_{l}-B^{a0}_{l}B^{cd}_{0}\right)\Bigr] \end{multline*} The dynamically complete Lagrangian density is obtained by adding the Chern-Simons term $\mathcal{L}_{cs}$ to the matter Lagrangian (\ref{ssg}). Explicitly, in terms of the basic fields $\psi$, $\psi^*$, $\Sigma$ and $B$, we have \begin{multline} \label{wl} \mathcal{L} = \frac{M}{\Sigma^{0}_{0}}\Bigl[\frac{i}{2}\Sigma^{0}_{0}\left(\psi^{*}\partial_{0}\psi-\psi\partial_{0}\psi^{*}\right)+\frac{i}{2}\Sigma^{k}_{0}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)\\ -\Sigma^{\mu}_{0}B^{a0}_{\mu}mx_{a}\psi^{*}\psi-\frac{1}{2m}\Sigma^{k}_{a}\Sigma^{l}_{a}\left(\partial_{k}\psi^{*}-iB^{b0}_{k}mx_{b}\psi^{*}\right)\left(\partial_{l}\psi+iB^{c0}_{l}mx_{c}\psi\right)\Bigr]\\ -\epsilon^{kl}\Lambda^{0}_{0}\frac{\epsilon_{ab}}{2}\left(\partial_{k}B^{ab}_{l}-\partial_{l}B^{ab}_{k}+\frac{1}{2}B^{a0}_{k}B^{b0}_{l}\right)+\epsilon^{kl}\Lambda^{a}_{0}\Bigl[\frac{\epsilon_{ab}}{2}\left(\partial_{l}B^{b0}_{k}-\partial_{k}B^{b0}_{l}\right)-\frac{\epsilon_{cd}}{4}\left(B^{a0}_{k}B^{cd}_{l}-B^{a0}_{l}B^{cd}_{k}\right)\Bigr]\\ -2\epsilon^{kl}\Lambda^{a}_{k}\Bigl[\frac{\epsilon_{ab}}{2}\left(\partial_{l}B^{b0}_{0}-\partial_{0}B^{b0}_{l}\right)-\frac{\epsilon_{cd}}{4}\left(B^{a0}_{0}B^{cd}_{l}-B^{a0}_{l}B^{cd}_{0}\right)\Bigr] \end{multline} In the following we analyse the constraint structure of the theory (\ref{wl}) using Dirac's method of constrained Hamiltonian dynamics \cite{D}.
This provides many important probes to check the consistency of a theory, as listed below. \begin{enumerate} \item The number of propagating degrees of freedom may be calculated in the phase space from the relation \begin{eqnarray} N = N_1 - 2N_2 - N_3 \label{dof} \end{eqnarray} where $N_1$ is the total number of canonical variables, $N_2$ is the total number of first class constraints and $N_3$ is the total number of second class constraints. Since the Chern-Simons fields have no independent dynamics, the number of degrees of freedom should be $N = 4$ for our model. Physically, this corresponds to $\psi$ and $\psi^*$ and their conjugate momenta. \item The number of primary first class constraints is equal to the number of independent gauge degrees of freedom. Note that this number can alternatively be obtained from the number of independent local symmetries of the action. \end{enumerate} Consistency in the Hamiltonian analysis is essential for a feasible model. We will see that the model (\ref{wl}) for the Schrodinger field coupled with non-relativistic space is consistent from this point of view. This is remarkable because a host of models have been proposed for this problem, many of which show some differences with (\ref{wl}). It may also be pointed out that Hamiltonian treatments of these theories are scarcely available. In the following section we will discuss the Dirac approach to the constraint analysis of the problem. \section{Canonical Analysis - the constraints of the theory} To proceed with the canonical analysis of (\ref{wl}) we define the momenta $\pi$, $\pi^{*}$, $\pi^{0}_{\mu}$, $\pi^{a}_{k}$, $\pi^{\mu}_{ab}$, $\pi^{l}_{b0}$, $\pi^{0}_{a0}$ conjugate to the fields $\psi$, $\psi^{*}$, $\Sigma^{\mu}_{0}$, $\Sigma^{k}_{a}$, $B^{ab}_{\mu}$, $B^{b0}_{l}$, $B^{a0}_{0}$ respectively.
Then \begin{eqnarray} \pi=\frac{\partial\mathcal{L}}{\partial\dot{\psi}} =\frac{Mi}{2}\psi^{*} \hspace{.2cm}; \hspace{.2cm}\pi^{*}=\frac{\partial\mathcal{L}}{\partial\dot{\psi^{*}}} =-\frac{Mi}{2}\psi \notag\\ \pi^{0}_{\mu}=\frac{\partial\mathcal{L}}{\partial\dot{\Sigma^{\mu}_{0}}}=0 \hspace{.2cm}; \hspace{.2cm} \pi^{a}_{k}=\frac{\partial\mathcal{L}}{\partial\dot{\Sigma^{k}_{a}}}=0 \hspace{.2cm}; \hspace{.2cm} \notag\\ \pi^{\mu}_{ab}=\frac{\partial\mathcal{L}}{\partial\dot{B^{ab}_{\mu}}}=0 \hspace{.2cm}; \hspace{.2cm} \pi^{l}_{b0}=\frac{\partial\mathcal{L}}{\partial\dot{B^{b0}_{l}}}=\epsilon^{kl}\epsilon_{ab}\Lambda^{a}_{k} \notag\\ \pi^{0}_{a0}=\frac{\partial\mathcal{L}}{\partial\dot{B^{a0}_{0}}}= 0 \label{m} \end{eqnarray} The Poisson brackets (PB) between the canonical pairs are the usual ones: \begin{eqnarray} {\{\psi(x),\pi(y)\}}=\delta^{2}(x-y)\notag\\ {\{\psi^{*}(x),\pi^{*}(y)\}}=\delta^{2}(x-y)\notag\\ {\{\Sigma^{\mu}_{0}(x),\pi^{0}_{\nu}(y)\}}=\delta^{\mu}_{\nu}\delta^{2}(x-y)\notag\\ {\{\Sigma^{l}_{b}(x),\pi^{a}_{k}(y)\}}=\delta^{a}_{b}\delta^{l}_{k}\delta^{2}(x-y)\notag\\ {\{B^{ab}_{\nu}(x),\pi^{\mu}_{cd}(y)\}}=\delta^{\mu}_{\nu}(\delta^{a}_{c}\delta^{b}_{d}-\delta^{b}_{c}\delta^{a}_{d})\delta^{2}(x-y)\notag\\ {\{B^{a0}_{k}(x),\pi^{l}_{b0}(y)\}}=\delta^{l}_{k}\delta^{a}_{b}\delta^{2}(x-y)\notag\\ {\{B^{b0}_{0}(x),\pi^{0}_{a0}(y)\}}=\delta^{b}_{a}\delta^{2}(x-y) \label{cpb} \end{eqnarray} From the definitions (\ref{m}) the following primary constraints emerge: \begin{align} \Omega_{1}=\pi-\frac{Mi}{2}\psi^{*}\thickapprox 0\hspace{.2cm};\hspace{.2cm} \Omega_{2}&=\pi^{*}+\frac{Mi}{2}\psi \thickapprox 0\notag\\ \Omega^{0}_{\mu}=\pi^{0}_{\mu}\thickapprox 0\hspace{.2cm};\hspace{.2cm} \Omega^{a}_{k}&=\pi^{a}_{k}\thickapprox 0\notag\\ \Omega^{\mu}_{ab}=\pi^{\mu}_{ab}\thickapprox 0\hspace{.2cm};\hspace{.2cm} \Omega^{0}_{a0}&=\pi^{0}_{a0}\thickapprox 0\notag\\ \Omega^{l}_{b0}&=\pi^{l}_{b0}-\epsilon^{kl}\Lambda^{a}_{k}\epsilon_{ab} \thickapprox 0 \label{pc} \end{align} As is well known, conserving the primary constraints (\ref{pc}) we may get secondary constraints. To this end we have to construct the total Hamiltonian, which is the canonical Hamiltonian improved by linear combinations of the primary constraints.
The canonical Hamiltonian density of the theory is given by \begin{eqnarray} \mathcal{H}_{can}=\pi\dot{\psi}+\pi^{*}\dot{\psi^{*}}+\pi^{0}_{\mu}\dot{\Sigma^{\mu}_{0}}+\pi^{a}_{k}\dot{\Sigma^{k}_{a}}+\pi^{\mu}_{ab}\dot{B^{ab}_{\mu}}+\pi^{l}_{b0}\dot{B^{b0}_{l}}+\pi^{0}_{a0}\dot{B^{a0}_{0}}-\mathcal{L} \label{canh} \end{eqnarray} Explicitly, \begin{multline} \mathcal{H}_{can}=-\frac{M}{\Sigma^{0}_{0}}\Bigl[\frac{i}{2}\Sigma^{k}_{0}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)-\Sigma^{\mu}_{0}B^{a0}_{\mu}mx_{a}\psi^{*}\psi\\ -\frac{1}{2m}\Sigma^{k}_{a}\Sigma^{l}_{a}\left(\partial_{k}\psi^{*}\partial_{l}\psi+iB^{b0}_{l}mx_{b}\psi\partial_{k}\psi^{*}-iB^{b0}_{k}mx_{b}\psi^{*}\partial_{l}\psi+B^{c0}_{k}B^{b0}_{l}m^{2}x_{c}x_{b}\psi^{*}\psi\right)\Bigr]\\ +\epsilon^{kl}\Lambda^{0}_{0}\frac{\epsilon_{ab}}{2}\left(\partial_{k}B^{ab}_{l}-\partial_{l}B^{ab}_{k}+\frac{1}{2}B^{a0}_{k}B^{b0}_{l}\right)\\ -\epsilon^{kl}\Lambda^{a}_{0}\Bigl[\frac{\epsilon_{ab}}{2}\left(\partial_{l}B^{b0}_{k}-\partial_{k}B^{b0}_{l}\right)-\frac{\epsilon_{cd}}{4}\left(B^{a0}_{k}B^{cd}_{l}-B^{a0}_{l}B^{cd}_{k}\right)\Bigr]\\ +2\epsilon^{kl}\Lambda^{a}_{k}\Bigl[\frac{\epsilon_{ab}}{2}\partial_{l}B^{b0}_{0}-\frac{\epsilon_{cd}}{4}\left(B^{a0}_{0}B^{cd}_{l}-B^{a0}_{l}B^{cd}_{0}\right)\Bigr] \end{multline} The total Hamiltonian is \begin{multline} H_{T}=\int{d^{2}x}\left(\mathcal{H}_{can}+\lambda_{1}\Omega_{1}+\lambda_{2}\Omega_{2}+\lambda^{\mu}_{0}\Omega^{0}_{\mu}+\lambda^{k}_{a}\Omega^{a}_{k}+\frac{1}{2}\lambda^{ab}_{\mu}\Omega^{\mu}_{ab}+\lambda^{b0}_{l}\Omega^{l}_{b0}+\lambda^{a0}_{0}\Omega^{0}_{a0}\right) \end{multline} Here $\lambda_{1}$, $\lambda_{2}$, $\lambda^{\mu}_{0}$, $\lambda^{k}_{a}$, $\lambda^{ab}_{\mu}$, $\lambda^{b0}_{l}$, $\lambda^{a0}_{0}$ are Lagrange multipliers enforcing the constraints. The non-vanishing Poisson brackets among the primary constraints are given by \begin{align*} \label{pc1} {\{\Omega_{1}(x),\Omega_{2}(y)\}}&=-iM\delta^{2}\left(x-y\right)\\ {\{\Omega_{1}(x),\Omega^{a}_{k}(y)\}}&=\frac{i\psi^{*}}{2}M\Lambda^{a}_{k}\delta^{2}\left(x-y\right)\\ {\{\Omega_{2}(x),\Omega^{a}_{k}(y)\}}&=-\frac{i\psi}{2}M\Lambda^{a}_{k}\delta^{2}\left(x-y\right)\\ {\{\Omega^{a}_{k}(x),\Omega^{l}_{b0}(y)\}}&=-\epsilon^{jl}\epsilon_{db}\Lambda^{a}_{j}\Lambda^{d}_{k}\delta^{2}\left(x-y\right) \end{align*} where we have used (\ref{cpb}). The primary constraints are denoted by the generic symbol $\Omega$; the index structure is sufficient to identify the particular one. At first sight all the constraints have nonzero PBs with each other. However, it may so happen that, by taking combinations of the constraints, a subset of them can be made to have vanishing PBs with all the elements of the set of constraints. For the time being let us carry on with the stationarity of the primary constraint $\Omega^{0}_{a0}$, i.e., $\dot{\Omega}^{0}_{a0}={\{\Omega^{0}_{a0}(x),H_{T}\}} \thickapprox 0$, which yields the following expression: \begin{equation} \label{s15} \Gamma_{a}=-Mmx_{a}\psi^{*}\psi+\epsilon^{kl}\epsilon_{da}\partial_{l}\left(\Lambda^{d}_{k}\right)+\frac{\epsilon^{kl}}{2}\Lambda^{a}_{k}\epsilon_{cd}B^{cd}_{l} \thickapprox 0 \end{equation} Note that the terms containing $x^a$ and the rest must vanish separately.
Two new secondary constraints are thus obtained, \begin{eqnarray} \label{s} \Phi_1 = \psi^{*}\psi\thickapprox 0 \end{eqnarray} and \begin{equation} \label{s11} \Phi_{a}=\epsilon^{kl}\epsilon_{da}\partial_{l}\left(\Lambda^{d}_{k}\right)+\frac{\epsilon^{kl}}{2}\Lambda^{a}_{k}\epsilon_{cd}B^{cd}_{l} \thickapprox 0 \end{equation} The stationarity of the primary constraint $\Omega^{0}_{ab}$, i.e., $\dot{\Omega}^{0}_{ab}={\{\Omega^{0}_{ab}(x),H_{T}\}} \thickapprox 0$, gives the secondary constraint \begin{equation} \Phi_2 =\epsilon^{kl}\Lambda^{a}_{k}B^{a0}_{l} \thickapprox 0 \end{equation} Conserving $\pi^{j}_{ef}$ in time, a secondary constraint emerges, \begin{equation} S_{j}=\epsilon^{kj}\partial_{k}(\Lambda^{0}_{0})-\epsilon^{kj}\Lambda^{a}_{0}B^{a0}_{k}+\epsilon^{kj}\Lambda^{a}_{k}B^{a0}_{0} \thickapprox 0 \end{equation} From $\dot{\pi}^{0}_{j}={\{\pi^{0}_{j}(x),H_{T}}\} \thickapprox 0$, we get a further secondary constraint expression, \begin{equation} \label{s12} \Gamma_{j}^{'}=\frac{Mi}{2}\left(\psi^{*}\partial_{j}\psi-\psi\partial_{j}\psi^{*}\right)-MB^{a0}_{j}mx_{a}\psi^{*}\psi+\epsilon^{kl}\epsilon_{ab}\Lambda^{a}_{j}\partial_{k}B^{b0}_{l}+\frac{\epsilon^{kl}}{2}\epsilon_{cd}\Lambda^{a}_{j}B^{a0}_{k}B^{cd}_{l} \thickapprox 0 \end{equation} Noting that the terms containing $x_a$ should vanish separately, we get a new secondary constraint, \begin{equation} \bar{S_{k}}=\frac{Mi}{2}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)-\epsilon^{jn}\epsilon_{da}B^{a0}_{k}\partial_{n}\Lambda^{d}_{j}+\epsilon^{jn}\epsilon_{ab}\Lambda^{a}_{k}\partial_{j}B^{b0}_{n} \thickapprox 0 \end{equation} where some simplifications have been made using (\ref{s11}) and (\ref{s12}). Finally, conservation of $\pi_0^0 \thickapprox 0 $ leads to \begin{multline} \bar{\Gamma}=-\frac{M}{\Sigma^{0}_{0}}\Bigl[\frac{i}{2}\Sigma^{k}_{0}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)-\Sigma^{k}_{0}B^{a0}_{k}mx_{a}\psi^{*}\psi\\ -\frac{1}{2m}\Sigma^{k}_{d}\Sigma^{l}_{d}{\{\partial_{k}\psi^{*}\partial_{l}\psi-iB^{a0}_{l}mx_{a}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)+B^{a0}_{k}B^{b0}_{l}m^{2}x_{a}x_{b}\psi^{*}\psi}\}\Bigr]\\ +\epsilon^{kl}\epsilon_{ab}\Lambda^{0}_{0}\left(\partial_{k}B^{ab}_{l}+\frac{1}{4}B^{a0}_{k}B^{b0}_{l}\right) +\Lambda^{a}_{0}\epsilon^{kl}\left(\epsilon_{ab}\partial_{k}B^{b0}_{l}+\frac{\epsilon_{cd}}{2}B^{a0}_{k}B^{cd}_{l}\right) \thickapprox 0\label{compo} \end{multline} Looking at (\ref{compo}) we see that it must hold irrespective of the values of $x^a$. But this can only happen if \begin{multline} \bar{\Gamma}_{1}=-\frac{M}{\Sigma^{0}_{0}}\Bigl[\frac{i}{2}\Sigma^{k}_{0}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)\\ -\frac{1}{2m}\Sigma^{k}_{a}\Sigma^{l}_{a}\partial_{k}\psi^{*}\partial_{l}\psi\Bigr]\\ +\epsilon^{kl}\epsilon_{ab}\Lambda^{0}_{0}\left(\partial_{k}B^{ab}_{l}+\frac{1}{4}B^{a0}_{k}B^{b0}_{l}\right) +\Lambda^{a}_{0}\epsilon^{kl}\left(\epsilon_{ab}\partial_{k}B^{b0}_{l}+\frac{\epsilon_{cd}}{2}B^{a0}_{k}B^{cd}_{l}\right) \thickapprox 0\label{compo1} \end{multline} and \begin{multline} \bar{\Gamma}_{2}=-\Sigma^{k}_{0}B^{a0}_{k}\psi^{*}\psi\\ -\frac{1}{2m}\Sigma^{k}_{a}\Sigma^{l}_{a}\left\{\partial_{k}\psi^{*}\partial_{l}\psi-iB^{a0}_{l}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)\right\} \thickapprox 0 \label{compo2} \end{multline} hold separately; (\ref{compo}) is equivalent to (\ref{compo1}) and (\ref{compo2}).
Simplifying, we get two new sets of constraints, \begin{align} S&=\frac{M}{2m}\Sigma^{k}_{c}\Sigma^{l}_{c}\partial_{k}\psi^{*}\partial_{l}\psi+\epsilon^{jn}\epsilon_{ab}\left(\partial_{j}B^{ab}_{n}+\frac{1}{4}B^{a0}_{j}B^{b0}_{n}\right) \thickapprox 0 \notag \\ S^{'}_{e}&=\Sigma^{k}_{c}\Sigma^{l}_{c}\epsilon^{jn}\epsilon_{fd}B^{e0}_{l}\left(B^{d0}_{k}\partial_{n}\Lambda^{f}_{j}-2\Lambda^{f}_{k}\partial_{j}B^{d0}_{n}-\frac{1}{2}\Lambda^{a}_{k}B^{a0}_{j}B^{fd}_{n}\right) \thickapprox 0 \end{align} Conserving the rest of the primary constraints $\Omega_1$, $\Omega_2$, $\Omega^a_k$, $\Omega^l_{b0}$ and the new secondary constraints $\Gamma_a$, $\Gamma_{j}^{'}$ and $\bar{\Gamma}$, no new constraints are generated; only some of the multipliers get fixed. The constraint structure is thus closed. The secondary constraints are then listed below: \begin{align} \Phi_{1}&=\psi^{*}\psi \thickapprox 0 \notag \\ \Phi_{d}&=\epsilon^{kl}\epsilon_{ad}\partial_{l}\Lambda^{a}_{k}+\frac{\epsilon^{kl}}{2}\Lambda^{d}_{k}\epsilon_{ca}B^{ca}_{l} \thickapprox 0 \notag \\ \Phi_{2}&=\epsilon^{kl}\Lambda^{a}_{k}B^{a0}_{l} \thickapprox 0 \notag \\ S_{j}&=\epsilon^{kj}\partial_{k}\Lambda^{0}_{0}-\epsilon^{kj}\Lambda^{a}_{0}B^{a0}_{k}+\epsilon^{kj}\Lambda^{a}_{k}B^{a0}_{0} \thickapprox 0 \notag \\ \bar{S_{k}}&=\frac{Mi}{2}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)-\epsilon^{jn}\epsilon_{da}B^{a0}_{k}\partial_{n}\Lambda^{d}_{j}+\epsilon^{jn}\epsilon_{ab}\Lambda^{a}_{k}\partial_{j}B^{b0}_{n} \thickapprox 0 \notag \\ S&=\frac{M}{2m}\Sigma^{k}_{c}\Sigma^{l}_{c}\partial_{k}\psi^{*}\partial_{l}\psi+\epsilon^{jn}\epsilon_{ab}\left(\partial_{j}B^{ab}_{n}+\frac{1}{4}B^{a0}_{j}B^{b0}_{n}\right) \thickapprox 0 \notag \\ S^{'}_{e}&=\Sigma^{k}_{c}\Sigma^{l}_{c}\epsilon^{jn}\epsilon_{fd}B^{e0}_{l}\left(B^{d0}_{k}\partial_{n}\Lambda^{f}_{j}-2\Lambda^{f}_{k}\partial_{j}B^{d0}_{n}-\frac{1}{2}\Lambda^{a}_{k}B^{a0}_{j}B^{fd}_{n}\right) \thickapprox 0 \label{sc} \end{align} The complete set of constraints of the theory comprises (\ref{pc}) and (\ref{sc}). The classification of the constraints into first and second class gives a host of information, as we have seen. We now take up this issue. \subsection{Classification of the constraints and degrees of freedom count} In the Dirac method the constraints are divided into first and second class according to whether or not all their mutual Poisson brackets vanish.
Using the fundamental Poisson brackets (\ref{cpb}) we can straightforwardly work out these brackets. The non-vanishing Poisson brackets are given by \begin{align} \label{split} {\{{\Omega}_{1}(x),\Omega_2(y)\}}&=-iM\delta^{2}(x-y)\\ {\{\Omega_{1}(x),\Phi_{1}(y)}\}&=-\psi^{*}\delta^{2}(x-y)\\ {\{\Omega_{2}(x),\Phi_{1}(y)}\}&=-\psi\delta^{2}(x-y)\\ {\{\Omega_{1}(x),\bar{S_{k}}(y)}\}&=\frac{Mi}{2}\left[\partial^{y}_{k}\psi^{*}(y)\delta^{2}(x-y)-\psi^{*}(y)\partial^{y}_{k}\left(\delta^{2}(x-y)\right)\right]\\ {\{\Omega_{2}(x),\bar{S_{k}}(y)}\}&=\frac{Mi}{2}\left[\psi(y)\partial^{y}_{k}\left(\delta^{2}(x-y)\right)-\partial^{y}_{k}\psi(y)\delta^{2}(x-y)\right]\\ {\{\Omega_{1}(x),S(y)}\}&=-\frac{M}{2m}\Sigma^{k}_{c}\Sigma^{l}_{c}\partial^{y}_{k}\psi^{*}(y)\partial^{y}_{l}\left(\delta^{2}(x-y)\right)\\ {\{\Omega_{2}(x),S(y)}\}&=-\frac{M}{2m}\Sigma^{k}_{c}\Sigma^{l}_{c}\partial^{y}_{l}\psi(y)\partial^{y}_{k}\left(\delta^{2}(x-y)\right)\\ {\{\Omega^{a}_{k}(x),\Omega_{1}(y)}\}&=-\frac{i\psi^{*}}{2}M\Lambda^{a}_{k}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),\Omega_{2}(y)}\}&=\frac{i\psi}{2}M\Lambda^{a}_{k}\delta^{2} (x-y) \end{align} \begin{align} \label{split1} {\{\Omega^{0}_{0}(x),S_{j}(y)\}}&=\epsilon^{kj}\partial^{y}_{k}(\Lambda^{0}_{0}\Lambda^{0}_{0}\delta^{2}(x-y))-\epsilon^{kj}B^{a0}_{k}\Lambda^{0}_{0}\Lambda^{a}_{0}\delta^{2}(x-y)\\ {\{\Omega^{0}_{k}(x),S_{j}(y)\}}&=-\epsilon^{pj}B^{a0}_{p}\Lambda^{0}_{0}\Lambda^{a}_{k}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),\Omega^{l}_{b0}(y)}\}&=-\epsilon^{pl}\epsilon_{cb}\Lambda^{c}_{k}\Lambda^{a}_{p}\delta^{2}(x-y) \\ {\{\Omega^{a}_{k}(x),\Phi_{d}(y)}\}&=\epsilon^{jl}\epsilon_{cd}\partial^{y}_{l}\left(\Lambda^{c}_{k}\Lambda^{a}_{j}\delta^{2}(x-y)\right)+\frac{1}{2}\epsilon^{jl}\epsilon_{cb}B^{cb}_{l}\Lambda^{d}_{k}\Lambda^{a}_{j}\delta^{2}(x-y) \\ {\{\Omega^{a}_{k}(x),\Phi_{2}(y)}\}&=\epsilon^{pl}B^{b0}_{l}\Lambda^{b}_{k}\Lambda^{a}_{p}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),S_{j}(y)}\}&=\left[-\epsilon^{lj}B^{b0}_{l}\Lambda^{b}_{k}\Lambda^{a}_{0}+\epsilon^{lj}B^{b0}_{0}\Lambda^{b}_{k}\Lambda^{a}_{l}\right]\delta^{2}(x-y) \end{align} \begin{align} \label{split2} {\{\Omega^{a}_{k}(x),\bar{S}_{l}(y)}\}&=\frac{i}{2}\left(\psi^{*}\partial_{l}\psi-\psi\partial_{l}\psi^{*}\right)M\Lambda^{a}_{k}\delta^{2}(x-y)\\ &-\epsilon^{jn}\epsilon_{db}B^{b0}_{l}\partial^{y}_{n}\left(\Lambda^{d}_{k}\Lambda^{a}_{j}\delta^{2}(x-y)\right)+\epsilon^{jn}\epsilon_{cb}\partial_{j}B^{b0}_{n}\Lambda^{c}_{k}\Lambda^{a}_{l}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),S(y)}\}&=\frac{M}{2m}\left[\Sigma^{j}_{c}\Sigma^{l}_{c}\Lambda^{a}_{k}\partial^{y}_{j}\psi^{*}\partial^{y}_{l}\psi-\Sigma^{j}_{a}\partial^{y}_{j}\psi^{*}\partial^{y}_{k}\psi-\Sigma^{l}_{a}\partial^{y}_{k}\psi^{*}\partial^{y}_{l}\psi\right]\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),S^{'}_{e}(y)}\}&=-\epsilon^{jn}\epsilon_{fd}\Bigl[B^{e0}_{k}\Sigma^{p}_{a}\bigl(B^{d0}_{p}\partial^{y}_{n}\Lambda^{f}_{j}-2\Lambda^{f}_{p}\partial^{y}_{j}B^{d0}_{n}-\frac{1}{2}\Lambda^{b}_{p}B^{b0}_{j}B^{fd}_{n}\bigr)\\ &+B^{e0}_{l}\Sigma^{l}_{a}\left(B^{d0}_{k}\partial^{y}_{n}\Lambda^{f}_{j}-2\Lambda^{f}_{k}\partial^{y}_{j}B^{d0}_{n}-\frac{1}{2}\Lambda^{b}_{k}B^{b0}_{j}B^{fd}_{n}\right)\Bigr]\delta^{2}(x-y)\\ &+\epsilon^{jn}\epsilon_{fd}B^{e0}_{l}\Sigma^{p}_{c}\Sigma^{l}_{c}\Bigl[B^{d0}_{p}\partial^{y}_{n}\bigl(\Lambda^{f}_{k}\Lambda^{a}_{j}\delta^{2}(x-y)\bigr)\\ &-2\partial^{y}_{j}B^{d0}_{n}\Lambda^{f}_{k}\Lambda^{a}_{p}\delta^{2}(x-y)-\frac{1}{2}B^{b0}_{j}B^{fd}_{n}\Lambda^{b}_{k}\Lambda^{a}_{p}\delta^{2}(x-y)\Bigr]\\
{\{\Omega^{l}_{ab}(x),\Phi_{d}(y)}\}&=-\epsilon^{kl}\epsilon_{ab}\Lambda^{d}_{k}\delta^{2}(x-y) \end{align} \begin{align} {\{\Omega^{l}_{ab}(x),S(y)}\}&=-2\epsilon^{jl}\epsilon_{ab}\partial^{y}_{l}\bigl(\delta^{2}(x-y)\bigr)\\ {\{\Omega^{l}_{ab}(x),S^{'}_{e}(y)}\}&=\Sigma^{k}_{c}\Sigma^{n}_{c}\epsilon^{jl}\epsilon_{ab}B^{e0}_{n}\Lambda^{d}_{k} B^{d0}_{j}\delta^{2}(x-y)\\ {\{\Omega^{l}_{b0}(x),\Phi_{2}(y)}\}&=-\epsilon^{kl}\Lambda^{b}_{k}\delta^{2}(x-y)\\ {\{\Omega^{l}_{b0}(x),S_{j}(y)}\}&=\epsilon^{lj}\Lambda^{b}_{0}\delta^{2}(x-y)\\ {\{\Omega^{l}_{b0}(x),\bar{S_{k}}(y)}\}&=\epsilon^{jn}\epsilon_{db}\partial^{y}_{n}\left(\Lambda^{d}_{j}\right)\delta^{l}_{k}\delta^{2}(x-y)-\epsilon^{jl}\epsilon_{ab}\Lambda^{a}_{k}\partial^{y}_{j}\left(\delta^{2}(x-y)\right)\\ {\{\Omega^{l}_{b0}(x),S(y)}\}&=-\frac{1}{2}\epsilon^{jl}\epsilon_{ab}B^{a0}_{j}\delta^{2}(x-y)\\ {\{\Omega^{0}_{a0}(x),S_{j}(y)}\}&=-\epsilon^{kj}\Lambda^{a}_{k}\delta^{2}(x-y) \end{align} \begin{multline*} {\{\Omega^{l}_{b0}(x),S'_{e}(y)}\}=-\Sigma^{l}_{c}\Sigma^{p}_{c}\epsilon^{jn}\epsilon_{fb}B^{e0}_{p}\partial^{y}_{n}\Lambda^{f}_{j}\delta^{2}(x-y)\\ +2\Sigma^{k}_{c}\Sigma^{p}_{c}\epsilon^{jl}\epsilon_{fb}B^{e0}_{p}\Lambda^{f}_{k}\partial^{y}_{j}\left(\delta^{2}(x-y)\right)\\ +\frac{1}{2}\Sigma^{k}_{c}\Sigma^{p}_{c}\epsilon^{ln}\epsilon_{fd}B^{e0}_{p}\Lambda^{b}_{k}B^{fd}_{n}\delta^{2}(x-y)\\ -\Sigma^{k}_{c}\Sigma^{l}_{c}\epsilon^{jn}\epsilon_{fd}\delta^{e}_{b}\left(B^{d0}_{k}\partial^{y}_{n}\Lambda^{f}_{j}-2\Lambda^{f}_{k}\partial^{y}_{j}B^{d0}_{n}-\frac{1}{2}\Lambda^{a}_{k}B^{a0}_{j}B^{fd}_{n}\right)\delta^{2}(x-y) \end{multline*} The Poisson bracket of $\Omega^{0}_{l}$ vanishes with all the constraints except $S_{j}$: \begin{equation} {\{\Omega^{0}_{l}(x),S _{j}(y)}\}=-\epsilon^{kj}B^{a0}_{k}\Lambda^{0}_{0}\Lambda^{a}_{l}\delta^{2}(x-y) \end{equation} If we construct \begin{eqnarray} \bar{\Omega^{0}_{l}}=\pi^{0}_{l}-\Lambda^{0}_{0}B^{a0}_{l}\pi^{0}_{a0} \thickapprox 0 \end{eqnarray} then \begin{eqnarray} {\{\bar{\Omega^{0}_{l}}(x),S_{j}(y)}\}=\epsilon^{kj}\Lambda^{0}_{0}\left(\Lambda^{a}_{k}B^{a0}_{l}-\Lambda^{a}_{l}B^{a0}_{k}\right)\delta^{2}(x-y) \approx 0 \end{eqnarray} where we have used $\Lambda^{a}_{l}B^{a0}_{k}=\Lambda^{a}_{k}B^{a0}_{l}$, which is obtained from the constraint $\Phi_{2}$. Also, $\bar{\Omega^{0}_{l}}$ has vanishing Poisson brackets with all the other constraints. Replacing ${\Omega^{0}_{l}}$ by $\bar{\Omega^{0}_{l}}$ in the set of constraints (\ref{pc}), (\ref{sc}) we find that $\bar{\Omega_k{}^0}$ and $\Omega_{ab}{}^0$ have vanishing PBs among themselves and with the other constraints. With these results the classification of the constraints can easily be done. The complete classification of constraints is summarized in Table \ref{tab:Classification of Constraints} below. \begin{table}[ht] \caption{Classification of Constraints} \label{tab:Classification of Constraints} \centering \begin{tabular}{c c c} \hline\hline\\ \ & First Class & Second Class \\ \hline\\ Primary & $\bar{\Omega_k{}^0}$, ${\Omega^{0}_{ab}}$ & ${\Omega_{1}}$,${\Omega_{2}}$, ${\Omega^{0}_{0}}$, $\Omega^{a}_{k}$, $\Omega^{l}_{ab}$, $\Omega^{l}_{b0}$, $\Omega^{0}_{a0}$ \\[1em] \hline\\ Secondary & & $\Phi_{1}$, $\Phi_{d}$, $\Phi_{2}$, $S_{j}$, $\bar{S_{k}}$, $S$, $S^{'}_{e}$ \\[1em] \hline\hline \end{tabular} \end{table} The results tabulated above can be physically interpreted in the following way: \begin{enumerate} \item The number of independent fields is 18.
That gives 36 phase space variables, as each field is accompanied by its canonically conjugate momentum. The number of first class constraints is 3, while the number of second class constraints is 26. The number of independent degrees of freedom in the phase space can now be calculated: using (\ref{dof}) we get $N = 36 - 2 \times 3 - 26 = 4 $. So the number of degrees of freedom in configuration space is 2. Physically, they correspond to $\psi$ and $\psi^*$. Note that the Chern-Simons dynamics does not contribute any propagating degree of freedom. \item The number of independent primary first class constraints is three. According to the Dirac conjecture, this is the number of independent `gauge' degrees of freedom; the arbitrary functions in the solutions of the equations of motion will then be three in number. Physically, these are the consequence of three local symmetry operations: one rotation and two boosts. \end{enumerate} \section{Canonical analysis with $\Sigma_0{}^k$ = 0} We have already discussed at a few places in this paper that the motivation of our work is to check the consistency of the model (\ref{wl}) and to posit it in relation to the corresponding actions obtained from other approaches. To our knowledge the latter are of the same form as that of \cite{SW}. This form differs from our model essentially by the absence of the term containing $\Sigma_0{}^k$. It is then crucial to check whether, if we substitute $\Sigma_0{}^k = 0$ in our model, it still has the same physically consistent Hamiltonian structure. We therefore consider the truncated model \begin{multline} \label{wlt} \mathcal{L} = M\Bigl[\frac{i}{2}\left(\psi^{*}\partial_{0}\psi-\psi\partial_{0}\psi^{*}\right)\\ -B^{a0}_{0}mx_{a}\psi^{*}\psi-\frac{1}{2m}\Sigma^{k}_{a}\Sigma^{l}_{a}\left(\partial_{k}\psi^{*}-iB^{b0}_{k}mx_{b}\psi^{*}\right)\left(\partial_{l}\psi+iB^{c0}_{l}mx_{c}\psi\right)\Bigr]\\ -\epsilon^{kl}\frac{\epsilon_{ab}}{2}\left(\partial_{k}B^{ab}_{l}-\partial_{l}B^{ab}_{k}+\frac{1}{2}B^{a0}_{k}B^{b0}_{l}\right) -2\epsilon^{kl}\Lambda^{a}_{k}\Bigl[\frac{\epsilon_{ab}}{2}\left(\partial_{l}B^{b0}_{0}-\partial_{0}B^{b0}_{l}\right)-\frac{\epsilon_{cd}}{4}\left(B^{a0}_{0}B^{cd}_{l}-B^{a0}_{l}B^{cd}_{0}\right)\Bigr] \end{multline} which is obtained from (\ref{wl}) by putting $\Sigma_0{}^k = 0$ in it. We have also taken $\Sigma_0{}^0 = 1$, as is possible when there is no transformation of time, i.e. when there is spatial diffeomorphism only \cite{BMM1}. The canonical analysis proceeds in the same way as above. Performing it, we obtain the following primary constraints: \begin{align*} \Omega_{1}&=\pi-\frac{Mi}{2}\psi^{*} \approx 0\\ \Omega_{2}&=\pi^{*}+\frac{Mi}{2}\psi \approx 0\\ \Omega^{a}_{k}&=\pi^{a}_{k} \approx 0\\ \Omega^{\mu}_{ab}&=\pi^{\mu}_{ab} \approx 0\\ \Omega^{0}_{a0}&=\pi^{0}_{a0} \approx 0\\ \Omega^{l}_{b0}&=\pi^{l}_{b0}-\epsilon^{kl}\epsilon_{ab}\Lambda^{a}_{k} \approx 0\\ \end{align*} The stationarity of the primary constraints $\Omega^{\mu}_{ab}$ and $\Omega^{0}_{a0}$ gives the following secondary constraints: \begin{align*} \Phi_{1}&=\psi^{*}\psi \approx 0\\ \Phi_{d}&=\epsilon^{kl}\epsilon_{ad}\partial_{l}(\Lambda^{a}_{k})+\frac{\epsilon^{kl}}{2}\Lambda^{d}_{k}\epsilon_{ca}B^{ca}_{l} \approx 0\\ \Phi_{2}&=\epsilon^{kl}\Lambda^{a}_{k}B^{a0}_{l} \approx 0\\ S^{'}_{j}&=\epsilon^{kj}\Lambda^{a}_{k}B^{a0}_{0} \approx 0\\ \end{align*} The iteration terminates with the closure of the constraint algebra.
The non-vanishing Poisson brackets between the constraints are given by \begin{align*} {\{\Omega_{1}(x),\Omega_{2}(y)}\}&=-Mi\delta^{2}(x-y)\\ {\{\Omega_{1}(x),\Phi_{1}(y)}\}&=-\psi^{*}\delta^{2}(x-y)\\ {\{\Omega_{2}(x),\Phi_{1}(y)}\}&=-\psi\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),\Omega^{l}_{b0}(y)}\}&=-\epsilon^{pl}\epsilon_{cb}\Lambda^{c}_{k}\Lambda^{a}_{p}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),\Omega_{1}(y)}\}&=-\frac{i\psi^{*}}{2}M\Lambda^{a}_{k}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),\Omega_{2}(y)}\}&=\frac{i\psi}{2}M\Lambda^{a}_{k}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),\Phi_{d}(y)}\}&=\epsilon^{jl}\epsilon_{cd}\partial^{y}_{l}\left(\Lambda^{c}_{k}\Lambda^{a}_{j}\delta^{2}(x-y)\right)\\ &+\frac{1}{2}\epsilon^{jl}\epsilon_{cb}B^{cb}_{l}\Lambda^{d}_{k}\Lambda^{a}_{j}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),\Phi_{2}(y)}\}&=\epsilon^{pl}B^{b0}_{l}\Lambda^{b}_{k}\Lambda^{a}_{p}\delta^{2}(x-y)\\ {\{\Omega^{l}_{b0}(x),\Phi_{2}(y)}\}&=-\epsilon^{kl}\Lambda^{b}_{k}\delta^{2}(x-y)\\ {\{\Omega^{0}_{a0}(x),S^{'}_{j}(y)}\}&=-\epsilon^{kj}\Lambda^{a}_{k}\delta^{2}(x-y)\\ {\{\Omega^{a}_{k}(x),S^{'}_{j}(y)}\}&=\epsilon^{pj}\Lambda^{a}_{p}\Lambda^{b}_{k}B^{b0}_{0}\delta^{2}(x-y)\\ {\{\Omega^{l}_{ab}(x),\Phi_{d}(y)}\}&=-\epsilon^{kl}\epsilon_{ab}\Lambda^{d}_{k}\delta^{2}(x-y)\\ \end{align*} The complete classification of constraints is summarized in Table 2 below. \begin{table}[ht] \caption{Classification of Constraints when $\Sigma_0{}^k = 0$} \label{tab:Classification of Constraints of the truncated model} \centering \begin{tabular}{c c c} \hline\hline\\ \ & First Class & Second Class \\ \hline\\ Primary & ${\Omega^{0}_{ab}}$ & ${\Omega_{1}}$, ${\Omega_{2}}$, $\Omega^{a}_{k}$, $\Omega^{l}_{ab}$, $\Omega^{l}_{b0}$, $\Omega^{0}_{a0}$ \\[1em] \hline\\ Secondary & & $\Phi_{1}$, $\Phi_{d}$, $\Phi_{2}$, $S^{'}_{j}$ \\[1em] \hline\hline \end{tabular} \end{table} The number of fields is now 15, the number of first class constraints is one, whereas there are 20 second class constraints. Using (\ref{dof}), the number of degrees of freedom in the phase space is $2\times 15 - 2\times 1 - 20 = 8$. This is twice as large as the physical number of degrees of freedom. So we see that the model with $\Sigma_0{}^k = 0$ does not yield a consistent Hamiltonian analysis. Again, we see from Table 2 that the number of primary first class constraints is one, so the model predicts one local symmetry as opposed to the three physical symmetries. Taking $\Sigma_0{}^k =0$ thus also gives incorrect symmetries. Further investigation shows that it is the boost symmetries that are lost. This connection with boost is indeed remarkable, not only for GGT but also in general. In the above we have assumed $\Sigma_0{}^0 = 1$ in addition to $\Sigma_0{}^k = 0$. One may enquire about the reason behind such a choice. It has been proved in \cite{BMM1} that for spatial diffeomorphism, where the time translation parameter is zero, $\Sigma_0{}^0 $ is a constant which can conveniently be put to unity. The condition $\Sigma_0{}^k = 0$ {\bf{is not permitted}} by GGT in general, so it is no wonder that it leads to unphysical results. The assertion can be verified by taking a general $\Sigma_0{}^0 $ into account: following a similar calculation we can show that the number of degrees of freedom comes out to be three, different from the physical value. Moreover, from the point of view of the lost symmetries, there is no improvement.
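The degree-of-freedom bookkeeping behind the two counts above is elementary, and a few lines of code suffice to reproduce it. The following Python sketch only illustrates the arithmetic of (\ref{dof}); the field and constraint counts are read off from Tables 1 and 2, and no field-theoretic computation is performed.

\begin{verbatim}
# Phase-space degree-of-freedom count N = N1 - 2*N2 - N3, cf. Eq. (dof).
def dof(n_fields, n_first_class, n_second_class):
    n1 = 2 * n_fields  # each independent field carries a conjugate momentum
    return n1 - 2 * n_first_class - n_second_class

# Full model (Table 1): 18 fields, 3 first class, 26 second class constraints.
print(dof(18, 3, 26))  # 4 -> psi, psi* and their momenta: consistent

# Truncated model, Sigma_0^k = 0 (Table 2): 15 fields, 1 first class,
# 20 second class constraints.
print(dof(15, 1, 20))  # 8 -> twice the physical value: inconsistent
\end{verbatim}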
\section{Discussion of the results} The basic issue discussed in this paper is the consistency, in the phase space, of a non-relativistic complex scalar field (the Schrodinger field) coupled with background gravity by Galilean gauge theory (GGT) \cite{BMM1}, \cite{BM4}. The dynamics of gravity is assumed to be given by the Chern-Simons gravity action. The model is invariant under spatial diffeomorphism. The pioneering model in this field was given in \cite{SW}. However, it was riddled with certain difficulties concerning symmetries. The solution provided in \cite{SW} was to exploit a certain relationship between the gauge and boost parameters. The same model was derived in \cite{J} from a relativistic theory in the $c\to \infty$ limit. But that raised several questions, like the reason for the reduction of the number of independent symmetry parameters (owing to the equality of the gauge and boost parameters) and, more importantly, what would happen if one likes to couple a free Schrodinger field with background gravity \cite{n1}? The confusion was correctly diagnosed to be due to the lack of understanding of the proper way to couple with the nonrelativistic Newton-Cartan spacetime. Thus it was proposed that the gauge field be included in the elements of the NC algebra \cite{J}. However, to many this appears a little contrived. Certainly, the masters who erected the structure of NC spacetime never conjectured it. Also, this proposal is not free of inner problems (like the issue of the connection etc.). GGT was developed in this background \cite{BMM1}, \cite{BMM2}; it followed an alternative approach based on the localisation of symmetry. Equation (\ref{ssg}) is our result for a non-relativistic complex scalar field (the Schrodinger field) coupled with background gravity in Newton-Cartan spacetime. In GGT it is pretty straightforward to specialise (\ref{ssg}) so that it is invariant under spatial diffeomorphism and to include a gauge field in the action. From (\ref{ssg}), \begin{multline} \label{sl} \mathcal{L}={\sqrt{g}}\Bigl[\frac{i}{2}\left(\psi^{*}\partial_{0}\psi-\psi\partial_{0}\psi^{*}\right)+\frac{i}{2}\Sigma^{k}_{0}\left(\psi^{*}\partial_{k}\psi-\psi\partial_{k}\psi^{*}\right)\\ -B_{0}\psi^{*}\psi-\Sigma^{k}_{0}B_{k}\psi^{*}\psi-\frac{1}{2m}\Sigma^{k}_{a}\Sigma^{l}_{a}\left(\partial_{k}\psi^{*}-iB_{k}\psi^{*}\right)\left(\partial_{l}\psi+iB_{l}\psi\right)\Bigr] \end{multline} where we have substituted $\Sigma_0{}^0 =1$. The spatial metric is defined as \begin{equation} g_{ij} = \Lambda_i{}^a\Lambda_j{}^a \end{equation} Clearly $M=\det {\Lambda_i{}^a} = \sqrt{g}$, where $g= \det{g_{ij}}$. Now the gauge field can be simply included by replacing the partial derivatives by the appropriate covariant derivatives \begin{eqnarray} {D}_{0}\phi&=\partial_0\phi +iA_0\phi\notag\\ {D}_{k}\phi&=\partial_k\phi +iA_k\phi \label{P110} \end{eqnarray} where $A_\mu$ is an (external) gauge field. The resulting model can be organised as \cite{BMM3, BM4}
\begin{eqnarray} \tilde{S} &=& \int dx^0 d^2x \sqrt{g}[ \frac{i}{2}\left(\phi^{*}{\bar{D}}_{0}\phi-\phi{\bar{D}}_0\phi^{*}\right) -g^{kl}\frac{1}{2m}\bar{D}_k\phi^{*}\bar{D}_l\phi] \nonumber\\&+&\int dx^0 d^2x \sqrt{g} [\frac{i}{2}\Sigma_0{}^k\left(\phi^{*}{\bar{D}}_{k}\phi -\phi{\bar{D}}_k\phi^{*}\right)] \label{diffaction12} \end{eqnarray} where \begin{eqnarray} \bar{D}_{0}\phi&=\partial_0\phi +i\bar{A}_{0}\phi\notag\\ \bar{D}_{k}\phi&=\partial_k\phi +i\bar{A}_{k}\phi \label{P1100} \end{eqnarray} and \begin{eqnarray} \bar{A}_{\mu} = A_\mu + B_\mu \end{eqnarray} Compare (\ref{diffaction12}) with the action given by \cite{SW}, \begin{equation}\label{free-L1} S = \int dx^0 d^2x \sqrt{g}\left[\frac{i}{2} (\phi^{*}D_{0}\phi-\phi D_{0}\phi^*) - \frac{g^{ij}}{2m}(D_i\phi^*D_j\phi)\right]. \end{equation} The difference between (\ref{free-L1}) and (\ref{diffaction12}) is that in the former the spin connections $B_\mu{}^{ab}$ and $B_\mu{}^{a0}$ are absent. Since the Schrodinger field is a $3$-scalar, $B_\mu{}^{ab}$ drops out, but the same is not true for $B_\mu{}^{a0}$. However, the principal difference is the absence of the term containing $\Sigma_0{}^k$ in the action. The Hamiltonian analysis in the first place confirms the consistency of our model in the phase space. To check the impact of this term we have repeated the Hamiltonian analysis of our model, this time taking $\Sigma_0{}^k = 0$. We have seen that by setting $\Sigma_0{}^k = 0$ we no longer get a consistent theory. Hence the model (\ref{free-L1}) is ruled out due to its inconsistency in the phase space. We conclude that the model given by GGT must be taken in its entirety, as in (\ref{diffaction12}). As for the model of \cite{SW}, we note that a Hamiltonian analysis of it is unavailable. \section{Conclusion} A nonrelativistic diffeomorphism invariant Schrodinger field theory coupled with Chern-Simons gravity \cite{Witten} has been considered. The `matter' part of the theory has been obtained using the algorithm of the recently proposed Galilean gauge theory \cite{BMM1, BMM2, BMM3, BM4}, which leads to a coupling with gravity through the vierbeins and spin connections of the spacetime manifold. The gravity dynamics is given by the CS term, which is an interesting alternative to (being equivalent to) the Einstein-Hilbert action in $2+1$ dimensions \cite{blago}. The Schrodinger field theory coupled with background gravity was recently found to be very useful in connection with research in the fractional quantum Hall effect \cite{SW}. The model of \cite{SW} was used in diverse problems \cite{SW,Bekaert:2011qd,Hoyos:2011ez, Schaefer:2013oba,Hoyos:2013qna}, but there were many loose ends to it. Thus, the metric transformed in an anomalous way and the Galilean symmetry could only be retrieved in the flat limit by equating the gauge and boost parameters. The Chern-Simons term, which was known to be instrumental in the FQHE, was found to be incompatible with the NRDI of the model \cite{Hoyos:2011ez}. These problems were eradicated in the systematic treatment of GGT, where the Schrodinger field theory coupled with background NC gravity was systematically obtained \cite{BMM1, BMM2,BMM3, BM4}, with \begin{enumerate} \item non-relativistic spatial diffeomorphism invariance; \item Galilean symmetry in the flat limit; \item the facility to include the Chern-Simons term as easily as any gauge interaction. \end{enumerate} As the Schrodinger field coupled with NC gravity is associated with very important phenomenology, its details need to be investigated from different points of view.
The results of the present Hamiltonian analysis have demonstrated that not only is the GGT model physically consistent, but any deviation from it would lead to unphysical conclusions. We have performed a Hamiltonian analysis of a spatially diffeomorphic nonrelativistic Schrodinger field theory coupled with Chern-Simons gravity. The coupled model was derived from the recently developed Galilean gauge theory \cite{BMM1, BMM2,BMM3, BM4}. We have shown that the number of degrees of freedom matches the physically expected value. Also, the number of independent gauge symmetries comes out to be the same as the number of independent symmetries of the action. The coupled action contains a term which vanishes if the time-space part of the vielbein in Galilean coordinates is taken to be zero. We have explicitly worked out the constraint algebra of the reduced form, but it failed to give the correct values of the degrees of freedom and the independent symmetries of the truncated action. Our results confirm that the model obtained from GGT is consistent in the phase space in its entirety, notwithstanding its differences with the other approaches. Such an analysis is hardly available in the literature. Moreover, it introduces a model with Chern-Simons gravity into the literature in this field. {\bf{Acknowledgement}}: One of the authors (PM) would like to thank Rabin Banerjee for useful discussions.
\section*{Methods Summary} \footnotesize{ \textbf{Linear Calibration Procedure.} We determine the evanescent optomechanical coupling by displacing the nanostring by a known distance using a piezoelectric element and measuring the resulting frequency shift on the optical resonance. The frequency shift is calibrated via a modulation of known frequency applied to the laser. We then establish the response of the homodyne by sweeping the laser detuning over the optical resonance and measuring the slope of the phase response. This parameter, combined with the previously determined optomechanical coupling rate, gives the total response of the combined cavity interferometer system in [V/nm], allowing direct calibration of the time domain data in [nm]. We calibrate the response of our spectrum analyser by applying a test tone of known amplitude, which, using the time domain calibration, gives a spectral peak of known displacement spectral density. \textbf{Quadratic Calibration Procedure.} Frequency domain calibration of the quadratic measurement is performed by ensuring the calibrated RMS displacement, obtained from the linear measurement, is consistent with the noise power of the $2\ensuremath{\omega_\textrm{M}}$ peak, in accordance with the Isserlis-Wick theorem. In the time domain, a simple regression is used between the square of the linear measurement (\ensuremath{\widetilde{X}}) and the quadratic measurement ($\ensuremath{\widetilde{X}}_{2\omega}^2$). We verify that these procedures are consistent, to within known uncertainties, with one another and with the value of $\lambda^2\bar{n}$ computed from the independently measured system parameters. \textbf{State Conditioning.} At each discrete time step, we rotate the vector \{\ensuremath{\widetilde{P}},\ensuremath{\widetilde{Q}}\} by an angle $2\phi$, such that a new vector $\{\ensuremath{\widetilde{P}}^{2\phi},\ensuremath{\widetilde{Q}}^{2\phi}\}=\{(\ensuremath{\widetilde{P}}^2 + \ensuremath{\widetilde{Q}}^2)^{\frac{1}{2}},0\} $ is obtained. The simultaneously acquired linear data \{\ensuremath{\widetilde{X}},\ensuremath{\widetilde{Y}}\} is then rotated through the half angle, $\phi$, to obtain $\{\ensuremath{\widetilde{X}^{\phi}},\ensuremath{\widetilde{Y}^{\phi}}\}=\{\ensuremath{\widetilde{X}}\cos(\phi)+\ensuremath{\widetilde{Y}}\sin(\phi),\ensuremath{\widetilde{Y}}\cos(\phi)-\ensuremath{\widetilde{X}}\sin(\phi)\}$. For state preparation, the rotated linear data is conditioned upon the value of $\ensuremath{\widetilde{P}}^{2\phi}$, which is proportional to $\frac{1}{2}(\ensuremath{\widetilde{X}^{\phi}})^2$. We choose a conditioning window 4 times smaller than the quadratic measurement uncertainty. When the conditioning criterion is satisfied, the state is read-out using the rotated linear data $\{\ensuremath{\widetilde{X}}^{\phi},\ensuremath{\widetilde{Y}}^{\phi}\}$. }
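The rotation and conditioning step lends itself to a compact numerical implementation. The following Python sketch is our own illustration of that procedure, not the authors' analysis code; the array names, the centring of the conditioning window on the record mean and the placement of the factor-of-4 window are stated assumptions.

\begin{verbatim}
import numpy as np

def condition_state(X, Y, P, Q, sigma_quad):
    """Rotate quadratures and keep time bins passing the conditioning cut.

    X, Y       : simultaneously acquired linear quadratures (1D arrays)
    P, Q       : quadratic-measurement quadratures at 2*omega_M
    sigma_quad : quadratic measurement uncertainty (sets the window size)
    """
    # Rotate {P, Q} by 2*phi so the full amplitude sits in one quadrature:
    # P_2phi = sqrt(P**2 + Q**2), Q_2phi = 0.
    phi = 0.5 * np.arctan2(Q, P)
    P_2phi = np.hypot(P, Q)

    # Rotate the linear data through the half angle phi (same convention
    # as in the text: X_phi = X cos(phi) + Y sin(phi), etc.).
    X_phi = X * np.cos(phi) + Y * np.sin(phi)
    Y_phi = Y * np.cos(phi) - X * np.sin(phi)

    # Condition on P_2phi: window 4x smaller than the quadratic
    # measurement uncertainty, centred here on the record mean
    # (the set-point is an assumption of this sketch).
    window = sigma_quad / 4.0
    keep = np.abs(P_2phi - P_2phi.mean()) < window / 2

    # Read out the conditioned state from the rotated linear data.
    return X_phi[keep], Y_phi[keep]
\end{verbatim}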
\section{Introduction}\label{sec:intro} Information processing, both classical and quantum, is ultimately about getting a desired output from a given input. This can be seen as a guessing game, where the aim is formalized as a score function that gives high scores for successful outputs and no scores for unsuccessful outputs. The guessing game setup is a natural translation of many different information processing scenarios and it is therefore a useful framework for studying the advantages that manipulation of quantum systems can give in information processing tasks. The guessing game can be a communication scenario, where Alice tries to transmit information to Bob, possibly simultaneously hiding it from others. Or it can be a computing scenario, where Alice chooses an input string and then runs a computation on it (in this case Alice and Bob can be the same person). Our interest is in quantum guessing games, where the transmitted information is encoded into quantum states and then decoded by a quantum measurement. There can be processing between encoding and decoding, but this can all be seen as a part of the measurement since we put no restrictions on it. In both of the previously mentioned scenarios it is possible that Alice, or someone else, sends partial information after Bob has already performed a measurement. In the presently investigated scenario this later-sent information is classical, and we call these games \emph{(quantum) guessing games with posterior information}. In the computing scenario this kind of game can be seen as a hybrid computation, where one runs classical and quantum computing in parallel and uses both to arrive at the final result. The classical part of a computation may, for example, find one instance that is known to be incorrect with certainty, while the quantum part tries to find the correct answer even if some error is expected. The final guess takes into account both parts and is then typically better than each of them alone. The main aim of this paper is to present a clear and general framework for different types of guessing games with posterior information. We show that any such game can be written in a certain kind of standard form and, further, that the calculation of the maximal average score in a given game reduces to the calculation of the usual discrimination success probability of a so-called auxiliary state ensemble. We formulate symmetry for guessing games with posterior information and present the solution of a symmetric scenario when the symmetry is related to an irreducible representation. With examples we demonstrate that it is, indeed, possible to calculate the best average score analytically in many interesting cases. In our exemplary cases we derive the solutions for a class of encodings in a qubit system (the angle between the states of the encodings being a free parameter) and this therefore enables us to make comparisons and observations that a collection of numerical solutions could not provide. It is instructive to compare guessing games with posterior information to similar scenarios where the classical partial information is given to Bob before he performs a measurement. We call this kind of scenario a \emph{(quantum) guessing game with prior information}. Typically prior information allows Bob to adjust and optimize his measurement in a more clever way than when the same partial information is given afterwards.
This difference in average scores is the basis of a method that uses guessing games in the detection of quantum incompatibility. We reformulate the incompatibility detection method in the present general framework, recall the known results and point out some open questions. We further characterize a class of encodings for which prior and posterior information are equally valuable. For these encodings the timing of partial information is therefore irrelevant. The fact that in quantum guessing games the timing of partial information can change the maximal average score is the essential difference to classical guessing games. This observation may aid in finding new applications of quantum guessing games where the manipulation of quantum systems boosts information processing. Our investigation is organized as follows. In Section \ref{sec:guessing} we recall the basics of usual state discrimination and, more generally, guessing games with an arbitrary score function. This scenario is expanded in Section \ref{sec:posterior} to cover guessing games with posterior information, which are the focus of the current work. These games can be recast in the so-called standard form, explained in Section \ref{sec:standard}. Section \ref{sec:prior} reviews the connection of guessing games to incompatibility detection. Strikingly, the maximal average score in any guessing game with posterior information equals the maximal success probability in the usual state discrimination game of a related auxiliary state ensemble. This simple but important result is treated in Section \ref{sec:reduction} and it implies that all known methods to solve state discrimination games are applicable in our more general setting. In Section \ref{sec:symmetry} we formulate symmetry of guessing games with posterior information and show how it can be used to calculate the maximal average score in symmetric scenarios. Finally, in Section \ref{sec:qubit} we treat three different kinds of examples that demonstrate all the presented main concepts and results. \section{Guessing games}\label{sec:guessing} We will deal with finite dimensional quantum systems and measurements with a finite number of outcomes. We fix a $d$-dimensional, complex Hilbert space $\hh$, denote by $\lh$ the set of all its linear operators and say that $\varrho$ is a {\em state} on $\hh$ if it is a positive element of $\lh$ (i.e.~$\varrho$ is selfadjoint with nonnegative eigenvalues) and $\tr{\varrho} = 1$. We denote by $|X|$ the cardinality of a finite set $X$. A {\em measurement} on $\hh$ with the outcome set $X$ is a map $\M:X\to\lh$ such that $\M(x)$ is positive for all $x$ and $\sum_x\M(x)=\id$. A {\em state ensemble} on $\hh$ with the label set $X$ is a map $\en:X\to\lh$ such that $\en(x)$ is positive for all $x$ and $\sum_x \tr{\en(x)}=1$. Any state ensemble can be written as a product $\en(x)=p(x)\,\varrho_x$, where $\{\varrho_x\}_{x\in X}$ is a family of states on $\hh$ and $p:x\mapsto\tr{\en(x)}$ is a probability distribution on $X$. In the usual minimum error state discrimination, the system is prepared in one of several possible states $\varrho_x$, $x\in X$, and the task is to guess the correct state with a measurement (see e.g. reviews \cite{BaCr09,Bae13,BaKw15} for background and details). This can be seen as a scenario where two parties communicate by one of them sending one classical message $x$ -- the label of the state -- to the other, and to this aim he encodes $x$ into a quantum system.
The encoding is then described by a state ensemble $\en$, in which the probability distribution $p$ is the prior probability of labels to occur and $x\mapsto \varrho_x$ is the actual encoding. For any measurement $\M$ with the outcome set $X$, we denote by $\Pg(\en;\M)$ the \emph{guessing probability}, given as \begin{equation} \Pg(\en;\M) = \sum_{x} \tr{\en(x)\,\M(x)} = \sum_{x} p(x)\, \tr{\varrho_x\,\M(x)} \, . \end{equation} The maximal guessing probability for $\en$ is denoted as \begin{equation} \Pg(\en) = \max_{\M}\, \Pg(\mathcal{E};\M) \, , \end{equation} where the optimization is over all measurements with the outcome set $X$. A \emph{guessing game} can be something different than discrimination, although the basic idea is the same. Generally, we have a \emph{score function} $f:X\times Y \to [0,1]$ and the associated \emph{average score} is given as \begin{equation} \Ef(\en;\M) = \sum_{x,y} f(x,y)\, \tr{\en(x)\,\M(y)} = \sum_{x,y} f(x,y)\,p(x)\, \tr{\varrho_x\,\M(y)} \, . \end{equation} The input and output label sets $X$ and $Y$ can be different (some examples are presented shortly). We are often considering a scenario where $\en$ is given and $\M$ is optimized to give as high an average score as possible. The maximal average score is denoted by \begin{equation}\label{eq:def_E(E)} \Ef(\en) = \max_{\M}\, \Ef(\en;\M) \, . \end{equation} In a typical guessing game some pairs $(x,y)\in X\times Y$ are wanted (successful guess) and other pairs are unwanted (unsuccessful guess). If we assign values $f(x,y)=1$ for wanted pairs and $f(x,y)=0$ for unwanted pairs, then the average score $\Ef(\en;\M)$ equals the probability of getting a wanted pair. Intermediate scores (i.e. $0<f(x,y)<1$) are also possible and can be e.g.~used to give some reward if the guess is almost but not exactly wanted. \begin{example}(\emph{Discrimination and antidiscrimination games})\label{ex:discr} In state discrimination, we set $X=Y$ and choose a score function $f$ which assigns nonzero values to all elements on the diagonal of $X\times X$ and $f(x,y) = 0$ for all $x\neq y$. If we additionally require that $f$ takes only values $0$ and $1$, then we get the standard discrimination score function $f(x,y)=\delta_{x,y}=:f_\delta(x,y)$, for which $\Eg_{f_\delta} = \Pg$. Antidiscrimination (also called antidistinguishability) is defined in a similar manner, but now one aims to get any other outcome than the sent message $x$. This means that we choose a score function $f$ such that $f(x,x)=0$ and $f(x,y)>0$ for $y\neq x$. If we further require that $f$ takes only values $0$ and $1$, then we obtain the standard antidiscrimination score function $f=1-f_\delta$. The discrimination and antidiscrimination games have natural generalizations to the cases in which the receiver is allowed to guess several (fixed integer $2\leq k< |X|$) outcomes instead of one. To formulate these types of guessing games, we choose $Y=\{S\subset X : \mo{S} = k\}$ and $f$ such that $f(x,S)=1$ for $x\in S$ and $f(x,S)=0$ otherwise. The receiver hence gets a score if and only if the input $x$ is contained in the guessed set $S$. In the respective generalization for antidiscrimination games we choose $f$ such that $f(x,S)=1$ for $x\notin S$ and $f(x,S)=0$ otherwise. \end{example} \begin{example}(\emph{Partition and property guessing games})\label{ex:partition} In a partition guessing game, the input set $X$ is partitioned in some way and $Y$ labels the blocks of the partition. For instance, we can take $X=\{1,\ldots,n\}$ and $Y=\{\text{even}, \text{odd}\}$.
The aim is to guess the correct class of the input label, which is obviously less demanding than to guess the input label itself. Generally, suppose that $Y$ is an arbitrary set, $\upsilon:X \to Y$ is a function and let $X_y = \upsilon^{-1}(y)$ for all $y$. Then, $(X_y)_{y\in Y}$ is a partition of $X$, i.e., the subsets are disjoint and their union is $X$. The associated score function $f_\upsilon$ is defined as \begin{equation}\label{eq:partition} f_\upsilon(x,y) = \delta_{\upsilon(x),y} = \begin{cases} 1 & \text{if $x\in X_y$} \\ 0 & \text{otherwise} \end{cases}\,. \end{equation} Another related score function $f_{\neg \upsilon}$ is defined as $f_{\neg \upsilon}(x,y)=1-f_\upsilon(x,y)$. In the special case when $X=Y$ and $\upsilon$ is the identity function, the score function $f_\upsilon$ is the standard discrimination score function $f_\delta$ and $f_{\neg \upsilon}$ is the standard antidiscrimination score function $1-f_\delta$ introduced in Example \ref{ex:discr}. Partition guessing games are a special class of property guessing games. While a partition divides a set $X$ into disjoint subsets, properties can have overlaps. For instance, we can take $X=\{1,\ldots,n\}$, $Y=\{\text{small}, \text{large}\}$ and agree that `small' are numbers $x$ satisfying $x\leq \ceil{\tfrac{n+1}{2}}$ and `large' are numbers $x$ satisfying $x\geq\floor{\tfrac{n+1}{2}}$. In this case, the numbers $x$ with $\floor{\tfrac{n+1}{2}} \leq x\leq \ceil{\tfrac{n+1}{2}}$ have both properties. Generally, suppose that $X$, $Y$ are arbitrary sets and $R \subset X \times Y$ is a relation. The associated score function $f_R$ is defined as \begin{equation*} f_R(x,y) = 1_R(x,y) = \begin{cases} 1 & \text{if $xRy$} \\ 0 & \text{otherwise} \end{cases}\,. \end{equation*} Another related score function $f_{\neg R}$ is defined as $f_{\neg R}(x,y)=1-f_R(x,y)$. In the special case when $Y=\{S\subset X : \mo{S} = k\}$ and $R$ is the `belongs to' relation, the property guessing games defined via $f_R$ and $f_{\neg R}$ are the generalized (anti)discrimination games introduced in Example \ref{ex:discr}. \end{example} Following \cite[Section 2.2.2]{SSQT01}, any guessing game can be recast as a standard discrimination game by suitably redefining the state ensemble at hand. To this aim, we set \begin{equation}\label{eq:Delta} \Delta(\en,f) = \sum_{x,y} f(x,y) \, \tr{\en(x)} \end{equation} and whenever this constant is nonzero we further define the \emph{auxiliary state ensemble} $\en_f$ with the label set $Y$ as \begin{equation}\label{eq:enf_0} \en_f (y) = \Delta(\en,f)^{-1} \sum_x f(x,y)\,\en(x) \,. \end{equation} With this definition we have the equalities \begin{gather} \Ef(\en;\M) = \Delta(\en,f)\ \Pg(\en_f;\M) \,, \label{eq:aux_Pgfpost_1} \\ \Ef(\en) = \Delta(\en,f)\ \Pg(\en_f)\,. \label{eq:aux_Pgfpost_2} \end{gather} In this way a guessing game with an arbitrary score function $f$ reduces to the usual state discrimination game for the respective auxiliary state ensemble. We remark that the precondition $\Delta(\en,f)\neq 0$ mentioned earlier means that $f(x,y)\neq 0$ for some $x,y$ with $\en(x)\neq 0$. If $\Delta(\en,f)=0$, the auxiliary state ensemble can be defined in an arbitrary way without changing \eqref{eq:aux_Pgfpost_1}-\eqref{eq:aux_Pgfpost_2}, since in that case $\Ef(\en;\M) = 0$ for all $\M$ and thus \eqref{eq:aux_Pgfpost_1}-\eqref{eq:aux_Pgfpost_2} are satisfied for any choice of $\en_f$.
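To make the reduction \eqref{eq:enf_0}--\eqref{eq:aux_Pgfpost_2} concrete, the following Python sketch builds the auxiliary state ensemble $\en_f$ for a small toy example and numerically checks the identity $\Ef(\en;\M) = \Delta(\en,f)\,\Pg(\en_f;\M)$; the qubit ensemble, score function and measurement below are arbitrary choices of ours, made only for illustration.

\begin{verbatim}
import numpy as np

# Toy qubit ensemble with labels X = {0, 1} and uniform prior:
# E(x) = p(x) * rho_x.
rho = {0: np.array([[1, 0], [0, 0]], dtype=complex),          # |0><0|
       1: np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)}  # |+><+|
E = {x: 0.5 * rho[x] for x in (0, 1)}

# Toy score function f : X x Y -> [0, 1] with Y = {0, 1}.
f = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.5, (1, 1): 1.0}

# Delta(E, f) = sum_{x,y} f(x, y) tr E(x).
Delta = sum(f[x, y] * np.trace(E[x]).real for x in E for y in (0, 1))

# Auxiliary ensemble E_f(y) = Delta^{-1} sum_x f(x, y) E(x).
Ef = {y: sum(f[x, y] * E[x] for x in E) / Delta for y in (0, 1)}

# An arbitrary two-outcome projective measurement M.
theta = 0.3
v = np.array([np.cos(theta), np.sin(theta)])
M = {0: np.outer(v, v).astype(complex)}
M[1] = np.eye(2) - M[0]

# Average score of the game vs. guessing probability of E_f.
score = sum(f[x, y] * np.trace(E[x] @ M[y]).real for x in E for y in M)
Pguess = sum(np.trace(Ef[y] @ M[y]).real for y in M)
print(np.isclose(score, Delta * Pguess))  # True
\end{verbatim}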
\begin{example}\label{ex:mixtures} Let $\upsilon:X \to Y$ be a function that determines a partition guessing game as explained in Example \ref{ex:partition}. Then, $\Delta(\en,f_\upsilon)=1$ and \begin{equation*} \en_{f_\upsilon}(y) = \sum_{x \in X_y} \en(x) \, . \end{equation*} We conclude that a partition guessing game reduces to the usual discrimination game where the states are mixtures of the states in the blocks of the partition. \end{example} We end this section with an upper bound for $\Pg(\en)$ which despite its simplicity will be quite useful in the later developments (Sections \ref{sec:symmetry} and \ref{sec:qubit}). It has the same derivation as \cite[Proposition 2]{CaHeTo18}. \begin{proposition}\label{prop:Pbound} For a state ensemble $\en$ with the label set $X$, we denote by $\Lambda(\en)$ the largest eigenvalue of all the operators $\en(x)$, $x\in X$. Then, \begin{equation} \Pg(\en)\leq d \, \Lambda(\en) \,. \end{equation} The above equality is attained if and only if there exists a measurement $\M$ with the outcome set $X$ satisfying $\en(x)\,\M(x) = \Lambda(\en)\,\M(x)$ for all $x\in X$. If this is the case, then $\Pg(\en) = \Pg(\en;\M)$ for such a measurement. \end{proposition} \begin{proof} If $\lambda(x)$ is the largest eigenvalue of the operator $\en(x)$, we have $\lambda(x)\,\id-\en(x) \geq 0$, and then \begin{align*} & \lambda(x)\,\tr{\M(x)}-\tr{\en(x)\,\M(x)} = \tr{\big(\lambda(x)\,\id-\en(x)\big)\M(x)} \\ & \qquad\quad = {\rm tr}\Big\{\Big[\big(\lambda(x)\,\id-\en(x)\big)^{\frac{1}{2}}\M(x)^{\frac{1}{2}}\Big]^* \Big[\big(\lambda(x)\,\id-\en(x)\big)^{\frac{1}{2}} \M(x)^{\frac{1}{2}}\Big]\Big\} \geq 0\,. \end{align*} In this expression, the last equality is attained if and only if $\big(\lambda(x)\,\id-\en(x)\big)\M(x) = 0$, that is, $\en(x)\,\M(x) = \lambda(x)\,\M(x)$. It follows that \begin{align*} \Pg(\en;\M) & = \sum_x \tr{\en(x)\,\M(x)} \leq \sum_x \lambda(x)\,\tr{\M(x)} \leq \sum_x \Lambda(\en)\,\tr{\M(x)} = \Lambda(\en)\,\tr{\id} \\ & = d \, \Lambda(\en) \,, \end{align*} where all the equalities are attained if and only if $\en(x)\,\M(x) = \lambda(x)\,\M(x)$ for all $x$ and $\M(x) = 0$ for all $x$ such that $\lambda(x)<\Lambda(\en)$. The latter two conditions are equivalent to $\en(x)\,\M(x) = \Lambda(\en)\,\M(x)$ for all $x$, thus proving the claim. \end{proof} \section{Guessing games with posterior information}\label{sec:posterior} We will now expand the guessing game setup to cover later sent classical information. Related formulations have been investigated earlier in \cite{BaWeWi08,GoWe10} and their differences to the current approach has been explained in \cite{CaHeTo18}. In \emph{guessing games with posterior information}, the standard communication scenario is modified by adding one step to it. The starting point, known both to Alice and Bob, consists of finite sets $X$, $Y$, a score function $f:X\times Y \to [0,1]$, a finite set $T$ describing partial information, and conditional probabilities $\alpha(t\mid x)$ for all $t\in T$ and $x\in X$ relating partial information to input labels. We can take $T=\{1,\ldots,m \}$ whenever it is convenient to label the elements of $T$ by integers, although this is not always the case as $T$ may not have a natural ordering (see e.g.~Example \ref{ex:ruling-out} below). The scenario has the following steps (see Figure \ref{fig:post}): \begin{enumerate}[(i)] \item Alice uses a state ensemble $\en$ with the label set $X$. 
This means that she picks a label $x$ with the probability $p(x)=\tr{\en(x)}$ and transmits the respective state $\varrho_x=\en(x)/\tr{\en(x)}$ to Bob. \item Bob receives $\varrho_x$ and performs a measurement $\M$ with the outcome set $Z$. Bob obtains the outcome $z\in Z$ with probability $\tr{\varrho_x\,\M(z)}$. \item Bob receives a classical message $t\in T$. This message depends on the input label $x$; Bob receives $t$ with probability $\alpha(t\mid x)$. This additional information can be sent by Alice, but it can also have another origin. The essential point is that Bob receives it after he has performed the measurement. We call $\alpha$ the \emph{partial information map}. \item Bob uses the additional information to post-process the obtained measurement outcome $z$ to an element $y\in Y$. For each $t\in T$, Bob can use a different post-processing matrix $\nu_t$ that relabels the outcome $z$ into $y$ with probability $\nu_t(y\mid z)$. We denote $\nu: t \mapsto \nu_t$ and call this the \emph{post-processing map}. The aim of Bob is to choose $y$ such that $f(x,y)$ is maximal. \end{enumerate} \begin{figure}[h!] \centering \includegraphics[scale=0.7]{Post.pdf} \caption{In a guessing game with posterior information Bob receives Alice's partial information only after he has performed a measurement in the quantum state transmitted by her. He then post-processes the obtained outcome, trying to maximize the score of the game. \label{fig:post}} \end{figure} Summarizing, a guessing game with posterior information is defined by a score function $f$ (the goal of the game) and a partial information map $\alpha$ (the additional aid for reaching the goal), while Alice's preparations are determined by $\en$ and Bob's guessing strategy is determined by a measurement $\M$ and post-processing map $\nu$. The average score in the previously described scenario is \begin{equation}\label{eq:Pgfpost} \Efpost(\en;\M,\nu) = \sum_{x,y,t,z} f(x,y)\, \alpha(t \mid x)\, \nu_t(y \mid z)\, \tr{\en(x)\,\M(z)} \end{equation} and its maximal value is \begin{equation}\label{eq:Ppost_f,alpha} \Efpost(\en) = \max_{\M,\nu}\, \Efpost(\en;\M,\nu) \, , \end{equation} where the optimization is over all measurements $\M$ and post-processing maps $\nu$. We remark that in \eqref{eq:Ppost_f,alpha} the outcome set of $\M$ is also allowed to vary. In particular, the fact that the maximum in \eqref{eq:Ppost_f,alpha} is attained is not immediate. However, we will prove in Section \ref{sec:standard} that this is indeed the case (see Proposition \ref{prop:max_barM}). In the following we present some examples to become familiar with the definitions just presented and to see the diversity of the guessing game scenario. \begin{example}(\emph{Two extreme cases of posterior information.}) There are two extreme cases of posterior information, those of telling everything or telling nothing. Firstly, Alice can tell the sent label $x$ to Bob as it is, in which case $T=X$ and $\alpha(t \mid x)=\delta_{t,x}$. This means that the quantum prepare-and-measure part as well as the post-processing are obsolete and Bob -- as he learns $x$ -- can just choose $y_x$ such that $f(x,y_x)$ is maximal.
Indeed, in this setting we have \begin{align*} \Efpost(\en;\M,\nu) & = \sum_{x,y,z} f(x,y)\, \nu_x(y \mid z)\, \tr{\en(x)\,\M(z)} \\ & \leq \sum_x f(x,y_x)\sum_{z}\bigg( \sum_y \nu_x(y \mid z)\bigg)\, \tr{\en(x)\,\M(z)}\\ & =\sum_x f(x,y_x)\,p(x) \end{align*} and the bound is achieved by choosing $\nu_x(y\mid z)=\delta_{y,y_x}$. The maximal average score is thus given as $\sum_x f(x,y_x)\,p(x)$. Secondly, Alice can tell a posterior message $t$ that is independent of the original label, i.e., $\alpha(t \mid x) = \alpha(t)$. From \eqref{eq:Pgfpost} we get \begin{align*} \Efpost(\en;\M,\nu) &= \Ef(\en;\M') \, , \end{align*} where \begin{equation*} \M'(y) = \sum_{z} \left( \sum_{t} \alpha(t)\,\nu_t(y\mid z) \right) \M(z) \, . \end{equation*} The post-processing can hence be included in the measurement and the guessing game reduces to that without posterior information, as expected. A related special case is the one in which Alice may send useful posterior information but Bob is not taking advantage of it, i.e., Bob is post-processing his measurement outcome in a fixed manner. Formally, this means that the post-processing map $\nu:t\mapsto\nu_t$ is constant, hence the measurement $$ \M''(y) = \sum_{z} \nu_t(y\mid z)\,\M(z) $$ does not depend on $t$, and \eqref{eq:Pgfpost} takes the form \begin{align*} \Efpost(\en;\M,\nu) & =\sum_{x,y,t} f(x,y)\,\alpha(t \mid x)\, \tr{\en(x)\,\M''(y)} = \Ef(\en;\M'')\,. \end{align*} Choosing $Y=Z$ and $\nu_t(y\mid z) = \delta_{y,z}$, one has $\M''=\M$, and this confirms the intuitively clear fact that \begin{equation} \Efpost(\en) \geq \Ef(\en) \end{equation} for any choice of $\alpha$, as Bob can always decide to ignore the posterior information. \end{example} \begin{example}(\emph{Deterministic posterior information})\label{ex:non-overlapping} Suppose that $\alpha(t\mid x)\in\{0,1\}$ for all $x,t$. Since $\sum_t \alpha(t \mid x) =1$, this means that for each $x\in X$ there is a unique $\tau(x)\in T$ such that $\alpha(\tau(x)\mid x) = 1$. Therefore, the input label $x$ specifies the posterior information sent later deterministically. By denoting $X_t = \tau^{-1}(t)$, the sets $(X_t)_{t\in T}$ constitute a partition of $X$ and $\alpha = \alpha_\tau$, where \begin{equation}\label{eq:non-overlapping} \alpha_\tau(t \mid x)= \delta_{\tau(x),t} = \begin{cases} 1 & \text{if $x\in X_t$} \\ 0 & \text{otherwise} \end{cases}\,. \end{equation} We refer to this case as the case of \emph{deterministic posterior information}. For the task of state discrimination, this scenario has been discussed in \cite{CaHeTo18}. As a paradigmatic example of deterministic posterior information, we recall the discrimination task presented in \cite{AkKaMa19}, where $|X|=|Y|=4$ and $|T|=2$. In this guessing game the set $X$ can be chosen to contain four symbols $\{\clubsuit, \spadesuit, \diamondsuit, \heartsuit\}$, and Alice chooses the input label among them with uniform probability. She uses a qutrit system to send her message to Bob, and the respective (pure) qutrit states correspond to the unit vectors \begin{equation*} \varrho_{\clubsuit} \sim \frac{1}{\sqrt{2}} \left(\begin{array}{c}1 \\ 1 \\0\end{array}\right) , \ \varrho_{\spadesuit} \sim \frac{1}{\sqrt{2}} \left(\begin{array}{c}1 \\ -1 \\0\end{array}\right) , \ \varrho_{\diamondsuit}\sim \frac{1}{\sqrt{2}} \left(\begin{array}{c}1 \\ 0 \\ 1\end{array}\right) , \ \varrho_{\heartsuit}\sim \frac{1}{\sqrt{2}} \left(\begin{array}{c}1 \\ 0 \\ -1\end{array}\right) .
\end{equation*} These are four states of a three-dimensional system, hence there is no measurement that would perfectly discriminate them. However, Bob knows that after he has performed the measurement, Alice will inform him about the color of the symbol (black for $\{\clubsuit, \spadesuit\}$ and red for $\{\diamondsuit, \heartsuit\}$). In our notation, this means that the partition of $X$ is $X_{\textrm{black}}=\{\clubsuit, \spadesuit\}$ and $X_{\textrm{red}}=\{\diamondsuit, \heartsuit\}$. The measurement $\M$ that Bob wisely decides to use is \begin{align*} \M(1) & =\frac{1}{4}\left(\begin{array}{ccc}1 & 1 & 1 \\1 & 1 & 1 \\1 & 1 & 1\end{array}\right) , & \M(2) & =\frac{1}{4}\left(\begin{array}{ccc}1 & 1 & -1 \\1 & 1 & -1 \\-1 & -1 & 1\end{array}\right) , \\ \M(3) & =\frac{1}{4}\left(\begin{array}{ccc}1 & -1 & 1 \\-1 & 1 & -1 \\1 & -1 & 1\end{array}\right) , & \M(4) & =\frac{1}{4}\left(\begin{array}{ccc}1 & -1 & -1 \\-1 & 1 & -1 \\-1 & -1 & 1\end{array}\right) . \end{align*} This leads to the probability distributions \begin{align*} \tr{\varrho_{\clubsuit}\,\M(\cdot)} & = \left(\half,\half,0,0\right) \, , \\ \tr{\varrho_{\spadesuit}\,\M(\cdot)} & = \left(0,0,\half,\half\right) \, , \\ \tr{\varrho_{\diamondsuit}\,\M(\cdot)} & = \left(\half,0,\half,0\right) \, , \\ \tr{\varrho_{\heartsuit}\,\M(\cdot)} & = \left(0,\half,0,\half\right) \, . \end{align*} From these probabilities we confirm that Bob can indeed infer the correct input label if he gets the color of the input symbol as posterior information. For example, if the outcome is $z=2$, then Bob needs to post-process it to $\clubsuit$ if the color is black, and to $\heartsuit$ if the color is red. \end{example} \begin{example}(\emph{Excluding wrong options})\label{ex:ruling-out} Let us set $T=X$ and define \begin{equation}\label{eq:ruling-out} \alpha_{\rm ex}(t\mid x) = \frac{1}{|X|-1}\, (1-\delta_{x,t}) \,. \end{equation} This partial information map means that Alice announces one wrong option $t$ after Bob has performed his measurement, and she picks it with uniform probability within the set $X\setminus\{x\}$. More generally, we can fix any positive integer $k < |X|$ and define \begin{equation*} T = \{S\subset X : \mo{S} = k\}\,,\qquad\qquad \alpha_{\rm ex}(S\mid x) = \frac{k!\,(|X|-k-1)!}{(|X|-1)!}\, 1_{X\setminus S}(x) \,, \end{equation*} where the normalization constant of $\alpha_{\rm ex}$ is the inverse cardinality of the set $T_x = \{S\in T : x\notin S\}$. This choice of $\alpha$ means that Alice announces a collection of $k$ wrong options $S=\{x_1,\ldots,x_k\}$, and she picks it with uniform probability within the set $T_x$. \end{example} \section{Standard form of guessing games with posterior information}\label{sec:standard} As we have seen in the last section, Bob's guessing strategy is determined by a measurement $\M$ and post-processing map $\nu$. There is a certain freedom in choosing $\M$ and $\nu$, still leading to the same average score for a given state ensemble $\en$. To see this, we write the average score \eqref{eq:Pgfpost} as \begin{equation}\label{eq:compatible} \Efpost(\en;\M,\nu) = \sum_{x,y,t} f(x,y)\, \alpha(t \mid x)\, \tr{\en(x)\,\N_t(y)} \, , \end{equation} where $\N_t$ are the post-processed measurements defined as \begin{equation}\label{eq:compatible-N} \N_t(y) = \sum_{z} \nu_t(y \mid z) \, \M(z)\,. \end{equation} Thus, different measurements $\M$ and post-processing maps $\nu$ which yield the same measurements $\N_t$ in \eqref{eq:compatible-N} lead to equal average scores.
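As a concrete illustration of \eqref{eq:compatible-N}, the following Python sketch (our own illustration; the trine measurement and the particular relabeling are arbitrary choices) computes a post-processed measurement from a parent measurement and verifies that its effects again form a measurement:
\begin{verbatim}
import numpy as np

# Parent qubit measurement M: the 'trine' POVM with three outcomes z = 0,1,2.
angles = [0.0, 2*np.pi/3, 4*np.pi/3]
M = {z: (np.eye(2) + np.cos(t)*np.array([[1, 0], [0, -1]])
         + np.sin(t)*np.array([[0, 1], [1, 0]])) / 3
     for z, t in enumerate(angles)}

# One post-processing nu_t(y|z): relabel the three outcomes to y in {0, 1}.
nu_t = {(0, 0): 1.0, (1, 0): 0.0,
        (0, 1): 0.0, (1, 1): 1.0,
        (0, 2): 0.5, (1, 2): 0.5}

# Post-processed measurement N_t(y) = sum_z nu_t(y|z) M(z), eq. (eq:compatible-N).
N_t = {y: sum(nu_t[(y, z)] * M[z] for z in M) for y in (0, 1)}

# N_t is again a measurement: positive effects summing to the identity.
assert all(np.linalg.eigvalsh(N_t[y]).min() >= -1e-12 for y in N_t)
assert np.allclose(N_t[0] + N_t[1], np.eye(2))
\end{verbatim}
A collection $(\N_t)_{t\in T}$ obtained in this way from a single parent measurement is compatible by construction, which motivates the definition recalled next.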
Given a collection of measurements $(\N_t)_{t \in T}$, all with the same outcome set $Y$, we recall that the collection is called {\em compatible} if each $\N_t$ can be written as in \eqref{eq:compatible-N} for some choice of $\M$ and $\nu$ \cite{HeMiZi16}. Otherwise, one says that the collection is {\em incompatible}. As a consequence of \eqref{eq:Ppost_f,alpha} and \eqref{eq:compatible}, we can write \begin{equation*} \Efpost(\en) = \max\bigg\{ \sum_{x,y,t} f(x,y)\, \alpha(t \mid x)\, \tr{\en(x)\,\N_t(y)} : (\N_t)_{t \in T} \text{ is compatible} \bigg\}\,. \end{equation*} The compatibility constraint guarantees that the two measurement scenarios -- using $\M$ and post-processing, or using the collection $(\N_t)_{t\in T}$ -- are equivalent. In fact, without the compatibility constraint, the scenario with many measurements becomes a guessing game with \emph{prior} information. We come back to this point in Section \ref{sec:prior}. The outcome set of $\M$ in the definition of compatibility of $(\N_t)_{t \in T}$ is not fixed and can be arbitrary. However, every compatible collection of measurements has a {\em joint measurement}, i.e., a measurement defined on their product outcome set and giving them as marginals \cite{AlCaHeTo09}. In the current context, this means that we can always switch from $\M$ to a measurement with the outcome set $Y^T$ and to a fixed post-processing map, defined as \begin{equation}\label{eq:def_pi} \pi_t(y\mid \phi)= \delta_{y,\phi(t)} = \begin{cases} 1 & \text{ if $y=\phi(t)$} \\ 0 & \text{ if $y\neq\phi(t)$} \end{cases}\,. \end{equation} (Here and in the following, we use the customary notation $Y^T$ for the set of all maps $\phi:T\to Y$. If $T=\{1,\ldots,m\}$, then $Y^T$ is identified with the product set $Y^m$ canonically. The functional notation is especially convenient when $T$ does not have any natural ordering, see e.g.~Example \ref{ex:ruling-out}.) In fact, starting from $\M$ and $\nu$, we define a measurement $\bar{\M}_{\nu}$ with the outcome set $Y^T$ as \begin{equation*} \bar{\M}_{\nu}(\phi) = \sum_z \M(z) \prod_t \nu_t(\phi(t) \mid z) \end{equation*} and then we have \begin{equation*} \begin{aligned} \sum_\phi \pi_t (y \mid \phi)\,\bar{\M}_{\nu}(\phi) & = \sum_z \M(z) \,\nu_t(y \mid z) \prod_{t'\neq t} \sum_{y'} \nu_{t'}(y' \mid z) \\ & = \sum_{z} \nu_t(y \mid z)\,\M(z)\,, \end{aligned} \end{equation*} which means that the post-processed measurements \eqref{eq:compatible-N} are the \emph{marginals} of $\bar{\M}_{\nu}$. In particular, \begin{equation}\label{eq:Epost(M,nu)=Epost(barM,pi)} \Efpost(\en;\M,\nu)=\Efpost(\en;\bar{\M}_{\nu},\pi) \, . \end{equation} The importance of this transition from $\M$ and $\nu$ to $\bar{\M}_{\nu}$ and $\pi$ is that for the latter pair the outcome set is fixed and so is the post-processing map. We thus reach the following conclusion. \begin{proposition}\label{prop:max_barM} The maximum in \eqref{eq:Ppost_f,alpha} is attained and \begin{equation}\label{eq:max_barM} \Efpost(\en) = \max_{\bar{\M}}\, \Efpost(\en;\bar{\M},\pi) \,, \end{equation} where the optimization is over all measurements $\bar{\M}$ with the outcome set $Y^T$. \end{proposition} \begin{proof} Clearly, $\Efpost(\en;\bar{\M},\pi)\leq\Efpost(\en)$ for all $\bar{\M}$, and \eqref{eq:max_barM} is then a consequence of the bound $$ \Efpost(\en;\M,\nu) \leq \max_{\bar{\M}}\, \Efpost(\en;\bar{\M},\pi) $$ following from \eqref{eq:Epost(M,nu)=Epost(barM,pi)}.
The maxima in \eqref{eq:Ppost_f,alpha} and \eqref{eq:max_barM} are attained, since the measurements with the outcome set $Y^T$ form a compact set and in \eqref{eq:max_barM} the post-processing map $\pi$ is fixed. \end{proof} As a result, if the objective is to optimize the average score of a guessing game with posterior information that has $Y$ as the set of Bob's guesses and $T$ as the partial information set, it is enough to consider guessing strategies of the following \emph{standard form}: \begin{itemize} \item Bob uses a measurement $\bar{\M}$ with the outcome set $Y^T$. From the obtained measurement outcome $\phi$, he chooses $\phi(t)$ based on the posterior information $t \in T$. \end{itemize} This general formulation is useful for proving results in the subsequent sections. \section{Guessing games with prior information and detection of incompatibility}\label{sec:prior} We recall that $\Efpost(\en)$ denotes the best achievable average score when the optimization is over all measurements $\M$ and post-processing maps $\nu$. In Section \ref{sec:standard} we saw that finding $\Efpost(\en)$ is equivalent to optimizing the sum in \eqref{eq:compatible} over the collections of $|T|$ compatible measurements with the outcome set $Y$. One can obviously write such a sum also without the assumption of compatibility, but ignoring this constraint may lead to a larger maximal average score than $\Efpost(\en)$. In fact, the usage of the additional information $t$ for the choice of the measurement $\N_t$ means that $t$ is used before the measurement happens. We call this new scenario a \emph{guessing game with prior information} (see Figure \ref{fig:pre}), and we write \begin{equation}\label{eq:pre} \Efpre(\en;( \N_t )_{t \in T}) = \sum_{x,y,t} f(x,y)\, \alpha(t \mid x)\, \tr{\en(x)\,\N_t(y)} \end{equation} for its average score. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{Pre.pdf} \caption{In a guessing game with prior information Bob arranges his measurement after he receives Alice's partial information. The post-processing of the obtained outcome can now be included in the measurement itself. In this scenario Bob is allowed to optimize his measurement in order to get the highest score. \label{fig:pre}} \end{figure} Summarizing the previous discussion, $f$ and $\alpha$ define a guessing game, and the partial information described by $\alpha$ can be delivered to Bob either before or after he performs a measurement. If Bob can access this information before, then the average score is $\Efpre(\en;( \N_t )_{t \in T})$ given in \eqref{eq:pre} and its maximal value is \begin{equation}\label{eq:sup_any} \Efpre(\en) = \max\big\{ \Efpre(\en;( \N_t )_{t \in T}) : ( \N_t )_{t \in T} \text{ is any collection of measurements} \big\}\,. \end{equation} If, instead, Bob gets the information only later, so that his usage of it is limited to post-processing the measurement outcomes, then we are back in the guessing game with posterior information, and the maximal average score is \begin{equation}\label{eq:sup_compatible} \Efpost(\en) = \max\big\{ \Efpre(\en;( \N_t )_{t \in T}) : ( \N_t )_{t \in T} \text{ is compatible} \big\} \end{equation} as discussed in Section \ref{sec:standard}. We thus see that the difference between the two games is really about (in)compatibility of measurements. The following result, first proved in \cite{CaHeTo18} for a more restricted scenario, is based on these observations.
\begin{proposition}\label{prop:greater->incomp} If $\Efpre(\en;( \N_t )_{t \in T}) > \Efpost(\en)$, then $( \N_t )_{t \in T}$ is incompatible. \end{proposition} The opposite question is then: if $( \N_t )_{t \in T}$ is a collection of incompatible measurements, can we detect their incompatibility by a guessing game? This means that we compare the average score $\Efpre(\en;( \N_t )_{t \in T})$ to the maximal average score with posterior information, $\Efpost(\en)$. The first one can even be calculated from experimental data if $\N_t$ are real devices, whereas $\Efpost(\en)$ can be determined or at least upper bounded analytically (more about that in later sections). This question has been studied from various angles in \cite{CaHeTo19, UoKrShYuGu19,SkSuCa19,BuChZh20,Kuramochi20} and important findings have been reported. Extensions to the detection of incompatibility of quantum channels have also been developed \cite{CaHeMiTo19JMP,Mori20}. One statement is the following (see Theorem 2 in \cite{CaHeTo19}). \begin{theorem}\label{prop:incomp->greater} Let $X=Y\times T$ and $\upsilon:X\to Y$, $\tau:X\to T$ be the projections of $X$ onto the respective factors. Moreover, fix the score function $f_\upsilon$ and the partial information map $\alpha_\tau$ as in \eqref{eq:partition} and \eqref{eq:non-overlapping}, respectively. Then, for any incompatible collection of measurements $( \N_t )_{t \in T}$ with the outcome set $Y$, there exists a state ensemble $\en$ with the label set $X$ such that $\mathbf{E}^{\mathrm{prior}}_{f_\upsilon,\alpha_\tau}(\en;( \N_t )_{t \in T}) > \mathbf{E}^{\mathrm{post}}_{f_\upsilon,\alpha_\tau}(\en)$. \end{theorem} \begin{proof} The proof is a straightforward adaptation of the argument provided in \cite{CaHeTo19}.\\ Let $\mathcal{V}$ be the linear space of all collections $(F_t)_{t\in T}$ of operator-valued functions $F_t : Y\to\lh$. Any collection of measurements $(\N_t)_{t\in T}$ with the outcome set $Y$ is an element of $\mathcal{V}$, and all collections which are compatible constitute a compact convex subset $\mathcal{C}\subset\mathcal{V}$. Indeed, by the discussion in Section \ref{sec:standard}, a collection $(\N_t)_{t\in T}$ is compatible if and only if each measurement $\N_t$ is obtained as the marginal of a joint measurement, and joint measurements form a compact convex subset of the linear space of all $\lh$-valued functions on $Y^T$. Now, suppose $(\N_t)_{t\in T}$ is an incompatible collection of measurements. By a standard separation argument, there exists a hyperplane in $\mathcal{V}$ which separates $(\N_t)_{t\in T}$ from $\mathcal{C}$. Equivalently, one can find $(F_t)_{t\in T}\in\mathcal{V}$ and $\kappa\in\R$ such that, by defining $$ \xi\big((\N'_t)_{t\in T}\big) = \kappa - \sum_{y,t} \tr{F_t(y)\,\N'_t(y)} $$ for all collections of measurements $(\N'_t)_{t\in T}$, the inequality $\xi\geq 0$ holds on the set $\mathcal{C}$, while $\xi\big((\N_t)_{t\in T}\big) < 0$ for the incompatible collection $(\N_t)_{t\in T}$. Fix $\mu>0$ satisfying $F_t(y)+(\mu/2)\,\id \geq 0$ for all $y,t$, and let $1/\lambda = \sum_{y,t} {\rm tr}\big[F_t(y)+\mu\,\id\big]>0$. Define $$ \en(y,t) = \lambda\,\big(F_t(y)+\mu\,\id\big) \,. $$ It is easy to check that $\en$ is a state ensemble with the label set $X$. Moreover, $$ \mathbf{E}^{\mathrm{prior}}_{f_\upsilon,\alpha_\tau}(\en;( \N'_t )_{t \in T}) = -\lambda\,\xi\big((\N'_t)_{t\in T}\big) + \kappa'\,, $$ where $\kappa' = \lambda\,(\kappa + d\,\mu\,|T|)$.
By \eqref{eq:sup_compatible}, it follows that \begin{align*} \mathbf{E}^{\mathrm{post}}_{f_\upsilon,\alpha_\tau}(\en) & = -\lambda\,\min\big\{ \xi\big((\N'_t)_{t\in T}\big) : ( \N'_t )_{t \in T} \in\mathcal{C} \big\} + \kappa' \\ & < -\lambda\,\xi\big((\N_t)_{t\in T}\big) + \kappa' = \mathbf{E}^{\mathrm{prior}}_{f_\upsilon,\alpha_\tau}(\en;( \N_t )_{t \in T}) \end{align*} as claimed in the theorem. \end{proof} We underline that, in order to detect all incompatible collections of measurements with the outcome set $Y$ in a guessing game with partial information from the set $T$, Theorem \ref{prop:incomp->greater} requires a sufficiently large label set $X$, namely, $|X| = |Y|\,|T|$. Taken together, Proposition \ref{prop:greater->incomp} and Theorem \ref{prop:incomp->greater} lead to the conclusion that a collection $( \N_t )_{t \in T}$ is incompatible if and only if there is a guessing game such that $\Efpre(\en;( \N_t )_{t \in T}) > \Efpost(\en)$ for some choice of $f$, $\alpha$ and $\en$. It appears that the full realm of guessing games with posterior information has not yet been investigated from the viewpoint of incompatibility detection. For instance, when is a given class of such guessing games enough to detect all incompatible collections of measurements? In particular, is it possible to use smaller state ensembles and still be able to detect incompatibility? Further, what is the condition for a pair consisting of a score function $f$ and a partial information map $\alpha$ to detect some incompatible collection? Proposition \ref{prop:greater->incomp} and Theorem \ref{prop:incomp->greater} also point out a fundamental difference between quantum and classical theory: while quantum theory admits guessing games in which prior information gives an advantage over posterior information, in classical theory the two scenarios are equivalent. In terms of the maximal average scores \eqref{eq:sup_any} and \eqref{eq:sup_compatible}, this amounts to saying that for any classical state ensemble $\en$, we have $\Efpre(\en) = \Efpost(\en)$ for all $f$ and $\alpha$. To give a precise explanation of this statement, we recall that the states of a (finite-dimensional) classical system are just probability distributions on a fixed finite set $H$. Denoting by $\ell(\cdot)$ the linear space of all complex functions on a given set, measurements on $H$ with the outcome set $Z$ are described by linear positive maps $\M^{\scriptscriptstyle\wedge} :\ell(H)\to\ell(Z)$ which send the probability distributions on $H$ into those on $Z$. The general structure is $$ \big[\M^{\scriptscriptstyle\wedge}(q)\big](z) = \sum_h \mu(z\mid h)\,q(h) \quad \forall q\in\ell(H) \,, $$ where $\mu(z\mid h)$ are conditional probabilities uniquely determined by the measurement $\M^{\scriptscriptstyle\wedge}$. For classical guessing games, everything goes as in the quantum case up to replacing the Born rule $\tr{\en(x)\,\M(z)}$ with the probabilities $\big[\M^{\scriptscriptstyle\wedge}(\en(x))\big](z)$ inside the expressions of the average scores. In classical theory, any collection $(\N^{\scriptscriptstyle\wedge}_t)_{t\in T}$ of measurements with the outcome set $Y$ is compatible. Indeed, if $$ \big[\N^{\scriptscriptstyle\wedge}_t(q)\big](y) = \sum_h \nu_t(y\mid h)\,q(h) \,, $$ then each $\N^{\scriptscriptstyle\wedge}_t$ is the marginal of the following measurement $\bar{\M}^{\scriptscriptstyle\wedge}_\nu$ with the product outcome set $Y^T$: $$ \big[\bar{\M}^{\scriptscriptstyle\wedge}_\nu (q)\big](\phi) = \sum_h q(h) \prod_t \nu_t(\phi(t) \mid h)\,.
$$ In particular, for all $f$, $\alpha$ and $\en$, we have $\Efpre(\en;(\N^{\scriptscriptstyle\wedge}_t)_{t \in T}) = \Efpost(\en;\bar{\M}^{\scriptscriptstyle\wedge}_\nu,\pi)$, where $\pi$ is the post-processing map defined in \eqref{eq:def_pi}. This implies that the average score $\Efpre(\en;(\N^{\scriptscriptstyle\wedge}_t)_{t \in T})$ cannot exceed the bound $\Efpost(\en)$, as claimed. When the equality $\Efpre(\en) = \Efpost(\en)$ holds, we say that \emph{the timing of partial information is irrelevant} for the state ensemble $\en$ in the guessing game with score function $f$ and partial information map $\alpha$. As we have just seen, this is always the case for guessing games based on classical systems. It is still true for quantum state ensembles which are diagonal with respect to a fixed reference basis of the system Hilbert space, as shown in the following statement. \begin{theorem} Suppose $\en$ is a state ensemble such that the operators $\en(x)$ and $\en(x')$ commute for all $x$, $x'$ belonging to the label set of $\en$. Then, the timing of partial information is irrelevant for $\en$ in all guessing games. \end{theorem} \begin{proof} We show that for all collections of measurements $(\N_t)_{t\in T}$ there exists a compatible collection $(\N'_t)_{t\in T}$ such that \begin{equation}\label{eq:equ_proof}\tag{$\ast$} \Efpre(\en;(\N_t)_{t\in T}) = \Efpre(\en;(\N'_t)_{t\in T})\,, \end{equation} and then the claim follows from \eqref{eq:sup_any} and \eqref{eq:sup_compatible}. Let $(\varphi_h)_{h\in H}$ be an orthonormal basis of $\hh$ which diagonalizes all the operators $\{\en(x) : x\in X\}$. We define two linear maps $\Phi_{\rm meas}:\lh\to\ell(H)$ and $\Phi_{\rm prep}:\ell(H)\to\lh$ as follows: $$ \big[\Phi_{\rm meas}(\varrho)\big](h) = {\rm tr}\big[\kb{\varphi_h}{\varphi_h}\,\varrho\big]\,,\qquad\qquad \Phi_{\rm prep}(q) = \sum_h q(h)\,\kb{\varphi_h}{\varphi_h}\,. $$ The state ensemble $\en$ is invariant with respect to the composed map $\Phi_{\rm prep}\circ\Phi_{\rm meas}$, that is, $\Phi_{\rm prep}\big(\Phi_{\rm meas}(\en(x))\big) = \en(x)$ for all $x$. Let $\N^{\scriptscriptstyle\wedge}_t$ be the classical measurement on $H$ with the outcome set $Y$ which is given by $$ \big[\N^{\scriptscriptstyle\wedge}_t (q)\big](y) = \tr{\Phi_{\rm prep}(q)\,\N_t(y)} \,. $$ We have \begin{align*} \tr{\en(x)\,\N_t(y)} & = \tr{\Phi_{\rm prep}\big(\Phi_{\rm meas}(\en(x))\big)\,\N_t(y)} = \big[\N^{\scriptscriptstyle\wedge}_t \big(\Phi_{\rm meas}(\en(x))\big)\big](y) \\ & = \dual{(\N^{\scriptscriptstyle\wedge}_t\circ\Phi_{\rm meas})(\en(x))}{\delta_y} = \tr{\en(x)\,(\N^{\scriptscriptstyle\wedge}_t\circ\Phi_{\rm meas})^*(\delta_y)} \,, \end{align*} where $\dual{q}{q'}=\sum_y q(y)\,q'(y)$ is the duality relation for elements $q,q'\in\ell(Y)$, $\delta_y$ is the delta function at $y$, and $(\N^{\scriptscriptstyle\wedge}_t\circ\Phi_{\rm meas})^*:\ell(Y)\to\lh$ is the dual map of $\N^{\scriptscriptstyle\wedge}_t\circ\Phi_{\rm meas}$. If we set $\N'_t(y) = (\N^{\scriptscriptstyle\wedge}_t\circ\Phi_{\rm meas})^*(\delta_y)$, then the collection of measurements $(\N'_t)_{t\in T}$ so obtained is compatible, since such is the collection of classical measurements $(\N^{\scriptscriptstyle\wedge}_t)_{t\in T}$. Moreover, \eqref{eq:equ_proof} holds for $(\N'_t)_{t\in T}$, thus completing the proof. \end{proof}
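The classical joint measurement construction $\bar{\M}^{\scriptscriptstyle\wedge}_\nu$ used above is also easy to check numerically. The following Python sketch (an illustration of ours; the sample space, the outcome sets and the randomly drawn conditional probabilities are all arbitrary choices) builds the joint measurement and verifies that its marginals reproduce the given classical measurements:
\begin{verbatim}
import numpy as np
from itertools import product

H = range(3)   # classical sample space
Y = range(2)   # outcome set
T = range(2)   # partial information set

rng = np.random.default_rng(0)
# nu[t, y, h] = conditional probability nu_t(y|h); normalized over y.
nu = rng.random((len(T), len(Y), len(H)))
nu /= nu.sum(axis=1, keepdims=True)

# Joint measurement kernel: barM(phi|h) = prod_t nu_t(phi(t)|h),
# with phi encoded as the tuple (phi(0), phi(1)).
def barM(phi, h):
    return np.prod([nu[t, phi[t], h] for t in T])

# The t-th marginal, summing over all phi with phi(t) = y, recovers nu_t(y|h).
for t, y, h in product(T, Y, H):
    marginal = sum(barM(phi, h)
                   for phi in product(Y, repeat=len(T)) if phi[t] == y)
    assert np.isclose(marginal, nu[t, y, h])
\end{verbatim}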
We conclude this section with a brief analysis of the maximal average score in the scenario with prior information. The evaluation of $\Efpre(\en)$ boils down to determining the maximal average scores of $|T|$ different guessing games of the usual type. To see this, we introduce the total probability \begin{equation*} q_{\en,\alpha}(t) = \sum_x \alpha(t\mid x)\,\tr{\en (x)} \end{equation*} and, whenever $q_{\en,\alpha}(t)$ is nonzero, we define the {\em conditional state ensemble} $\en_t$ as follows: \begin{equation}\label{eq:ent} \en_t(x) = q_{\en,\alpha}(t)^{-1}\alpha(t\mid x)\,\en(x)\,. \end{equation} With the above definition, we can rewrite \eqref{eq:pre} as \begin{equation*} \Efpre(\en;(\N_t)_{t\in T}) = \sum_t q_{\en,\alpha}(t)\,\Ef(\en_t;\N_t) \end{equation*} and by combining \eqref{eq:def_E(E)} and \eqref{eq:sup_any}, we then obtain \begin{equation} \Efpre(\en) = \sum_t q_{\en,\alpha}(t)\,\Ef(\en_t) \, . \end{equation} Thus, the maximal average score with prior information $\Efpre(\en)$ is a convex sum of maximal average scores $\Ef(\en_t)$ for different $t$. It can hence be evaluated by means of the techniques of usual minimum error state discrimination, applied to each conditional state ensemble $\en_t$. The definition of $\en_t$ is subject to the same remarks as those after the introduction of the auxiliary state ensemble in \eqref{eq:enf_0}. Note that in the present case the label sets of $\en_t$ and $\en$ coincide, and that their states are essentially the same. Indeed, $\varrho_{t,x} = \en_t(x)/\tr{\en_t(x)}$ and $\varrho_x = \en(x)/\tr{\en(x)}$ are equal for all $t$ and $x$ such that $q_{\en,\alpha}(t)\neq 0$ and $\en_t(x)\neq 0$. On the other hand, the probabilities $p_t(x) = \tr{\en_t(x)}$ and $p(x) = \tr{\en(x)}$ may be different in general. \section{Reduction to usual state discrimination games}\label{sec:reduction} In this section we present the basic steps by which the maximal average score in a guessing game with posterior information can be calculated. Our approach is related to, but more general than, a result presented in \cite{CaHeTo18}. The main point is that a standard-form guessing game with posterior information can be translated to a usual state discrimination game. We first recall from Section \ref{sec:standard} that in the standard form Bob's measurement is defined on the product outcome set $Y^T$. For any measurement $\bar{\M}$ with the product outcome set $Y^T$ and for the post-processing map $\pi$ defined in \eqref{eq:def_pi}, the average score \eqref{eq:Pgfpost} can be rewritten as \begin{equation}\label{eq:aux_Pgfpost} \begin{aligned} \Efpost(\en;\bar{\M},\pi) & = \sum_{x,y,t,\phi} f(x,y)\, \alpha(t \mid x)\,\pi_t(y\mid\phi)\, \tr{\en(x)\,\bar{\M}(\phi)} \\ & = \sum_\phi {\rm tr}\bigg[\bigg( \sum_{x,y,t} f(x,y) \, \alpha(t \mid x)\, \delta_{y,\phi(t)}\, \en(x) \bigg)\,\bar{\M}(\phi)\bigg] \\ & = \sum_\phi {\rm tr}\bigg[\bigg( \sum_{x,t} f(x,\phi(t)) \, \alpha(t \mid x)\, \en(x) \bigg)\,\bar{\M}(\phi)\bigg] \\ & = \mo{Y}^{\mo{T}-1}\Delta(\en,f)\ \Pg\left(\en_{f,\alpha}\, ;\, \bar{\M}\right) \,. \end{aligned} \end{equation} In the last expression, $\Delta(\en,f)$ is the constant defined in \eqref{eq:Delta}, while $\en_{f,\alpha}$ is a new state ensemble with the label set $Y^T$, which extends the auxiliary state ensemble \eqref{eq:enf_0} to the scenario with posterior information. Under the assumption $\Delta(\en,f)\neq 0$, it is defined as \begin{equation}\label{eq:enf-alpha} \en_{f,\alpha} (\phi) = \big(\mo{Y}^{\mo{T}-1}\Delta(\en,f)\big)^{-1} \sum_{x,t} f(x,\phi(t)) \, \alpha(t \mid x) \, \en(x) \, .
\end{equation} (In the case $\Delta(\en,f) = 0$ we can set, for instance, $\en_{f,\alpha} (\phi) = \big(d\,|Y|^{|T|}\big)^{-1}\,\id$ and then the following formulae cover also this situation.) The normalization constant before the sum in \eqref{eq:enf-alpha} is due to the fact that \begin{equation*} \begin{aligned} \sum_\phi{\rm tr}\bigg[\sum_{x,t} f(x,\phi(t)) \, \alpha(t \mid x)\, \en(x)\bigg] & = \mo{Y}^{\mo{T}-1} \sum_{x,y,t} f(x,y) \, \alpha(t \mid x) \, p(x) \\ & = \mo{Y}^{\mo{T}-1} \Delta(\en,f) \,. \end{aligned} \end{equation*} The main purpose of introducing the auxiliary state ensemble is summarized in the following statement. \begin{theorem}\label{thm:reduction} For any $\en,\alpha$ and $f$, we have \begin{equation}\label{eq:reduction} \Efpost(\en) = \mo{Y}^{\mo{T}-1}\Delta(\en,f)\ \Pg(\en_{f,\alpha}) \, . \end{equation} \end{theorem} \begin{proof} The claim follows by combining \eqref{eq:max_barM} and \eqref{eq:aux_Pgfpost}. \end{proof} We remark that the definition of the auxiliary state ensemble $\en_{f,\alpha}$ is consistent with the earlier definition of the auxiliary state ensemble \eqref{eq:enf_0}. Indeed, if the posterior information is trivial, then $|T|=1$, implying that $\en_{f,\alpha} = \en_f$ and \eqref{eq:aux_Pgfpost}, \eqref{eq:reduction} reduce to \eqref{eq:aux_Pgfpost_1}, \eqref{eq:aux_Pgfpost_2}, respectively. \begin{example}(\emph{Discrimination with deterministic posterior information})\label{ex:non-overlapping-reduction} Let $T = \{1,\ldots,m\}$, fix a function $\tau:X\to T$, set $X_t=\tau^{-1}(t)$ and define the partial information map $\alpha_\tau$ as in \eqref{eq:non-overlapping}. Moreover, let $Y=X$ and fix the standard discrimination score function $f=f_\delta$ (see Example \ref{ex:discr}). The auxiliary state ensemble \eqref{eq:enf-alpha} becomes \begin{equation}\label{eq:enfalpha_discr2} \en_{f_\delta,\alpha_\tau} (x_1,\ldots,x_m) = |X|^{1-m} \sum_{t \,:\, x_t\in X_t} \en(x_t) \,, \end{equation} where we write elements $\phi\in X^T$ as ordered $m$-tuples $(x_1,\ldots,x_m)$ with $x_t = \phi(t)$. This case was already studied in \cite{CaHeTo18}, where it was proved that $\mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_\tau}(\en) = \Delta'\ \Pg(\enf)$ for another definition of the constant $\Delta'$ and the auxiliary state ensemble $\enf$ (see equations (22) and (23) therein). The difference between the ensembles $\en_{f_\delta,\alpha_\tau}$ and $\enf$ is in the respective label sets, which are the $m$-fold Cartesian power $X^m$ for the former ensemble and the product $X_1\times\ldots\times X_m$ for the latter one. Actually, up to the constant factor $\Delta' / |X|^{m-1}$, the state ensemble $\enf$ coincides with the restriction of $\en_{f_\delta,\alpha_\tau}$ to the set $X_1\times\ldots\times X_m$. Therefore, we see that in this particular case there is a certain amount of redundancy in employing the auxiliary state ensemble $\en_{f,\alpha}$ to evaluate $\Efpost(\en)$. \end{example} \begin{example}(\emph{Excluding one wrong option}) Suppose that $T=X$ and $\alpha=\alpha_{\rm ex}$ is the partial information map \eqref{eq:ruling-out}.
The auxiliary state ensemble \eqref{eq:enf-alpha} becomes \begin{equation}\label{eq:ruling-out_enf} \begin{aligned} \en_{f,\alpha_{\rm ex}}(\phi) & = C\,\sum_t \sum_x f(x,\phi(t))\,(1-\delta_{x,t})\,\en(x) \\ & = C\,\sum_{y\in\phi(X)} \sum_{\substack{t\text{ s.t.}\\ \phi(t) = y}}\, \sum_x f(x,y)\,(1-\delta_{x,t})\, \en(x) \\ & = C\,\sum_{x,y} f(x,y)\,\mo{\phi^{-1}(y)\setminus\{x\}}\,\en(x) \,, \end{aligned} \end{equation} where $1/C=(\mo{X}-1)\,|Y|^{|T|-1}\Delta(\en,f)$. We observe that the dependence on the outcome $\phi$ is only in the cardinalities $\mo{\phi^{-1}(y)\setminus\{x\}}$ appearing in the last line of \eqref{eq:ruling-out_enf}. These are integer numbers between $0$ and $|X|-1$ such that $\sum_y\mo{\phi^{-1}(y)\setminus\{x\}}\leq |X|$ for each fixed $x$. \end{example} \section{Symmetry in guessing games}\label{sec:symmetry} As we have seen in Theorem \ref{thm:reduction}, evaluating the maximal average score $\Efpost(\en)$ boils down to a usual state discrimination problem for the auxiliary state ensemble $\en_{f,\alpha}$ defined in \eqref{eq:enf-alpha}. However, finding the maximal guessing probability $\Pg(\en_{f,\alpha})$ may still be a difficult task since the number of states involved in the calculation scales as $|Y|^{|T|}$. Even assuming that the states of $\en$ are pure does not provide any actual simplification, as typically those of $\en_{f,\alpha}$ are mixed. A natural attempt to reduce the complexity of the problem is by assuming that the state ensemble $\en$ possesses some symmetry, and then exploiting group theory in order to obtain the desired results. This indeed works for standard state discrimination \cite{Holevo73,ElMeVe04}, and our objective is now to provide an extension to the present more general setting. For the rest of the section we fix a finite group $G$ acting on the sets $X$, $Y$ and $T$, and we assume that the quantities $f$ and $\alpha$ are $G$-invariant, i.e., invariant under the action of $G$. More precisely, denoting by $g$ both an element of the group and its (left) action on the three sets above, we require that \begin{enumerate} \item[(S1)] $f(gx,gy) = f(x,y)$ for all $x\in X$, $y\in Y$ and $g\in G$,\label{it:cov1} \item[(S2)] $\alpha(gt\mid gx) = \alpha(t\mid x)$ for all $x\in X$, $t\in T$ and $g\in G$.\label{it:cov2} \end{enumerate} \begin{example}(\emph{Invariance in partition guessing games and deterministic posterior information})\label{ex:invar} Let $\upsilon:X\to Y$ be a surjective function and $(X_y)_{y\in Y}$ the partition of $X$ determined by $\upsilon$ as described in Example \ref{ex:partition}. Moreover, suppose the group $G$ acts on $X$ in such a way that for all $g$ and $y$ there is $y'$ such that $gX_y = \{gx : x\in X_y\} = X_{y'}$. Then, we can define an action of $G$ on $Y$ by setting $gy = y'$. This action satisfies $\upsilon(gx)=g\upsilon(x)$ for all $x$ and $g$. Therefore, the score functions $f_\upsilon$ and $f_{\neg \upsilon}$ associated with $\upsilon$ are $G$-invariant. In the same way, suppose $\tau:X\to T$ determines a partition $(X_t)_{t\in T}$ of $X$ which is preserved by the action of $G$. Then, the partial information map $\alpha_\tau$ of Example \ref{ex:non-overlapping} is invariant with respect to the action of $G$ on $T$ defined by $X_{gt} = gX_t$.
\end{example} In order to describe symmetry on the operator side, we fix a projective unitary representation $U$ of $G$ on $\hh$ and we suppose that the state ensemble $\en$ is $G$-covariant in the following sense: \begin{enumerate} \item[(S3)] $U(g)\,\en(x)\,U(g)^* = \en(gx)$ for all $x\in X$ and $g\in G$.\label{it:cov3} \end{enumerate} With these notions we can state the following straightforward result. \begin{proposition}\label{prop:covariance} If $f$, $\alpha$ and $\en$ satisfy conditions (S1)--(S3) above, then the auxiliary state ensemble $\en_{f,\alpha}$ defined in \eqref{eq:enf-alpha} satisfies \begin{equation} U(g)\,\en_{f,\alpha}(\phi)\,U(g)^* = \en_{f,\alpha}(g.\phi) \end{equation} for all $\phi\in Y^T$ and $g\in G$, where the action of $G$ on $Y^T$ is defined as \begin{equation} (g.\phi)(t) = g\phi(g^{-1}t) \end{equation} for all $t\in T$. \end{proposition} \noindent In other words, $G$-invariance of $f$ and $\alpha$ together with $G$-covariance of $\en$ imply $G$-covariance of $\en_{f,\alpha}$ if we regard the set $Y^T$ as a $G$-space in the natural way. \begin{theorem}\label{thm:covariance} Suppose $f$, $\alpha$ and $\en$ satisfy the symmetry conditions (S1)--(S3). Moreover, assume that the representation $U$ is irreducible. The following facts are true. \begin{enumerate}[(a)] \item Denote by $\Lambda(\en_{f,\alpha})$ the largest eigenvalue of all the operators $\en_{f,\alpha}(\phi)$, $\phi\in Y^T$. Then, \begin{equation} \Efpost(\en) = d\,|Y|^{|T|-1} \Delta(\en,f)\,\Lambda(\en_{f,\alpha}) \,. \end{equation} \item Fix $\phi_0\in Y^T$ such that the operator $\en_{f,\alpha}(\phi_0)$ has $\Lambda(\en_{f,\alpha})$ among its eigenvalues, and denote by $\Pi_0$ the orthogonal projection onto the eigenspace of $\en_{f,\alpha}(\phi_0)$ associated with $\Lambda(\en_{f,\alpha})$. The equality $\Efpost(\en;\bar{\M},\pi) = \Efpost(\en)$ is attained by the measurement\label{it:2_thm_covariance} \begin{equation}\label{eq:M_opt_cov} \bar{\M}(\phi) = \begin{cases} \displaystyle\frac{d}{\mo{G.\phi_0}\rank{\Pi_0}}\,U(g)\,\Pi_0\,U(g)^* & \text{ if $\phi = g.\phi_0$ for some $g\in G$}\\[0.3cm] 0 & \text{ otherwise} \end{cases}\,. \end{equation} \end{enumerate} \end{theorem} \begin{proof} By Proposition \ref{prop:covariance}, for all $g\in G$ we have $$ \en_{f,\alpha}(g.\phi_0)\,U(g)\,\Pi_0\,U(g)^* = \Lambda(\en_{f,\alpha})\,U(g)\,\Pi_0\,U(g)^*\,. $$ In particular, $\Pi_0$ commutes with $U(g)$ for all $g$ belonging to the stabilizer subgroup $G_0 = \{g\in G : g.\phi_0 = \phi_0\}$, and therefore the operator $\bar{\M}(\phi)$ given by \eqref{eq:M_opt_cov} is well defined. It also follows that $\en_{f,\alpha}(\phi)\,\bar{\M}(\phi) = \Lambda(\en_{f,\alpha})\,\bar{\M}(\phi)$ for all $\phi\in Y^T$. In order to apply Proposition \ref{prop:Pbound} to the state ensemble $\en_{f,\alpha}$ and the measurement $\bar{\M}$, we still need to check that $\sum_\phi\bar{\M}(\phi) = \id$. Indeed, since $U(g)\,\bar{\M}(\phi)\,U(g)^* = \bar{\M}(g.\phi)$ and $$ U(g)\sum_\phi\bar{\M}(\phi)\,U(g)^* = \sum_\phi\bar{\M}(g.\phi) = \sum_\phi\bar{\M}(\phi) \,, $$ Schur's lemma implies that $\sum_\phi\bar{\M}(\phi) = \mu\,\id$ for some $\mu\in\real$, where $\mu=1$ because \begin{align*} d\,\mu & = \tr{\mu\,\id} = \sum_\phi\tr{\bar{\M}(\phi)} = \sum_{\phi\in G.\phi_0} \frac{d}{|G.\phi_0|} = d \,. \end{align*} By Proposition \ref{prop:Pbound}, it then follows that $\Pg(\en_{f,\alpha}) = \Pg(\en_{f,\alpha};\bar{\M}) = d\,\Lambda(\en_{f,\alpha})$. 
Combining this fact with \eqref{eq:aux_Pgfpost} and \eqref{eq:reduction} yields the statement of the theorem. \end{proof} For all $\phi\in Y^T$, the set $G.\phi = \{g.\phi : g\in G\}$ is the orbit of $G$ passing through $\phi$. Item \eqref{it:2_thm_covariance} of the previous theorem means that we can always find an optimal measurement that is concentrated on such an orbit. As already remarked in the proof, the measurement \eqref{eq:M_opt_cov} satisfies the covariance condition $$ \bar{\M}(g.\phi) = U(g)\,\bar{\M}(\phi)\,U(g)^* $$ for all $\phi\in Y^T$ and $g\in G$. This fact combined with the equality $$ \pi_{gt}(gy\mid g.\phi) = \pi_t(y\mid\phi) $$ implies that the marginals $(\N_t)_{t\in T}$ of $\bar{\M}$ are such that \begin{equation}\label{eq:cov_Nt} \N_{gt}(gy) = U(g)\,\N_t(y)\,U(g)^* \end{equation} for all $g\in G$, $y\in Y$ and $t\in T$. Therefore, different marginals are related by a permutation of the outcome set $Y$ and a unitary conjugation by $U$. \section{Example: two pairs of orthogonal qubit states}\label{sec:qubit} In this section we demonstrate the results of the previous sections by fixing four noncommuting qubit states as our state ensemble and evaluating $\Efpre(\en)$ and $\Efpost(\en)$ for several choices of $f$ and $\alpha$. We will see two cases in which $\Efpre(\en) > \Efpost(\en)$ (Sections \ref{sec:qubit_1} and \ref{sec:qubit_2}) and one in which the timing of partial information is irrelevant (Section \ref{sec:qubit_3}). \subsection{Notation} We recall that the Hilbert space of a qubit system is $\hh=\mathbb C^2$ and that any qubit state $\varrho$ is represented as a vector in the Bloch ball $\{\vr\in\R^3 : \no{\vr}\leq 1\}$ by means of the relation $$ \varrho = \tfrac{1}{2}\left(\id+\vr\cdot\vsigma\right)\,. $$ In this formula we have denoted $\vr\cdot\vsigma = r_1\sigma_1 + r_2\sigma_2 + r_3\sigma_3$ for the vector $\vr=r_1\ve_1+r_2\ve_2+r_3\ve_3$, where $\sigma_1$, $\sigma_2$ and $\sigma_3$ are the three Pauli matrices and $\ve_1$, $\ve_2$ and $\ve_3$ the unit vectors along the coordinate axes. More generally, any selfadjoint operator $M\in\lc$ can be written as $$ M = \mu\,\id + \vm\cdot\vsigma $$ for some $\mu\in\R$ and $\vm\in\R^3$, uniquely determined by $M$. If $\vm$ is nonzero, the eigenvalues $\lambda_+$, $\lambda_-$ of $M$ and the corresponding eigenprojections $\Pi_+$, $\Pi_-$ are \begin{align*} \lambda_\pm = \mu\pm\no{\vm}\,,\qquad\qquad \Pi_\pm = \tfrac{1}{2}\left(\id\pm\vmh\cdot\vsigma \right) \, , \end{align*} where $\vmh = \vm/\no{\vm} = \vm/(\lambda_+ - \mu)$ is the unit vector along the direction of $\vm$. For $\theta\in (0,\pi/2]$, we fix $$ \va = \cos\left(\half\theta\right) \ve_1 + \sin\left(\half\theta\right) \ve_2\,,\qquad\qquad \vb = \cos\left(\half\theta\right) \ve_1 - \sin\left(\half\theta\right) \ve_2 $$ and define $$ X = \{+\va,\,-\va,\,+\vb,\,-\vb\} $$ as the label set of Alice. The state ensemble $\en$ is chosen to be \begin{equation}\label{eq:4states} \en(\vx) = \tfrac{1}{8}\left(\id+\vx\cdot\vsigma\right) \end{equation} for all $\vx\in X$. It hence corresponds to two orthogonal pairs of pure states, $\varrho_{+\va}$, $\varrho_{-\va}$ and $\varrho_{+\vb}$, $\varrho_{-\vb}$, all appearing with the same probability $1/4$ in the state ensemble $\en$ (see Fig. \ref{fig:Bloch} for an illustration in the Bloch ball). \begin{figure}[h!] \centering\includegraphics{cerchio.pdf} \caption{The states of the ensemble \eqref{eq:4states} represented in a section of the Bloch ball.
Each state is chosen with uniform probability and is directed along one of the labels $+\va$, $-\va$, $+\vb$ and $-\vb$.\label{fig:Bloch}} \end{figure} The elements of $X$ are permuted by the dihedral group $D_2\subset SO(3)$, which consists of the identity element $I$ together with the three $180^\circ$ rotations $R_1$, $R_2$ and $R_3$ about the respective coordinate axes. The group $D_2$ acts on $\mathbb C^2$ by means of the projective unitary representation $$ U(I) = \id \, , \qquad\qquad U(R_i) = \sigma_i \,, $$ and the state ensemble $\en$ is manifestly $D_2$-covariant. Since the representation $U$ is irreducible, we can use Theorem \ref{thm:covariance} to evaluate $\Efpost(\en)$ provided that $f$ and $\alpha$ are $D_2$-invariant. Below, we do it for the standard discrimination score function of Example \ref{ex:discr}, for a partition guessing game as in Example \ref{ex:partition} and for the posterior information of the kind described in Examples \ref{ex:non-overlapping} and \ref{ex:ruling-out}. \subsection{Discrimination and antidiscrimination without partial information}\label{sec:qubit_0} For comparison, we recall the maximal guessing probabilities in the usual discrimination and antidiscrimination guessing games when there is no partial information available. By using Proposition \ref{prop:Pbound}, it is straightforward to show that $\Pg(\en)=1/2$ irrespective of the angle $\theta$. A different proof of this fact can be found e.g.~in \cite{Bae13}. The maximal guessing probability in the antidiscrimination guessing game can be evaluated by first forming the auxiliary state ensemble given in \eqref{eq:enf_0}, and it equals $1$. An alternative way to see that this guessing probability is $1$ is by observing that $\sum_{\vx\in X} \varrho_\vx = 2\,\id$. This condition implies that the four states can be perfectly antidiscriminated with any prior probability distribution $p$ \cite{HeKe18}. \subsection{Discrimination with deterministic posterior information}\label{sec:qubit_1} Let us consider discrimination of the state ensemble \eqref{eq:4states} with deterministic posterior information, hence we choose $X=Y$ and $f=f_\delta$ as described in Example \ref{ex:discr}. The set $X$ is partitioned into two disjoint subsets $X_a$ and $X_b$, where \begin{equation}\label{eq:qubit_partition} X_a = \{+\va,\,-\va\}\,,\qquad\qquad X_b = \{+\vb,\,-\vb\}\,. \end{equation} The partial information consists of announcing the subset containing the input label, thus $T=\{a,b\}$ and the partial information map is $\alpha_\tau$ with $\tau(\pm\va) = a$ and $\tau(\pm\vb) = b$ as described in Example \ref{ex:non-overlapping}. We begin by evaluating the maximal average score $\mathbf{E}^{\mathrm{prior}}_{f_\delta,\alpha_\tau}(\en)$. To do so, it is enough to observe that the conditional state ensemble \eqref{eq:ent} is $$ \en_t(\vx) = \begin{cases} \frac{1}{4} \left(\id+\vx\cdot\vsigma\right) & \text{ if $\vx\in X_t$}\\ 0 & \text{ otherwise} \end{cases} $$ and that $\en_t$ is perfectly discriminated by means of the sharp measurement \begin{equation}\label{eq:4states-ex1-opti-meas} \N_t(\vx) = \begin{cases} \frac{1}{2}\left(\id+\vx\cdot\vsigma\right) & \text{ if $\vx\in X_t$}\\ 0 & \text{ otherwise} \end{cases} \,. \end{equation} It follows that \begin{equation}\label{eq:4states-ex1-pre} \mathbf{E}^{\mathrm{prior}}_{f_\delta,\alpha_\tau}(\en) = 1 \,.
\end{equation} In order to calculate the maximal average score $\mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_\tau}(\en)$ in the posterior information guessing game, we use the symmetry of the problem. The score function $f_\delta$ and the partial information map $\alpha_\tau$ are $D_2$-invariant by the discussion in Example \ref{ex:invar}. Thus, by Theorem \ref{thm:covariance}, evaluating the maximal average score $\mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_\tau}(\en)$ amounts to finding the maximal eigenvalue of the operators $\en_{f_\delta,\alpha_\tau}(\phi)$, $\phi\in Y^T$, defined by \eqref{eq:enfalpha_discr2}, which in the current case become $$ \en_{f_\delta,\alpha_\tau}(\vx_1,\vx_2) = \frac{C}{8}\cdot\begin{cases} \left(\id + \vx_1\cdot\vsigma\right) & \text{ if $\vx_1,\vx_2\in X_a$}\\ \left(\id + \vx_2\cdot\vsigma\right) & \text{ if $\vx_1,\vx_2\in X_b$}\\ \left[2\,\id + \left(\vx_1 + \vx_2\right)\cdot\vsigma\right] & \text{ if $\vx_1\in X_a$ and $\vx_2\in X_b$} \\ 0 & \text{ if $\vx_1\in X_b$ and $\vx_2\in X_a$} \end{cases} $$ with $1/C=|Y|^{|T|-1}\Delta(\en,f_\delta)=4$. By means of straightforward calculations, we get $$ \Lambda (\en_{f_\delta,\alpha_\tau})= \frac{C}{8}\,\big(2+\big\|\va+\vb\big\|\big) = \frac{C}{4} \left(1+\sqrt{\frac{1+\cos\theta}{2}}\right) \,, $$ and then Theorem \ref{thm:covariance} yields \begin{equation}\label{eq:4states-ex1-post} \mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_\tau}(\en) = \frac{1}{2} \left(1+\sqrt{\frac{1+\cos\theta}{2}}\right)\,. \end{equation} The average scores \eqref{eq:4states-ex1-pre} and \eqref{eq:4states-ex1-post} were already obtained in \cite{CaHeTo18}, where a detailed description of the optimal measurements was also provided. We remark that $$ \mathbf{E}^{\mathrm{prior}}_{f_\delta,\alpha_\tau}(\en) > \mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_\tau}(\en) > \Eg_{f_\delta}(\en) $$ for all $\theta\in (0,\pi/2]$, and of these three quantities only $\mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_\tau}(\en)$ varies with $\theta$. \subsection{Discrimination by excluding one wrong option}\label{sec:qubit_2} Let us still consider the discrimination game, but now with a different kind of partial information. Namely, Alice excludes one wrong option. We hence keep $X=Y$ and $f=f_\delta$, but now $X=T$ and the partial information map is $\alpha_{\rm ex}$ described in Example \ref{ex:ruling-out}, that is, $\alpha_{\rm ex}(\vt\mid\vx) = \left(1-\delta_{\vx,\vt}\right)/3$. In the present case, the conditional state ensemble \eqref{eq:ent} is \begin{equation}\label{eq:4states-ex2-ent} \en_\vt(\vx) = \tfrac{1}{6}\left(1-\delta_{\vx,\vt}\right)\left(\id+\vx\cdot\vsigma\right) \,. \end{equation} Using Proposition \ref{prop:Pbound}, we conclude that $\Eg_{f_\delta}(\en_\vt) = \Pg(\en_\vt) = 2/3$, the unique optimal measurement now being given by \eqref{eq:4states-ex1-opti-meas} with $t\neq\tau(\vt)$, i.e., the sharp measurement associated with the partition block that does not contain the excluded label $\vt$. The maximal average score with prior information is then \begin{equation} \mathbf{E}^{\mathrm{prior}}_{f_\delta,\alpha_{\rm ex}}(\en) = \frac{2}{3} \,. \end{equation} Since the sharp optimal measurements \eqref{eq:4states-ex1-opti-meas} do not commute for $t\neq t'$, we expect that $\mathbf{E}^{\mathrm{prior}}_{f_\delta,\alpha_{\rm ex}}(\en) > \mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_{\rm ex}}(\en)$. In order to evaluate $\mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_{\rm ex}}(\en)$, we observe that the partial information map $\alpha_{\rm ex}$ is $D_2$-invariant, hence Theorem \ref{thm:covariance} applies also in this case.
The auxiliary state ensemble \eqref{eq:ruling-out_enf} becomes \begin{align*} \en_{f_\delta,\alpha_{\rm ex}}(\phi) & = \frac{C}{24} \sum_{\vx} \mo{\phi^{-1}(\vx)\setminus\{\vx\}} \left(\id+\vx\cdot\vsigma\right) \\ & = \frac{C}{24}\, \Big\{ \big(\alpha^\phi_+ + \alpha^\phi_- + \beta^\phi_+ + \beta^\phi_-\big)\,\id + \Big[\big(\alpha^\phi_+ - \alpha^\phi_-\big)\,\va + \big(\beta^\phi_+ - \beta^\phi_-\big)\,\vb\Big]\cdot\vsigma\Big\}\,, \end{align*} where $1/C=|Y|^{|T|-1}\Delta(\en,f_\delta)=64$ and we have denoted $$ \alpha^\phi_\pm = \mo{\phi^{-1}(\pm\va)\setminus\{\pm\va\}}\,,\qquad\qquad \beta^\phi_\pm = \big|\phi^{-1}(\pm\vb)\setminus\{\pm\vb\}\big|\,. $$ The largest eigenvalue of $\en_{f_\delta,\alpha_{\rm ex}}(\phi)$ is \begin{equation*} \begin{aligned} \lambda(\phi) & = \frac{C}{24}\, \Big\{\alpha^\phi_+ + \alpha^\phi_- + \beta^\phi_+ + \beta^\phi_- + \Big\|\big(\alpha^\phi_+ - \alpha^\phi_-\big)\,\va + \big(\beta^\phi_+ - \beta^\phi_-\big)\,\vb\Big\|\Big\} \\ & = \frac{C}{24}\,\gamma\big(\alpha^\phi_+,\,\alpha^\phi_-,\,\beta^\phi_+,\,\beta^\phi_-\big)\,, \end{aligned} \end{equation*} where $\gamma$ is the function \begin{equation}\label{eq:gamma} \begin{aligned} & \gamma\big(\alpha_+,\,\alpha_-,\,\beta_+,\,\beta_-\big) = \alpha_+ + \alpha_- + \beta_+ + \beta_- \\ & \qquad\qquad + \big[\big(\alpha_+ - \alpha_-\big)^2 + \big(\beta_+ - \beta_-\big)^2 + 2\,\big(\alpha_+ - \alpha_-\big)\big(\beta_+ - \beta_-\big)\cos\theta\big]^{\frac{1}{2}} \,. \end{aligned} \end{equation} The corresponding eigenprojection is $$ \Pi(\phi) = \tfrac{1}{2}\left(\id+\vmh(\phi)\cdot\vsigma\right) $$ with $$ \vmh(\phi) = \frac{\big(\alpha^\phi_+ - \alpha^\phi_-\big)\,\va + \big(\beta^\phi_+ - \beta^\phi_-\big)\,\vb}{\gamma\big(\alpha^\phi_+,\,\alpha^\phi_-,\,\beta^\phi_+,\,\beta^\phi_-\big) - \big(\alpha^\phi_+ + \alpha^\phi_- + \beta^\phi_+ + \beta^\phi_-\big)}\,. $$ For all $\phi\in X^X$, the numbers $\alpha^\phi_\pm$ and $\beta^\phi_\pm$ satisfy the constraints \begin{equation}\label{eq:constraint1} \alpha^\phi_\pm,\beta^\phi_\pm\in\naturale\,,\qquad\quad \alpha^\phi_\pm,\beta^\phi_\pm \leq |X|-1\,,\qquad\quad \alpha^\phi_+ + \alpha^\phi_- + \beta^\phi_+ + \beta^\phi_- \leq |X|\,. \end{equation} The maximum of \eqref{eq:gamma} with $\alpha_\pm$, $\beta_\pm$ subject to the constraints \eqref{eq:constraint1} is equal to $4+\sqrt{10+6\cos\theta}$ (see Appendix \ref{app:max} for details) and it is attained at the feasible points $$ f_0 = (1,0,3,0),\,\qquad f_1 = (3,0,1,0),\,\qquad f_2 = (0,3,0,1),\,\qquad f_3 = (0,1,0,3) \,. $$ If $\phi_0\in X^X$ is given by $$ \phi_0(+\va) = \phi_0(-\va) = \phi_0(-\vb) = +\vb\,,\qquad\qquad \phi_0(+\vb) = +\va $$ and we further define $$ \phi_i = R_i.\phi_0 \quad\text{for } i=1,2,3\,, $$ then straightforward calculations give $$ f_i = \big(\alpha^{\phi_i}_+,\,\alpha^{\phi_i}_-,\,\beta^{\phi_i}_+,\,\beta^{\phi_i}_-\big) \quad\text{for all } i=0,1,2,3\,. $$ Using the notations of Theorem \ref{thm:covariance}, it follows that $$ \Lambda(\en_{f_\delta,\alpha_{\rm ex}}) = \frac{C}{24}\,\left(4+\sqrt{10+6\cos\theta}\right) $$ and the operator $\en_{f_\delta,\alpha_{\rm ex}}(\phi_0)$ has $\Lambda(\en_{f_\delta,\alpha_{\rm ex}})$ among its eigenvalues. Therefore, $$ \mathbf{E}^{\mathrm{post}}_{f_\delta,\alpha_{\rm ex}}(\en) = \frac{1}{12} \left(4+\sqrt{10+6\cos\theta}\right)\,.
$$ The optimal measurement \eqref{eq:M_opt_cov} is \begin{equation*} \bar{\M}(\phi) = \begin{cases} \tfrac{1}{4}\left(\id+\vmh(\phi)\cdot\vsigma\right) & \text{ if $\phi\in\{\phi_0,\phi_1,\phi_2,\phi_3\}$} \\ 0 & \text{ otherwise} \end{cases} \end{equation*} with $$ \vmh(\phi_0) = - \vmh(\phi_3) = \frac{\va+3\vb}{\sqrt{10+6\cos\theta}}\,, \qquad\qquad \vmh(\phi_1) = - \vmh(\phi_2) = \frac{3\va+\vb}{\sqrt{10+6\cos\theta}}\,. $$ Its marginal $\N_{+\va}$ is \begin{align*} \N_{+\va}(+\va) & = 0\,, & \quad \N_{+\va}(+\vb) & = \tfrac{1}{4}\left[2\,\id+\left(\vmh(\phi_0)+\vmh(\phi_1)\right)\cdot\vsigma\right],\\ \N_{+\va}(-\va) & = \tfrac{1}{4}\left(\id-\vmh(\phi_1)\cdot\vsigma\right), & \quad \N_{+\va}(-\vb) & = \tfrac{1}{4}\left(\id-\vmh(\phi_0)\cdot\vsigma\right), \end{align*} and the other marginals $\N_{-\va}$, $\N_{+\vb}$ and $\N_{-\vb}$ are obtained from $\N_{+\va}$ by means of the relation \eqref{eq:cov_Nt}. \subsection{Partition guessing game by excluding one wrong option}\label{sec:qubit_3} Finally, we consider a partition guessing game of the general kind described in Example \ref{ex:partition}. We choose $Y = \{a,b\}$ and let $\upsilon:X\to Y$ be the function $\upsilon(\pm\va)=a$, $\upsilon(\pm\vb)=b$. With this choice of $Y$ and $\upsilon$, we consider the score function $f_\upsilon$ defined in \eqref{eq:partition}. Thus, the task is to detect the correct direction of the label $\vx$, i.e., to guess whether $\vx\in X_a$ or $\vx\in X_b$ for the two sets $X_a$, $X_b$ defined in \eqref{eq:qubit_partition}. We still have $X=T$ and the partial information map is $\alpha_{\rm ex}(\vt\mid \vx) = (1-\delta_{\vx,\vt})/3$ as in the previous example. We first recall from Example \ref{ex:mixtures} that without partial information the task is equivalent to discriminating two totally mixed states. Indeed, in the current case, $\en_{f_\upsilon}(y) = (1/4)\,\id$ for both $y=a,b$, and therefore the best discrimination strategy is random guessing, i.e., $$ \Eg_{f_\upsilon} (\en) = \Pg(\en_{f_\upsilon}) = \frac{1}{2} \, . $$ In other words, we can reach the maximal average score without making any measurement. To calculate the optimal average score in the cases with partial information, we first observe that the conditional state ensemble $\en_\vt$ is the same as \eqref{eq:4states-ex2-ent}, but now the score function has changed. We evaluate $\Eg_{f_\upsilon}(\en_\vt)$ by using \eqref{eq:aux_Pgfpost_1}-\eqref{eq:aux_Pgfpost_2}, where in the present case $\Delta(\en_\vt,f_\upsilon)=1$ and the auxiliary state ensemble \eqref{eq:enf_0} is $$ (\en_\vt)_{f_\upsilon} (y) = \frac{1}{6}\cdot\begin{cases} \left(\id -\vt\cdot\vsigma\right) & \text{ if $y=\upsilon(\vt)$}\\ 2\,\id & \text{ otherwise} \end{cases}\,. $$ We obtain $$ \Eg_{f_\upsilon}(\en_\vt) = \Delta(\en_\vt,f_\upsilon)\ \Pg\big((\en_\vt)_{f_\upsilon}\big) = \frac{2}{3} \,, $$ where we used Proposition \ref{prop:Pbound} to evaluate $\Pg\big((\en_\vt)_{f_\upsilon}\big) = 2/3$. A measurement $\N_\vt$ maximizing $\Eg_{f_\upsilon}(\en_\vt;\N_\vt) = \Delta(\en_\vt,f_\upsilon)\ \Pg\big((\en_\vt)_{f_\upsilon};\N_\vt\big)$ is the trivial measurement $$ \N_\vt(y) = \left(1-\delta_{y,\upsilon(\vt)}\right)\id\,. $$ Clearly, the collection of measurements $(\N_\vt)_{\vt\in T}$ is compatible. By \eqref{eq:sup_any} and \eqref{eq:sup_compatible}, it follows that $$ \mathbf{E}^{\mathrm{prior}}_{f_\upsilon,\alpha_{\rm ex}} (\en) = \mathbf{E}^{\mathrm{post}}_{f_\upsilon,\alpha_{\rm ex}} (\en) = \frac{2}{3} $$ independently of the angle $\theta$. As in the earlier consideration of the same task without partial information, also in this case the maximal average score can be reached without performing any measurement.
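Closing this section, we note that the closed-form scores above are easy to cross-check numerically. For instance, the following Python sketch (a verification of ours; the value of $\theta$ is an arbitrary choice) builds the auxiliary state ensemble \eqref{eq:enfalpha_discr2} for the deterministic posterior information of Section \ref{sec:qubit_1} and compares $d\,|Y|^{|T|-1}\Delta(\en,f_\delta)\,\Lambda(\en_{f_\delta,\alpha_\tau})$, as in Theorem \ref{thm:covariance}(a), with the analytic expression \eqref{eq:4states-ex1-post}:
\begin{verbatim}
import numpy as np

# Pauli matrices and the four Bloch vectors of the ensemble (eq:4states).
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
theta = 0.9                      # any angle in (0, pi/2]
a = np.array([np.cos(theta/2), np.sin(theta/2), 0.0])
b = np.array([np.cos(theta/2), -np.sin(theta/2), 0.0])
X = [a, -a, b, -b]
en = [(np.eye(2) + sum(x[i] * sig[i] for i in range(3))) / 8 for x in X]
block = [0, 0, 1, 1]             # tau: indices 0,1 in X_a and 2,3 in X_b

# Largest eigenvalue of the auxiliary ensemble (eq:enfalpha_discr2),
# whose states are indexed by pairs (x1, x2); |X|^(1-m) = 1/4 here.
Lam = 0.0
for i1 in range(4):              # x1: guess used when t = a is announced
    for i2 in range(4):          # x2: guess used when t = b is announced
        op = np.zeros((2, 2), dtype=complex)
        if block[i1] == 0:
            op += en[i1]
        if block[i2] == 1:
            op += en[i2]
        Lam = max(Lam, np.linalg.eigvalsh(op / 4).max())

# Theorem (thm:covariance)(a): E^post = d |Y|^(|T|-1) Delta Lambda.
post = 2 * 4 * 1 * Lam
exact = 0.5 * (1 + np.sqrt((1 + np.cos(theta)) / 2))
assert np.isclose(post, exact)   # matches eq. (eq:4states-ex1-post)
\end{verbatim}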
As in the earlier treatment of the same task without partial information, in this case too the maximal average score can be reached without performing any measurement. \newpage
2,869,038,154,171
arxiv
\section{Introduction} Topological surface waves have several important features; namely, they are unidirectional, and they operate in the bulk bandgap of a topologically nontrivial material \cite{18-ozawa2018topological,19-Soljacic2014,23-hasan2010colloquium,25-rechtsman2013photonic,26-chen2014experimental,7-PTI-Notes,17-wang2009observation,casimir}. Upon encountering a discontinuity, they are immune to back-scattering, and because they operate in the bulk bandgap, they do not radiate into the bulk. As such, they are forced to pass over the discontinuity, and the lack of scattering or diffraction makes them interesting from a wave-propagation standpoint and promising for device applications \cite{13-Ferrite,20-wang2008reflection,21-yu2008one,22-yang2016one}. The topological SPPs can be characterized by an integer invariant (e.g., the Chern number), which cannot change except when the underlying momentum-space topology of the bulk bands is changed \cite{17-wang2009observation,14-Mario-chern,15-Haldane-chern,16-raghu-chern,27-gangaraj2017berry,28-skirlo2014multimode}. Thus, another view of the reflection- and diffraction-free aspect of topological SPPs is that they are governed by the bulk properties, so that they are insensitive to surface features and can only change qualitatively when the bulk topology changes. A change in topology arises when a bandgap is closed or opened, which occurs for the biased plasma considered here when the bias field is reversed in direction. A static magnetic bias field applied to a plasma breaks time-reversal symmetry and leads to topologically non-trivial properties, bringing about the existence of topologically-protected unidirectional photonic surface states \cite{22-yang2016one, 27-gangaraj2017berry, 29-khanikaev2013photonic}. In this paper, we examine a newly-discovered regime of gyrotropic SPPs \cite{6-PRL}-\cite{9-Trully}, wherein the SPPs are, similarly to topological SPPs, unidirectional, operate in a bulk bandgap (and so are diffraction-free), and only change their properties qualitatively when the topology of momentum space is changed. Moreover, they form narrow beam-like patterns, similar to the case of hyperbolic media. Unlike isotropic media, which are described by a single bulk dispersion diagram identical in every direction, anisotropic media require the possibility of a bulk bandgap to be examined separately for each propagation direction. In this work, we have identified a bulk bandgap common to all propagation directions, within which the SPPs exist. However, it seems difficult, or perhaps impossible, to assign a topological integer invariant to these SPPs, as they propagate in different directions at different frequencies within the gap; strictly speaking, these SPPs are therefore not topological. Nevertheless, we show that they still exhibit unidirectional propagation and inherent robustness to discontinuities. In the following, the common bulk bandgap is discussed, the behavior of the SPPs is determined, and a Green function is obtained for a finite-thickness gyrotropic layer. Additionally, we investigate the back-scattering-immune properties of a surface wave propagating at the magnetized plasma-air interface, and also on the surface of a magnetized plasma slab in the presence of a defect, in the lower bandgap frequency regime. \section{Bulk-Mode and SPP Dispersion Analysis} \label{formulation} The geometry of interest is depicted in Fig.
\ref{geom}, showing a finite-thickness gyrotropic slab immersed in a simple medium characterized by $\varepsilon_{r,0}$ for $z>z_{1}=0$ and $\varepsilon_{r,2}$ for $z<z_{2}=-h$. The gyrotropic medium is assumed to be a plasma immersed in a static external magnetic field $\mathbf{B}_{0}=\mathbf{\hat{y}}B_{0}$. Assuming time-harmonic variation $e^{-j\omega t}$, the magnetized plasma is characterized by the dielectric tensor, \begin{align} \mathbf{\bar{\varepsilon}}_{r}=\varepsilon_{t}\left( \mathbf{\bar{I}}-\mathbf{\hat{y}\hat{y}}\right) +j\varepsilon_{g}\left( \mathbf{\hat{y}}\times\mathbf{\bar{I}}\right) +\varepsilon_{a}\mathbf{\hat{y}\hat{y},} \label{r1} \end{align} where the permittivity elements $\left\{ \varepsilon_{t},\varepsilon_{a},\varepsilon_{g}\right\} $ are \cite{coordinateBook} \begin{align} \varepsilon_{t} & =1-\frac{\omega_{p}^{2}}{\left( \omega+j\Gamma\right) ^{2}-\omega_{c}^{2}},\nonumber\\ \varepsilon_{a} & =1-\frac{\omega_{p}^{2}}{\omega\left( \omega +j\Gamma\right) },\ \varepsilon_{g}=\frac{\omega_{c}\omega_{p}^{2}}{\omega\left[ \omega_{c}^{2}-\left( \omega+j\Gamma\right) ^{2}\right] },\label{r2} \end{align} where $\omega_{p}=\sqrt{Nq_{e}^{2}/m_{e}\varepsilon_{0}}$, $\omega_{c}=-q_{e}B_{0}/m_{e}$, and $\Gamma=1/\tau$ denote the plasma, cyclotron, and collision frequencies, respectively, $N$ is the free electron density, $q_{e}=-e$ is the electron charge, $m_{e}$ is the electron mass, and $\tau$ is the relaxation time between collisions. The above model is local; as studied in \cite{Shi}, a nonlocal Drude model leads to the presence of backward propagating modes. However, the effect of non-locality is evident only for very large wavenumbers, and the backward waves vanish when realistic levels of loss are considered \cite{9-Trully}, so non-locality is ignored here. \begin{figure}[!htbp] \includegraphics[width=0.99\columnwidth]{3D_SLAB-eps-converted-to.pdf} \caption{Slab of gyrotropic material with finite thickness, $h$. The slab is biased with a static magnetic field in the xoy plane. A vertical dipole is suspended a distance $d$ above the slab and is responsible for exciting the displayed field pattern near the top surface of the slab. The wavenumber associated with a bulk mode propagating within the slab is denoted $\mathbf{k}_{b}$, and is represented in a local coordinate system where $\alpha_{b}$ denotes the angle which $\mathbf{k}_{b}$ makes with respect to the y-axis.}\label{geom} \end{figure} \subsection{Dispersion of bulk modes in a gyrotropic medium -- the existence of a common bandgap} The characteristics of the bulk modes in an anisotropic medium depend on the direction of propagation. In a structure exhibiting bulk bandgaps, these will also be direction-dependent. In this section, we study the bulk dispersion behavior of a gyrotropic medium in order to identify a bulk bandgap common to all propagation directions. We begin with a plane wave having wave vector $\mathbf{k}_{b}$, propagating in a gyrotropic medium at an angle $\alpha_{b}$ with respect to the bias field ($y$) direction.
Assuming a plane wave solution to Maxwell's equations leads to a homogeneous system of equations for which non-trivial solutions are obtained when \cite{coordinateBook} \begin{equation} \left\vert k_{0}^{2}\mathbf{\bar{\varepsilon}}_{r}-k_{b}^{2}\mathbf{\bar{I}}+\mathbf{k}_{b}\mathbf{k}_{b}\right\vert =0, \label{r3} \end{equation} where $\mathbf{k}_{b}=\mathbf{k}_{t}+\mathbf{\hat{y}}k_{y}$ such that $\left\vert\mathbf{k}_{t}\right\vert=k_{b}\sin \alpha_{b}$ and $k_{y}=k_{b}\cos \alpha_{b}$. Evaluation of the determinant leads to the dispersion equation for the bulk modes, \begin{align} 0 & =k_{b}^{2}k_{0}^{2}\left\{ \left[ \varepsilon_{t}\left( \varepsilon_{t}+\varepsilon_{a}\right) -\varepsilon_{g}^{2}\right] \sin^{2}\alpha_{b}+2\varepsilon_{t}\varepsilon_{a}\cos^{2}\alpha_{b}\right\} \nonumber\\ & -k_{b}^{4}\left[ \varepsilon_{t}\sin^{2}\alpha_{b}+\varepsilon_{a}\cos^{2}\alpha_{b}\right] -k_{0}^{4}\left( \varepsilon_{t}^{2}-\varepsilon_{g}^{2}\right) \varepsilon_{a}.\label{dispersion_eq} \end{align} \begin{figure}[!tbp] \includegraphics[width=0.99\columnwidth]{bulk_panel-eps-converted-to.pdf} \caption{Dispersion diagram of plasma bulk modes for different angles of propagation, where $k_p=\omega_p /c$. Gray shaded regions highlight bandgaps in the dispersion. The dashed red line corresponds to an ordinary wave (independent of bias) while the solid black lines correspond to the extraordinary wave (dependent on bias).} \label{bulk} \end{figure} The dispersion diagrams associated with the bulk modes of a magneto-plasma are shown in Fig. \ref{bulk}. We consider $\omega_{p}=2\pi\times20$ THz and $\omega_{c}/\omega_{p}=0.4$ here and throughout the rest of the paper. Figures \ref{bulk}a and \ref{bulk}b show the dispersion of bulk modes which propagate parallel ($\alpha_{b}=0^{\circ}$) and perpendicular ($\alpha_{b}=90^{\circ}$) to the magnetic bias, respectively. In the parallel case, the two intersection points correspond to Weyl points, which arise from crossings between longitudinal plasma modes and transverse helical modes \cite{10-weyl}. Figures \ref{bulk}c and \ref{bulk}d show the dispersion for two arbitrary angles in the range $0^{\circ} < \alpha_{b} < 90^{\circ}$. As seen in Fig. \ref{bulk}, there are four branches of the dispersion. The second branch from the top (dashed red) corresponds to an ordinary wave, independent of the magnetic bias, which does not lead to a topological SPP. Two bandgaps form between the other three branches, as shown in the shaded regions of Fig. \ref{bulk}. The size of the bandgaps depends on the propagation direction as well as on the magnetic bias field strength. The upper bandgap is smallest when $\alpha_{b}=90^{\circ}$. Conversely, the lower bandgap is smallest when $\alpha_{b}=0^{\circ}$. As such, we take the smallest upper (lower) bandgap to represent the common upper (lower) bandgap for all propagation angles, $0^{\circ}<\alpha_{b}<90^{\circ}$. The points $a$ and $b$ do not change with the propagation angle. The common bandgap and its impact on surface waves are considered further in the following; a short numerical sketch of the bulk bands is given below. \subsection{Surface Plasmon Polariton Dispersion}\label{using} A surface wave that propagates along the interface between a gyrotropic medium and an isotropic medium has a longitudinal wave vector component, $\mathbf{k}_{s}=\mathbf{\hat{x}}k_{x}+\mathbf{\hat{y}}k_{y}$, where the propagation angle, $\phi_{s}$, is measured with respect to the $x$ axis.
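As noted above, the bulk bands are straightforward to reproduce numerically: for a fixed $\alpha_{b}$, (\ref{dispersion_eq}) is a quadratic in $k_{b}^{2}$, and a frequency lies inside a bandgap precisely when neither root is real and positive. The following lossless ($\Gamma=0$) Python sketch implements this test; the frequency grid and the angle are arbitrary choices made for illustration.
\begin{verbatim}
import numpy as np

wp, wc = 1.0, 0.4                    # frequencies in units of omega_p
alpha = np.deg2rad(90.0)             # propagation angle w.r.t. the bias (y) axis

def propagating(w):
    et = 1 - wp**2 / (w**2 - wc**2)            # eps_t (lossless)
    ea = 1 - wp**2 / w**2                      # eps_a
    eg = wc * wp**2 / (w * (wc**2 - w**2))     # eps_g
    s2, c2 = np.sin(alpha)**2, np.cos(alpha)**2
    # (dispersion_eq) as A x^2 + B x + C = 0 with x = (k_b / k_0)^2
    A = -(et * s2 + ea * c2)
    B = (et * (et + ea) - eg**2) * s2 + 2 * et * ea * c2
    C = -(et**2 - eg**2) * ea
    r = np.roots([A, B, C])
    return np.any(np.isreal(r) & (r.real > 0))

w = np.linspace(0.05, 2.0, 4000)     # grid avoids w = wc exactly
in_gap = np.array([not propagating(wi) for wi in w])
print(w[np.where(np.diff(in_gap.astype(int)))[0]])   # approximate gap edges
\end{verbatim}
Sweeping $\alpha_{b}$ and intersecting the resulting gap intervals then yields the common bandgap discussed above.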
Solving the bulk dispersion equation (\ref{dispersion_eq}), we obtain $\mathbf{k}_{b,i}=\mathbf{\hat{x}}k_{x}+\mathbf{\hat{y}}k_{y}+\mathbf{\hat{z}}mk_{z,i}$ for $i\in\left\{1,2\right\}$ and $m\in\left\{\pm\right\}$ where we define $k_{z,i}= j\gamma_{i}$ such that \cite{2-silveirinha2017topological} \begin{align} \gamma_{i}^{2} & =k_{x}^{2}\mp\frac{1}{2\varepsilon_{t}}\sqrt{\kappa }\nonumber\\ & -\frac{1}{2\varepsilon_{t}}\left[ \left( \varepsilon_{t}\left( \varepsilon_{t}+\varepsilon_{a}\right) -\varepsilon_{g}^{2}\right) k_{0}% ^{2}-\left( \varepsilon_{a}+\varepsilon_{t}\right) k_{y}^{2}\right], \label{r5}% \end{align} and \begin{align} \kappa & =\left[ \left( \varepsilon_{t}\left( \varepsilon _{t}+\varepsilon_{a}\right) -\varepsilon_{g}^{2}\right) k_{0}^{2}-\left( \varepsilon_{a}+\varepsilon_{t}\right) k_{y}^{2}\right] ^{2}\nonumber\\ & -4\varepsilon_{t}\varepsilon_{a}\left[ \left( \varepsilon_{g}% +\varepsilon_{t}\right) k_{0}^{2}-k_{y}^{2}\right] \left[ \left( \varepsilon_{t}-\varepsilon_{g}\right) k_{0}^{2}-k_{y}^{2}\right]. \label{r6}% \end{align} The dispersion relation for the SPP can be obtained by matching the tangential components of the electric and magnetic fields at the interface [Appendix C, \cite{3-PRA}], leading to the $4\times4$ system of homogeneous equations \begin{equation} \left( \begin{array} [c]{cccc}% \beta_{1}^{-} & \beta_{2}^{-} & k_{y} & j\gamma k_{x}\\ k_{y}\theta_{1} & k_{y}\theta_{2} & -k_{x} & j\gamma k_{y}\\ k_{y}\phi_{1}^{-} & k_{y}\phi_{2}^{-} & j\gamma k_{x} & -k_{y}k^{2}\\ -\delta_{1}k_{t,1}^{2} & -\delta_{2}k_{t,2}^{2} & j\gamma k_{y} & k_{x}k^{2}% \end{array} \right) \left( \begin{array} [c]{c}% A_{1}^{-}\\ A_{2}^{-}\\ B_{1}^{+}\\ B_{2}^{+}% \end{array} \right) =\mathbf{0,}\label{r7}% \end{equation} where $k^{2}=k_{0}^{2}\varepsilon_{r,0}$, $\gamma=\sqrt{k_{x}^{2}+k_{y}^{2}-k^{2}}$, and \begin{align} \delta_{i} & =j\varepsilon_{g}/\xi_{i},\ \theta_{i}=-k_{t,i}^{2}/\varpi _{i},\nonumber\\ \beta_{i}^{m} & =k_{x}-mk_{z,i}\delta_{i},\ \phi_{i}^{m}=\delta_{i}% k_{x}-mk_{z,i}\left( \theta_{i}-1\right), \end{align} such that $\xi_{i}=k_{0}^{2}\varepsilon_{t}-k_{i}^{2}$ and $\varpi_{i}=k_{0}^{2}\varepsilon_{a}-k_{t,i}^{2}$. Non-trivial solutions are obtained when the determinant of the coefficient matrix on the left hand side of (\ref{r7}) is set equal to zero. Evaluation of the determinant and division through by $-jk_{s}^{2}k_{y}/\varpi_{1}\varpi_{2}\xi_{1}\xi_{2}\neq0$, leads to \begin{align} 0 & =\left( k_{y}^{2}+k_{z}^{2}\right) n_{A}-k_{x}n_{B}^{-}+k_{x}k_{y}% ^{2}n_{C}^{-}\nonumber\\ & -\left( k_{x}^{2}+k_{z}^{2}\right) n_{D}^{-}-jk_{z}\left( n_{E}% ^{-}-\varepsilon_{r,0}\chi^{-}\right), \label{SPPdisp} \end{align} where $k_{z}=j\gamma$ and the quantities $n_{A}$, $n_{B}^{-}$, $n_{C}^{-}$, $n_{D}^{-}$ and $n_{E}^{-}$ are defined in the Appendix. \begin{figure}[tbh] \includegraphics[width=0.99\columnwidth]{SPP_DISPERSION-eps-converted-to.pdf} \caption{SPP dispersion surface for a biased-plasma-vacuum interface, obtained by solving for the roots of (\protect\ref{SPPdisp}), for $\omega_{c}=0.4\omega_{p}$. (a) Perspective (zoomed) view of the upper and lower bands. (b) Perspective view of the lower band where the solid black lines are the equi-frequency contours for a few representative frequencies and $\omega^{\pm}$ outline the region of SPP resonance. The designations, I-IV, refer to Fig. \ref{commonBG}.} \label{3DSPP} \end{figure} In what follows, we assume that the upper medium is characterized by $\varepsilon_{r,0}=1$. 
For the well-studied \cite{4-davoyan2013theory} case of propagation perpendicular to the bias ($k_{y}=0$), the SPP dispersion is found to be \begin{equation} \sqrt{k_{x}^{2}-k_{0}^{2}}+\frac{\sqrt{k_{x}^{2}-k_{0}^{2}\varepsilon_{eff}}}{\varepsilon_{eff}}=\frac{\varepsilon_{g}k_{x}}{\varepsilon_{t}\varepsilon_{eff}}, \label{r10} \end{equation} where $\varepsilon_{eff}=\left( \varepsilon_{t}^{2}-\varepsilon_{g}^{2}\right) /\varepsilon_{t}$. For $k_{y}\neq0$, the general dispersion equation (\ref{SPPdisp}) must be used. As considered in recent photonic topological work \cite{5-PRA-june}, we are interested in bulk-bandgap-crossing SPPs. Since the upper bandgap for the perpendicular case and the lower bandgap for the parallel case determine the common bandgap of all bulk modes, we consider the SPP modes that cross these two common bandgaps. A surface mode propagating in the xoy plane generally possesses two wave vector components, $k_{x}$ and $k_{y}$. Therefore, a three-dimensional surface is needed to completely describe the SPP dispersion. As shown in Fig. \ref{3DSPP}, the SPP modes form two frequency bands. The upper band is asymmetric about the $k_{x}=0$ plane, symmetric about the $k_{y}=0$ plane, and passes through the upper bulk bandgap. For a magnetized plasma interfaced with an opaque medium, the upper band of SPP modes leads to topological, unidirectional, back-scattering-immune SPPs, which have been well studied in \cite{7-PTI-Notes, 6-PRL, 5-PRA-june, 8-three-defect}. In the case where the magnetized plasma is immersed in a transparent medium, the upper band instead represents fast surface waves, which leak rapidly into the transparent medium. Similarly, the lower band is asymmetric about the $k_{x}=0$ plane and symmetric about the $k_{y}=0$ plane. Furthermore, this lower band passes through the lower bulk bandgap. Dispersion in this lower band leads to beam-like SPPs and has only recently been considered in our previous paper \cite{9-Trully}; it is the main subject of this work. \begin{figure}[!b] \includegraphics[width=0.99\columnwidth]{distribution_panel-eps-converted-to.pdf} \caption{Density plot of the Sommerfeld integrand, $\left\vert F \right\vert $, (\ref{integrand}) and equi-frequency contours (solid red) extracted from (\protect\ref{SPPdisp}), for a biased-plasma-vacuum interface at different frequencies for $\Gamma /\protect\omega _{p}=0.015$. The notation I-IV refers to Fig. \ref{commonBG}.} \label{Density} \end{figure} Figure \ref{Density} shows several equi-frequency contours (EFCs) of the dispersion surface at different frequencies (red lines). Also shown in Fig. \ref{Density} are density plots of the distribution function, $\left\vert F \right\vert $, obtained from the Green function and given by (\ref{integrand}). The phase and group velocities of an SPP are calculated as $\mathbf{v}_{p}=\mathbf{\hat{k}}_{s}\omega/\left\vert \mathbf{k}_{s}\right\vert$ and \begin{equation} \mathbf{v}_{g}=\mathbf{\nabla}_{\mathbf{k}_{s}}\omega\left( \mathbf{k}_{s}\right) =\mathbf{\hat{x}}\frac{\partial\omega}{\partial k_{x}}+\mathbf{\hat{y}}\frac{\partial\omega}{\partial k_{y}}, \end{equation} respectively. Since the group velocity is the in-plane gradient of $\omega\left( \mathbf{k}_{s}\right)$, it represents the directional flow of electromagnetic energy and is orthogonal to the equi-frequency contours. According to Fig. \ref{Density}a, the EFCs at low frequencies are nearly circular, so that energy flows isotropically. Hence, the resulting field pattern is essentially omni-directional (see Fig. \ref{polar}a, discussed in the next section).
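Returning to the perpendicular-propagation relation (\ref{r10}), its nonreciprocity can be made concrete with a short numerical experiment (ours, under the lossless assumption $\Gamma=0$ and taking principal square roots). At the representative frequency $\omega=0.6\omega_{p}$, for which $\varepsilon_{t}=-4$, $\varepsilon_{g}\simeq-3.33$ and $\varepsilon_{eff}\simeq-1.22$, the residual of (\ref{r10}) changes sign for $k_{x}>0$ but not for $k_{x}<0$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

wc, w = 0.4, 0.6                     # in units of omega_p (omega_p = 1)
et = 1 - 1 / (w**2 - wc**2)          # eps_t = -4
eg = wc / (w * (wc**2 - w**2))       # eps_g ~ -3.33
eff = (et**2 - eg**2) / et           # eps_eff ~ -1.22 (negative in the gap)

def f(u):                            # residual of (r10), u = k_x / k_0, |u| > 1
    return np.sqrt(u**2 - 1) + np.sqrt(u**2 - eff) / eff - eg * u / (et * eff)

print(brentq(f, 1.001, 10.0))        # a root near u ~ 1.1: SPP along +x
print(f(-1.001), f(-10.0))           # no sign change: no SPP along -x
\end{verbatim}
Reversing the bias ($\omega_{c}\rightarrow-\omega_{c}$) flips the sign of $\varepsilon_{g}$ and moves the root to $k_{x}<0$, consistent with the bias-reversal argument of the Introduction.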
As frequency increases, the semi-major axis of the EFC becomes elongated (Fig. \ref{Density}b) such that the energy begins to flow asymmetrically. For $\omega=0.53\omega_{p}$, the EFC becomes hyperbolic, with the arms of the hyperbola widening as frequency increases (see Fig. \ref{Density}c-f). When the EFC becomes hyperbolic, two directional, narrow beams form in the SPP field pattern (see, e.g., Fig. \ref{polar}c,d). Moreover, the equi-frequency contours of the upper band in Fig. \ref{3DSPP}a show that the surface plasmons in this frequency range are mainly directed along the y direction (along the bias), existing down to the limit $k_{y}\rightarrow 0$. \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{2d_spp_disp-eps-converted-to.pdf} \caption{Two dimensional dispersion of the SPP for different propagation angles, $\protect\phi_{s}$, with respect to the positive (negative) x-axis for right (left) branches of the dispersion. The bulk dispersion (solid black) for $\alpha_{b}=0^{\circ}$ indicates the lower bandgap (BG), common to all propagation angles. The solid orange lines, symmetric with respect to $k_{s}=0$, show the dispersion of light in vacuum, i.e., $\omega/\omega_{p}= \pm k_{s}/k_{p}$.} \label{commonBG} \end{figure} Figure \ref{commonBG} shows the SPP dispersion behavior for the lower band at different propagation angles (i.e., it shows several two-dimensional traces of the SPP dispersion surface shown in Fig. \ref{3DSPP}b). Each branch of the SPP dispersion converges to \cite{3-PRA} \begin{align} \omega_{\mathbf{k}} & =\frac{1}{2}\omega_{c}\cos \phi_{s}+\frac{1}{2}\sqrt{2\omega_{p}^{2}+\omega_{c}^{2}\left( 1+\sin^{2}\phi_{s}\right) }, \end{align} in the limit $k_{s}\rightarrow\infty$, derived using the quasi-static approximation. The maximum and minimum quasi-static resonances, $\omega^{\pm}=\omega_{\mathbf{k}}\left( \phi_{s}=0\right) $, indicated in Figs. \ref{3DSPP}b and \ref{commonBG}, correspond to SPP modes which propagate perpendicular to the bias. The dispersion is divided into four frequency regions: in Regions I and IV, there is no common bulk bandgap, whereas in Regions II and III, there exists a common bulk bandgap. In Region II, where the EFC is hyperbolic (see Fig. \ref{Density}c-f), we have directional propagation, and the SPP field pattern consists of two narrow beams which are symmetric with respect to the $x$ axis (e.g., Fig. \ref{polar}c,d). Since $\omega(-k_{s})\neq\omega(k_{s})$, unidirectional behavior is also possible, making this frequency regime of central interest. Although in Region III there still exists a common bulk bandgap, narrow beams do not form in the SPP field pattern because the EFC is elliptical (see Fig. \ref{Density}b). Moreover, SPP propagation is nearly reciprocal there. In Region IV, the EFC is circular (Fig. \ref{Density}a), indicating that the expected SPP field pattern is omni-directional (see Fig. \ref{polar}a), and from the dispersion shown in Fig. \ref{commonBG}, it is evident that the SPP is reciprocal, i.e., $\omega(-k_{s})=\omega(k_{s})$. As a partial summary, we have carefully studied the recently-identified lower-band dispersion of surface waves on a dielectric-gyrotropic plasma interface, and have identified four regions (I-IV in Figs. \ref{3DSPP} and \ref{commonBG}) with different characteristics.
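The expression for $\omega_{\mathbf{k}}$ ties several of the quoted numbers together, as the following short check shows for $\omega_{c}=0.4\omega_{p}$ (evaluating $\omega_{\mathbf{k}}$ at $\cos\phi_{s}=\pm1$ gives the extremal resonances $\omega^{\pm}$):
\begin{verbatim}
import numpy as np

wp, wc = 1.0, 0.4
def wk(phi):   # quasi-static SPP resonance vs. propagation angle
    return 0.5 * wc * np.cos(phi) + 0.5 * np.sqrt(
        2 * wp**2 + wc**2 * (1 + np.sin(phi)**2))

print(wk(0.0), wk(np.pi))   # w+ ~ 0.935 wp and w- ~ 0.535 wp
print(wk(np.pi / 2))        # ~ 0.762 wp (phi_s = 90 deg saturation value)
\end{verbatim}
The value $\omega^{-}\simeq0.535\omega_{p}$ is consistent with the onset of hyperbolic contours near $0.53\omega_{p}$ noted above, and $\omega_{\mathbf{k}}(90^{\circ})\simeq0.762\omega_{p}$ reappears in the next section as the frequency at which the two SPP beams merge.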
\section{Green Function for a Finite-Thickness Plasma, and SPP Beam Pattern in Space} In the previous section we considered a single interface between a simple medium and a gyrotropic plasma, the Green function for which is provided in \cite{3-PRA}. In this section, we expand that analysis to consider a finite-thickness gyrotropic layer. We present a closed-form expression (as a Sommerfeld integral) for the Green function in the simple dielectric regions above and below the slab, which we believe to be a new result. Importantly, we also provide the Green function coefficient in quotient form for each case, which leads to the identification of the SPP dispersion equation (setting the denominator to zero) and allows the residue of the Green function, corresponding to the SPP, to be evaluated. The procedure to derive the Green function follows that in \cite{3-PRA, 11-mario-optical}. The incident field excited by an electric dipole source, with dipole moment $\mathbf{p}_{e}=\mathbf{\hat{x}}p_{x}+\mathbf{\hat{y}}p_{y}+\mathbf{\hat{z}}p_{z}$, suspended a distance $d$ above the first interface, is given by $\mathbf{E}^{p}\left( \mathbf{r}\right) =\left( \mathbf{\nabla\nabla}+\mathbf{\bar{I}}k_{0}^{2}\varepsilon_{r,0}\right) \cdot\mathbf{\pi}^{p}\left( \mathbf{r}\right)$, where $\mathbf{\pi}^{p}\left( \mathbf{r}\right) $ denotes the principal Hertzian potential due to the dipole source, which we write in terms of the principal Green function as $\mathbf{\pi}^{p}\left( \mathbf{r}\right) =g^{p}\left(\mathbf{r,r}_{0}\right) \mathbf{p}_{e}/\varepsilon_{0}\varepsilon_{r,0}$, where $g^{p}\left(\mathbf{r,r}_{0}\right) =e^{j k_{0}\sqrt{\varepsilon_{r,0}}\left\vert \mathbf{r}-\mathbf{r}_{0}\right\vert}/4\pi\left\vert \mathbf{r}-\mathbf{r}_{0}\right\vert $, $\varepsilon_{r,0}$ is the relative permittivity of the top layer (see Fig. \ref{geom}), and $\mathbf{r}_{0}=(0,0,d)$.
Following \cite{3-PRA}, the principal and scattered fields may be written similarly in Sommerfeld integral form, \begin{align} \mathbf{E}^{p}\left( \mathbf{r}\right) & =\int d^{2}\mathbf{k}_{s}e^{j\mathbf{k}_{s}\cdot\mathbf{r}}\frac{e^{-\gamma\left\vert z-z_{0}\right\vert }}{8\pi^{2}\varepsilon_{0}\varepsilon_{r,0}\gamma }\mathbf{\bar{C}}_{z \gtrless d}^{p}\cdot\mathbf{p,}\label{r14}\\ \mathbf{E}^{r}\left( \mathbf{r}\right) & =\int d^{2}\mathbf{k}_{s}e^{j\mathbf{k}_{s}\cdot\mathbf{r}}\frac{e^{-\gamma\left( z+z_{0}\right) }}{8\pi^{2}\varepsilon_{0}\varepsilon_{r,0}\gamma}\mathbf{\bar{C}}^{r}\cdot\mathbf{p,}\label{r15}\\ \mathbf{E}^{t}\left( \mathbf{r}\right) & =\int d^{2}\mathbf{k}_{s}e^{j\mathbf{k}_{s}\cdot\mathbf{r}}\frac{e^{\gamma\left( z-z_{0}\right) }}{8\pi^{2}\varepsilon_{0}\varepsilon_{r,0}\gamma}\mathbf{\bar{C}}^{t}\cdot\mathbf{p,} \label{r16} \end{align} where $\mathbf{\bar{C}}_{z \gtrless d}^{p}$ and $\mathbf{\bar{C}}^{r,t}$ take the form, \begin{align} \mathbf{\bar{C}}_{z \gtrless d}^{p} & =\mathbf{\bar{A}}_{z \gtrless d}\cdot\mathbf{\bar{I}}_{s}\cdot\mathbf{\bar{B},}\label{r17}\\ \mathbf{\bar{C}}^{r,t} & =\mathbf{\bar{A}}^{r,t}\cdot\left\{ \mathbf{\bar{R},\bar{T}}\right\} \cdot\mathbf{\bar{B},} \label{r18} \end{align} such that \begin{align} \mathbf{\bar{A}}_{z \gtrless d} & =\mathbf{\bar{I}}_{s}\mp\frac{1}{k_{z}}\mathbf{\hat{z}k}_{s},\label{r19}\\ \mathbf{\bar{A}}^{r,t} & =\mathbf{\bar{I}}_{s}\mp\frac{1}{k_{z}^{r,t}}\mathbf{\hat{z}k}_{s},\label{r20}\\ \mathbf{\bar{B}} & = k_{0}^{2}\varepsilon_{r,0}\mathbf{\bar{I}}_{s}-\mathbf{k}_{s}\mathbf{k}_{s}+k_{z}\mathbf{k}_{s}\mathbf{\hat{z},} \label{r21} \end{align} where $\mathbf{\bar{I}}_{s}=\mathbf{\hat{x}\hat{x}}+\mathbf{\hat{y}\hat{y}}$, $k_{z}=k_{z}^{r}=\sqrt{k_{0}^{2}\varepsilon_{r,0}-k_{x}^{2}-k_{y}^{2}}$, and $k_{z}^{t}=\sqrt{k_{0}^{2}\varepsilon_{r,2}-k_{x}^{2}-k_{y}^{2}}$. The reflection and transmission coefficients for a slab of finite depth, $h$, are denoted by $\mathbf{\bar{R}}\left( \omega,\mathbf{k}_{s}\right) $ and $\mathbf{\bar{T}}\left(\omega,\mathbf{k}_{s}\right) $, respectively. It is shown in the appendix that these $2\times2$ tensor coefficients take the form \begin{align} \mathbf{\bar{R}} & =\mathbf{\bar{R}}_{01}+\mathbf{\bar{T}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime}\cdot\left( \mathbf{\bar{I}}_{s}-\mathbf{\bar{R}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime}\right) ^{-1}\cdot\mathbf{\bar{T}}_{01},\label{r22}\\ \mathbf{\bar{T}} & =\mathbf{\bar{T}}_{12}\cdot\mathbf{\bar{P}}_{E}^{-}\cdot\left( \mathbf{\bar{I}}_{s}-\mathbf{\bar{R}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime}\right) ^{-1}\cdot\mathbf{\bar{T}}_{01}, \label{r23} \end{align} where $\mathbf{\bar{T}}_{nn^{\prime }}=\mathbf{\bar{I}}_{s}+\mathbf{\bar{R}}_{nn^{\prime }}$ for $\left( n,n^{\prime }\right) \in\left\{ \left( 0,1\right) ,\left(1,0\right) ,\left( 1,2\right) \right\} $ and \begin{equation} \mathbf{\bar{R}}_{12}^{\prime}=\mathbf{\bar{P}}_{E}^{+}\cdot\mathbf{\bar{R}}_{12}\cdot\mathbf{\bar{P}}_{E}^{-},\label{r24} \end{equation} such that $\mathbf{\bar{P}}_{E}^{m}$ denotes the spatial propagator, which accounts for the accumulated phase as the wave propagates within the gyrotropic medium in the $\pm z$ directions.
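The structure of (\ref{r22})-(\ref{r23}) is the familiar multiple-reflection resummation: formally, $(\mathbf{\bar{I}}_{s}-\mathbf{\bar{R}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime})^{-1}=\sum_{n\geq0}(\mathbf{\bar{R}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime})^{n}$ sums the internal bounces between the two interfaces. A minimal sketch of this combination step is given below; the $2\times2$ blocks are random placeholders standing in for the actual coefficients of (\ref{r25})-(\ref{r26}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# placeholders for R01, R10, R12' (= PE+ . R12 . PE-), T01, T10, T12, PE-
R01, R10, R12p, T01, T10, T12, PEm = (0.1 * rng.standard_normal((2, 2))
                                      for _ in range(7))
I = np.eye(2)
M = np.linalg.inv(I - R10 @ R12p)   # resums the internal bounces
R = R01 + T10 @ R12p @ M @ T01      # slab reflection, cf. (r22)
T = T12 @ PEm @ M @ T01             # slab transmission, cf. (r23)
\end{verbatim}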
The single-interface reflection coefficients associated with each interface, $\mathbf{\bar{R}}_{nn^{\prime }}$, along with the spatial propagator, $\mathbf{\bar{P}}_{E}^{m}$, can alternatively be expressed in numerator/denominator form as \begin{align} \mathbf{\bar{R}}_{nn^{\prime }} & =\frac{1}{k_{y}\Omega^{nn^{\prime }}}\left( \begin{array} [c]{cc} k_{y}\Pi_{11}^{nn^{\prime }} & \Pi_{12}^{nn^{\prime }}\\ k_{y}^{2}\Pi_{21}^{nn^{\prime }} & k_{y}\Pi_{22}^{nn^{\prime }} \end{array} \right) ,\label{r25}\\ \mathbf{\bar{P}}_{E}^{m} & =\frac{1}{k_{y}\chi^{m}}\left( \begin{array} [c]{cc} k_{y}\Delta_{11}^{m} & \Delta_{12}^{m}\\ k_{y}^{2}\Delta_{21} & k_{y}\Delta_{22}^{m} \end{array} \right) ,\label{r26} \end{align} where the quantities $\Omega^{nn^{\prime }}$, $\Pi^{nn^{\prime }}$, $\chi^{m}$, and $\Delta^{m}$ are defined in the appendix. For the single-interface case, we find that setting $\Omega^{01}$ to zero in $\mathbf{\bar{R}}_{01}$ gives the expected dispersion relation for the SPP, (\ref{SPPdisp}). \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{field_profile_panel-eps-converted-to.pdf} \caption{Scattered electric field, $\left| E_{z}^{r} \right|$, obtained from the Green function (solid black lines) for a biased-plasma-vacuum interface, with $\rho=0.7\protect\lambda$, $z=0.008\protect\lambda _{p}$, where $\protect\lambda=2\protect\pi c/\omega$ and $\protect\lambda _{p}=2\protect\pi c/\protect\omega _{p}$. For comparison, the electric field distribution generated using COMSOL is also shown.} \label{polar} \end{figure} \begin{figure*}[tbh] \includegraphics[width=1.99\columnwidth]{comsol_panel-eps-converted-to.pdf} \caption{(a,b) Electric field (computed using COMSOL) at the interface of a thick (essentially infinite) gyrotropic plasma slab in the presence of (a) a hole discontinuity and (b) a block discontinuity. (c,d) Electric field at the top (c) and bottom (d) of a finite thickness slab ($h=0.12\lambda_{p}$) in the presence of a discontinuity. The SPP, excited on the top interface by a point dipole, propagates around the open surface to the bottom side of the plasma.} \label{discon1} \end{figure*} In the special case where a $z$-directed dipole moment, $\mathbf{p}_{e}=\mathbf{\hat{z}}p_{z}$, is placed at a height $d$ above the first interface ($z>0$), the $z$ component of the scattered electric field simplifies to \begin{equation} E_{z}^{r}\left( \mathbf{r}\right) =\int d^{2}\mathbf{k}_{s}F(\mathbf{k}_{s},\mathbf{r},\omega ), \label{GreenFn} \end{equation} where \begin{equation} F\left(\mathbf{k}_{s},\mathbf{r},\omega \right)=e^{j\mathbf{k}_{s}\cdot\mathbf{r}}\frac{e^{-\gamma\left( z+d\right)}}{8\pi^{2}\varepsilon_{0}\varepsilon_{r,0}\gamma} C_{zz}^{r}p_{z},\label{integrand} \end{equation} such that, for a single interface, \begin{equation} C_{zz}^{r}=\frac{-k_{x}\left( \Pi_{12}^{01}+k_{x}\Pi_{11}^{01}\right) }{\Omega^{01}}-\frac{k_{y}^{2}\left( \Pi_{22}^{01}+k_{x}\Pi_{21}^{01}\right) }{\Omega^{01}}. \label{Czz} \end{equation} Using (\ref{GreenFn}), the electric field distribution near the interface of a half-space gyrotropic medium for $\rho=0.7\lambda$, $z=0.008\lambda_{p}$, and $0<\phi<2\pi$ is shown in Fig. \ref{polar}. The results obtained in COMSOL are also shown in Fig. \ref{polar}, and agree with the Green function analysis. As shown in Fig. \ref{polar}a, the expected behavior of surface wave propagation for operating frequencies that lie in Region IV of the dispersion (see Figs. \ref{3DSPP} and \ref{commonBG}) is omni-directional.
In Region III, propagation is bi-directional, with the SPP intensity concentrated in one half-plane, as depicted in Fig. \ref{polar}b. Transitioning from Region IV to Region I, the expected behavior tends increasingly toward unidirectional. Interestingly, for frequencies that satisfy the SPP resonant condition, $\omega^{-}<\omega<\omega^{+}$ (Regions I and II), Figs. \ref{polar}c,d show that narrow-beam directional propagation is obtained, consistent with the previous discussion of equi-frequency contours; two representative results satisfying the resonant condition, $\omega=0.6\omega_{p}$ and $\omega=0.65\omega_{p}$, are shown. At $\omega=\omega^{-}$, the field pattern forms two narrow beams, which approach each other as the operating frequency increases. Eventually, the two beams join to form a single beam at $\omega=0.76\omega_{p}$, corresponding to the saturation frequency of the $\phi_{s}=90^{\circ}$ branch in Fig. \ref{commonBG}, and then split to form two beams for $0.76\omega_{p}<\omega<\omega^{+}$. Therefore, the angle of the beams with respect to the $x$ axis is adjustable with frequency as well as with the magnetic bias. Furthermore, if the direction of the magnetic bias is flipped, the beams propagate in the opposite direction. To gauge the inherent robustness of the SPP within the resonant range, a discontinuity in the form of a hole/block is constructed in an attempt to impede the SPP. A unidirectional SPP that crosses a bandgap in reciprocal space is immune to the effects of back-scattering and diffraction. To illustrate this, Fig. \ref{discon1}a,b shows the electric field due to an electric point source near the vacuum-plasma interface of a plasma half-space. The SPP passes through the discontinuity without reflection or diffraction. Similarly, for a finite-thickness slab, the SPP excited on the top surface, upon encountering the end of the plasma, passes onto the bottom surface, as shown in Fig. \ref{discon1}c (top view) and Fig. \ref{discon1}d (bottom view). As shown above, the vacuum-plasma interface can support a uni-directional SPP. However, it is not clear whether a thin, finite-thickness slab can also support such an SPP. Figure \ref{sppvsloss} shows the SPP pattern obtained by evaluating the scattered/reflected Green function field (\ref{r15}) as a function of angular position in the xoy plane. For this analysis, we consider a vertical dipole source, operating at frequency $\omega=0.65\omega_{p}$ and positioned at the upper interface ($z_{1}=0$) of a gyrotropic plasma slab with a fixed thickness $h=\lambda_{p}$. Figure \ref{sppvsloss} shows the scattered field for the fixed observation point $\left(\rho,z\right)=\left(0.08\lambda_{p},0.008\lambda_{p}\right)$ and several values of loss within the range $0\leq\Gamma\leq10^{-4}\omega_{p}$. For a sufficient amount of loss, $\Gamma=10^{-4}\omega_{p}$, only two beams appear in the field pattern, similar to those obtained for a single interface (see Fig. \ref{polar}d). As the loss decreases from $\Gamma= 10^{-4}\omega_{p}$ to $\Gamma= 0$, we see the emergence of two backward beams on the upper interface (due to the evanescent tail of the bottom-surface SPP), which indicates the breakdown of uni-directional behavior.
\begin{figure}[!bth] \includegraphics[width=0.99\columnwidth]{beam_pattern_panel-eps-converted-to.pdf} \caption{SPP beam pattern excited by a vertical dipole source at the interface of a finite slab of thickness $h = \lambda_{p}$, obtained by evaluating (\ref{r15}) for a fixed observation height, $z = 0.008\lambda_{p}$, and in-plane radial distance, $\rho=0.08\lambda_{p}$, with $\lambda_{p} = 2 \pi c / \omega_{p}$. Four values of loss are considered, such that $\Gamma= 10^{-4}\omega_{p}$ (a), $\Gamma=10^{-5}\omega_{p}$ (b), $\Gamma=3\times10^{-6}\omega_{p}$ (c), and $\Gamma=0$ (d). These results are normalized with respect to the beam maximum extracted from the field profile shown in (a).}\label{sppvsloss} \end{figure} \subsection{Quasi-Static Approximation}\label{quasi1} Further insight can be gained from a quasi-static approximation, in which the electric field is written in terms of the electrostatic potential, $\phi_{k}$, such that $E_{k}\approx-\nabla\phi_{k}$, assuming the associated magnetic field is negligible. Solving Gauss' law in both the isotropic and gyrotropic media, and applying boundary conditions for the tangential components of the electric field at each interface, the electric potential for a symmetric slab (centered at $z=0$) is obtained as \begin{widetext} \begin{equation} \phi_{k}=e^{j\mathbf{k}_{s}\cdot\mathbf{r}}\left\{ \begin{array} [c]{cc} \left[ jC_{1}\sinh\left( \tilde{k}_{s}h/2\right) +C_{2}\cosh\left( \tilde{k}_{s}h/2\right) \right] e^{-k_{s}\left( z-h/2\right) } & z>h/2\\ jC_{1}\sinh\left( \tilde{k}_{s}z\right) +C_{2}\cosh\left( \tilde{k}_{s}z\right) & -h/2<z<h/2\\ \left[ -jC_{1}\sinh\left( \tilde{k}_{s}h/2\right) +C_{2}\cosh\left( \tilde{k}_{s}h/2\right) \right] e^{k_{s}\left( z+h/2\right) } & z<-h/2 \end{array} \right., \label{r29} \end{equation} \end{widetext} where $\tilde{k}_{s}=\sqrt{k_{x}^{2}+\varepsilon_{a}k_{y}^{2}/\varepsilon_{t}}$, $h$ denotes the slab thickness, and $C_{1}$ and $C_{2}$ are parameters that can be obtained by applying the mode orthogonality condition. Enforcing continuity of the normal components of the electric displacement at the two interfaces leads to the quasi-static SPP dispersion relation \begin{equation} \varepsilon_{g}^{2}k_{x}^{2}-\varepsilon_{t}^{2}\tilde{k}_{s}^{2}-k_{s}^{2}=2\varepsilon_{t}k_{s}\tilde{k}_{s}\coth\left( \tilde{k}_{s}h\right). \label{quasidisp} \end{equation} The quasi-static approximation is valid only for SPPs with short wavelength ($k_{s}\rightarrow\infty$). In the limit $h\rightarrow\infty$, the dispersion relation reduces to that derived for a single interface \cite{3-PRA}, \begin{equation} k_{s}+k_{x}\varepsilon_{g}+\tilde{k}_{s}\varepsilon_{t}=0. \label{quasidisp2} \end{equation} Figure \ref{Quasi} shows the solutions to the quasi-static relation (\ref{quasidisp}) for several values of cyclotron frequency, representing the SPP resonance in the quasi-static limit. For a given $\omega$ value, there are four values of $\phi_{s}$, two of which correspond to the forward beams while the other two correspond to the backward beams (see Fig. \ref{sppvsloss}). In the presence of a magnetic bias, the SPP resonance depends on the direction of the SPP modes; however, it is independent of the slab thickness for large values of $k_{s}$. Numerically, we find that in the absence of magnetic bias the SPP resonance approaches $\lim_{\omega _{c}\rightarrow 0}\omega _{SPP}=\omega _{p}/\sqrt{2}$, which shows that the SPPs become direction-independent in this limit, as expected. The quasi-static dispersion in Fig.
\ref{Quasi} suggests that four beams may be present in the scattered field profile for operating frequencies that fall within the SPP resonant range $\omega^{-}<\omega<\omega^{+}$. For example, consider an operating frequency of $\omega = 0.65\omega_{p}$ and cyclotron frequency $\omega_{c} = 0.4\omega_{p}$. From the quasi-static dispersion, we find that the in-plane wave vector, and hence the phase velocity, of the SPP (approximately) makes an angle $\phi_{s} \in \left\{ 60^{\circ},120^{\circ},240^{\circ},300^{\circ} \right\}$ with respect to the x-axis. The group velocity (i.e., the direction of energy flow, as indicated by the direction of the beams) of the SPP is perpendicular to the phase velocity and therefore makes an angle $\phi_{s}+90^{\circ} \in \left\{ 150^{\circ},210^{\circ},330^{\circ},30^{\circ}\right\}$ with respect to the x-axis. In the low-loss limit, the scattered field profile shows four beams at the expected aforementioned angles with respect to the x-axis (see Fig. \ref{sppvsloss}d). However, for a lossy slab, we find that only two beams are present on any given surface, at angles $\phi_{s}+90^{\circ} \in \left\{ 330^{\circ},30^{\circ}\right\}$ (top) and $\phi_{s}+90^{\circ} \in \left\{ 150^{\circ},210^{\circ}\right\}$ (bottom) (see Fig. \ref{sppvsloss}a). That is, the quasi-static analysis provides four symmetric beams, two of which will be excited on a given interface (top or bottom). \begin{figure}[!htb]\includegraphics[width=0.99\columnwidth]{Quasi3-eps-converted-to.pdf} \caption{Solutions to the quasi-static SPP dispersion relation (\ref{quasidisp}) for a finite slab of thickness $h=0.25\lambda_p$ and wavenumber $k_{s}= 10k_{p} \gg 1/h$. The cyclotron frequency ranges from $0$ to $0.4\omega_{p}$. From these results, we find that for a given operating frequency, a maximum of four beams is possible in the SPP beam pattern. Additionally, we find that as the magnetic bias increases, the SPP resonant range also increases.} \label{Quasi} \end{figure} \section{Conclusion} We have investigated the behavior of surface plasmon polaritons propagating at the interface between vacuum and a gyrotropic plasma, for both infinite- and finite-thickness slab configurations. We have identified a bulk bandgap common to all propagation angles. The operating frequency is chosen to lie within the lower common bandgap, wherein omni-directional, bidirectional, and narrow directional beam patterns are observed. Operating in the bandgap gives the SPP interesting properties that protect it from back-scattering and diffraction in the presence of a discontinuity. The direction of the SPP beams is adjustable with the operating frequency and also with the bias magnetic field. The Green function and the quasi-static approximation to the dispersion have also been obtained for a finite-thickness slab.\label{SectConcl} \section*{Appendix: Dyadic Green function for a finite-thickness slab}\label{ApGreenHalfSpace} \begin{figure}[!thb] \includegraphics[width=0.95\columnwidth]{2D_SLAB-eps-converted-to.pdf} \caption{Cross-sectional view of Fig. \ref{geom}. The top and bottom interfaces are positioned at $z=z_{1} = 0$ and $z=z_{2} = -h$, respectively. Regions (0) and (2) are characterized by $\varepsilon_{r,0}$ and $\varepsilon_{r,2}$, respectively, while Region (1) is characterized by the gyrotropic permittivity tensor, $\bar{\varepsilon}_{r,1}$, defined in (\ref{r1}).
The electric fields associated with plane waves propagating in each region, with group velocity in the $\pm z$ directions, are also shown.} \end{figure} Here, we derive the plane wave reflection and transmission coefficients which relate the tangential components of the electric field reflected from and transmitted through a gyrotropic slab of finite thickness, $h$. As in \cite{3-PRA}, it is important to define a convenient, orthogonal coordinate system in which to expand the amplitude vector of a plane wave propagating in the gyrotropic medium. The set of orthogonal unit vectors that spans this coordinate system is given by $\left\{ \mathbf{\hat{k}}_{t,i}^{m},\mathbf{\hat{y}},\mathbf{\hat{k}}_{t,i}^{m}\times\mathbf{\hat{y}}\right\} $, where $\mathbf{\hat{k}}_{t,i}^{m}=\left(\mathbf{\hat{x}}k_{x}+\mathbf{\hat{z}}mk_{z,i}\right)/k_{t,i}$ for $m\in\left\{ \pm\right\}$ and $i\in\left\{1,2\right\}$. The fields above and below the interface are simply expanded in terms of the Cartesian basis, $\left\{ \mathbf{\hat{x}},\mathbf{\hat{y}},\mathbf{\hat{z}}\right\}$. The relationship between the electric and magnetic fields above and below the slab is given by \begin{equation} \left( \begin{array} [c]{c} \omega\mu_{0}H_{y}^{m}\\ \omega\mu_{0}H_{x}^{m} \end{array} \right) =\left\{ \mathbf{\bar{Y}}^{m},\mathbf{\bar{Y}}_{g} ^{m}\right\} \cdot\left( \begin{array} [c]{c} E_{x}^{m}\\ E_{y}^{m} \end{array} \right) , \label{r40} \end{equation} where the electric and magnetic fields in the dielectric regions are related using \begin{equation} \mathbf{\bar{Y}}^{m}=\frac{1}{mk_{z}}\left( \begin{array} [c]{cc} \left( k_{x}^{2}+k_{z}^{2}\right) & k_{x}k_{y}\\ -k_{x}k_{y} & -\left( k_{y}^{2}+k_{z}^{2}\right) \end{array} \right) ,\label{r50} \end{equation} while the electric and magnetic fields within the gyrotropic plasma are related using \begin{align} \mathbf{\bar{Y}}_{g}^{m}=\left( \begin{array} [c]{cc} -\delta_{1}k_{t,1}^{2} & -\delta_{2}k_{t,2}^{2}\\ k_{y}\phi_{1}^{m} & k_{y}\phi_{2}^{m} \end{array} \right) \cdot\left( \begin{array} [c]{cc} \beta_{1}^{m} & \beta_{2}^{m}\\ k_{y}\theta_{1} & k_{y}\theta_{2} \end{array} \right) ^{-1}.\label{rr} \end{align} Matching the tangential components of the electric and magnetic fields at each interface yields \begin{align} \mathbf{\bar{T}}_{01}\cdot\mathbf{E}_{0}^{-}\left( z_{1}\right) & =\left( \mathbf{\bar{I}}_{s}+\mathbf{\bar{R}}_{01}\right) \cdot\mathbf{E}_{0}^{-}\left( z_{1}\right),\label{r61}\\ \mathbf{\bar{T}}_{10}\cdot\mathbf{E}_{1}^{+}\left( z_{1}\right) & =\left( \mathbf{\bar{I}}_{s}+\mathbf{\bar{R}}_{10}\right) \cdot\mathbf{E}_{1}^{+}\left( z_{1}\right),\label{r62}\\ \mathbf{\bar{T}}_{12}\cdot\mathbf{E}_{1}^{-}\left( z_{2}\right) & =\left( \mathbf{\bar{I}}_{s}+\mathbf{\bar{R}}_{12}\right) \cdot\mathbf{E}_{1}^{-}\left( z_{2}\right),\label{r63}\\ \mathbf{\bar{Y}}_{g}^{-}\cdot\mathbf{\bar{T}}_{01}\cdot\mathbf{E}_{0}^{-}\left( z_{1}\right) & =\mathbf{\bar{Y}}^{-}\cdot\mathbf{E}_{0}^{-}\left( z_{1}\right) \nonumber\\ & +\mathbf{\bar{Y}}^{+}\cdot\mathbf{\bar{R}}_{01}\cdot\mathbf{E}_{0}^{-}\left( z_{1}\right),\label{r64}\\ \mathbf{\bar{Y}}^{+}\cdot\mathbf{\bar{T}}_{10}\cdot\mathbf{E}_{1}^{+}\left( z_{1}\right) & =\mathbf{\bar{Y}}_{g}^{+}\cdot\mathbf{E}_{1}^{+}\left( z_{1}\right) \nonumber\\ & +\mathbf{\bar{Y}}_{g}^{-}\cdot\mathbf{\bar{R}}_{10}\cdot\mathbf{E}_{1}^{+}\left( z_{1}\right),\label{r65}\\ \mathbf{\bar{Y}}^{-}\cdot\mathbf{\bar{T}}_{12}\cdot\mathbf{E}_{1}^{-}\left( z_{2}\right) & =\mathbf{\bar{Y}}_{g}^{-}\cdot\mathbf{E}_{1}^{-}\left( z_{2}\right) \nonumber\\ &
+\mathbf{\bar{Y}}_{g}^{+}\cdot\mathbf{\bar{R}}_{12}\cdot\mathbf{E}_{1}^{-}\left( z_{2}\right).\label{r66} \end{align} From (\ref{r61})-(\ref{r66}) we find $\mathbf{\bar{T}}_{nn^{\prime }}=\mathbf{\bar{I}}_{s}+\mathbf{\bar{R}}_{nn^{\prime }}$, where \begin{equation} \mathbf{\bar{R}}_{nn^{\prime}}=\left( \mathbf{\bar{Y}}^{m_{1}}-\mathbf{\bar{Y}}_{g}^{m_{2}}\right) ^{-1}\cdot\left( \mathbf{\bar{Y}}_{g}^{m_{3}}-\mathbf{\bar{Y}}^{m_{3}}\right) , \label{r67} \end{equation} such that \begin{equation} \left( m_{1},m_{2},m_{3}\right) =\left\{ \begin{array} [c]{cc} \left( +,-,-\right) & \left( n,n^{\prime}\right) =\left( 0,1\right) \\ \left( +,-,+\right) & \left( n,n^{\prime}\right) =\left( 1,0\right) \\ \left( -,+,-\right) & \left( n,n^{\prime}\right) =\left( 1,2\right) \end{array} \right. .\label{r69} \end{equation} Furthermore, it is noted that \begin{align} \mathbf{E}_{1}^{-}\left( z_{1}\right) & =\mathbf{\bar{T}}_{01}\cdot\mathbf{E}_{0}^{-}\left( z_{1}\right) +\mathbf{\bar{R}}_{10}\cdot\mathbf{E}_{1}^{+}\left( z_{1}\right),\label{r70}\\ \mathbf{E}_{0}^{+}\left( z_{1}\right) & =\mathbf{\bar{R}}_{01}\cdot\mathbf{E}_{0}^{-}\left( z_{1}\right) +\mathbf{\bar{T}}_{10}\cdot\mathbf{E}_{1}^{+}\left( z_{1}\right),\label{r71}\\ \mathbf{E}_{1}^{+}\left( z_{2}\right) & =\mathbf{\bar{R}}_{12}\cdot\mathbf{E}_{1}^{-}\left( z_{2}\right),\label{r72}\\ \mathbf{E}_{2}^{-}\left( z_{2}\right) & =\mathbf{\bar{T}}_{12}\cdot\mathbf{E}_{1}^{-}\left( z_{2}\right), \label{r73} \end{align} where the electric field associated with a plane wave propagating a distance $h=\left\vert z_{2}-z_{1}\right\vert $ along the $\pm z$ direction within the gyrotropic slab is given by \begin{align} \mathbf{E}_{1}^{-}\left( z_{2}\right) & =\mathbf{\bar{P}}_{E}^{-}\cdot\mathbf{E}_{1}^{-}\left( z_{1}\right),\label{r74}\\ \mathbf{E}_{1}^{+}\left( z_{1}\right) & =\mathbf{\bar{P}}_{E}^{+}\cdot\mathbf{E}_{1}^{+}\left( z_{2}\right), \label{r75} \end{align} where $\mathbf{\bar{P}}_{E}^{m}$ denotes the spatial propagator, which effectively propagates the electric field a distance $h$ through the slab and takes the form \begin{equation} \mathbf{\bar{P}}_{E}^{m}=\mathbf{\bar{U}}_{m}\cdot\mathbf{\bar{P}}^{m}\cdot\mathbf{\bar{U}}_{m}^{-1}, \label{r76} \end{equation} where \begin{align} \mathbf{\bar{U}}_{m} & =\left( \begin{array} [c]{cc} \beta_{1}^{m}/k_{t,1} & \beta_{2}^{m}/k_{t,2}\\ k_{y}\theta_{1}/k_{t,1} & k_{y}\theta_{2}/k_{t,2} \end{array} \right) ,\label{r77}\\ \mathbf{\bar{P}}^{m} & =\left( \begin{array} [c]{cc} e^{jk_{z,1}h} & 0\\ 0 & e^{jk_{z,2}h} \end{array} \right) .\label{r78} \end{align} Using (\ref{r74})-(\ref{r75}) in (\ref{r70})-(\ref{r73}) leads to \begin{align} \mathbf{E}_{0}^{+}\left( z_{1}\right) & =\mathbf{\bar{R}}\cdot \mathbf{E}_{0}^{-}\left( z_{1}\right) ,\label{r79}\\ \mathbf{E}_{2}^{-}\left( z_{2}\right) & =\mathbf{\bar{T}\cdot E}_{0}^{-}\left( z_{1}\right),\label{r80} \end{align} where \begin{align} \mathbf{\bar{R}} & =\mathbf{\bar{R}}_{01}+\mathbf{\bar{T}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime}\cdot\left( \mathbf{\bar{I}}_{s}-\mathbf{\bar{R}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime}\right) ^{-1}\cdot\mathbf{\bar{T}}_{01},\label{r81}\\ \mathbf{\bar{T}} & =\mathbf{\bar{T}}_{12}\cdot\mathbf{\bar{P}}_{E}^{-}\cdot\left( \mathbf{\bar{I}}_{s}-\mathbf{\bar{R}}_{10}\cdot\mathbf{\bar{R}}_{12}^{\prime}\right) ^{-1}\cdot\mathbf{\bar{T}}_{01}, \label{r82} \end{align} such that
$\mathbf{\bar{R}}_{12}^{\prime}=\mathbf{\bar{P}}_{E}^{+}\cdot\mathbf{\bar{R}}_{12}\cdot\mathbf{\bar{P}}_{E}^{-}$. After some algebra, we find that (\ref{r67}), (\ref{r76}), (\ref{r81}), and (\ref{r82}) may be written in numerator/denominator form as \begin{align} \mathbf{\bar{R}}_{nn^{\prime}} & =\frac{1}{\Omega^{nn^{\prime}}}\left( \begin{array} [c]{cc}% \Pi_{11}^{nn^{\prime}} & \Pi_{12}^{nn^{\prime}}/k_{y}\\ k_{y}\Pi_{21}^{nn^{\prime}} & \Pi_{22}^{nn^{\prime}}% \end{array} \right) ,\label{r84}\\ \mathbf{\bar{P}}_{E}^{m} & =\frac{1}{\chi^{m}}\left( \begin{array} [c]{cc}% \Delta_{11}^{m} & \Delta_{12}^{m}/k_{y}\\ k_{y}\Delta_{21} & \Delta_{22}^{m}% \end{array} \right) ,\label{r85}\\ \mathbf{\bar{R}} & =\frac{1}{\Lambda\Omega^{01}}\left( \begin{array} [c]{cc}% \Xi_{11} & \Xi_{12}/k_{y}\\ k_{y}\Xi_{21} & \Xi_{22}% \end{array} \right) ,\label{rr86}\\ \mathbf{\bar{T}} & =\frac{\Omega^{10}\chi^{+}}{\Lambda\Omega^{01}}\left( \begin{array} [c]{cc}% \Psi_{11} & \Psi_{12}/k_{y}\\ k_{y}\Psi_{21} & \Psi_{22}% \end{array} \right) ,\label{rr87}% \end{align} where we define \begin{widetext} \begin{align} \Lambda & =\left( \Omega^{10}\Phi-\Theta_{11}\right) \left( \Omega^{10}\Phi-\Theta_{22}\right) -\Theta_{12}\Theta_{21},\label{58}\\ \Xi_{11} & =\Lambda\Pi_{11}^{01}+\left( \Omega^{10}+\Pi_{11}% ^{10}\right) \left( \Upsilon_{11}\Sigma_{11}+\Upsilon_{12}\Sigma _{21}\right) +\Pi_{12}^{10}\left( \Upsilon_{21}\Sigma_{11}+\Upsilon _{22}\Sigma_{21}\right) ,\label{59}\\ \Xi_{12} & =\Lambda\Pi_{12}^{01}+\left( \Omega^{10}+\Pi_{11}% ^{10}\right) \left( \Upsilon_{11}\Sigma_{12}+\Upsilon_{12}\Sigma _{22}\right) +\Pi_{12}^{10}\left( \Upsilon_{21}\Sigma_{12}+\Upsilon _{22}\Sigma_{22}\right) ,\label{60}\\ \Xi_{21} & =\Lambda\Pi_{21}^{01}+\left( \Omega^{10}+\Pi_{22}% ^{10}\right) \left( \Upsilon_{21}\Sigma_{11}+\Upsilon_{22}\Sigma _{21}\right) +\Pi_{21}^{10}\left( \Upsilon_{11}\Sigma_{11}+\Upsilon _{12}\Sigma_{21}\right) ,\label{61}\\ \Xi_{22} & =\Lambda\Pi_{22}^{01}+\left( \Omega^{10}+\Pi_{22}% ^{10}\right) \left( \Upsilon_{21}\Sigma_{12}+\Upsilon_{22}\Sigma _{22}\right) +\Pi_{21}^{10}\left( \Upsilon_{11}\Sigma_{12}+\Upsilon _{12}\Sigma_{22}\right) ,\label{62}\\ \Psi_{11} & =\left( \Omega^{12}+\Pi_{11}^{12}\right) \left( \Delta_{11}^{-}\Sigma_{11}+\Delta_{12}^{-}\Sigma_{21}\right) +\Pi_{12}% ^{12}\left( \Delta_{21}\Sigma_{11}+\Delta_{22}^{-}\Sigma_{21}\right) ,\label{63}\\ \Psi_{12} & =\left( \Omega^{12}+\Pi_{11}^{12}\right) \left( \Delta_{11}^{-}\Sigma_{12}+\Delta_{12}^{-}\Sigma_{22}\right) +\Pi_{12}% ^{12}\left( \Delta_{21}\Sigma_{12}+\Delta_{22}^{-}\Sigma_{22}\right) ,\label{64}\\ \Psi_{21} & =\left( \Omega^{12}+\Pi_{22}^{12}\right) \left( \Delta_{21}\Sigma_{11}+\Delta_{22}^{-}\Sigma_{21}\right) +\Pi_{21}% ^{12}\left( \Delta_{11}^{-}\Sigma_{11}+\Delta_{12}^{-}\Sigma_{21}\right) ,\label{65}\\ \Psi_{22} & =\left( \Omega^{12}+\Pi_{22}^{12}\right) \left( \Delta_{21}\Sigma_{12}+\Delta_{22}^{-}\Sigma_{22}\right) +\Pi_{21}% ^{12}\left( \Delta_{11}^{-}\Sigma_{12}+\Delta_{12}^{-}\Sigma_{22}\right) ,\label{66}\\ \Omega^{nn^{\prime}} & = m_{1}m_{3}k_{z}\chi^{m_{3}}\left( n_{E}% ^{m_{2}}-\varepsilon_{r,0}\chi^{m_{2}}\right) \nonumber\\ & +jm_{3}\chi^{m_{3}}\left[ \left( k_{y}^{2}+k_{z}^{2}\right) n_{A}% -k_{x}n_{B}^{m_{2}}+k_{x}k_{y}^{2}n_{C}^{m_{2}}-\left( k_{x}^{2}+k_{z}% ^{2}\right) n_{D}^{m_{2}}\right] ,\label{67}\\ \Pi_{11}^{nn^{\prime}} & = k_{z}\left[ \varepsilon_{r}\chi^{m_{2}}% \chi^{m_{3}}+m_{1}m_{3}k_{0}^{2}\left( n_{A}n_{D}^{m_{2}}-n_{B}^{m_{2}}% n_{C}^{m_{3}}\right) \right] \nonumber\\ & +j\left( 
m_{1}\chi^{m_{3}}\left[ \left( k_{x}^{2}+k_{z}^{2}\right) n_{D}^{m_{2}}+k_{x}n_{B}^{m_{2}}\right] -m_{3}\chi^{m_{2}}\left[ \left( k_{y}^{2}+k_{z}^{2}\right) n_{A}+k_{x}k_{y}^{2}n_{C}^{m_{3}}\right] \right) ,\label{68}\\ \Pi_{12}^{nn^{\prime}} & = m_{1}m_{3}k_{z}k_{0}^{2}\left( n_{D}^{m_{2}% }n_{B}^{m_{3}}-n_{D}^{m_{3}}n_{B}^{m_{2}}\right) \nonumber\\ & +j\left[ k_{x}k_{y}^{2}\left( m_{1}n_{D}^{m_{2}}\chi^{m_{3}}-m_{3}% n_{D}^{m_{3}}\chi^{m_{2}}\right) +\left( k_{y}^{2}+k_{z}^{2}\right) \left( m_{1}n_{B}^{m_{2}}\chi^{m_{3}}-m_{3}n_{B}^{m_{3}}\chi^{m_{2}}\right) \right] ,\label{69}\\ \Pi_{21}^{nn^{\prime}} & = m_{1}m_{3}k_{z}k_{0}^{2}n_{A}\left( n_{C}^{m_{3}}-n_{C}^{m_{2}}\right) \nonumber\\ & +j\left[ k_{x}n_{A}\left( m_{3}\chi^{m_{2}}-m_{1}\chi^{m_{3}}\right) +\left( k_{x}^{2}+k_{z}^{2}\right) \left( m_{3}n_{C}^{m_{3}}\chi^{m_{2}% }-m_{1}n_{C}^{m_{2}}\chi^{m_{3}}\right) \right] ,\label{70}\\ \Pi_{22}^{nn^{\prime}} & = k_{z}\left[ \varepsilon_{r,0}\chi^{m_{2}% }\chi^{m_{3}}+m_{1}m_{3}k_{0}^{2}\left( n_{A}n_{D}^{m_{3}}-n_{C}^{m_{2}}% n_{B}^{m_{3}}\right) \right] \nonumber\\ & +j\left( m_{3}\left[ k_{x}n_{B}^{m_{3}}\chi^{m_{2}}+\left( k_{x}% ^{2}+k_{z}^{2}\right) n_{D}^{m_{3}}\chi^{m_{2}}\right] -m_{1}\chi^{m_{3}% }\left[ k_{x}k_{y}^{2}n_{C}^{m_{2}}+\left( k_{y}^{2}+k_{z}^{2}\right) n_{A}\right] \right) ,\label{71}% \end{align} \end{widetext} such that \begin{align} \Phi & =\Omega^{12}\chi^{+}\chi^{-},\label{72}\\ \Upsilon_{11} & =\Delta_{11}^{+}\left( \Pi_{11}^{12}\Delta_{11}^{-}+\Pi _{12}^{12}\Delta_{21}\right) \nonumber\\ & +\Delta_{12}^{+}\left( \Pi_{21}^{12}\Delta_{11}^{-}+\Pi_{22}^{12}% \Delta_{21}\right) ,\label{73}\\ \Upsilon_{12} & =\Delta_{11}^{+}\left( \Pi_{11}^{12}\Delta_{12}^{-}+\Pi _{12}^{12}\Delta_{22}^{-}\right) \nonumber\\ & +\Delta_{12}^{+}\left( \Pi_{21}^{12}\Delta_{12}^{-}+\Pi_{22}^{12}% \Delta_{22}^{-}\right) ,\label{74}\\ \Upsilon_{21} & =\Delta_{21}\left( \Pi_{11}^{12}\Delta_{11}^{-}+\Pi _{12}^{12}\Delta_{21}\right) \nonumber\\ & +\Delta_{22}^{+}\left( \Pi_{21}^{12}\Delta_{11}^{-}+\Pi_{22}^{12}% \Delta_{21}\right) ,\label{75}\\ \Upsilon_{22} & =\Delta_{21}\left( \Pi_{11}^{12}\Delta_{12}^{-}+\Pi _{12}^{12}\Delta_{22}^{-}\right) \nonumber\\ & +\Delta_{22}^{+}\left( \Pi_{21}^{12}\Delta_{12}^{-}+\Pi_{22}^{12}% \Delta_{22}^{-}\right) ,\label{76}\\ \Theta_{11} & =\Pi_{11}^{10}\Upsilon_{11}+\Pi_{12}^{10}\Upsilon _{21},\label{77}\\ \Theta_{12} & =\Pi_{11}^{10}\Upsilon_{12}+\Pi_{12}^{10}\Upsilon _{22},\label{78}\\ \Theta_{21} & =\Pi_{21}^{10}\Upsilon_{11}+\Pi_{22}^{10}\Upsilon _{21},\label{79}\\ \Theta_{22} & =\Pi_{21}^{10}\Upsilon_{12}+\Pi_{22}^{10}\Upsilon _{22},\label{80}\\ \Sigma_{11} & =\left( \Omega^{10}\Phi-\Theta_{22}\right) \left( \Omega^{01}+\Pi_{11}^{01}\right) +\Theta_{12}\Pi_{21}^{01},\label{81}\\ \Sigma_{12} & =\left( \Omega^{10}\Phi-\Theta_{22}\right) \Pi_{12}% ^{01}+\Theta_{12}\left( \Omega^{01}+\Pi_{22}^{01}\right) ,\label{82}\\ \Sigma_{21} & =\left( \Omega^{10}\Phi-\Theta_{11}\right) \Pi_{21}% ^{01}+\Theta_{21}\left( \Omega^{01}+\Pi_{11}^{01}\right) ,\label{83}\\ \Sigma_{22} & =\left( \Omega^{10}\Phi-\Theta_{11}\right) \left( \Omega^{01}+\Pi_{22}^{01}\right) +\Theta_{21}\Pi_{12}^{01},\label{84}% \end{align} and \begin{align} n_{A} & =\varepsilon_{g}k_{t,1}^{2}k_{t,2}^{2}\left( \varpi_{1}\xi _{2}-\varpi_{2}\xi_{1}\right) ,\label{85}\\ n_{B}^{m} & =\varepsilon_{g}\varpi_{1}\varpi_{2}\left( k_{t,1}^{2}\alpha _{2}^{m}-k_{t,2}^{2}\alpha_{1}^{m}\right) ,\label{86}\\ n_{C}^{m} & =k_{t,1}^{2}\zeta_{2}^{m}\xi_{1}-k_{t,2}^{2}\zeta_{1}^{m}\xi _{2},\label{87}\\ n_{D}^{m} & 
=\zeta_{2}^{m}\alpha_{1}^{m}\varpi_{1}-\zeta_{1}^{m}\alpha_{2}^{m}\varpi_{2},\label{88}\\ n_{E}^{m} & =\varepsilon_{g}k_{0}^{2}\left( k_{t,1}^{2}\zeta_{2}^{m}\varpi_{1}-k_{t,2}^{2}\zeta_{1}^{m}\varpi_{2}\right) ,\label{89}\\ \zeta_{i}^{m} & =\varepsilon_{g}k_{x}\varpi_{i}-j\varepsilon_{a}\xi_{i}mk_{z,i},\label{90}\\ \alpha_{i}^{m} & =k_{x}\xi_{i}-j\varepsilon_{g}k_{0}^{2}mk_{z,i},\label{91}\\ \chi^{m} & =k_{t,1}^{2}\varpi_{2}\xi_{1}\alpha_{2}^{m}-k_{t,2}^{2}\varpi_{1}\xi_{2}\alpha_{1}^{m},\label{92}\\ \Delta_{11}^{m} & = k_{t,1}^{2}\varpi_{2}\xi_{1}\alpha_{2}^{m}e^{jk_{z,2}h}\nonumber\\ & -k_{t,2}^{2}\varpi_{1}\xi_{2}\alpha_{1}^{m}e^{jk_{z,1}h},\label{93}\\ \Delta_{12}^{m} & =\varpi_{1}\varpi_{2}\alpha_{1}^{m}\alpha_{2}^{m}\left( e^{jk_{z,2}h}-e^{jk_{z,1}h}\right) ,\label{94}\\ \Delta_{21} & = k_{t,1}^{2}k_{t,2}^{2}\xi_{1}\xi_{2}\left( e^{jk_{z,1}h}-e^{jk_{z,2}h}\right) ,\label{95}\\ \Delta_{22}^{m} & = k_{t,1}^{2}\varpi_{2}\xi_{1}\alpha_{2}^{m}e^{jk_{z,1}h}\nonumber\\ & -k_{t,2}^{2}\varpi_{1}\xi_{2}\alpha_{1}^{m}e^{jk_{z,2}h}.\label{96} \end{align}
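For completeness, a small numerical sketch of the propagator composition (\ref{r76})-(\ref{r78}) is given below; the values assigned to $\beta_{i}^{m}$, $\theta_{i}$, $k_{t,i}$, and $k_{z,i}$ are placeholders only, standing in for the quantities defined above.
\begin{verbatim}
import numpy as np

beta = np.array([1.0 + 0.2j, 0.8 - 0.1j])   # beta_i^m (placeholder values)
theta = np.array([0.5, -0.3])               # theta_i
kt = np.array([2.0, 2.5])                   # k_{t,i}
kz = np.array([1.0 + 0.05j, 1.4 + 0.02j])   # k_{z,i}
ky, h = 1.2, 0.5

U = np.array([beta / kt, ky * theta / kt])  # (r77)
P = np.diag(np.exp(1j * kz * h))            # (r78)
PE = U @ P @ np.linalg.inv(U)               # (r76): spatial propagator
\end{verbatim}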
2,869,038,154,172
arxiv
\section{Introduction} Dirac delta potentials, known in general as point interactions, are among the exactly solvable classes of potentials studied from both physical and mathematical points of view. A detailed review and history of the subject, together with their mathematically rigorous constructions and spectral properties, are given in the monographs \cite{Albeverio2012solvable, AlbeverioKurasov}. The extension of this singular class of delta potentials to potentials supported by a sphere is formally studied in several quantum mechanics textbooks \cite{Demkov, Gottfried, Griffiths}. A precise mathematical treatment of the so-called delta shell potentials is first given in \cite{AntoineGesztesyShabani}, where the addition of a delta potential supported by a point at the center of the sphere is also included in the zero angular momentum sector $l=0$. More general and systematic studies were later developed, and they are known as leaky quantum graphs, where the support of the delta potentials is taken to be a curve or surface \cite{exner2001geometrically, exner2002curvature, exner2002bound, Exner}. The generalizations to delta functions supported on curves and surfaces embedded in manifolds are presented in the works \cite{burak1, burak2}. Such singular interactions have recently been studied numerically \cite{maioli2018exact, azado2021quantum} as a model for circular/spherical billiards. Higher-dimensional delta shell interactions have also been studied in the literature from the differential equations point of view \cite{Demiralp2003properties} using partial wave analysis. Small geometric deformations of the support of the delta potentials have recently attracted some attention \cite{exner2009geometric}, where small area-preserving deformations can give rise to isolated eigenvalues. Furthermore, the scattering theory for delta potentials supported by locally deformed planes is constructed in \cite{CacciapuotiFermiPosilicano}. In this paper, we consider the Schr\"{o}dinger operators with the following types of interactions and study their bound state and scattering spectra: \begin{itemize} \item[(i)] Delta potential supported by a circle and delta potential supported by a point outside of the circle. \item[(ii)] Delta potential supported by a sphere and delta potential supported by a point outside of the sphere. \item[(iii)] Delta potential supported by a small deformation in the normal direction of a circle. \item[(iv)] Delta potential supported by a small deformation in the normal direction of a sphere. \end{itemize} It is well known that the resolvent of each separate interaction (a delta potential supported by a point, a curve, or a surface) can be expressed in terms of the resolvent of the free Hamiltonian; such expressions are commonly known as Krein's formula in the literature \cite{Albeverio2012solvable, AlbeverioKurasov, Exner}. To find the resolvent of the above hybrid types of potentials, we first regularize the ill-defined interaction terms by finite rank projections acting on the Hilbert space and then find the regularized resolvent associated with these regularized Hamiltonians. Taking the strong limit of these regularized resolvents as the regularization parameter is removed then allows us to define a self-adjoint operator corresponding to the limit, thanks to the Trotter-Kato theorem.
If the support of the interaction has codimension two or three, it is well known that we need to renormalize the problem; see e.g. \cite{Huang, Jackiw} for the point interactions in two and three dimensions. In this case, we need to choose the coupling constants or strengths as functions of the regularization parameter in such a way that the limit converges. For an alternative treatment of scattering from coexisting point and line defects, see the recent work \cite{mostafazadeh}. For the sake of brevity, we only present the construction of the self-adjoint operator associated with the first system (i) and skip the technical details of the construction of the self-adjoint Hamiltonians associated with the other systems (ii)-(iv), since the idea of the construction is essentially the same. The main results of the paper are the explicit determination of the bound state energies and differential cross sections for each system (i)-(iv), and the proof that the change in the bound state energies under small deformations in the normal direction of the circle/sphere coincides with the first order bound state energy calculated for a delta potential supported by a circle/sphere whose radius equals the original radius plus the average of the deformation. The method developed in this paper is in fact rather general and can, in principle, also be applied to delta potentials supported by other curves and surfaces. \begin{mynotation*} The Dirac delta function supported by a point $\mathbf{a}$ is defined on test functions $\psi$ by $\langle \delta_{\mathbf{a}}| \psi \rangle =\langle \mathbf{a}| \psi \rangle := \psi(\mathbf{a})$. Similarly, the Dirac delta function $\delta_{\Gamma}$ supported by the curve $\Gamma$ and the Dirac delta function $\delta_{\Sigma}$ supported by the surface $\Sigma$ are defined by their action on $\psi$ \cite{Appel2007mathematics} \begin{eqnarray} \langle \delta_{\Gamma}| \psi \rangle = \langle \Gamma | \psi \rangle & := & \frac{1}{L(\Gamma)} \int_{\Gamma} \psi \; d s \;, \\ \langle \delta_{\Sigma}| \psi \rangle = \langle \Sigma | \psi \rangle & := & \frac{1}{A(\Sigma)} \int_{\Sigma} \psi \; d A \;, \end{eqnarray} where $ds$ is the integration element over the curve $\Gamma$ and $d A$ is the integration element over the surface $\Sigma$. For the circle $\Gamma=S^1$, $ds=R d \theta$ and $L(\Gamma)=2 \pi R$. For the sphere $\Sigma=S^2$, $d A =R^2 \sin \theta d \theta d \phi$ and $A(\Sigma)=4 \pi R^2$. \end{mynotation*} The paper is organized as follows. In Section \ref{Delta Potential Supported by a Circle and a Point}, we explicitly show that there exists a self-adjoint operator associated with the initial formal Hamiltonian whose interaction contains a delta potential supported by a circle centered at the origin and a delta potential supported by a point outside of this circle (the point being inside does not present any difficulties and can equally well be considered). Then, we briefly discuss the bound state analysis as well as the scattering solutions. Section \ref{Delta Potential Supported by a Sphere and a Point} deals with the bound state spectrum and scattering states for the delta potential supported by a sphere centered at the origin and a point outside of the sphere. Moreover, we study how small deformations of the circle and the sphere in the normal directions change the bound state spectrum and scattering properties in Sections \ref{Small Deformations of a Circle} and \ref{Small Deformations of a Sphere}. 
Finally, Appendix A is devoted to the Trotter-Kato theorem, which is needed to prove the self-adjointness of the Hamiltonian. \section{Delta Potential Supported by a Circle and a Point} \label{Delta Potential Supported by a Circle and a Point} We first consider the delta potential supported by a circle and a point, given formally in Dirac notation by \begin{eqnarray} H = H_0 -\lambda_1 |\mathbf{a} \rangle \langle \mathbf{a}| - \lambda_2 |\Gamma \rangle \langle \Gamma | \;, \end{eqnarray} where $\Gamma$ is the circle centered at the origin with radius $R$. We shall use units such that $\hbar=2m=1$ for simplicity. In order to make sense of the above expression, we first regularize the Hamiltonian $H$ by the heat kernel $K_{\epsilon/2}$ in the following way: \begin{eqnarray} \label{regularizedH1} H_{\epsilon}= H_0 - \lambda_1(\epsilon) |\mathbf{a}^{\epsilon}\rangle \langle \mathbf{a}^{\epsilon}| - \lambda_2 |\Gamma^{\epsilon} \rangle \langle \Gamma^{\epsilon} | \;, \end{eqnarray} where \begin{eqnarray} \langle \mathbf{a}^{\epsilon}|\psi \rangle & = & \int_{\mathbb{R}^2} K_{\epsilon/2}(\mathbf{r}, \mathbf{a}) \psi(\mathbf{r}) \; d^2 r \;, \\ \langle \Gamma^{\epsilon} |\psi \rangle & = & \frac{1}{L(S^1)} \int_{S^1} \left(\int_{\mathbb{R}^2} K_{\epsilon/2}(\mathbf{r}, \boldsymbol{\gamma}(s)) \psi(\mathbf{r}) \; d^2 r \right) \; d s \;. \label{definitionaepsgammaeps} \end{eqnarray} Here $\epsilon>0$ is the regularization parameter, or cut-off, and $\boldsymbol{\gamma}(s)=(R\cos (s/R), R \sin (s/R))$ is the parametrization of the circle $S^1$. The explicit form of the heat kernel in $\mathbb{R}^n$ is given by \cite{Evans} \begin{eqnarray} \label{heatkernel} K_{\epsilon/2}(\mathbf{r}, \mathbf{r}')= \frac{1}{(2 \pi \epsilon)^{n/2}} \; e^{-|\mathbf{r}- \mathbf{r}'|^2/2\epsilon} \;. \end{eqnarray} The strength, or coupling constant, of the point Dirac delta interaction, denoted by $\lambda_1$, is taken to be a function of $\epsilon>0$, whose explicit form will be determined later. In this way, $H_{\epsilon}$ becomes a finite rank perturbation of the free Hamiltonian, so that it is self-adjoint on the domain of $H_0$ thanks to the Kato-Rellich theorem \cite{Reedsimonv2}. This choice of the regularization is based on the fact that the heat kernel converges to the Dirac delta function as $\epsilon \rightarrow 0^+$ in the distributional sense. Such a choice is one of the most natural ones if we consider such singular potentials on manifolds \cite{pointinteractionsonmanifolds1, pointinteractionsonmanifolds2}, and here it is not only useful for the regularization but also allows us to approximate the rather singular interaction supported by a circle by a more regular one. Next, we find the resolvent of the regularized Hamiltonian (\ref{regularizedH1}). For this, we need to solve the following inhomogeneous Schr\"{o}dinger equation \begin{eqnarray} (H_{\epsilon}-E)|\psi \rangle = |\rho \rangle \;, \label{inhomogenoussch1} \end{eqnarray} for a given function $\langle \mathbf{r}|\rho \rangle= \rho(\mathbf{r}) \in L^2(\mathbb{R}^2)$. The existence of the solution is guaranteed by the basic self-adjointness criterion $\Ran(H_{\epsilon}-E)=L^2(\mathbb{R}^2)$ for at least one $E$ in the upper half-plane and one in the lower half-plane \cite{Reedsimonv2}. 
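As a quick numerical illustration of the regularization just introduced (a sketch only, not part of the construction; Python with NumPy/SciPy is assumed, and the test function and the location $\mathbf{a}$ are arbitrary choices), one can check that the smeared functional $\langle \mathbf{a}^{\epsilon}|\psi \rangle$ converges to $\psi(\mathbf{a})$ as $\epsilon \to 0^+$, in accordance with the heat kernel converging to the Dirac delta function:

\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

a = np.array([1.0, 0.5])                    # location of the point defect (arbitrary)
psi = lambda x, y: np.exp(-(x**2 + y**2))   # smooth test function (arbitrary)

def smeared(eps):
    # <a^eps|psi> = int K_{eps/2}(r, a) psi(r) d^2 r; substituting
    # r = a + sqrt(eps) u turns the kernel into a fixed standard Gaussian in u
    f = lambda v, u: np.exp(-(u**2 + v**2)/2)/(2*np.pi) \
        * psi(a[0] + np.sqrt(eps)*u, a[1] + np.sqrt(eps)*v)
    return dblquad(f, -10, 10, -10, 10)[0]

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, smeared(eps))    # tends to psi(a) = exp(-1.25) as eps -> 0+
\end{verbatim}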
By defining $|\mathbf{a}^{\epsilon} \rangle = |f_1(\epsilon) \rangle $ and $|\Gamma^{\epsilon} \rangle = |f_2(\epsilon) \rangle$, we can express the interaction as a sum of rescaled projection operators: \begin{eqnarray} H_{\epsilon} =H_0 - \sum_{j=1}^{2} |\tilde{f}_j(\epsilon) \rangle \langle \tilde{f}_j(\epsilon)| \;, \end{eqnarray} where $|\tilde{f}_i(\epsilon) \rangle = \sqrt{\lambda_i(\epsilon)}|f_i(\epsilon) \rangle$ in $L^2(\mathbb{R}^2)$. Then, applying the free resolvent $R_0(E)=(H_0-E)^{-1}$, defined on the resolvent set $\rho(H_0)=\mathbb{C} \setminus [0,\infty)$, to the equation (\ref{inhomogenoussch1}), we find \begin{eqnarray} |\psi \rangle = R_0(E) |\rho \rangle + R_0(E) \sum_{j=1}^{2} |\tilde{f}_j(\epsilon) \rangle \langle \tilde{f}_j(\epsilon)| \psi \rangle \;. \label{psimodel1unknown} \end{eqnarray} The right hand side of this expression involves the unknown complex numbers $\langle \tilde{f}_j(\epsilon)| \psi \rangle $. In order to find them, we project this equation onto $\langle \tilde{f}_i(\epsilon)|$ and isolate the $j=i$ term in the summation to get the following matrix equation \begin{eqnarray} \sum_{j=1}^{2} \tilde{\Phi}_{ij}(\epsilon, E) \langle \tilde{f}_j(\epsilon)|\psi \rangle = \langle \tilde{f}_i(\epsilon) |R_0(E)|\rho \rangle \;, \label{phimatrixequation} \end{eqnarray} where \begin{eqnarray} \tilde{\Phi}_{ij} (\epsilon, E) = \begin{cases} 1- \langle \tilde{f}_i(\epsilon)| R_0(E) | \tilde{f}_i(\epsilon) \rangle & i=j \\ - \langle \tilde{f}_i(\epsilon)| R_0(E) | \tilde{f}_j(\epsilon) \rangle & i \neq j \;. \end{cases} \end{eqnarray} Here the Dirac notation $\langle \tilde{f}_i(\epsilon)| R_0(E) | \tilde{f}_i(\epsilon) \rangle $ is used to mean $\langle \tilde{f}_i(\epsilon), R_0(E) \tilde{f}_i(\epsilon) \rangle$. Assume that the matrix $\tilde{\Phi}$ is invertible for some subset of the free resolvent set, to be determined below. Then, the solution of (\ref{phimatrixequation}) exists and is unique, and substituting this solution into (\ref{psimodel1unknown}), we get \begin{align} |\psi \rangle = R_0(E) |\rho \rangle + R_0(E) \sum_{i,j=1}^{2} | \tilde{f}_i \rangle \left(\tilde{\Phi}^{-1}(\epsilon, E) \right)_{ij} \langle \tilde{f}_j| R_0(E)|\rho \rangle \;. \end{align} The resolvent of the regularized Hamiltonian can be directly read off from the above result \begin{eqnarray} R(\epsilon, E)= R_0(E) + R_0(E) \sum_{i,j=1}^{2} | \tilde{f}_i (\epsilon)\rangle \left(\tilde{\Phi}^{-1}(\epsilon, E) \right)_{ij} \langle \tilde{f}_j(\epsilon)| R_0(E) \;. \end{eqnarray} It is convenient to express the above sum in the following way \begin{eqnarray} \sum_{i,j=1}^{2} | \tilde{f}_i (\epsilon) \rangle \left(\tilde{\Phi}^{-1}(\epsilon, E) \right)_{ij} \langle \tilde{f}_j(\epsilon)| & = & \Tr\left(\tilde{F}(\epsilon) \tilde{\Phi}^{-1}(\epsilon, E)\right) \;, \label{regularizedresolvent1ststep} \end{eqnarray} where we have defined the matrix $\tilde{F}_{ij} := | \tilde{f}_i \rangle \langle \tilde{f}_j|$. If we define the diagonal matrix $D_{ij}(\epsilon):= \sqrt{\lambda_{i}(\epsilon)} \delta_{ij}$, we can decompose $\tilde{F}=D F D$, where $F_{ij}=|f_i\rangle \langle f_j|$. 
This helps us to write the summation term (\ref{regularizedresolvent1ststep}) as $\Tr\left(\tilde{F} \tilde{\Phi}^{-1}\right) = \Tr(D F D \tilde{\Phi}^{-1}) = \Tr(F D \tilde{\Phi}^{-1} D) = \Tr\left(F \Phi^{-1}\right)$, where $\Phi$ is related to $\tilde{\Phi}$ by $\Phi=D^{-1} \tilde{\Phi} D^{-1}$ and is given by \begin{eqnarray} \label{regularizedPhi1} \Phi_{ij}(\epsilon, E)= \begin{cases} \frac{1}{\lambda_{i}(\epsilon)} - \langle f_i(\epsilon)| R_0(E) |f_i(\epsilon) \rangle & i=j \\ - \langle f_i(\epsilon)| R_0(E) |f_j(\epsilon) \rangle & i \neq j \;. \end{cases} \end{eqnarray} Hence, we explicitly find the resolvent formula for the regularized Hamiltonian \begin{eqnarray} R(\epsilon, E)= R_0(E) + R_0(E) \sum_{i,j=1}^{2} | f_i (\epsilon)\rangle \left(\Phi^{-1}(\epsilon, E) \right)_{ij} \langle f_j(\epsilon)| R_0(E) \;. \label{regularizedresolvent} \end{eqnarray} We now claim that $E \in \rho(H_0)$ lies in the resolvent set of $H_{\epsilon}$ if and only if the matrix $\Phi(\epsilon, E)$ is invertible. To prove this, we first assume that $\Phi(\epsilon, E)$ is invertible for some $E \in \rho(H_0)$. From the triangle inequality, we have \begin{eqnarray} ||R(\epsilon, E)|\psi \rangle|| \leq ||R_0(E)|\psi \rangle || + 4 \max_{1\leq i,j \leq 2} |\left(\Phi^{-1}(\epsilon, E) \right)_{ij}| \; |\langle f_j(\epsilon)| R_0(E)|\psi \rangle| \; ||R_0(E) | f_i (\epsilon)\rangle|| \;. \label{Rpsibound} \end{eqnarray} We need to show that the right hand side of this inequality is a bounded function of $E$, where $E$ lies in $\rho(H_0)$ and satisfies $\det \Phi(\epsilon, E) \neq 0$. Moreover, this bound must also be a regular function of $\epsilon$, since we will consider the limit $\epsilon \to 0^+$ by appropriately choosing $\lambda_1(\epsilon)$, as we will show later on. A direct application of the Cauchy-Schwarz inequality to the inner product on the right hand side of the above inequality does not yield a regular estimate in $\epsilon$, since the norms of the functions $f_i(\epsilon)$ are not regular. For this reason, we let the adjoint of the bounded free resolvent operator act on the first entry of the inner product and then apply the Cauchy-Schwarz inequality \begin{eqnarray} |\langle f_i(\epsilon)| R_0(E)|\psi \rangle| \leq ||R_0(E^*)|f_i(\epsilon)\rangle|| \,||\psi||< \infty \;, \end{eqnarray} where we have used the fact that $R_0^\dag (E)=R_0(E^*)$. Since $E$ is inside the resolvent set of $H_0$, the expression $||R_0(E) | f_i (\epsilon)\rangle||$ and the inner product on the right hand side of the inequality (\ref{Rpsibound}) are finite as long as $|f_i(\epsilon) \rangle$ lies in $L^2(\mathbb{R}^2)$. However, we must also show that these bounds are regular in $\epsilon$ as $\epsilon \to 0^+$. It is easy to see that \begin{eqnarray} ||R_0(E^*)|\mathbf{a}^{\epsilon} \rangle||^2 = \int_{\mathbb{R}^2} \frac{|\langle \mathbf{p}|\mathbf{a}^{\epsilon} \rangle|^2}{(p^2-E)(p^2 - E^*)} \; \frac{d^2 p}{(2\pi)^2} \;. \label{normsquareofR0a} \end{eqnarray} Using (\ref{definitionaepsgammaeps}) and the explicit form of the heat kernel (\ref{heatkernel}), we find \begin{eqnarray} \langle \mathbf{p}|\mathbf{a}^{\epsilon} \rangle = \frac{e^{-i \mathbf{p} \cdot \mathbf{a}}}{2\pi \epsilon} \int_{\mathbb{R}^2} e^{-i \mathbf{p} \cdot (\mathbf{r}-\mathbf{a})} e^{-\frac{|\mathbf{r}-\mathbf{a}|^2}{2\epsilon}}d^2 r \;. 
\end{eqnarray} By writing the integral in polar coordinates and using the integral representation of the Bessel function of the first kind $J_0(x)$ \begin{eqnarray} J_0(x)=\frac{1}{2\pi} \int_{0}^{2\pi} e^{-i x \cos \theta} \; d \theta \label{intrepofbessel1stkind} \end{eqnarray} and the result \cite{gradshteyn2014table} \begin{eqnarray} \int_{0}^{\infty} x^{\nu +1} e^{-\alpha x^2} J_{\nu}(\beta x) \; \; d x = \frac{\beta^{\nu}}{(2\alpha)^{\nu+1}} \; e^{-\frac{\beta^2}{4 \alpha}} \;, \end{eqnarray} we get \begin{eqnarray} \langle \mathbf{p}|\mathbf{a}^{\epsilon} \rangle = e^{- i \mathbf{p} \cdot \mathbf{a}} \; e^{-\epsilon p^2 /2} \;. \label{paepsilon} \end{eqnarray} Substituting this result into (\ref{normsquareofR0a}) yields the following bound \begin{eqnarray} ||R_0(E^*)|\mathbf{a}^{\epsilon} \rangle||^2 \leq \frac{1}{2\pi} \int_{0}^{\infty} \frac{e^{-\epsilon p^2} p}{|p^4 -2 p^2 \Real(E) + (\Real(E)^2 + \Imaginary(E)^2)|} \; d p \;. \end{eqnarray} Except for $E$ on the positive real axis, the above integral converges, and for $\Real(E)<0$ one can estimate it from above by \begin{eqnarray} ||R_0(E^*)|\mathbf{a}^{\epsilon} \rangle||^2 \leq \frac{1}{2\pi} \int_{0}^{\infty} \frac{e^{-\epsilon p^2} p}{p^4 + A} \; d p \;, \end{eqnarray} where $A=\Real(E)^2 + \Imaginary(E)^2$. Thanks to the result (3.354) in \cite{gradshteyn2014table}, we can evaluate the above integral so that \begin{eqnarray} ||R_0(E^*)|\mathbf{a}^{\epsilon} \rangle||^2 \leq \frac{1}{4\pi \sqrt{A}} \left( \ci(\epsilon \sqrt{A}) \sin(\epsilon \sqrt{A}) - \si(\epsilon \sqrt{A}) \cos(\epsilon \sqrt{A}) \right) \;, \label{R0abound} \end{eqnarray} where $\si(x)=-\int_{x}^{\infty} \frac{\sin t}{t} dt$ is the sine integral function, and $\ci(x)=-\int_{x}^{\infty} \frac{\cos t}{t} dt$ is the cosine integral function. It is easy to see that this bound is a regular function of $\epsilon$ for all $A\neq 0$. If $\Real(E)>0$, \begin{eqnarray} ||R_0(E^*)|\mathbf{a}^{\epsilon} \rangle||^2 & = & \frac{1}{4\pi} \int_{0}^{\infty} \frac{e^{-\epsilon u}}{(u-\Real(E))^2+\Imaginary(E)^2} \; d u \nonumber \\ & = & \frac{1}{4\pi} \int_{-\Real(E)}^{0} \frac{e^{-\epsilon (v+\Real(E))}}{v^2+\Imaginary(E)^2} \; d v + \frac{1}{4\pi} \int_{0}^{\infty} \frac{e^{-\epsilon (v+\Real(E))}}{v^2+\Imaginary(E)^2} \; d v \nonumber \\ & \leq & \frac{1}{4\pi} \int_{-\Real(E)}^{0} \frac{1}{v^2+\Imaginary(E)^2} \; d v + \frac{1}{4\pi} \int_{0}^{\infty} \frac{e^{-\epsilon v}}{v^2+\Imaginary(E)^2} \; d v \; , \label{R0abound2} \end{eqnarray} which is finite and regular in $\epsilon$ for the same reason as above. We can similarly show that the norm \begin{eqnarray} ||R_0(E^*)|\Gamma^{\epsilon} \rangle||^2 = \int_{\mathbb{R}^2} \frac{|\langle \mathbf{p}|\Gamma^{\epsilon} \rangle|^2}{(p^2-E)(p^2 - E^*)} \; \frac{d^2 p}{(2\pi)^2} \;, \label{normsquareofR0g} \end{eqnarray} is a bounded function of $E$ on $\rho(H_0)$ and regular in $\epsilon$. For this, we need to find \begin{eqnarray} \langle \mathbf{p} | \Gamma^{\epsilon} \rangle = \frac{1}{L} \int_{\mathbb{R}^2} e^{i \mathbf{p} \cdot \mathbf{r}} \left( \int_{S^1} K_{\epsilon/2}(\mathbf{r}, \boldsymbol{\gamma}(s)) \; d s \right) \; d^2 r \;. 
\end{eqnarray} Using the explicit expression of the heat kernel (\ref{heatkernel}) and the integral representation of the modified Bessel function of the first kind \cite{Lebedev1965special} \begin{eqnarray} I_0(x)= \frac{1}{2\pi} \int_{0}^{2\pi} e^{x \cos \theta} d\theta \label{intrepI0} \;, \end{eqnarray} we get \begin{eqnarray} \int_{S^1} K_{\epsilon/2}(\mathbf{r}, \boldsymbol{\gamma}(s)) \; d s = \frac{R}{\epsilon} \; e^{-\frac{(r^2 + R^2)}{2\epsilon}} \; I_0 \left(\frac{R}{\epsilon}r\right) \;. \end{eqnarray} Then, from the result (6.633) in \cite{gradshteyn2014table} \begin{eqnarray} \int_{0}^{\infty} x \, e^{-\alpha x^2} I_{\nu}(\beta x) J_{\nu}(\gamma x) \; d x = \frac{1}{2\alpha} e^{\frac{(\beta^2-\gamma^2)}{4 \alpha}} J_{\nu}\left(\frac{\beta \gamma}{2 \alpha}\right) \;, \end{eqnarray} we obtain \begin{eqnarray} \langle \mathbf{p} | \Gamma^{\epsilon} \rangle = e^{-\frac{\epsilon p^2}{2}} \, J_0(p R) \;. \label{pgammaepsilon} \end{eqnarray} Combining all these results yields \begin{eqnarray} ||R_0(E^*)|\Gamma^{\epsilon} \rangle||^2 = \frac{1}{2\pi} \int_{0}^{\infty} \frac{p \, e^{-\epsilon p^2} J_{0}^{2}(p R)}{(p^2-E)(p^2-E^*)} \; d p \;. \end{eqnarray} Since $J_{0}^{2}(p R) \leq 1$, we obtain the same form of the estimates (\ref{R0abound}) and (\ref{R0abound2}) as above. All this shows that $R(\epsilon,E)$ is bounded for the above values of $E$; that is, if $E \in \rho(H_0)$ and satisfies $\det(\Phi(\epsilon,E)) \neq 0$, then $E \in \rho(H_{\epsilon})$. Conversely, if $E \in \rho(H_{\epsilon})$, then $\det(\Phi(\epsilon,E)) \neq 0$. For this, suppose that $E \in \mathbb{C}\setminus [0, \infty)$ satisfies $\det(\Phi(\epsilon,E))=0$. We need to show that $E \notin \rho(H_{\epsilon})$, i.e., that $E$ lies in the spectrum of $H_{\epsilon}$, or in particular that $E$ is an eigenvalue of $H_{\epsilon}$: \begin{eqnarray} H_{\epsilon} |\psi \rangle = E |\psi \rangle \;, \label{regularizedeigenvalueequation} \end{eqnarray} for some non-zero $|\psi \rangle \in L^2(\mathbb{R}^2)$. The above eigenvalue problem for the regularized Hamiltonian is equivalent to the problem of finding a non-trivial solution $\langle \tilde{f}_j(\epsilon)|\psi \rangle$ of the equation (\ref{phimatrixequation}) with $|\rho\rangle=|0\rangle$: \begin{eqnarray} \sum_{j=1}^{2} \tilde{\Phi}_{ij}(\epsilon,E) \langle \tilde{f}_j(\epsilon)|\psi \rangle =0 \;. \label{eigenvalueequation2ndform} \end{eqnarray} Since the equation (\ref{eigenvalueequation2ndform}) is derived from the eigenvalue equation (\ref{regularizedeigenvalueequation}) of the regularized Hamiltonian, every $E$ satisfying (\ref{regularizedeigenvalueequation}) must satisfy the equation (\ref{eigenvalueequation2ndform}). To prove the converse, we first need to show that $\langle f_i(\epsilon)| R_0(E) |f_j(\epsilon) \rangle \neq 0$ for all $i,j$; otherwise, $\tilde{\Phi}$ would be the identity matrix, which is invertible, and the equation (\ref{eigenvalueequation2ndform}) would imply that $\langle \tilde{f}_j(\epsilon)|\psi \rangle=0$ for all $j$. Expanding explicitly the form of the matrix $\tilde{\Phi}$ in (\ref{eigenvalueequation2ndform}) and using the above fact, it follows that $E$ must satisfy the eigenvalue equation for the regularized Hamiltonian. Hence, we have a non-trivial solution of the linear equation (\ref{eigenvalueequation2ndform}) for $\langle \tilde{f}_j(\epsilon)|\psi \rangle$ with some $|\psi\rangle \in L^2(\mathbb{R}^2)$ if and only if $\det \tilde{\Phi}(\epsilon, E)=\det \Phi(\epsilon, E) =0$. 
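The closed forms (\ref{paepsilon}) and (\ref{pgammaepsilon}) used above are also easy to confirm numerically. The following sketch (an illustration only; Python with NumPy/SciPy is assumed and the parameter values are arbitrary) verifies the intermediate heat kernel identity on the circle and the formula $\langle \mathbf{p}|\Gamma^{\epsilon}\rangle = e^{-\epsilon p^2/2} J_0(pR)$ by direct quadrature; the exponentially scaled Bessel function $\mathrm{i0e}(x)=e^{-x}I_0(x)$ is used to avoid overflow:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, j0

R, eps = 1.0, 0.05   # circle radius and cut-off (arbitrary sample values)

# int_{S^1} K_{eps/2}(r, gamma(s)) ds by direct quadrature over the angle
def circle_heat(r):
    f = lambda th: np.exp(-(r**2 + R**2 - 2*r*R*np.cos(th))/(2*eps))/(2*np.pi*eps)
    return R*quad(f, 0, 2*np.pi)[0]

r = 1.3
# closed form (R/eps) e^{-(r^2+R^2)/2eps} I_0(Rr/eps), rewritten with i0e
closed = (R/eps)*np.exp(-(r - R)**2/(2*eps))*i0e(R*r/eps)
print(circle_heat(r), closed)                  # the two numbers agree

# <p|Gamma^eps> from the radial integral versus e^{-eps p^2/2} J_0(pR)
p = 3.0
radial = lambda s: s*np.exp(-(s - R)**2/(2*eps))*i0e(R*s/eps)*j0(p*s)
val = quad(radial, 0, 10)[0]/eps               # integrand is negligible beyond s ~ 3
print(val, np.exp(-eps*p**2/2)*j0(p*R))        # the two numbers agree
\end{verbatim}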
Let us summarize this short result in the following lemma: \begin{mylemma*} \label{lemma1} Let $\lambda_1(\epsilon)$ be a continuous function of $\epsilon$ which converges to zero as $\epsilon \to 0^+$, and let $\lambda_2> 0$ be an arbitrary positive real number. The resolvent of the regularized Hamiltonian $$ H_{\epsilon}=H_0- \sum_{i=1}^{2} \lambda_i(\epsilon) |f_i(\epsilon) \rangle \langle f_i(\epsilon)|$$ is given by \begin{eqnarray*} R(\epsilon, E)= R_0(E) + R_0(E) \sum_{i,j=1}^{2} | f_i (\epsilon)\rangle \left(\Phi^{-1}(\epsilon, E) \right)_{ij} \langle f_j(\epsilon)| R_0(E) \;, \end{eqnarray*} where \begin{eqnarray} \Phi_{ij}(\epsilon, E)= \frac{\delta_{ij}}{\lambda_{i}(\epsilon)} - \langle f_i(\epsilon)| R_0(E) |f_j(\epsilon) \rangle \;, \end{eqnarray} and, for each $\epsilon>0$, its resolvent set is given by $\rho(H_{\epsilon})=\{E \in \rho(H_0): \det \Phi(\epsilon, E) \neq 0 \}$. \end{mylemma*} Now, we consider the limit $\epsilon \rightarrow 0^+$ in order to properly define the initial formal Hamiltonian. For this, we choose $\lambda_1(\epsilon)$ in such a way that the regularized Hamiltonian has a reasonable and non-trivial limit as we remove the cut-off parameter, that is, as $\epsilon \to 0^+$. The off-diagonal elements of the matrix $\Phi(\epsilon, E)$ for $E=-\nu^2$ in the limit $\epsilon \rightarrow 0^+$ can be directly calculated using the Lebesgue dominated convergence theorem and the integral \cite{gradshteyn2014table} \begin{eqnarray} \int_{0}^{\infty} J_{\xi}(a x) J_{\xi}(b x) \frac{x}{x^2 + c^2} \; d x = \begin{cases} K_{\xi}(a c) I_{\xi}(b c) & 0<b<a \\ K_{\xi}(b c) I_{\xi}(a c) & 0<a<b \end{cases} \;, \label{integralofJ0fraction} \end{eqnarray} for $\Real(\xi)>-1$, so that \begin{eqnarray} \lim_{\epsilon \to 0^+} \Phi_{12}(\epsilon, -\nu^2) & = & \lim_{\epsilon \to 0^+} \Phi_{21}(\epsilon, -\nu^2)= - \lim_{\epsilon \to 0^+} \langle \mathbf{a}^{\epsilon}| R_0(-\nu^2)|\Gamma^{\epsilon} \rangle \\ & = & - \frac{1}{2\pi} K_0\left(a \nu\right) I_0\left(R \nu \right) \;. \end{eqnarray} The limit of the second diagonal element of the matrix $\Phi(\epsilon, -\nu^2)$ as $\epsilon \rightarrow 0^+$ can be evaluated easily thanks to the Lebesgue dominated convergence theorem, so we have \begin{eqnarray} \lim_{\epsilon \to 0^+} \Phi_{22}(\epsilon, -\nu^2) & = & \frac{1}{\lambda_2}- \lim_{\epsilon \to 0^+} \langle \Gamma^{\epsilon} |R_0(-\nu^2)|\Gamma^{\epsilon} \rangle = \frac{1}{\lambda_2}- \lim_{\epsilon \to 0^+} \int_{\mathbb{R}^2} \frac{|\langle \mathbf{p}|\Gamma^{\epsilon}\rangle |^2}{p^2 + \nu^2} \frac{d^2 p}{(2\pi)^2} \nonumber \\ & = & \frac{1}{\lambda_2}- \frac{1}{2\pi} I_{0}(\nu R) K_0(\nu R) \;, \label{matrix22element}\end{eqnarray} where we have used the result (\ref{pgammaepsilon}) and the continuity of the integral (\ref{integralofJ0fraction}) in the limiting case $a \to b$. The $\epsilon \to 0^+$ limit of the first diagonal element of the matrix $\Phi(\epsilon, E)$ given in (\ref{regularizedPhi1}) contains a divergent term due to the singular behaviour of the heat kernel around $t=0$: \begin{eqnarray} \lim_{\epsilon \rightarrow 0^+} \langle \mathbf{a}^{\epsilon}|R_0(-\nu^2)|\mathbf{a}^{\epsilon} \rangle = \int_{0}^{\infty} K_t(\mathbf{a}, \mathbf{a}) e^{-t \nu^2} dt \;. 
\end{eqnarray} For this reason, we apply the idea of renormalization: we introduce a new parameter $\mu>0$ and make the following choice of the coupling constant $\lambda_1$ as a function of the regularization parameter $\epsilon$: \begin{eqnarray} \label{barecouplingconstant} \frac{1}{\lambda_1(\epsilon)}:= \int_{0}^{\infty} K_{t+\epsilon}(\mathbf{a},\mathbf{a}) e^{-t\mu^2} d t \;. \end{eqnarray} Applying the method of renormalization to obtain well-defined results for point Dirac delta potentials in two and three dimensions is actually not new; see e.g. \cite{Huang, Jackiw}. After substituting (\ref{barecouplingconstant}) into the first diagonal element of the matrix (\ref{regularizedPhi1}) for $E=-\nu^2$ and then taking the limit $\epsilon \rightarrow 0^+$, we get \begin{eqnarray} \lim_{\epsilon \to 0^+} \Phi_{11}(\epsilon, -\nu^2)= \frac{1}{4\pi} \log(\nu^2/\mu^2) \;. \end{eqnarray} Hence, we define the limit of the matrix $\Phi(\epsilon, -\nu^2)$ as $\Phi(-\nu^2)$, given by \begin{eqnarray} \Phi(-\nu^2) :=\left( \begin{array}{cccc} \frac{1}{4\pi} \log \left(\frac{\nu^2}{\mu^2}\right)& & -\frac{1}{2\pi} K_0\left(\nu a \right) I_0\left(\nu R \right) \\ \\ -\frac{1}{2\pi} K_0\left(\nu a \right) I_0\left(\nu R \right) & & \frac1{\lambda_2}- \frac{1}{2\pi} K_{0}(\nu R) I_0(\nu R) \end{array} \right) \;. \label{formallimitofPhicirclept} \end{eqnarray} The formal limit of the resolvent (\ref{regularizedresolvent}) of the regularized Hamiltonian as $\epsilon\to 0^+$ is given by \begin{eqnarray} \label{resolventcirclepoint} R(E):= R_0(E) + R_0(E) \sum_{i,j=1}^{2} | f_i \rangle \left(\Phi^{-1}(E) \right)_{ij} \langle f_j| R_0(E) \;, \end{eqnarray} for $E =-\nu^2$ satisfying $\det \Phi(E) \neq 0$, where the matrix $\Phi$ is given by (\ref{formallimitofPhicirclept}). Here $|f_1 \rangle = |\mathbf{a} \rangle$ and $|f_2 \rangle= | \Gamma \rangle$. Actually, one can show that the regularized resolvent (\ref{regularizedresolvent}) converges strongly to the expression (\ref{resolventcirclepoint}) as $\epsilon \rightarrow 0^+$ for real negative values of $E$ that satisfy $\det \Phi(E)\neq 0$, that is, \begin{eqnarray} \lim_{\epsilon \to 0^+} || \left(R(\epsilon, E) - R(E)\right) |f \rangle|| = 0 \;, \end{eqnarray} for any $|f\rangle \in L^2(\mathbb{R}^2)$ and $E \in \rho(H_{\epsilon})$. Since $E$ is assumed to satisfy $\det \Phi(E) \neq 0$, we conclude that $\det \Phi(\epsilon, E) \neq 0$ for sufficiently small $\epsilon>0$. Then, if we show \begin{eqnarray} \lim_{\epsilon \to 0^+} ||R_0(E)|f_i(\epsilon)\rangle-R_0(E)|f_i \rangle||=0 \;, \end{eqnarray} the strong convergence of the resolvent easily follows. The above condition is easy to check once we write the norms explicitly: \begin{eqnarray} ||R_0(E)|f_i(\epsilon)\rangle-R_0(E)|f_i \rangle||^2 = \begin{cases} \int_{\mathbb{R}^2} \left(1+e^{-\epsilon p^2} -2 e^{-\epsilon p^2/2} \right) \frac{1}{(p^2-E)^2}\frac{d^2 p}{(2\pi)^2} & \; \text{for} \; i=1 \\ \int_{\mathbb{R}^2} \left(1+e^{-\epsilon p^2} -2 e^{-\epsilon p^2/2} \right) \frac{J_{0}^{2}(p R)}{(p^2-E)^2}\frac{d^2 p}{(2\pi)^2} & \; \text{for} \; i=2 \end{cases} \;, \end{eqnarray} where we have used the equations (\ref{paepsilon}) and (\ref{pgammaepsilon}). The Lebesgue dominated convergence theorem then implies that this expression vanishes as $\epsilon \to 0^+$. Hence, we have proved the following: \begin{mylemma*} \label{lemma2} Let $E$ be a real negative number that satisfies $\det \Phi(E) \neq 0$. 
Then, the resolvent $R(\epsilon, E)$ of the regularized Hamiltonian $H_{\epsilon}$ converges strongly to the expression $R(E)$ given by (\ref{resolventcirclepoint}) as $\epsilon \to 0^+$. \end{mylemma*} It is now natural to ask whether the above limiting expression is the resolvent of some self-adjoint operator. This can be answered affirmatively by following the ideas given in \cite{Albeverio2012solvable} or in \cite{existence}. Here we essentially follow an argument similar to the one presented in \cite{rajeevdimock}, developed for point interactions in the plane. To show this, we first prove that the limit operator $R(E)$, for real negative values of $E$ satisfying $\det \Phi(E)\neq 0$, is invertible (equivalently, $\Ker(R(E))=\{|0 \rangle \}$). Suppose that $R(E)|f\rangle = |0 \rangle $ for some $|f \rangle \in L^2(\mathbb{R}^2)$. Applying $H_0-E$ to the explicit expression (\ref{resolventcirclepoint}) of the operator $R(E)$ and writing the resulting equation in momentum representation, we find \begin{eqnarray} & & \hat{f}(\mathbf{p}) = - \sum_{i,j=1}^{2} \langle \mathbf{p}|f_i \rangle \left[\Phi^{-1}(E)\right]_{ij} \int_{\mathbb{R}^2} \langle f_j | R_0(E) | \mathbf{q} \rangle \langle \mathbf{q}|f \rangle \frac{d^2 q}{(2\pi)^2} \;. \end{eqnarray} By the Cauchy-Schwarz inequality, we have \begin{eqnarray} \int_{\mathbb{R}^2} \langle \mathbf{a} | R_0(E) | \mathbf{q} \rangle \langle \mathbf{q}|f \rangle \frac{d^2 q}{(2\pi)^2} & = & \int_{\mathbb{R}^2} \frac{e^{i \mathbf{q} \cdot \mathbf{a}}}{q^2-E} \hat{f}(\mathbf{q}) \frac{d^2 q}{(2\pi)^2} \nonumber \\ & \leq & \left(\int_{0}^{\infty} \frac{q}{(q^2-E)^2}\; \frac{dq}{2\pi}\right)^{1/2} ||f|| < \infty \;, \label{csbound1} \end{eqnarray} and \begin{eqnarray} \int_{\mathbb{R}^2} \langle \Gamma | R_0(E) | \mathbf{q} \rangle \langle \mathbf{q}|f \rangle \frac{d^2 q}{(2\pi)^2} & = & \int_{\mathbb{R}^2} \frac{J_{0}(q R)}{q^2-E} \hat{f}(\mathbf{q}) \frac{d^2 q}{(2\pi)^2} \nonumber \\ & \leq & \left(\int_{0}^{\infty} \frac{q}{(q^2-E)^2}\; \frac{dq}{2\pi}\right)^{1/2} ||f|| < \infty \;. \label{csbound2} \end{eqnarray} With the above bounds (\ref{csbound1}) and (\ref{csbound2}), we see that \begin{eqnarray} & & \hskip-2cm \hat{f}(\mathbf{p}) = - \Bigg[ e^{-i \mathbf{p} \cdot \mathbf{a}} \left( \left[\Phi^{-1}(E)\right]_{11} C_1 + \left[\Phi^{-1}(E)\right]_{12} C_2 \right) \nonumber \\ & & \hspace{3cm} + \, J_{0}(p R) \left( \left[\Phi^{-1}(E)\right]_{21} C_1 + \left[\Phi^{-1}(E)\right]_{22} C_2 \right) \Bigg] \;, \end{eqnarray} where $C_1$ and $C_2$ are the finite numbers given by (\ref{csbound1}) and (\ref{csbound2}), and $E$ is a negative real number that satisfies $\det \Phi(E) \neq 0$. However, this $\hat{f}(\mathbf{p})$ cannot be in $L^2(\mathbb{R}^2)$ unless $|f \rangle = |0 \rangle$. This allows us to define an operator $H$, depending on the parameter $\mu$, via \begin{eqnarray} R(E):= (H(\mu)-E)^{-1} \end{eqnarray} for the above values of $E$. Hence, we have \begin{mylemma*} \label{lemma3} Let $E$ be a real negative number that satisfies $\det \Phi(E) \neq 0$. Then, $R(E)$ is invertible. \end{mylemma*} After all these preliminary steps, together with a version of the Trotter-Kato theorem quoted in Appendix A (see also \cite{rajeevdimock}), it follows that $R(\epsilon, E)$ converges strongly to $R(E)$ as $\epsilon \to 0^+$ for all complex numbers $E$ outside the interval $[0, \infty)$ that satisfy $\det \Phi(E)\neq 0$. 
Moreover, there exists a self-adjoint operator $H(\mu)$ such that $R(E)=(H(\mu)-E)^{-1}$, where the matrix $\Phi$ for complex values of $E$ is defined through its analytic continuation, given by \begin{eqnarray} \Phi(k^2)=\left( \begin{array}{cccc} \frac{1}{4\pi} \log \left(-\frac{k^2}{\mu^2}\right)& & -\frac{1}{2\pi} K_0\left(-i k a \right) I_0\left(-i k R \right) \\ \\ -\frac{1}{2\pi} K_0\left(-i k a \right) I_0\left(-i k R\right) & & \frac1{\lambda_2}- \frac{1}{2\pi} K_{0}(- i k R) I_0(- i k R) \end{array} \right) \;, \end{eqnarray} where we parametrize $E=k^2$ with the square root $k$ chosen unambiguously so that $\Imaginary(k)>0$. We shall call this matrix the principal matrix from now on. Let us summarize the last result as follows: \begin{mytheo*} \label{theo1} For complex $E \notin [0, \infty)$ with $\det \Phi(E) \neq 0$, the resolvent $R(\epsilon, E)$ of the regularized Hamiltonian $H_{\epsilon}$ converges strongly to $R(E)$. Furthermore, there exists a self-adjoint operator $H(\mu)$ such that $R(E)=(H(\mu)-E)^{-1}$. \end{mytheo*} Suppose $E=k^2$, with the square root $k$ chosen so that $\Imaginary(k)>0$, and let $\phi_k(\mathbf{r}) \in D(H_0)=H^{2}(\mathbb{R}^2)$. Thanks to the self-adjointness of $H$, we have \begin{eqnarray} D(H) = (H-k^2)^{-1} L^2(\mathbb{R}^2) = (H-k^2)^{-1}(H_0-k^2)D(H_0) \;. \end{eqnarray} Then, using the explicit form of the resolvent formula (\ref{resolventcirclepoint}), we have the following characterization of the domain of $H$: \begin{eqnarray} D(H)= \left( 1+ \sum_{i,j=1}^{2} R_0(k^2) | f_i \rangle \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j| \right) D(H_0) \;. \end{eqnarray} This means that the domain of $H$ consists of all functions of the following form \begin{eqnarray} \psi(\mathbf{r}) = \phi_k(\mathbf{r}) +\sum_{i,j=1}^{2} \langle \mathbf{r} | R_0(k^2) |f_i \rangle \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \;, \end{eqnarray} where $\langle \mathbf{a}|\phi_k\rangle=\phi_k(\mathbf{a})$, $\langle \Gamma|\phi_k \rangle = \frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s)) \; d s$, and \begin{eqnarray} \hskip-1cm \langle \mathbf{r}|R_0(k^2)|f_1 \rangle = \langle \mathbf{r}|R_0(k^2)|\mathbf{a}\rangle & =& \int_{\mathbb{R}^2} \frac{e^{i \mathbf{p} \cdot (\mathbf{r}-\mathbf{a})}}{p^2-k^2} \; \frac{d^2 p}{(2\pi)^2} = \frac{i}{4} H_{0}^{(1)}(k|\mathbf{r}-\mathbf{a}|) \;, \label{rR0f1} \\ \langle \mathbf{r}|R_0(k^2)|f_2 \rangle = \langle \mathbf{r}|R_0(k^2)|\Gamma \rangle & = & \int_{\mathbb{R}^2} \frac{e^{i \mathbf{p} \cdot \mathbf{r}}}{p^2-k^2} \;J_0(p R) \frac{d^2 p}{(2\pi)^2} \nonumber \\ & = & \int_{0}^{\infty} \frac{p J_0(p r) J_0(p R)}{p^2-k^2} \frac{d p}{(2\pi)} \nonumber \\ & & \hskip-2cm = \frac{i}{4} \left(H_{0}^{(1)}(k r) J_0(k R) \theta(R-r) +H_{0}^{(1)}(k R) J_0(k r) \theta(r-R) \right) \;. \label{rR0f2} \end{eqnarray} We have evaluated the last integral by analytic continuation of the result (\ref{integralofJ0fraction}) and used the facts that $K_0(z)= \frac{i \pi}{2} H_{0}^{(1)}(e^{i \pi/2} z)$ and $I_0(z)=e^{-i \pi/2} J_0(e^{i \pi/2}z)$ for $-\pi < \arg(z) < \pi/2$ \cite{Lebedev1965special}, where $H_{0}^{(1)}$ is the zeroth order Hankel function of the first kind. 
Hence, we obtain \begin{eqnarray} & & \hskip-1cm \psi(\mathbf{r}) = \phi_k(\mathbf{r}) + \frac{i}{4} H_{0}^{(1)}(k|\mathbf{r}-\mathbf{a}|) \Bigg(\left[\Phi^{-1}(k^2)\right]_{11} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{12} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg) \Bigg) \nonumber \\ & & \hspace{2cm} + \, \frac{i}{4} \left(H_{0}^{(1)}(k r) J_0(k R) \theta(R-r) +H_{0}^{(1)}(k R) J_0(k r) \theta(r-R) \right) \nonumber \\ & & \hspace{3cm} \times \, \Bigg(\left[\Phi^{-1}(k^2)\right]_{21} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{22} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg) \Bigg) \;. \label{domaindecomposition} \end{eqnarray} Indeed, the decomposition (\ref{domaindecomposition}) is unique. To see this, let $\psi(\mathbf{r})=0$ identically. Then, it follows from the above decomposition that \begin{eqnarray} & & \hskip-1cm \phi_k(\mathbf{r})= - \frac{i}{4} H_{0}^{(1)}(k|\mathbf{r}-\mathbf{a}|) \Bigg(\left[\Phi^{-1}(k^2)\right]_{11} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{12} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg) \Bigg) \nonumber \\ & & \hspace{2cm} - \, \frac{i}{4} \left(H_{0}^{(1)}(k r) J_0(k R) \theta(R-r) +H_{0}^{(1)}(k R) J_0(k r) \theta(r-R) \right) \nonumber \\ & & \hspace{3cm} \times \, \Bigg(\left[\Phi^{-1}(k^2)\right]_{21} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{22} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg) \Bigg) \;. \label{domaindecomposition2} \end{eqnarray} Since the function $H_{0}^{(1)}(k|\mathbf{r}-\mathbf{a}|)$ is singular at $\mathbf{r}=\mathbf{a}$ and the function $H_{0}^{(1)}(k r) J_0(k R) \theta(R-r) +H_{0}^{(1)}(k R) J_0(k r) \theta(r-R)$ is not smooth across $r=R$, the function $\phi_k(\mathbf{r})$ can only belong to $D(H_0)$ if \begin{eqnarray} \left[\Phi^{-1}(k^2)\right]_{11} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{12} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg) & = & 0 \;, \\ \left[\Phi^{-1}(k^2)\right]_{21} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{22} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg) & = & 0 \;. \end{eqnarray} These conditions, in turn, force $\phi_k$ to vanish identically, so the decomposition (\ref{domaindecomposition}) is unique. It is also straightforward to show that $(H-k^2)^{-1}(H_0-k^2)|\phi_k \rangle =|\psi \rangle$, which is equivalent to $(H-k^2)|\psi \rangle = (H_0-k^2)|\phi_k \rangle$. Although we have shown the existence of the self-adjoint operator $H$ associated with the resolvent $R(E)$, we cannot guarantee that $H$ is of the form $H_0+V$ for some operator $V$. Nevertheless, we can show that $H$ is a local operator in the sense that $\psi(\mathbf{r})=0$ in an open set $U \subseteq \mathbb{R}^2$ implies $H\psi(\mathbf{r})= \langle \mathbf{r}|H|\psi\rangle = 0$ for $\mathbf{r} \in U$. For this, let $\psi(\mathbf{r})=0$ for all $\mathbf{r} \in U$. Then, the function $\phi_k(\mathbf{r})$ for $\mathbf{r} \in U$ is given by equation (\ref{domaindecomposition2}). If $U \cap \left( \{\mathbf{a}\} \cup \Gamma \right) = \emptyset$, the action of $H_0-k^2$ on the function $\phi_k(\mathbf{r})$ vanishes. Since $H_{0}^{(1)}$ is the Green's function of the Helmholtz equation in two dimensions and $J_0(k r)$ satisfies the Helmholtz equation, we get $H\psi(\mathbf{r})= k^2 \psi(\mathbf{r})+(H_0-k^2)\phi_k(\mathbf{r})=0$ in $U$. 
For the case $\mathbf{a} \in U$, the continuity of the function $\phi_k$ at $\mathbf{r}=\mathbf{a}$, together with the equation (\ref{domaindecomposition2}), implies that $\left[\Phi^{-1}(k^2)\right]_{11} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{12} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg)=0$. Similarly, if $\Gamma \cap U \neq \emptyset$, the term $\left[\Phi^{-1}(k^2)\right]_{21} \phi_k(\mathbf{a}) + \left[\Phi^{-1}(k^2)\right]_{22} \bigg(\frac{1}{L} \int_{S^1} \phi_k(\boldsymbol{\gamma}(s))\; d s \bigg)$ must vanish. Hence, we obtain $H\psi(\mathbf{r})=0$ in $U$. Let us summarize the above results as \begin{mytheo*} \label{theo2} The domain of the self-adjoint operator $H$ defined by its resolvent $R(E)=(H-E)^{-1}$ consists of all functions $\psi(\mathbf{r})$ of the following form for $\mathbf{r} \in \mathbb{R}^2 \setminus (\{\mathbf{a}\} \cup \Gamma)$ \begin{eqnarray} \psi(\mathbf{r}) = \phi_k(\mathbf{r}) +\sum_{i,j=1}^{2} F_i(\mathbf{r}) \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \;, \end{eqnarray} where $F_i(\mathbf{r})=\langle \mathbf{r} | R_0(k^2) |f_i \rangle$ is given explicitly by the equations (\ref{rR0f1}) and (\ref{rR0f2}). Here $\phi_k \in D(H_0)=H^{2}(\mathbb{R}^2)$ and $k^2 \in \rho(H)$ with $\Imaginary(k)>0$. The above decomposition is unique and $(H-k^2)|\psi\rangle = (H_0-k^2)|\phi_k\rangle$. Moreover, suppose that $D(H) \ni \psi(\mathbf{r})=0$ in an open set $U \subseteq \mathbb{R}^2$. Then, $H\psi(\mathbf{r})=0$ for all $\mathbf{r} \in U$. \end{mytheo*} \subsection{Bound State Analysis} \label{Bound State Analysis for Circle and Point} It is well known that the point spectrum $\sigma_p$ of an operator $H$ consists of the set of complex numbers $E$ such that $\Ker(H-E)\neq \{|0 \rangle\}$. From the explicit expression of the resolvent $R(k^2)$ given by (\ref{resolventcirclepoint}) for $E=k^2$, poles of the resolvent for $k^2<0$ can only appear if the matrix $\Phi(k^2)$ is singular, that is, if \begin{eqnarray} \det \Phi(k^2)=0 \;.\label{boundstatecondition} \end{eqnarray} Let $|\psi_{ev} \rangle$ be an eigenvector of $H$ with corresponding eigenvalue $E_{ev}=k_{ev}^{2}$, i.e., \begin{eqnarray} H|\psi_{ev} \rangle = E_{ev} |\psi_{ev} \rangle \;, \end{eqnarray} where $|\psi_{ev} \rangle \in D(H)$. Since any function in the domain of $H$ can be decomposed according to Theorem \ref{theo2}, we have \begin{eqnarray} |\psi_{ev}\rangle = |\phi_k \rangle + \sum_{i,j=1}^{2} R_0(k^2) |f_i \rangle \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \;, \label{psiev} \end{eqnarray} for some $k^2 \in \rho(H)$ with $\Imaginary(k)>0$ and $|\phi_k\rangle \in D(H_0)$. Actually, Theorem \ref{theo2} provides another relation between $|\psi_{ev}\rangle$ and $|\phi_k\rangle$: \begin{eqnarray} |\phi_k\rangle = (k_{ev}^{2}- k^2)R_0(k^2)|\psi_{ev}\rangle \;. \label{psievphik} \end{eqnarray} Substituting equation (\ref{psiev}) into (\ref{psievphik}), we find \begin{eqnarray} |\phi_k \rangle = (k_{ev}^{2}-k^2) \Bigg(R_0(k^2) |\phi_k \rangle + \sum_{i,j=1}^{2} R_0(k^2) R_0(k^2) |f_i \rangle \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \Bigg) \;. \label{psievphik2} \end{eqnarray} Acting with $H_0-k^2$ on this vector, we obtain \begin{eqnarray} (H_0-k_{ev}^{2})|\phi_k \rangle = (k_{ev}^{2}-k^2) \sum_{i,j=1}^{2} R_0(k^2) |f_i \rangle \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \;, 
\label{solutionphik} \end{eqnarray} or, in momentum representation, \begin{eqnarray} \hat{\phi}_k(\mathbf{p})= \frac{(k_{ev}^{2}-k^2)}{p^2-k_{ev}^{2}} \sum_{i,j=1}^{2} \frac{\langle \mathbf{p}|f_i \rangle}{p^2-k^2} \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \;. \label{solutionphikmomentumspace} \end{eqnarray} If $E_{ev}=k_{ev}^2 \geq 0$, this equation has no nontrivial solutions, since $\hat{\phi}_k$ cannot lie in $L^2(\mathbb{R}^2)$ unless it is identically zero; by the Plancherel theorem, $|\phi_k \rangle$ must then vanish as well. Hence, $\psi_{ev}(\mathbf{r})=0$ for all $\mathbf{r} \in \mathbb{R}^2$, which proves that $H$ has no nonnegative eigenvalues. However, if $E_{ev}=k_{ev}^{2}=-\nu_{*}^2<0$ with $\nu_{*}>0$, it is legitimate to apply $R_0(-\nu_{*}^2)$ to each side of the equation (\ref{solutionphik}), which gives \begin{eqnarray} |\phi_k \rangle = \left(R_0(-\nu_{*}^2) - R_0(k^2)\right) \sum_{i,j=1}^{2} |f_i \rangle \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \;. \label{phiksolutionbound} \end{eqnarray} Inserting this solution into (\ref{psiev}), we formally find the eigenfunctions of $H$ \begin{eqnarray} \psi_{ev}(\mathbf{r})= \sum_{i,j=1}^{2} \langle \mathbf{r} | R_0(-\nu_{*}^2)|f_i \rangle \left[\Phi^{-1}(k^2)\right]_{ij} \langle f_j|\phi_k \rangle \;. \label{eigenfunctionsolution} \end{eqnarray} This formal solution includes the unknown factors $\langle f_j|\phi_k \rangle$. In order to find them, we first note that the principal matrix $\Phi$ can also be expressed purely in terms of the free resolvent kernels, that is, \begin{eqnarray} \label{Phifreeresolventkernel} \Phi(k^2) = \left( \begin{array}{cccc} \langle f_1|\left( R_0(-\mu^2)-R_0(k^2)\right) |f_1 \rangle & & - \langle f_1| R_0(k^2) |f_2 \rangle \\ \\ - \langle f_2| R_0(k^2) |f_1 \rangle & & \frac1{\lambda_2}-\langle f_2|\left( R_0(-\mu^2)-R_0(k^2)\right) |f_2 \rangle \end{array} \right) \;. \end{eqnarray} Then, it is easy to check that \begin{eqnarray} \langle f_i|\left( R_0(-\nu_{*}^2)-R_0(k^2)\right) |f_j \rangle = \Phi_{ij}(k^2)-\Phi_{ij}(-\nu_{*}^2) \;. \label{differenceinresolventkernel} \end{eqnarray} Using this result in (\ref{phiksolutionbound}) after projecting onto $\langle f_{j'}|$, we obtain \begin{eqnarray} \sum_{j=1}^{2} \Phi_{ij}(-\nu_{*}^2) A_j = 0 \;, \label{zeroeigenvalueofPhi} \end{eqnarray} where $A_j= \sum_{i=1}^{2} \left[\Phi^{-1}(k^2)\right]_{ji}\langle f_i |\phi_k \rangle$. This equation tells us that $A=(A_1,A_2)$ is an eigenvector of the matrix $\Phi(-\nu_{*}^2)$ with eigenvalue zero. Conversely, suppose that \begin{eqnarray} |\psi_{ev} \rangle =\sum_{i=1}^{2} R_0(-\nu_{*}^2)|f_i \rangle A_i \;, \label{psievnew} \end{eqnarray} where $A=(A_1,A_2)$ is an eigenvector of $\Phi(-\nu_{*}^2)$ with eigenvalue zero. We will show that $|\psi_{ev} \rangle \in D(H)$ and $H|\psi_{ev}\rangle = -\nu_{*}^2 |\psi_{ev} \rangle$. To show first that $|\psi_{ev}\rangle \in D(H)$, we define \begin{eqnarray} |\phi_k \rangle = (-\nu_{*}^{2}-k^2)R_0(k^2)|\psi_{ev}\rangle \;, \label{phiknew} \end{eqnarray} for some $k^2 \in \rho(H)$ with $\Imaginary(k)>0$. 
Then, it follows easily that $|\phi_k \rangle \in D(H_0)$, and inserting (\ref{psievnew}) into (\ref{phiknew}) and using the first resolvent identity for the free resolvent, we obtain \begin{eqnarray} |\phi_k \rangle = \left(R_0(-\nu_{*}^2)-R_0(k^2)\right) \sum_{i=1}^{2} |f_i \rangle A_i \;, \label{phiknew2} \end{eqnarray} or \begin{eqnarray} |\phi_k \rangle + R_0(k^2) \sum_{i=1}^{2} |f_i \rangle A_i = |\psi_{ev}\rangle \;. \end{eqnarray} Moreover, projecting (\ref{phiknew2}) onto $\langle f_{j'}|$ and using the result (\ref{differenceinresolventkernel}) together with $\Phi(-\nu_{*}^2)A=0$, we find $A_i=\sum_{j=1}^{2} \left[\Phi^{-1}(k^2)\right]_{ij}\langle f_j |\phi_k \rangle$, which is exactly the form required by Theorem \ref{theo2}. Hence, $|\psi_{ev} \rangle \in D(H)$ by Theorem \ref{theo2}. Finally, using the result $(H_0-k^2)|\phi_k \rangle = (H-k^2)|\psi_{ev} \rangle$ of Theorem \ref{theo2} for the eigenstate $|\psi_{ev}\rangle$, together with the equation (\ref{phiknew}), we deduce that \begin{eqnarray} H|\psi_{ev}\rangle = (H_0-k^2)|\phi_k \rangle + k^2 |\psi_{ev}\rangle = -\nu_{*}^2 |\psi_{ev}\rangle \;. \end{eqnarray} It is useful to express the condition (\ref{boundstatecondition}) in terms of a real positive parameter $\nu$, defined by $\nu=-i k>0$. Then, the solutions of the equation $\det \Phi(-\nu^2)=0$ determine the point spectrum, or bound state spectrum, of $H$. However, finding the roots of the equation (\ref{boundstatecondition}) analytically is not possible. Nevertheless, we may obtain some information about the bound states as follows. First, suppose that the principal matrix $\Phi$ has an eigenvector $A$ associated with the eigenvalue $\omega$, \begin{eqnarray} \Phi A =\omega A \;. \end{eqnarray} The eigenvalues can be explicitly calculated (writing $\lambda$ for $\lambda_2$): \begin{align} \omega_{1}(\nu) & = \frac{1}{4 \pi \lambda } \Bigg\{ 2 \pi + \lambda \log \left(\frac{\nu }{\mu }\right)-\lambda I_0(\nu R) K_0(\nu R) - \Bigg[ \lambda^2 I_{0}^{2}(\nu R) \left(4 K_{0}^{2}(\nu a)+ K_{0}^{2}(\nu R)\right) \\ & \hspace{2cm} + \left(\lambda \log \left(\frac{\nu }{\mu }\right)-2 \pi \right)^2 + 2 \lambda I_0(\nu R) K_0(\nu R) \left(\lambda \log \left(\frac{\nu }{\mu }\right)-2 \pi \right)\Bigg]^{1/2} \Bigg\} \;, \end{align} and \begin{align} \omega_{2}(\nu) & = \frac{1}{4 \pi \lambda } \Bigg\{ 2 \pi + \lambda \log \left(\frac{\nu }{\mu }\right)-\lambda I_0(\nu R) K_0(\nu R) + \Bigg[ \lambda^2 I_{0}^{2}(\nu R) \left(4 K_{0}^{2}(\nu a)+ K_{0}^{2}(\nu R)\right) \\ & \hspace{2cm} + \left(\lambda \log \left(\frac{\nu }{\mu }\right)-2 \pi \right)^2 + 2 \lambda I_0(\nu R) K_0(\nu R) \left(\lambda \log \left(\frac{\nu }{\mu }\right)-2 \pi \right)\Bigg]^{1/2} \Bigg\} \;. \end{align} Finding the zeroes of the determinant of the matrix $\Phi$ is equivalent to finding the zeroes of its eigenvalues. We will show that these are strictly increasing functions of $\nu$. Suppose for simplicity that the eigenvectors $A$ are normalized. Then we can determine how the eigenvalues change with respect to $\nu$ according to the Feynman-Hellmann theorem \cite{thirring2013quantum} \begin{equation} \label{derivativeofeigenvalue} \frac{\partial \omega}{\partial \nu} = A^{*T} \frac{\partial \Phi}{\partial \nu} A \;, \end{equation} where $*$ and $T$ denote complex conjugation and transposition, respectively. 
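Before turning to the monotonicity argument, we remark that the zeroes of $\omega_{1,2}(\nu)$, which are identified with the bound state energies below, are straightforward to locate numerically. A minimal sketch (Python with NumPy/SciPy assumed; the parameter values are those of Fig. \ref{fig:eigenvaluesptcircle}) builds $\Phi(-\nu^2)$ from (\ref{formallimitofPhicirclept}), diagonalizes it, and applies bracketing and bisection to each eigenvalue branch:

\begin{verbatim}
import numpy as np
from scipy.special import i0, k0
from scipy.optimize import brentq

lam2, mu, R, a = 10.0, 1.0, 1.0, 2.0      # parameter values of the figure

def principal_matrix(nu):
    off = -k0(nu*a)*i0(nu*R)/(2*np.pi)
    return np.array([[np.log(nu/mu)/(2*np.pi), off],
                     [off, 1/lam2 - k0(nu*R)*i0(nu*R)/(2*np.pi)]])

def omega(nu, branch):
    # branch 0 is the lower eigenvalue omega_1, branch 1 is omega_2
    return np.linalg.eigvalsh(principal_matrix(nu))[branch]

grid = np.linspace(1e-3, 20.0, 4000)
for branch in (0, 1):
    vals = [omega(nu, branch) for nu in grid]
    for x0, x1, f0, f1 in zip(grid, grid[1:], vals, vals[1:]):
        if f0 < 0 < f1:                   # eigenvalues increase with nu
            nu_star = brentq(omega, x0, x1, args=(branch,))
            print("branch", branch, " E =", -nu_star**2)
\end{verbatim}

The sign-change test relies on the monotonicity established next, which guarantees that each branch can cross zero at most once.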
Here, it is convenient to express the derivative of the principal matrix not in explicit form but in the following equivalent way, \begin{eqnarray} \frac{\partial \Phi_{11}}{\partial \nu} & = & \frac{1}{2\pi \nu} \;, \\ \frac{\partial \Phi_{12}}{\partial \nu} = \frac{\partial \Phi_{21}^{*}}{\partial \nu} & = & (2\nu) \int_{\mathbb{R}^2} \frac{e^{i \mathbf{p} \cdot \mathbf{a}}}{(p^2 + \nu^2)^2} J_{0}(p R) \frac{d^2 p}{(2\pi)^2} \;, \\ \frac{\partial \Phi_{22}}{\partial \nu} & = & (2\nu) \int_{\mathbb{R}^2} \frac{J_{0}^{2}(p R)}{(p^2 + \nu^2)^2} \frac{d^2 p}{(2\pi)^2} \;, \end{eqnarray} obtained by differentiating $\Phi$ under the integral sign, which is justified by the Lebesgue dominated convergence theorem. Then, one can show that \begin{eqnarray} \frac{\partial \omega}{\partial \nu} = (2\nu) \int_{\mathbb{R}^2} \bigg| A_1 e^{i \mathbf{p} \cdot \mathbf{a}} + A_2 J_0(p R) \bigg|^2 \; \frac{1}{(p^2 + \nu^2)^2} \; \frac{d^2 p}{(2\pi)^2} > 0 \;, \label{eigenvaluesincreasing} \end{eqnarray} for all $\nu>0$; that is, all the eigenvalues $\omega$ of the principal matrix $\Phi$ are strictly increasing functions of $\nu$. Fig. \ref{fig:eigenvaluesptcircle} below shows how the eigenvalues change with respect to $\nu$ for particular values of the parameters. \begin{figure}[h!] \centering \includegraphics{eigenvaluesflowcirclept.eps} \caption{Eigenvalues of the principal matrix $\Phi$ for $\lambda=10, \mu=1, R=1, a=2$ units.} \label{fig:eigenvaluesptcircle} \end{figure} The solutions of (\ref{boundstatecondition}) are precisely the zeroes of the eigenvalues of the principal matrix $\Phi$, so all the bound state energies can be found from the zeroes of the eigenvalues, say $\nu_*$, for which \begin{eqnarray} E=-\nu_{*}^{2} \;. \end{eqnarray} The positivity condition (\ref{eigenvaluesincreasing}) implies that there are at most two bound state energies, since each eigenvalue can cross the $\nu$ axis only once. In Fig. \ref{fig:eigenvaluesptcircle}, the zero of the eigenvalue $\omega_1$ corresponds to the ground state energy. This bound state exists for all values of the parameters, since $\lim_{\nu \rightarrow 0^+} \omega_1=-\infty$, $\omega_1$ is an increasing function of $\nu$, and it is positive for sufficiently large values of $\nu$. However, the second eigenvalue $\omega_2$ may not have any zeroes if it is not negative near $\nu=0$. One can also numerically calculate the bound state energies and plot them as functions of $a$ and $R$ for fixed values of the remaining parameters, as shown in Fig. \ref{fig:ptcircleenergyvsa} and Fig. \ref{fig:ptcircleenergyvsR}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptcircleenergyvsa1} \caption{Ground state energy versus $a$} \label{fig:energyvsa1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptcircleenergyvsa2} \caption{Excited state energy versus $a$} \label{fig:energyvsa2} \end{subfigure} \caption{Bound state energies versus $a$ for $\lambda=10, R=1, \mu=1$ units.} \label{fig:ptcircleenergyvsa} \end{figure} \begin{figure}[h!] 
\centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptcircleenergyvsR1} \caption{Ground state energy versus $R$.} \label{energyvsa1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptcircleenergyvsR2} \caption{Excited state energy versus $R$.} \label{energyvsa2} \end{subfigure} \caption{Bound state energies $E_B$ versus $R$ for $\lambda=10, a=5.1, \mu=1$ units.} \label{fig:ptcircleenergyvsR} \end{figure} It follows from Weyl's theorem \cite{reedsimonv4} that the essential spectra of $H$ and $H_0$ coincide, that is, $\sigma_{ess}(H)=\sigma_{ess}(H_0)=[0,\infty)$, provided we show that $R(E)-R_0(E)$ is compact for some $E \in \rho(H) \cap \rho(H_0)$. The difference $R(E)-R_0(E)$ is given by the explicit formula (\ref{resolventcirclepoint}); the matrix $\Phi$ is invertible for a sufficiently negative $E_*$ on the real axis, where all of its eigenvalues become positive. Therefore \begin{equation} R(E_*)-R_0(E_*)= R_0(E_*) \sum_{i,j=1}^{2} |f_i\rangle \Phi^{-1}_{ij} \langle f_j| R_0(E_*) \;, \end{equation} is indeed a finite rank operator. To see this, note that the principal matrix has the spectral decomposition $\Phi^{-1}(E_*)=\sum_k \omega_k^{-1}(E_*) A^{(k)}(E_*)A^{(k)*}(E_*)$, with $A^{(k)}$ the $k$th eigenvector of $\Phi(E_*)$ and $\omega_k$ the corresponding eigenvalue. We therefore only need to observe that the vectors \begin{eqnarray} \sum_{i=1}^{2} \omega_k^{-1/2}(E_*) A_i^{(k)}R_0(E_*)|f_i\rangle \;, \end{eqnarray} for $k=1,2$ have finite norm, which can be seen as follows: \begin{eqnarray} || \sum_{i=1}^{2} \omega_k^{-1/2}(E_*) A^{(k)}_iR_0(E_*)|f_i\rangle||\leq \sum_{i=1}^{2} |\omega_k^{-1/2}(E_*) A_i^{(k)}| \, ||R_0(E_*)|f_i\rangle|| \;. \end{eqnarray} Hence, we have shown that $R(E)-R_0(E)$ is a finite rank, and in particular trace class, operator, which is sufficient for compactness. To summarize the above results, we have \begin{mytheo*} Let $\mathbf{a} \in \mathbb{R}^2$ and let $\Gamma$ be the circle centered at the origin with radius $R<a$. Then, the essential spectrum of the operator $H$ associated with the point delta potential and the delta potential supported by $\Gamma$ coincides with the essential spectrum of the free Hamiltonian, i.e., $\sigma_{ess}(H)=\sigma_{ess}(H_0)=[0, \infty)$. Furthermore, the point spectrum $\sigma_p(H)$ of $H$ lies on the negative real axis, and $H$ has at most two negative eigenvalues (counting multiplicity) and always at least one. Let $\Real(k)=0$ and $\Imaginary(k)>0$; then $k^2 \in \sigma_p(H)$ if and only if $\det \Phi(k^2)=0$, and the multiplicity (degeneracy) of the eigenvalue $k^2$ is the same as the multiplicity of the zero eigenvalue of the matrix $\Phi(k^2)$. Moreover, let $E=-\nu_{*}^2<0$ be an eigenvalue of $H$; then the eigenfunctions $|\psi_{ev}\rangle$ associated with this eigenvalue are given by \begin{eqnarray*} \psi_{ev}(\mathbf{r})= \sum_{i=1}^{2} \langle \mathbf{r}| R_0(-\nu_{*}^2)|f_i \rangle A_i \;, \end{eqnarray*} where $(A_1, A_2)$ is an eigenvector of $\Phi(-\nu_{*}^2)$ with eigenvalue zero. \end{mytheo*} \subsection{Stationary Scattering Problem} \label{Stationary Scattering Problem for Circle and Point} The stationary scattering problem for such singular potentials is well-defined; that is, the wave operators $\Omega_{\pm}$ exist and are complete thanks to the Birman-Kuroda theorem \cite{blank2008hilbert}. 
This theorem states that if the difference between the resolvent $(H-E)^{-1}$ of the full Hamiltonian and the resolvent $(H_0-E)^{-1}$ of the free Hamiltonian, defined on their common resolvent set, is trace class, then the wave operators exist and are complete. We have already shown above that $R(E)-R_0(E)$ is trace class; therefore, the wave operators defining the scattering phenomena exist. Once we have well-defined wave operators, we can study the directly measurable quantity of the scattering experiment, namely the cross section, by finding the scattering amplitudes. For this, we first need to determine the boundary values of the $T(E)$ operator as $E$ approaches the positive real axis from above. This is read off from the explicit formula of the resolvent on the complex plane. For convenience, let $E=E_k+ i \epsilon$ where $E_k=k^2$ with $k>0$. The relation between the resolvent and the $T$ operator (or matrix) is given by \cite{Taylor} \begin{eqnarray} R(E)= R_0(E)- R_0(E) T(E) R_0(E) \;. \label{resolventToperator} \end{eqnarray} Since we have the explicit expression (\ref{resolventcirclepoint}) for the resolvent extended to the complex plane, we can read off the boundary values of the $T(E)$ operator on the positive real axis: \begin{eqnarray} T(E_k+i0)= - \sum_{i,j=1}^{2} |f_i \rangle \left(\Phi^{-1}(E_k+i0)\right)_{ij} \langle f_j| \;, \label{Toperator} \end{eqnarray} where \begin{eqnarray} \Phi(E_k+i0)=\left( \begin{array}{cccc} \frac{1}{2\pi} \left( - \frac{i \pi}{2} + \log \left(\frac{k}{\mu}\right) \right) & & -\frac{1}{4} H_{0}^{(1)}\left(k a\right) J_0\left(k R\right) \\ \\ -\frac{1}{4} H_{0}^{(1)}\left(k a\right) J_0\left(k R\right) & & \frac1{\lambda_2}- \frac{1}{4} H_{0}^{(1)}\left(k R\right) J_0\left(k R\right) \end{array} \right) \;. \label{Phimatrixonboundary} \end{eqnarray} Here we have used $K_0(z)= \frac{i \pi}{2} H_{0}^{(1)}(e^{i \pi/2} z)$ and $I_0(z)=e^{-i \pi/2} J_0(e^{i \pi/2}z)$ for $-\pi < \arg(z) < \pi/2$ \cite{Lebedev1965special}. The scattering amplitude $f$ and the boundary values of the $T$ operator in two dimensions are related by \begin{eqnarray} f(\mathbf{k} \rightarrow \mathbf{k}')= -\frac{e^{i\pi/4}}{4} \sqrt{\frac{2}{\pi k}} \langle \mathbf{k}' | T(E_k+i0)| \mathbf{k} \rangle \;, \label{scatteringamplitudeTmatrixin2D} \end{eqnarray} where $|\mathbf{k} \rangle$ is the generalized Dirac ket vector and $|\mathbf{k}'|=|\mathbf{k}|$. Indeed, there is another choice for the scattering amplitude, obtained by ignoring the factor $\sqrt{i/k}$, which has some desirable properties \cite{adhikari1986quantum}. 
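Combining (\ref{scatteringamplitudeTmatrixin2D}) and (\ref{Phimatrixonboundary}) with the matrix element of $T(E_k+i0)$ worked out just below, the differential cross section can be evaluated numerically. A minimal sketch (an illustration only; Python with NumPy/SciPy is assumed, the point defect is placed on the positive $x$ axis, and the parameter values are those of Fig. \ref{fig:scatteringcirclept}):

\begin{verbatim}
import numpy as np
from scipy.special import hankel1, j0

lam2, mu, R, a_len, k = 20.0, 1.0, 1.0, 5.0, 2.0   # parameter values of the figure
a_vec = np.array([a_len, 0.0])                     # point defect on the x axis

def dsigma_dtheta(theta):
    kin  = k*np.array([1.0, 0.0])                  # incident direction: +x axis
    kout = k*np.array([np.cos(theta), np.sin(theta)])
    Phi = np.array([[(np.log(k/mu) - 1j*np.pi/2)/(2*np.pi),
                     -hankel1(0, k*a_len)*j0(k*R)/4],
                    [-hankel1(0, k*a_len)*j0(k*R)/4,
                     1/lam2 - hankel1(0, k*R)*j0(k*R)/4]])
    P = np.linalg.inv(Phi)
    T = (np.exp(1j*np.dot(kin - kout, a_vec))*P[0, 0]
         + j0(k*R)*(np.exp(-1j*np.dot(kout, a_vec))
                    + np.exp(1j*np.dot(kin, a_vec)))*P[0, 1]
         + j0(k*R)**2*P[1, 1])
    return abs(T)**2/(8*np.pi*k)          # |f|^2 = |T|^2/(8 pi k)

for th in np.linspace(0.0, np.pi, 7):
    print(th, dsigma_dtheta(th))
\end{verbatim}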
Substituting the result (\ref{Toperator}) into \begin{eqnarray} \langle \mathbf{k}' | T(E_k+i0)| \mathbf{k} \rangle = \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} e^{i \mathbf{k} \cdot \mathbf{x}-i \mathbf{k}' \cdot \mathbf{x}'} \langle \mathbf{x}'|T(E_k+i0)| \mathbf{x} \rangle \; d^2 x \, d^2 x' \end{eqnarray} and using the integral representation (\ref{intrepofbessel1stkind}) of the Bessel function $J_0(x)$, we find \begin{eqnarray} & \langle \mathbf{k}' | T(E_k+i0)| \mathbf{k} \rangle = - \Bigg[ e^{i (\mathbf{k}-\mathbf{k}')\cdot \mathbf{a}} \left(\Phi^{-1}(E_k+i0)\right)_{11} + J_0(kR) \left(e^{-i \mathbf{k}' \cdot \mathbf{a}} +e^{i \mathbf{k} \cdot \mathbf{a}} \right) \left(\Phi^{-1}(E_k+i0)\right)_{12} \nonumber \\ & + \, J_{0}^{2}(kR) \left(\Phi^{-1}(E_k+i0)\right)_{22} \Bigg] \;, \end{eqnarray} where $\left(\Phi^{-1}(E_k+i0)\right)_{ij}$ is the $ij$th element of the inverse of the matrix $\Phi(E_k+i0)$ given in equation (\ref{Phimatrixonboundary}). \begin{mytheo*} The differential cross section for the delta potential supported by a circle of radius $R$ centered at the origin and by the point at $\mathbf{a}$ outside of the circle is given by \begin{eqnarray} & & \frac{d \sigma}{d \theta}=|f(\mathbf{k} \rightarrow \mathbf{k}')|^2 = \frac{1}{8 \pi k}\Bigg| e^{i (\mathbf{k}-\mathbf{k}')\cdot \mathbf{a}} \left(\Phi^{-1}(E_k+i0)\right)_{11} + J_0(kR) \left(e^{-i \mathbf{k}' \cdot \mathbf{a}} +e^{i \mathbf{k} \cdot \mathbf{a}} \right) \left(\Phi^{-1}(E_k+i0)\right)_{12} \nonumber \\ & & \hspace{5cm} + \, J_{0}^{2}(kR) \left(\Phi^{-1}(E_k+i0)\right)_{22} \Bigg|^2 \;. \end{eqnarray} \end{mytheo*} The differential cross section is plotted as a function of $\theta$ in Fig. \ref{fig:scatteringcirclept}. Here we assume, without loss of generality, that the point defect is located at $x=a$ on the positive $x$ axis, and $\theta$ is the angle between $\mathbf{k}'$ and $\mathbf{k}$, the latter chosen along the positive $x$ axis. \begin{figure}[h!] \centering \includegraphics{scatteringcirclept.eps} \caption{Differential Cross Section versus $\theta$ for $k=2$, $\lambda_2=20$, $a=5$, $R=1$, $\mu=1$ units.} \label{fig:scatteringcirclept} \end{figure} One can also plot the differential cross section as a function of $k$ for different choices of the parameters, as shown in Fig. \ref{diffcrosssectionptcircle1} and Fig. \ref{diffcrosssectionptcircle2}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{diffcrosssectionptcircle1.eps} \caption{Differential Cross Section versus $k$ for $\theta=0$, $\lambda_2=20$, $a=2$, $R=1$, $\mu=10$ units.} \label{diffcrosssectionptcircle1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{diffcrosssectionptcircle2.eps} \caption{Differential Cross Section versus $k$ for $\theta=0$, $\lambda_2=20$, $a=20$, $R=1$, $\mu=10$ units.} \label{diffcrosssectionptcircle2} \end{subfigure} \caption{Differential Cross Section versus $k$} \label{fig:diffcrosssectionptcircle} \end{figure} \section{Delta Potential Supported by a Sphere and a Point} \label{Delta Potential Supported by a Sphere and a Point} In this section, we consider the spherical shell delta potential perturbed by a point-like delta potential in three dimensions. Since all the techniques and results are similar to those of the previous section, we summarize the results without giving detailed technical proofs. 
The regularized Hamiltonian for this model is given by \begin{eqnarray} \label{regularizedH2} H_{\epsilon}= H_0 - \lambda_1(\epsilon) |\mathbf{a}^{\epsilon}\rangle \langle \mathbf{a}^{\epsilon}| - \lambda_2 |\Sigma^{\epsilon} \rangle \langle \Sigma^{\epsilon} | \;, \end{eqnarray} where $\Sigma$ is the sphere centered at the origin with radius $R$ and \begin{eqnarray} \langle \mathbf{a}^{\epsilon}|\psi \rangle & = & \int_{\mathbb{R}^3} K_{\epsilon/2}(\mathbf{r}, \mathbf{a}) \psi(\mathbf{r}) \; d^3 r \;, \\ \langle \Sigma^{\epsilon} |\psi \rangle & = & \frac{1}{A(S^2)} \int_{S^2} \left(\int_{\mathbb{R}^3} K_{\epsilon/2}(\mathbf{r}, \boldsymbol{\sigma}(\theta, \phi)) \psi(\mathbf{r}) \; d^3 r \right) d A \;. \end{eqnarray} Here $\boldsymbol{\sigma}:(0, \pi) \times (0,2\pi) \rightarrow S^2$ is the local parametrization given by \begin{eqnarray} \boldsymbol{\sigma}(\theta, \phi):=(R \sin \theta \cos \phi, R \sin \theta \sin \phi, R \cos \theta) \;. \label{localchartsphere} \end{eqnarray} Proceeding analogously to the previous construction of the resolvent for the problem of the circular defect perturbed by a point defect, we obtain essentially the same form of the resolvent (\ref{resolventcirclepoint}). In this case, the first diagonal element of the matrix $\Phi$ for $E=-\nu^2$ can be similarly calculated: \begin{eqnarray} \Phi_{11}(-\nu^2) = \lim_{\epsilon \rightarrow 0^+} \int_{0}^{\infty} K_{t+\epsilon} (\mathbf{a}, \mathbf{a}) \left(e^{-t \mu^2}-e^{-t \nu^2}\right) d t = \frac{(\nu -\mu)}{4 \pi} \;, \end{eqnarray} by choosing the bare coupling constant $\lambda_1(\epsilon)$ of the point interaction to be of the same type as (\ref{barecouplingconstant}), except that the heat kernel here is the three-dimensional one. Choosing the support of the point defect along the $z$ axis, we find the off-diagonal matrix elements of $\Phi$ by going to spherical coordinates and evaluating the radial part of the integral by the residue theorem: \begin{eqnarray} \Phi_{12}(-\nu^2)=\Phi_{21}(-\nu^2) & = & -\langle \mathbf{a}|R_0(-\nu^2)|\Sigma \rangle = - \int_{\mathbb{R}^3} \frac{e^{i \mathbf{p} \cdot \mathbf{a}}}{(p^2 + \nu^2)}\;\frac{\sin (p R)}{p R} \frac{d^3 p}{(2\pi)^3} \nonumber \\ & = & - \frac{1}{4\pi \nu a R} \; e^{-\nu a} \sinh (\nu R) \label{Phioffdiagonal1spherept}\;, \end{eqnarray} where we have used \begin{eqnarray} \langle \mathbf{p} |\Sigma \rangle =\frac{\sin (p R)}{p R} \;. \end{eqnarray} Similarly, \begin{eqnarray} \Phi_{22}(-\nu^2) &=& \frac{1}{\lambda_2} - \langle \Sigma | R_0(-\nu^2)| \Sigma \rangle = \frac{1}{\lambda_2}- \int_{\mathbb{R}^3} \frac{1}{p^2+\nu^2} \frac{\sin^2(p R)}{(p R)^2} \frac{d^3 p}{(2\pi)^3}\nonumber \;, \\ & = & \frac{1}{\lambda_2} - \frac{1}{4 \pi \nu R^2} e^{-\nu R} \sinh(\nu R) \;. \label{Philastdiagonalsphere1} \end{eqnarray} The resolvent of the model is formally given by the same equation (\ref{resolventcirclepoint}), where $|f_2 \rangle =|\Sigma\rangle$ and the matrix $\Phi$ can be defined on the complex plane by an analytic continuation of the above expressions. 
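Since all the matrix elements above are elementary, both the consistency of (\ref{Phioffdiagonal1spherept}) and the bound state energies (obtained, as in the circular case, from the zeroes of the eigenvalues of $\Phi(-\nu^2)$; see the next subsection) can be checked numerically. A minimal Python sketch, assuming SciPy (the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def Phi_sphere_point(nu, lam2, a, R, mu):
    # Phi(-nu^2) from the closed forms above; point defect at distance
    # a > R from the center of the sphere of radius R
    off = -np.exp(-nu*a)*np.sinh(nu*R)/(4.0*np.pi*nu*a*R)
    d22 = 1.0/lam2 - np.exp(-nu*R)*np.sinh(nu*R)/(4.0*np.pi*nu*R**2)
    return np.array([[(nu - mu)/(4.0*np.pi), off], [off, d22]])

def phi12_numerical(nu, a, R):
    # after the angular integrations, Phi_12 reduces to
    # -(1/(2 pi^2 a R)) int_0^inf sin(pa) sin(pR)/(p^2+nu^2) dp; here
    # sin(pa)sin(pR) = (cos(p(a-R)) - cos(p(a+R)))/2, and quad's
    # Fourier weight handles the oscillatory integrals
    f = lambda p: 1.0/(p**2 + nu**2)
    i1, _ = quad(f, 0.0, np.inf, weight='cos', wvar=a - R)
    i2, _ = quad(f, 0.0, np.inf, weight='cos', wvar=a + R)
    return -(i1 - i2)/(4.0*np.pi**2*a*R)   # should match 'off' above

def bound_state_energies(lam2, a, R, mu, nu_max=20.0, ngrid=4000):
    # E = -nu_*^2 at the zeroes nu_* of the eigenvalues of Phi(-nu^2)
    energies = []
    for which in (0, 1):
        w = lambda nu: np.linalg.eigvalsh(
                Phi_sphere_point(nu, lam2, a, R, mu))[which]
        grid = np.linspace(1e-4, nu_max, ngrid)
        vals = [w(x) for x in grid]
        for x0, x1, v0, v1 in zip(grid, grid[1:], vals, vals[1:]):
            if v0*v1 < 0.0:
                energies.append(-brentq(w, x0, x1)**2)
    return sorted(set(round(e, 10) for e in energies))
\end{verbatim}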
It is easy to see that the form of the above matrix looks similar to our two-dimensional version if we express its entries in terms of the Bessel functions using $I_{1/2}(z)=\sqrt{\frac{2}{\pi z}}\sinh z$ and $K_{1/2}(z)=\sqrt{\frac{\pi}{2z}}e^{-z}$: \begin{eqnarray} \Phi(-\nu^2) =\left( \begin{array}{cccc} \frac{1}{4\pi}(\nu-\mu) & & - \frac{1}{4\pi \sqrt{a R}} \; K_{1/2}(\nu a) \; I_{1/2}(\nu R) \\ \\ - \frac{1}{4\pi \sqrt{a R}} \; K_{1/2}(\nu a) \; I_{1/2}(\nu R) & & \frac{1}{\lambda_2} - \frac{1}{4\pi R} K_{1/2}(\nu R) \; I_{1/2}(\nu R) \end{array} \right) \;. \label{Phispherepointnew} \end{eqnarray} \subsection{Bound State Problem} \label{Bound State Problem for Sphere and Point} The bound state analysis of this problem is formulated exactly in the same manner as in the case of the delta potential supported by a circle and a point. For this reason, we are not going to derive the analogous expressions for the flow of the eigenvalues with respect to $\nu$. Positivity of the flow still holds in this case, so we conclude that there are at most two bound states (and at least one). One can plot the eigenvalues as a function of $\nu$ for particular values of the parameters. As shown in the previous section, zeroes $\nu_*$ of the eigenvalues correspond to the bound state energies $E=-\nu_{*}^2$. It is interesting to notice that there is only one bound state if we choose the same parameter values as in the problem of the circular defect perturbed by a point defect, as shown in Fig. \ref{fig:eigenvaluesspherept1}. The reason for this may be that the particle has more freedom to escape from the spherical defect than from the circular one. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{eigenvaluesspherept1.eps} \caption{Eigenvalues of the principal matrix $\Phi$ versus $\nu$ for $\lambda_2=10$, $a=2$, $R=1$, and $\mu=1$ units.} \label{fig:eigenvaluesspherept1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{eigenvaluesspherept2.eps} \caption{Eigenvalues of the principal matrix $\Phi$ versus $\nu$ for $\lambda_2=20$, $a=2$, $R=1$, and $\mu=1$ units.} \label{fig:eigenvaluesspherept2} \end{subfigure} \caption{Eigenvalues of $\Phi$ versus $\nu$} \label{fig:eigenvaluessphere} \end{figure} If we increase the strength of the spherical defect potential, a second bound state appears, as shown in Fig. \ref{fig:eigenvaluesspherept2}. One can find how the bound state energies change with respect to the parameters $R$ and $a$ by numerically solving for the zeroes of the eigenvalues $\omega_1$ and $\omega_2$. They are plotted in Figs. \ref{fig:ptsphereenergyvsa} and \ref{fig:ptsphereenergyvsR}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptsphereenergyvsa1} \caption{Ground state energy versus $a$} \label{fig:sphereenergyvsa1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptsphereenergyvsa2} \caption{Excited state energy versus $a$} \label{fig:sphereenergyvsa2} \end{subfigure} \caption{Bound state energies $E_B$ versus $a$ for $\lambda_2=20, R=1, \mu=1$ units.} \label{fig:ptsphereenergyvsa} \end{figure} \begin{figure}[h!] 
\centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptsphereenergyvsR1} \caption{Ground state energy versus $R$} \label{fig:sphereenergyvsR1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ptsphereenergyvsR2} \caption{Excited state energy versus $R$} \label{fig:sphereenergyvsR2} \end{subfigure} \caption{Bound state energies versus $R$ for $\lambda_2=150, a=10.1, \mu=1$ units.} \label{fig:ptsphereenergyvsR} \end{figure} By following the same line of arguments, one can show \begin{mytheo*} Let $\mathbf{a} \in \mathbb{R}^3$ and $\Sigma$ be the sphere centered at the origin with radius $R<a$. Then, the essential spectrum of $H$ associated with the point delta potential and the delta potential supported by $\Sigma$ coincides with the essential spectrum of the free Hamiltonian, i.e., $\sigma_{ess}(H)=\sigma_{ess}(H_0)=[0, \infty)$. Furthermore, the point spectrum $\sigma_p(H)$ of $H$ lies in the negative real axis and $H$ has at most two negative eigenvalues (counting multiplicity) and always has at least one. Let $\Real(k)=0$ and $\Imaginary(k)>0$; then $k^2 \in \sigma_p(H)$ if and only if $\det \Phi(k^2)=0$, and the multiplicity (degeneracy) of the eigenvalue $k^2$ is the same as the multiplicity of the zero eigenvalue of the matrix $\Phi(k^2)$. Moreover, let $E=-\nu_{*}^2<0$ be an eigenvalue of $H$; then the eigenfunctions $|\psi_{ev}\rangle$ associated with this eigenvalue are given by \begin{eqnarray*} \psi_{ev}(\mathbf{r})= \sum_{i=1}^{2} \langle \mathbf{r}| R_0(-\nu_{*}^2)|f_i \rangle A_i \;, \end{eqnarray*} where $(A_1, A_2)$ are eigenvectors with zero eigenvalue of $\Phi(-\nu_{*}^2)$ and $|f_1 \rangle =|\mathbf{a} \rangle$, $|f_2 \rangle = |\Sigma \rangle$. \end{mytheo*} \subsection{Stationary Scattering Problem} \label{Stationary Scattering Problem for Sphere and Point} For the scattering problem, we similarly find the boundary values of the principal operator by analytic continuation: \begin{eqnarray} \Phi(E_k+i0)=\left( \begin{array}{cccc} \frac{1}{4\pi} \left( -i k - \mu \right) & & -\frac{1}{4 \pi a R k} e^{i k a} \; \sin (k R) \\ \\ -\frac{1}{4 \pi a R k} e^{i k a} \; \sin (k R) & & \frac1{\lambda_2}- \frac{e^{i k R}}{4 \pi R^2 k} \sin (k R) \\ \end{array} \right) \;. \label{Phimatrixonboundaryspherept} \end{eqnarray} Using \begin{eqnarray} & & \hskip-1cm \langle \mathbf{k}' | T(E_k+i0)| \mathbf{k} \rangle = - \sum_{i,j=1}^{2} \langle \mathbf{k}'|f_i \rangle \left(\Phi^{-1}(E_k+i0)\right)_{ij} \langle f_j| \mathbf{k} \rangle \nonumber \\ & & = - \bigg(e^{i (\mathbf{k}-\mathbf{k}')\cdot \mathbf{a}} \left(\Phi^{-1}(E_k+i0)\right)_{11} + \frac{\left(e^{-i \mathbf{k}' \cdot \mathbf{a}} + e^{i \mathbf{k} \cdot \mathbf{a}}\right) \sin (k R)}{k R} \; \left(\Phi^{-1}(E_k+i0)\right)_{12} \nonumber \\ & & \hspace{2cm} + \, \frac{\sin^2 (k R)}{k^2 R^2} \; \left(\Phi^{-1}(E_k+i0)\right)_{22} \bigg) \;, \end{eqnarray} we find the scattering amplitude $f(\mathbf{k} \rightarrow \mathbf{k}')=-\frac{1}{4 \pi}\langle \mathbf{k}' | T(E_k+i0)| \mathbf{k} \rangle $, and the graph of the differential cross section $\frac{d \sigma}{d \Omega}=|f(\mathbf{k} \rightarrow \mathbf{k}')|^2 $ as a function of $\theta$ is given in Fig. \ref{fig:diffcrosssectionspherept}. \begin{figure}[h!] 
\centering \includegraphics{diffcrosssectionspherept.eps} \caption{Differential cross section versus $\theta$ for $k=2$, $\lambda_2=10$, $a=5$, $R=1$, $\mu=1$ units.} \label{fig:diffcrosssectionspherept} \end{figure} Let us summarize the result: \begin{mytheo*} The differential cross section for the delta potential supported by a sphere of radius $R$ centered at the origin and by the point at $\mathbf{a}$ outside of the sphere is given by \begin{eqnarray} & & \frac{d \sigma}{d \Omega}=|f(\mathbf{k} \rightarrow \mathbf{k}')|^2 = \frac{1}{16 \pi^2} \bigg|e^{i (\mathbf{k}-\mathbf{k}')\cdot \mathbf{a}} \left(\Phi^{-1}(E_k+i0)\right)_{11} + \frac{\left(e^{-i \mathbf{k}' \cdot \mathbf{a}} + e^{i \mathbf{k} \cdot \mathbf{a}}\right) \sin (k R)}{k R} \; \left(\Phi^{-1}(E_k+i0)\right)_{12} \nonumber \\ & & \hspace{4cm} + \, \frac{\sin^2 (k R)}{k^2 R^2} \; \left(\Phi^{-1}(E_k+i0)\right)_{22} \bigg|^2 \;. \end{eqnarray} \end{mytheo*} For forward scattering, the differential cross section as a function of $k$ is shown, for the indicated values of the parameters, in Figs. \ref{fig:diffcrosssectionsphereptvsk1} and \ref{fig:diffcrosssectionsphereptvsk2}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{diffcrosssectionsphereptvsk1.eps} \caption{Differential cross section versus $k$ for $\theta=0$, $\lambda_2=5$, $a=2$, $R=1$, $\mu=1$ units.} \label{fig:diffcrosssectionsphereptvsk1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{diffcrosssectionsphereptvsk2.eps} \caption{Differential cross section versus $k$ for $\theta=0$, $\lambda_2=20$, $a=2$, $R=1$, $\mu=1$ units.} \label{fig:diffcrosssectionsphereptvsk2} \end{subfigure} \caption{Differential cross section versus $k$.} \label{fig:diffcrosssectionsphereptvsk} \end{figure} \section{Small Deformations of a Circle} \label{Small Deformations of a Circle} It would be interesting to ask how the bound state spectrum and the scattering cross section for the above type of potentials change under small deformations of the support of the potentials. Let us first briefly define the normal deformations of a general curve in two dimensions. We consider a planar regular curve $\Gamma$ of finite length, parametrized by its arc length $s$. The Serret-Frenet equations for this curve are given by $\mathbf{t}=\frac{d \boldsymbol{\gamma}}{ds}$, $\frac{d \mathbf{t}}{d s}=\kappa \mathbf{n}$, and $\frac{d \mathbf{n}}{d s}= - \kappa \mathbf{t}$, where $\mathbf{t}$, $\mathbf{n}$ are the tangent and normal vectors and $\kappa$ is the curvature of the curve $\Gamma$ \cite{do2016differential}. The small deformation of the curve $\Gamma$ along its normal direction is defined by \begin{equation} \tilde{\boldsymbol{\gamma}}(s)= \boldsymbol{\gamma}(s)+ \epsilon \psi(s) \mathbf{n} (s) \;, \label{deformationofcurve} \end{equation} where $\psi$ is assumed to be a smooth function of $s$. It is worth pointing out that $\epsilon$ here is a small deformation parameter, \textit{not the same parameter used for regularization}. 
The length of the deformed curve $\tilde{\Gamma}$ is given by \begin{eqnarray} L(\tilde{\Gamma}) & = & \int_{0}^{L} \frac{d \tilde{s}}{ds} \; d s = \int_{0}^{L} \left( \frac{d \boldsymbol{\tilde{\gamma}}}{ds} \cdot \frac{d \boldsymbol{\tilde{\gamma}}}{ds} \right)^{1/2}d s = \int_{0}^{L} \left(\left(1-\epsilon \kappa(s)\psi(s)\right)^2 + \epsilon^2 \left(\frac{d \psi(s)}{ds}\right)^2\right)^{1/2} \; d s \nonumber \\ & = & \int_{0}^{L} \left( 1- \epsilon \kappa(s) \psi(s)+ O(\epsilon^2) \right) \; d s = L(\Gamma)- \epsilon \int_{0}^{L} \kappa(s)\psi(s) ds +O(\epsilon^2) \;. \end{eqnarray} If $\Gamma$ is a circle of radius $R$, $\kappa=1/R$ so that \begin{eqnarray} L(\tilde{\Gamma})=2\pi R - \frac{\epsilon}{R} \int_{0}^{L} \psi(s) ds +O(\epsilon^2) \;. \label{lengthofdeformedcircle} \end{eqnarray} \subsection{First Order Calculation of the Bound State Energy} \label{First Order Calculation of the Bound State Energy for Deformed Circle} Since the support of the defect has codimension one, renormalization is not required for this model, and the resolvent of the Hamiltonian $H$ associated with the deformed circular defect potential can be found by following the same line of arguments summarized previously: \begin{eqnarray} R(E)= R_0(E)+R_0(E) | \tilde{\Gamma} \rangle \frac{1}{\tilde{\Phi}(E)} \langle \tilde{\Gamma}| R_0(E) \;, \label{resolventofdeformedcircle} \end{eqnarray} where we denote the deformation of the circle $\tilde{S^{1}}$ by $\tilde{\Gamma}$ for notational simplicity. For the bound state problem, we calculate \begin{eqnarray} \tilde{\Phi}(E=-\nu^2)= \frac{1}{\lambda} - \langle \tilde{\Gamma} | R_0(-\nu^2) | \tilde{\Gamma} \rangle = \frac{1}{\lambda} - \int_{\mathbb{R}^2} \frac{|\langle \tilde{\Gamma}|\mathbf{p} \rangle|^2}{p^2+\nu^2}\; \frac{d^2 p}{(2\pi)^2} \;. \label{Phideformedcircle1} \end{eqnarray} Using \begin{eqnarray} \langle \tilde{\Gamma}|\mathbf{p} \rangle = \frac{1}{L(\tilde{\Gamma})} \int_{0}^{L} e^{i \mathbf{p}\cdot \boldsymbol{\tilde{\gamma}}(s)} \; |\boldsymbol{\tilde{\gamma}'}(s)| d s \end{eqnarray} and expanding the exponential $e^{i \epsilon \psi(\theta) \mathbf{p} \cdot \mathbf{n}(\theta)}$ in $\epsilon$, together with $|\boldsymbol{\tilde{\gamma}'}(s)|= 1-\frac{\epsilon}{R}\psi(s) + O(\epsilon^2)$, it is easy to show that \begin{eqnarray} & & \hskip-1cm \Tilde{\Phi}(-\nu^2) = \frac{1}{\lambda} - \frac{1}{(2\pi)^2} \left( 1 + \frac{\epsilon}{\pi R} \int_{0}^{2\pi} \psi(\theta) d \theta \right) \Bigg[ \int_{\mathbb{R}^2} \bigg( \int_{0}^{2\pi} \int_{0}^{2\pi} \frac{e^{i \mathbf{p} \cdot (\boldsymbol{\gamma}(\theta_1)-\boldsymbol{\gamma}(\theta_2)) }}{p^2 + \nu^2} \; \nonumber \\ & & \times \bigg(1- \frac{\epsilon}{R} (\psi(\theta_1)+\psi(\theta_2)) + i \epsilon ((\mathbf{p} \cdot \mathbf{n}(\theta_1))\psi(\theta_1) - (\mathbf{p} \cdot \mathbf{n}(\theta_2))\psi(\theta_2) ) \bigg) d \theta_1 d \theta_2 \bigg) \Bigg] \frac{d^2 p}{(2\pi)^2} \nonumber \\ & & \hspace{5cm} + \, O(\epsilon^2) \;. \label{defomedcirclephiexpanded} \end{eqnarray} Let us consider the first integral in the square bracket above: \begin{eqnarray} \int_{\mathbb{R}^2} \left( \int_{0}^{2\pi} \int_{0}^{2\pi} \frac{e^{i \mathbf{p} \cdot (\boldsymbol{\gamma}(\theta_1)-\boldsymbol{\gamma}(\theta_2)) }}{p^2 + \nu^2} d \theta_1 d \theta_2 \right) \frac{d^2 p}{(2\pi)^2} \;. 
\end{eqnarray} The uniformly convergent plane wave expansion in two dimensions \begin{eqnarray} e^{i \mathbf{p} \cdot \mathbf{r}} = \sum_{m=0}^{\infty} \varepsilon_m i^m J_m(p r) \cos(m \theta) \;, \label{planewaveexpansion2d} \end{eqnarray} with $\varepsilon_0=1$, $\varepsilon_m=2$ if $m>0$, and $\theta$ being the angle between $\mathbf{p}$ and $\mathbf{r}$, allows us to carry out the angular integrations easily, leaving only the integration over the variable $p$: \begin{eqnarray} (2\pi) \int_{0}^{\infty} \frac{J_{0}^{2}(p R)}{p^2 + \nu^2} \; p \, d p \;, \end{eqnarray} where we have used $\int_{0}^{2\pi} \cos(m (\theta-\theta_k)) d \theta=2\pi \delta_{m0}$. Thanks to the integral representation \cite{gradshteyn2014table} \begin{eqnarray} \int_{0}^{\infty} \frac{x}{x^2+a^2} J_{0}^{2}(x) d x = I_0(a)K_0(a) \;, \label{I0K0integral} \end{eqnarray} we find \begin{eqnarray} \int_{\mathbb{R}^2} \left( \int_{0}^{2\pi} \int_{0}^{2\pi} \frac{e^{i \mathbf{p} \cdot (\boldsymbol{\gamma}(\theta_1)-\boldsymbol{\gamma}(\theta_2)) }}{p^2 + \nu^2} d \theta_1 d \theta_2 \right) \frac{d^2 p}{(2\pi)^2} = (2 \pi) I_{0}(\nu R) K_{0}(\nu R) \;. \label{1stintegralPhideformedcircle} \end{eqnarray} For the second integral in equation (\ref{defomedcirclephiexpanded}), it is sufficient to consider \begin{eqnarray} \int_{\mathbb{R}^2} \left( \int_{0}^{2\pi} \int_{0}^{2\pi} \frac{e^{i \mathbf{p} \cdot (\boldsymbol{\gamma}(\theta_1)-\boldsymbol{\gamma}(\theta_2)) }}{p^2 + \nu^2} \psi(\theta_1) d \theta_1 d \theta_2 \right) \frac{d^2 p}{(2\pi)^2} \;. \end{eqnarray} With the help of the plane wave expansion (\ref{planewaveexpansion2d}) and the formula (\ref{I0K0integral}), the above integral becomes \begin{eqnarray} I_0(\nu R) K_0(\nu R) \left( \int_{S^1} \psi(\theta) d \theta \right) \;. \label{deformedPhicircle2}\end{eqnarray} The last integral in (\ref{defomedcirclephiexpanded}) can be computed similarly by first rewriting the expression $i (\mathbf{p} \cdot \mathbf{n}(\theta)) e^{i \mathbf{p} \cdot \boldsymbol{\gamma}(\theta)} = \frac{\partial}{\partial R} (e^{i \mathbf{p} \cdot \boldsymbol{\gamma}(\theta)})$ and using $\frac{d J_0(x)}{d x}=-J_1(x)$; we find \begin{eqnarray} & & \hskip-2cm \int_{\mathbb{R}^2} \bigg( \int_{0}^{2\pi} \int_{0}^{2\pi} \frac{e^{i \mathbf{p} \cdot (\boldsymbol{\gamma}(\theta_1)-\boldsymbol{\gamma}(\theta_2)) }}{p^2 + \nu^2} \; i \, (\mathbf{p} \cdot \mathbf{n}(\theta_1))\psi(\theta_1) d \theta_1 d \theta_2 \bigg) \frac{d^2 p}{(2\pi)^2} \nonumber \\ & & = - \int_{0}^{\infty} J_0(p R) J_1(p R) \, \frac{p^2}{p^2+\nu^2} \; d p \left( \int_{0}^{2\pi} \psi(\theta) \, d \theta \right) \;. \end{eqnarray} Rewriting $\frac{p^2}{p^2 +\nu^2}$ as $1-\frac{\nu^2}{p^2+\nu^2}$, and using the formula (6.512) in \cite{gradshteyn2014table} \begin{eqnarray} \int_{0}^{\infty} J_{\nu}(\alpha x) J_{\nu-1}(\alpha x) d x = \frac{1}{2 \alpha} \;, \label{intofJnuJnu-1} \end{eqnarray} and the formula (6.577) in \cite{gradshteyn2014table}, \begin{eqnarray} \int_{0}^{\infty} \frac{J_0 (p R) J_1(p R)}{p^2 + \nu^2} d p = \frac{1}{\nu} I_{1}(\nu R) K_0(\nu R) \;, \end{eqnarray} it follows that \begin{eqnarray} & & \hskip-3cm \int_{\mathbb{R}^2} \bigg( \int_{0}^{2\pi} \int_{0}^{2\pi} \frac{e^{i \mathbf{p} \cdot (\boldsymbol{\gamma}(\theta_1)-\boldsymbol{\gamma}(\theta_2)) }}{p^2 + \nu^2} \; i \, (\mathbf{p} \cdot \mathbf{n}(\theta_1))\psi(\theta_1) d \theta_1 d \theta_2 \bigg) \frac{d^2 p}{(2\pi)^2} \nonumber \\ & & =-\left( \frac{1}{2 R} - \nu I_1 (\nu R) K_0(\nu R) \right) \left( \int_{0}^{2\pi} \psi(\theta) d \theta \right) \;. 
\label{deformedPhicircle3} \end{eqnarray} After combining all the above results (\ref{1stintegralPhideformedcircle}), (\ref{deformedPhicircle2}), and (\ref{deformedPhicircle3}), we finally obtain \begin{eqnarray} & & \hskip-1.5cm \Tilde{\Phi}(-\nu^2) = \frac{1}{\lambda} - \frac{1}{2\pi} I_0(\nu R) K_0(\nu R) + \frac{\epsilon}{2 \pi^2} \left( - \frac{1}{2R} + \nu I_0(\nu R) K_1(\nu R) \right) \left(\int_{0}^{2\pi} \psi(\theta) d \theta \right) + O(\epsilon^2) \;,\label{deformedcirclePhiforboundstate} \end{eqnarray} where we have used $I_1(x)K_0(x)+I_0(x)K_1(x)=1/x$. When there is no deformation ($\epsilon=0$), we have only one bound state. This can be seen by simply expressing the second term $I_0(\nu R) K_0(\nu R)$ using its integral representation (\ref{I0K0integral}): \begin{eqnarray} \frac{1}{\lambda} = \frac{1}{2\pi} I_0(\nu R) K_0(\nu R) = \frac{1}{2\pi} \int_{0}^{\infty} \frac{x}{x^2+\nu^2 R^2} J_{0}^{2}(x) d x \;. \end{eqnarray} Then, by taking the derivative with respect to $\nu$ under the integral sign, it is easy to see that the right hand side of the above equation is a decreasing function of $\nu$ for given parameters $\lambda$ and $R$. Therefore, there is a unique solution, say $\nu_0$, to the above equation. It is important to notice that deformations satisfying $\int_{0}^{2\pi} \psi(\theta) d \theta =0$ do not change the bound state energies up to first order in $\epsilon$. Since we evaluate the deformation to order $\epsilon$, we can actually solve for the bound state energy of the deformed curve to the same order. In \cite{pointinteractionsonmanifolds2, erman2019perturbative} we derived a general formula for perturbations of eigenvalues for small perturbations of the principal matrix $\Phi$; here we have a one-dimensional version of this formula, so we can directly use the expansion above. Let $\nu= \nu_0 + \epsilon \nu_1 + O(\epsilon^2)$, where $\nu_0$ denotes the solution to the original unperturbed circle case. Then, the bound state energy $E_B=-(\nu_0+\epsilon \nu_1)^2$ for the deformed circular defect can be found from the zeroes of $\tilde{\Phi}$. This is achieved up to order $\epsilon$ by simply expanding its first term around $\nu_0$ \begin{equation} \frac{1}{\lambda} - \frac{1}{2\pi} I_0((\nu_0+\epsilon \nu_1) R) K_0((\nu_0+ \epsilon \nu_1) R) + \frac{\epsilon}{2 \pi^2} \left( - \frac{1}{2R} + \nu_0 I_0(\nu_0 R) K_1(\nu_0 R) \right) \left(\int_{0}^{2\pi} \psi(\theta) d \theta \right) =0 \;, \end{equation} and using the fact that the zeroth order term cancels $\frac{1}{\lambda}$, to get the solution $\nu_1$. Hence, we obtain an explicit formula for the bound state energy up to order $\epsilon$ \begin{eqnarray} E_B = -\nu_{0}^{2} - \epsilon \, \frac{2 \nu_0}{\pi R} \; \left( \frac{\left( \frac{1}{2 R} - \nu_0 I_0(\nu_0 R) K_1(\nu_0 R)\right)}{I_1(\nu_0 R) K_0(\nu_0 R) - I_0(\nu_0 R) K_1(\nu_0 R)}\right) \left( \int_{0}^{2\pi} \psi(\theta) d \theta \right) + O(\epsilon^2) \; ,\end{eqnarray} which can be further simplified into \begin{eqnarray} E_B = -\nu_{0}^{2} - \epsilon \, \frac{ \nu^2_0}{\pi R} \left( \int_{0}^{2\pi} \psi(\theta) d \theta \right) + O(\epsilon^2) \;. \end{eqnarray} The simplicity of the first order result is remarkable, and hints at a geometric interpretation. 
Suppose that instead of the original circle with radius $R$ we replace the circle with a circle of radius $R-\epsilon R_1$ where $\epsilon R_1=\frac{1}{2 \pi R}\int_{0}^{2\pi} \epsilon \psi(\theta) R d\theta$ (note that the normal in the curvature description is inward). Because we are looking at a circle, we do have the same eigenvalue equation, \begin{equation} \frac{1}{\lambda} - \frac{1}{2\pi} I_0((\nu_0+\epsilon \nu_1) (R-\epsilon R_1)) K_0((\nu_0+ \epsilon \nu_1) (R-\epsilon R_1))=0 \;. \end{equation} If we expand all the terms to order $\epsilon$, we find the relation $R\nu_1=\nu_0 R_1$. By using $E_B=-(\nu_0+\epsilon \nu_1)^2=-\nu_0^2-2\epsilon \nu_0 \nu_1+O(\epsilon^2)$ we recover exactly the above result. So we state this observation as follows: \begin{mylemma*} A small deformation in the normal direction of a given circle, which supports an attractive delta function, leads to a perturbation of the original bound state energy; to first order, the resulting change can be obtained as follows: increase the initial radius by an amount equal to the average of the deformation over the given circle, then compute the first order perturbation of the bound state energy corresponding to this new circle with the same coupling constant. \end{mylemma*} \begin{myremark*} It is tempting to push this to the second order and search for, if there is any, a geometric interpretation of the result. But the calculations are rather involved, so we postpone them to future work. Note that the circle problem per se can be solved by elementary methods, that is by choosing polar coordinates at the center and writing the delta function along the radial direction. However, a general curve cannot be solved by this approach as there is no natural coordinate system to choose. In the case of a small deformation, one can think of the delta potential supported on this curve as a delta function supported on the original circle plus a series of perturbations. This idea leads to, even to first order, a term of the form $\epsilon \frac{d \delta}{d r}(r-R)\int_{0}^{2\pi} \psi(\theta) d\theta $ and some additional ones coming from the change of arc-length as well as the change of total length. Here the derivative-of-the-delta-function term is important since the wave function is of the form (disregarding the normalization) $$ I_0( r\nu_0)K_0(R\nu_0) \theta(R-r) +I_0(R\nu_0)K_0(r\nu_0)\theta(r-R), $$ and the usual first order perturbation of energy, which is found by evaluating the expectation value in the state of interest, leads to a divergence (here we need to use the symmetric choice for the theta function, as often used in distribution theory). \end{myremark*} The single bound state energy $E_B$ for the original circular defect can be plotted numerically as a function of $R$ for fixed values of $\lambda$. For a particular deformation $\psi(\theta)=\sin^2 \theta$, we can plot how the bound state energy $E_{B}$ changes with respect to $R$ numerically with the help of Mathematica, as shown in Fig. \ref{fig:boundstatedeformedcircle}. \begin{figure}[h!] \centering \includegraphics{boundstatedeformedcircle.eps} \caption{Bound state energy for the circular defect and for the deformed circular defect versus $R$ with $\epsilon=0.1$, $\lambda=10$ units.} \label{fig:boundstatedeformedcircle} \end{figure} For a given $R$, it is easy to see that the function $\tilde{\Phi}$ is a decreasing function of $\lambda$ for all $\nu>0$. This implies that the bound state energies decrease with increasing strength $\lambda$, as expected. 
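The lemma is easy to test numerically: one can solve the unperturbed equation $\frac{1}{\lambda}=\frac{1}{2\pi}I_0(\nu R)K_0(\nu R)$ for $\nu_0$, evaluate the simplified first order formula, and compare with the exact bound state energy of the circle of radius $R-\epsilon R_1$. A minimal Python sketch, assuming SciPy (the function names are ours); the two printed values agree up to $O(\epsilon^2)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import i0e, k0e

def nu0_circle(lam, R):
    # unique zero of 1/lam - (1/2pi) I0(nu R) K0(nu R); the scaled
    # functions satisfy i0e(x)*k0e(x) = I0(x)*K0(x), avoiding overflow
    g = lambda nu: 1.0/lam - i0e(nu*R)*k0e(nu*R)/(2.0*np.pi)
    return brentq(g, 1e-12, 1e8)

def EB_first_order(lam, R, eps, psi_int):
    # E_B = -nu0^2 - eps*nu0^2/(pi R) * int_0^{2pi} psi dtheta
    nu0 = nu0_circle(lam, R)
    return -nu0**2 - eps*nu0**2*psi_int/(np.pi*R)

def EB_equivalent_circle(lam, R, eps, psi_int):
    # lemma: undeformed circle of radius R - eps*R1,
    # with R1 = (1/2pi) int_0^{2pi} psi dtheta
    return -nu0_circle(lam, R - eps*psi_int/(2.0*np.pi))**2

# example: psi(theta) = sin^2(theta), so int psi dtheta = pi
print(EB_first_order(10.0, 1.0, 0.01, np.pi),
      EB_equivalent_circle(10.0, 1.0, 0.01, np.pi))
\end{verbatim}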
\subsection{First Order Stationary Scattering Problem} \label{First Order Stationary Scattering Problem for deformed Circle} The function $\tilde{\Phi}$ can be analytically continued onto the complex plane using (\ref{deformedcirclePhiforboundstate}), and $\tilde{\Phi}(E_k+i0)$ can be evaluated in terms of the variable $k>0$: \begin{eqnarray} & & \hskip-1cm \tilde{\Phi}(E_k+i0) = \frac{1}{\lambda}- \frac{i}{4} J_0(kR) H_{0}^{(1)}(kR) \nonumber \\ & & \hspace{3cm} + \, \frac{\epsilon}{2\pi^2} \left(-\frac{1}{2R} + \frac{i \pi k}{2} J_0(kR) H_{1}^{(1)}(kR) \right) \left(\int_{0}^{2\pi} \psi(\theta) d \theta \right) + O(\epsilon^2) \;. \label{Phideformedcirclescattering} \end{eqnarray} Let $\theta'$ be the angle between $\mathbf{k}'$ and $\mathbf{k}$, where $\mathbf{k}$, the momentum of the incoming particle, is chosen parallel to the $x$ axis for simplicity. Then, we get \begin{eqnarray} & & \hskip-1cm \langle \mathbf{k}' | \tilde{\Gamma} \rangle = \left(1 + \frac{\epsilon}{2\pi R} \int_{0}^{2\pi} \psi(\theta) d \theta \right) \bigg(J_0(kR) -\frac{\epsilon}{2\pi R} \int_{0}^{2\pi} e^{-ik R \cos(\theta-\theta')} \psi(\theta) d \theta \nonumber \\ & & \hspace{4cm} - \, \frac{i k \epsilon}{2\pi} \int_{0}^{2\pi} e^{-i k R \cos(\theta-\theta')} \cos(\theta-\theta') \psi(\theta) d \theta \bigg) + O(\epsilon^2) \;. \label{k'gamma} \end{eqnarray} Hence, using the above results (\ref{gammak}), (\ref{k'gamma}) and (\ref{Phideformedcirclescattering}), the scattering amplitude is given by \begin{eqnarray} & & \tilde{f}(\mathbf{k} \rightarrow \mathbf{k}') = \frac{e^{\frac{i \pi}{4}}}{4} \sqrt{\frac{2}{\pi k}} \langle \mathbf{k}' | \tilde{\Gamma} \rangle (\tilde{\Phi}(E_k+i0))^{-1} \langle \tilde{\Gamma} | \mathbf{k} \rangle \nonumber \\ & & = \frac{e^{\frac{i \pi}{4}}}{4} \sqrt{\frac{2}{\pi k}} \Bigg[ \left(1 + \frac{\epsilon}{2\pi R} \int_{0}^{2\pi} \psi(\theta) d \theta \right) \bigg(J_0(kR) -\frac{\epsilon}{2\pi R} \int_{0}^{2\pi} e^{-ik R \cos(\theta-\theta')} \psi(\theta) d \theta \nonumber \\ & & \hspace{4cm} - \, \frac{i k \epsilon}{2\pi} \int_{0}^{2\pi} e^{-i k R \cos(\theta-\theta')} \cos(\theta-\theta') \psi(\theta) d \theta \bigg) + O(\epsilon^2) \Bigg] \nonumber \\ & & \times \Bigg[ \frac{1}{\lambda}- \frac{i}{4} J_0(kR) H_{0}^{(1)}(kR)+ \frac{\epsilon}{2\pi^2} \left(-\frac{1}{2R} + \frac{i \pi k}{2} J_0(kR) H_{1}^{(1)}(kR) \right) \left(\int_{0}^{2\pi} \psi(\theta) d \theta \right) + O(\epsilon^2) \Bigg]^{-1} \nonumber \\ & & \hspace{2cm} \times \Bigg[ \left(1 + \frac{\epsilon}{2\pi R} \int_{0}^{2\pi} \psi(\theta) d \theta \right) \bigg(J_0(kR) -\frac{\epsilon}{2\pi R} \int_{0}^{2\pi} e^{ik R \cos(\theta)} \psi(\theta) d \theta \nonumber \\ & & \hspace{5cm} + \, \frac{i k \epsilon}{2\pi} \int_{0}^{2\pi} e^{i k R \cos(\theta)} \cos(\theta) \psi(\theta) d \theta \bigg) + O(\epsilon^2) \Bigg] \;. \end{eqnarray} The differential cross sections as a function of $k$ for the circular defect and the deformed circular defect, for a particular deformation $\psi(\theta)=\sin^2 \theta$, are plotted in Fig. \ref{fig:diffcrosssectiondeformedcircle}. 
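For completeness, the boundary values (\ref{Phideformedcirclescattering}), which are the only new ingredient entering the cross section, can be evaluated as follows (a Python sketch, SciPy assumed, names ours):
\begin{verbatim}
import numpy as np
from scipy.special import hankel1, j0

def Phi_tilde_circle(k, lam, R, eps, psi_int):
    # eq. (Phideformedcirclescattering) to first order in eps;
    # psi_int = int_0^{2pi} psi(theta) dtheta
    return (1.0/lam - 0.25j*j0(k*R)*hankel1(0, k*R)
            + (eps/(2.0*np.pi**2))*(-1.0/(2.0*R)
               + 0.5j*np.pi*k*j0(k*R)*hankel1(1, k*R))*psi_int)
\end{verbatim}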
\begin{figure} \centering \includegraphics{diffcrosssectiondeformedcircle.eps} \caption{Differential cross sections as a function of $k$ from a circular defect and deformed circular defect (red curve) for $\psi(\theta)=\sin^2 \theta$, and $R=5$, $\lambda=40$, and $\epsilon=0.1$ units.} \label{fig:diffcrosssectiondeformedcircle} \end{figure} \section{Small Deformations of a Sphere} \label{Small Deformations of a Sphere} We consider a particular regular surface, a sphere $S^2$ centered at the origin with radius $R$. Let $\boldsymbol{\sigma}:(0,\pi)\times (0,2\pi) \rightarrow S^2$ be a local chart, given by (\ref{localchartsphere}). Suppose that $\tilde{\Sigma}$ is the small deformation of the sphere along its normal direction, defined by \begin{eqnarray} \boldsymbol{\tilde{\sigma}}(\theta, \phi):= \boldsymbol{\sigma}(\theta, \phi) + \epsilon \psi(\theta, \phi) \mathbf{N}(\theta, \phi) \;, \label{deformedshpere} \end{eqnarray} where $\epsilon$ is a small deformation parameter, $\mathbf{N}$ is the normal vector field on the sphere, and $\psi$ is a smooth function on the sphere. If $|\epsilon|$ is sufficiently small, it is well known that the deformed sphere $\tilde{\Sigma}$ is a regular surface \cite{bar2010elementary} and its surface area is given by \begin{eqnarray} A(\tilde{\Sigma}) = A(\Sigma) - 2 \epsilon \int_{0}^{2\pi} \int_{0}^{\pi} H(\theta, \phi) \psi(\theta, \phi) R^2 \sin \theta d \theta d \phi \;, \label{deformationofarea} \end{eqnarray} where $H=1/R$ is the mean curvature of the sphere. To simplify the notation, we will write $d \Omega$ instead of $\sin \theta d \theta d \phi$, and use $\Omega$ for the argument of functions on the sphere. The resolvent can be similarly constructed for the deformed spherical defect by following the same line of arguments discussed above. The explicit form of the resolvent operator is given by \begin{eqnarray} R(E)= R_0(E) + R_{0}(E) |\tilde{\Sigma} \rangle \tilde{\Phi}^{-1}(E) \langle \tilde{\Sigma}| R_0(E)\;, \end{eqnarray} where \begin{eqnarray} \tilde{\Phi}(E)= \frac{1}{\lambda} - \langle \Tilde{\Sigma} | R_0(E) | \tilde{\Sigma} \rangle \;. \label{Phideformedsphere} \end{eqnarray} \subsection{First Order Calculation of the Bound State Energy} \label{First Order Calculation of the Bound State Energy for Deformed Sphere} For this part, we assume that the sphere problem has a bound state solution. We will choose $E=-\nu^2$, since we are interested in bound states to begin with. If we use the realization in the Fourier domain, the resolvent kernel is given by \begin{eqnarray} R_0(\mathbf{r}, \mathbf{r'}|-\nu^2)= \int_{\mathbb{R}^3} \frac{e^{i \mathbf{p}\cdot (\mathbf{r}-\mathbf{r'})}}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \;. \label{resolventkernel} \end{eqnarray} Our aim is to calculate the function $\tilde{\Phi}(-\nu^2)$ up to order $\epsilon$. Using (\ref{deformationofarea}) and expanding the terms up to order $\epsilon$, we have \begin{eqnarray} & & \tilde{\Phi}(-\nu^2)= \frac{1}{\lambda} - \frac{1}{(4\pi)^2} \left(1+\frac{\epsilon}{\pi R} \int_{S^2} \psi(\Omega) d \Omega \right) \nonumber \\ & & \hspace{1cm} \times \int_{S^{2}\times S^2} R_0 \left(\boldsymbol{\tilde{\sigma}}(\Omega), \boldsymbol{\tilde{\sigma}}(\Omega')|-\nu^2 \right) \left(1-\frac{2 \epsilon}{R} \left( \psi(\Omega)+\psi(\Omega')\right)\right) d\Omega d\Omega' + O(\epsilon^2) \;. 
\label{Phitildedeformedsphere1} \end{eqnarray} The resolvent kernel up to order $\epsilon$ can be calculated using (\ref{resolventkernel}): \begin{eqnarray} & & R_0 \left(\boldsymbol{\tilde{\sigma}}(\Omega), \boldsymbol{\tilde{\sigma}}(\Omega')|-\nu^2 \right) \nonumber \\ & & = \int_{\mathbb{R}^3} e^{i \mathbf{p} \cdot \mathbf{\boldsymbol{\sigma}}(\Omega)} e^{-i \mathbf{p} \cdot \mathbf{\boldsymbol{\sigma}}(\Omega')} \frac{1+ i \epsilon \, \mathbf{p} \cdot \left(\psi(\Omega) \mathbf{N}(\Omega)-\psi(\Omega') \mathbf{N}(\Omega')\right)}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} + O(\epsilon^2) \;. \end{eqnarray} Substituting this into (\ref{Phitildedeformedsphere1}), and keeping the first order terms in $\epsilon$ for the surface integrals of the resolvent kernel, we obtain \begin{eqnarray} & & \tilde{\Phi}(-\nu^2)= \frac{1}{\lambda} - {1 \over (4\pi)^2} \left(1+\frac{\epsilon}{\pi R} \int_{S^2} \psi(\Omega) d \Omega \right) \Bigg[ \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} d\Omega d \Omega' \bigg) \nonumber \\ & & \times \, \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} + \epsilon \bigg( 2 \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} (i \mathbf{p} \cdot \mathbf{N}(\Omega)) \, \psi(\Omega) \; d\Omega d \Omega' \bigg) \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \nonumber \\ & & \hspace{1cm} - \, \frac{4}{R} \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} \psi(\Omega) \; d\Omega d \Omega' \bigg) \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \bigg) \Bigg] \;. \end{eqnarray} We have already computed the above first integral in evaluating the second diagonal element of the matrix $\Phi$ in equation (\ref{Phispherepointnew}), and the result can be expressed as \begin{eqnarray} \langle \Sigma | R_0(-\nu^2)|\Sigma \rangle & = & {1 \over (4\pi)^2} \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} d\Omega d \Omega' \bigg) \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \nonumber \\ & = & \frac{1}{4 \pi R} K_{1/2}(\nu R) I_{1/2}(\nu R) \;. \label{Phideformedsphere1stintegral} \end{eqnarray} For the second integral, we will use the identity $\left(i\mathbf{p}\cdot \mathbf{N}(\Omega) \right) e^{i\mathbf{p}\cdot \boldsymbol{\sigma}(\Omega)} = {\partial \over \partial R} e^{i\mathbf{p}\cdot \boldsymbol{\sigma}(\Omega)}$. The exponential factors can be expressed in terms of the spherical Bessel functions of the first kind and spherical harmonics using the well-known expansion of plane waves into spherical harmonics: \begin{eqnarray} e^{i\mathbf{p}\cdot \boldsymbol{\sigma}(\Omega)}=4\pi \sum_{l=0}^{\infty} \sum_{m=-l}^{l} i^l j_l(p R) Y_{lm}^* (\Omega_p)Y_{lm} (\Omega) \;. \label{planewaveexpansion} \end{eqnarray} Here $\Omega_p$ and $\Omega$ are the polar angles of the vector $\mathbf{p}$ and $\boldsymbol{\sigma}$, respectively. 
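As a quick sanity check, the expansion (\ref{planewaveexpansion}) converges rapidly for moderate $p\,|\boldsymbol{\sigma}|$ and can be summed numerically; in the sketch below (Python, SciPy assumed, names ours) the $m$-sum is collapsed with the addition theorem $\sum_{m=-l}^{l} Y_{lm}^*(\Omega_p)Y_{lm}(\Omega)=\frac{2l+1}{4\pi}P_l(\cos\gamma)$, where $\gamma$ is the angle between $\mathbf{p}$ and $\boldsymbol{\sigma}$:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_partial_sum(p_vec, r_vec, lmax=60):
    # truncated eq. (planewaveexpansion), with the m-sum collapsed by
    # the addition theorem; compare with exp(1j*np.dot(p_vec, r_vec))
    p, r = np.linalg.norm(p_vec), np.linalg.norm(r_vec)
    cosg = np.dot(p_vec, r_vec)/(p*r)
    ls = np.arange(lmax + 1)
    terms = (2*ls + 1)*(1j)**ls*spherical_jn(ls, p*r)*eval_legendre(ls, cosg)
    return terms.sum()
\end{verbatim}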
Hence, we obtain \begin{eqnarray} & & \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} (i \mathbf{p} \cdot \mathbf{N}(\Omega)) \, \psi(\Omega) \; d\Omega d \Omega' \bigg) \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \nonumber \\ & & = (4\pi)^2 \int_{0}^{\infty} \int_{S^2} \bigg( \int_{S^2 \times S^2} \sum_{l=0}^{\infty} \sum_{m=-l}^{l} i^l \frac{\partial j_l(p R)}{\partial R} Y_{lm}^* (\Omega_p) Y_{lm} (\Omega) \psi(\Omega) \nonumber \\ & & \hspace{2cm} \times \sum_{l'=0}^{\infty} \sum_{m'=-l'}^{l'} (-i)^{l'} j_{l'}(p R) \, Y_{l'm'} (\Omega_p)Y_{l'm'}^{*}(\Omega') d \Omega d \Omega' \bigg) \frac{1}{p^2+\nu^2} \frac{d \Omega_p \, p^2 d p }{(2\pi)^3} \;. \end{eqnarray} By the orthonormality of the spherical harmonics $\int_{S^2} Y_{lm}(\Omega) Y_{l'm'}^{*}(\Omega) d\Omega=\delta_{ll'} \delta_{mm'}$, we can carry out the integrations over $\Omega_p$ and $\Omega'$ to get \begin{eqnarray} & & \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} (i \mathbf{p} \cdot \mathbf{N}(\Omega)) \, \psi(\Omega) \; d\Omega d \Omega' \bigg) \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \nonumber \\ & & =\frac{(4\pi)^2}{(2\pi)^3} \int_{0}^{\infty} j_0(p R) (-j_1(p R)) \frac{p^3}{p^2+\nu^2} d p \left( \int_{S^2} \psi(\Omega) d \Omega \right) \;, \end{eqnarray} where we have used $Y_{00}(\Omega)=1/\sqrt{4\pi}$ and the relation ${d j_0(x)\over d x}=-j_1(x)$. We now use $j_l(x)=\sqrt{ \pi\over 2x} J_{l+1/2} (x)$ and decompose ${p^2\over p^2+\nu^2}$ as $1-{\nu^2\over p^2+\nu^2}$ together with the formulas (6.512) and (6.577) in \cite{gradshteyn2014table} for the integrals of the Bessel functions \begin{eqnarray} \int_{0}^{\infty} J_{1/2}(p R) J_{3/2}(p R) dp & = & \frac{1}{2R} \;, \\ \int_0^\infty J_{3/2} (p R) J_{1/2}(p R) {d p\over p^2+\nu^2} & = & {1\over \nu} I_{3/2}(\nu R) K_{1/2}(\nu R) \;, \end{eqnarray} to get \begin{eqnarray} & & \hskip-3cm \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} (i \mathbf{p} \cdot \mathbf{N}(\Omega)) \, \psi(\Omega) \; d\Omega d \Omega' \bigg) \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \nonumber \\ & & =- \frac{1}{R} \left( \int_{S^2} \psi(\Omega) d \Omega \right) \left( \frac{1}{2R}- \nu K_{1/2}(\nu R) I_{3/2}(\nu R) \right) \;. \label{Phideformedsphere2ndintegral} \end{eqnarray} By applying similar arguments, we easily find for the last integral \begin{eqnarray} & & \hskip-3cm \int_{\mathbb{R}^3} \bigg( \int_{S^2 \times S^2} e^{i \mathbf{p} \cdot (\mathbf{\boldsymbol{\sigma}}(\Omega)- \mathbf{\boldsymbol{\sigma}}(\Omega'))} \psi(\Omega) \; d\Omega d \Omega' \bigg) \frac{1}{p^2 +\nu^2} \frac{d^3 p}{(2\pi)^3} \nonumber \\ & & \hspace{3cm} = \frac{1}{R} K_{1/2}(\nu R) I_{1/2}(\nu R) \left( \int_{S^2} \psi(\Omega) d\Omega \right) \;. 
\label{Phideformedsphere3thintegral} \end{eqnarray} Combining all these results (\ref{Phideformedsphere1stintegral}), (\ref{Phideformedsphere2ndintegral}) and (\ref{Phideformedsphere3thintegral}), we obtain \begin{eqnarray} & & \hskip-3cm \tilde{\Phi}(-\nu^2) = \frac{1}{\lambda} - \frac{1}{4\pi R} I_{1/2}(\nu R) K_{1/2}(\nu R) \nonumber \\ & & \hspace{3cm} + \, \frac{\epsilon}{8 \pi^2 R} \left(- \frac{1}{2R} + \nu I_{1/2}(\nu R) K_{3/2}(\nu R) \right) \left(\int_{S^2} \psi(\Omega) d \Omega \right) + O(\epsilon^2) \;, \label{deformedspherephi} \end{eqnarray} where we have used $I_{1/2}(x)K_{3/2}(x)+I_{3/2}(x)K_{1/2}(x)=1/x$. It is important to notice that the formula for the function $\tilde{\Phi}$ is very similar to the one obtained for the deformed circular defect case; however, there is a difference. The eigenvalue flow can be obtained again by writing $I_{1/2}(\nu R) K_{1/2}(\nu R)$ in integral form, using formula (6.577) in \cite{gradshteyn2014table}: \begin{eqnarray} \frac{1}{\lambda} = \frac{1}{4\pi R} I_{1/2}(\nu R) K_{1/2}(\nu R) = \frac{1}{4\pi R} \int_{0}^{\infty} \frac{x}{x^2+\nu^2 R^2} J_{1/2}^{2}(x) d x \;. \end{eqnarray} As one can see, the right hand side of the above equation is a decreasing function of $\nu$ for given parameters $\lambda$ and $R$. Yet the product $I_{1/2}(\nu R) K_{1/2}(\nu R)$ remains finite as $\nu\to 0^+$, so there may be no solution if $\lambda$ is too small. If there is a solution to the above equation, then it is unique, say $\nu_0$. We assume that this is the case. Let $\nu= \nu_0 + \epsilon \nu_1 + O(\epsilon^2)$; then the bound state energy up to order $\epsilon$ can be found by solving for the zeroes of $\tilde{\Phi}$, expanding the terms around $\nu=\nu_0$. Hence, we find \begin{eqnarray} & & \hskip-1cm E_B \nonumber = -\nu_{0}^{2} \\ & & \hskip-0.5cm - \, \epsilon \, \frac{\nu_0}{\pi R} \; \left( \frac{\frac{1}{2 R} - \nu_0 I_{1/2}(\nu_0 R) K_{3/2}(\nu_0 R)}{I_{3/2}(\nu_0 R) K_{1/2}(\nu_0 R) - I_{1/2}(\nu_0 R) K_{3/2}(\nu_0 R) + \frac{1}{\nu_0 R} I_{1/2}(\nu_0 R) K_{1/2}(\nu_0 R) }\right)\nonumber \\ & & \hspace{5cm} \times \, \left( \int_{S^2} \psi(\Omega) d \Omega \right) + O(\epsilon^2) \;. \end{eqnarray} Not surprisingly, {\it this result has the same geometric interpretation as in the case of the circle}: we replace the original sphere by a sphere of slightly different radius $R-\epsilon R_1$, with $R_1={1\over 4\pi R^2}\int_{S^2} \psi(\Omega) R^2 d\Omega $, and then look for the small change in the energy due to this alteration; as a result of this computation, we recover the above expression. Hence, \begin{mylemma*} A small deformation in the normal direction of a given sphere, which supports an attractive delta function, leads to a perturbation of the original bound state energy; to first order, the resulting change can be obtained as follows: increase the initial radius by an amount equal to the average of the deformation over the given sphere, then compute the first order perturbation of the bound state energy corresponding to this new sphere with the same coupling constant. \end{mylemma*} For a particular deformation $\psi(\theta)=\sin \theta$, one can numerically plot how the bound state energies change with respect to $R$ for a given $\lambda$, as shown in Fig. \ref{fig:boundstatevsRdeformedsphere}. \begin{figure}[h!] 
\centering \includegraphics{boundstatevsRdeformedsphere.eps} \caption{Bound state energy for the spherical defect and for the deformed spherical defect (red curve) versus $R$ with $\epsilon=0.1$, $\lambda=10$ units.} \label{fig:boundstatevsRdeformedsphere} \end{figure} \subsection{First Order Stationary Scattering States} \label{First Order Stationary Scattering States for Deformed Sphere} For the deformed spherical defect, the function $\tilde{\Phi}$ can be analytically continued onto the complex plane using (\ref{deformedspherephi}), and $\tilde{\Phi}(E_k+i0)$ can then be evaluated in terms of the variable $k>0$: \begin{eqnarray} & & \hskip-1cm \tilde{\Phi}(E_k+i0) = \frac{1}{\lambda}- \frac{i}{8 R} J_{1/2}(kR) H_{1/2}^{(1)}(kR) \nonumber \\ & & \hspace{2cm} + \,\frac{\epsilon}{8 \pi^2 R} \left(-\frac{1}{2R} + \frac{i \pi k}{2} J_{1/2}(kR) H_{3/2}^{(1)}(kR) \right) \left(\int_{S^2} \psi(\Omega) d \Omega \right) + O(\epsilon^2) \;. \label{Phideformedspherescattering} \end{eqnarray} For the scattering amplitude, we need to find the expression $ \langle \tilde{\Sigma}|\mathbf{k} \rangle$ in terms of the deformation function $\psi(\Omega)$: \begin{eqnarray} \langle \tilde{\Sigma}|\mathbf{k} \rangle = \frac{1}{A(\tilde{\Sigma})} \int_{S^2} e^{i \mathbf{k}\cdot \boldsymbol{\tilde{\sigma}}(\Omega)} R^2 \left(1-\frac{2 \epsilon}{R} \psi(\Omega)\right) \; d \Omega \;. \end{eqnarray} By expanding the exponential $e^{i \epsilon \psi(\Omega) \mathbf{k} \cdot \mathbf{N}(\Omega)}$ and the area $A(\tilde{\Sigma})$ in $\epsilon$, using the formula (\ref{deformationofarea}), it is easy to show that \begin{eqnarray} & & \hskip-2cm \langle \tilde{\Sigma}|\mathbf{k} \rangle = \left(1 + \frac{\epsilon}{2\pi R} \int_{S^2} \psi(\Omega) d \Omega \right) \bigg(\frac{\sin (k R)}{k R} -\frac{\epsilon}{2\pi R} \int_{S^2} e^{i \mathbf{k} \cdot \boldsymbol{\sigma}(\Omega)} \psi(\Omega) d \Omega \nonumber \\ & & \hspace{4cm} + \, \frac{i \epsilon}{4\pi} \int_{S^2} e^{i \mathbf{k} \cdot \boldsymbol{\sigma}(\Omega)} (\mathbf{k} \cdot \mathbf{N}(\Omega)) \psi(\Omega) d \Omega \bigg) + O(\epsilon^2) \;. \label{gammak} \end{eqnarray} For simplicity, we consider a particular class of deformations, where $\psi(\Omega)=\psi(\theta)$. In this case, let $\theta'$ be the angle between $\mathbf{k}'$ and $\mathbf{k}$, where $\mathbf{k}$, the momentum of the incoming particle, is chosen parallel to the $z$ axis. 
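Both the first order bound state shift of the previous subsection and the boundary values (\ref{Phideformedspherescattering}) are straightforward to evaluate; here is a minimal Python sketch, assuming SciPy (the function names are ours). Note that a solution $\nu_0$ of the unperturbed equation exists only for $\lambda > 4\pi R$, since $I_{1/2}(x)K_{1/2}(x)=\frac{1-e^{-2x}}{2x}\to 1$ as $x\to 0^+$; when it exists, the first order formula and the equivalent-sphere recipe of the lemma agree up to $O(\epsilon^2)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import iv, kv, jv, hankel1

def nu0_sphere(lam, R):
    # zero of 1/lam - (1 - exp(-2 nu R))/(8 pi nu R^2); needs lam > 4 pi R
    g = lambda nu: 1.0/lam - (1.0 - np.exp(-2.0*nu*R))/(8.0*np.pi*nu*R**2)
    return brentq(g, 1e-12, 1e8)

def EB_sphere_first_order(lam, R, eps, psi_int):
    # first order formula; psi_int = int_{S^2} psi(Omega) dOmega
    nu0 = nu0_sphere(lam, R)
    x = nu0*R
    num = 1.0/(2.0*R) - nu0*iv(0.5, x)*kv(1.5, x)
    den = (iv(1.5, x)*kv(0.5, x) - iv(0.5, x)*kv(1.5, x)
           + iv(0.5, x)*kv(0.5, x)/x)
    return -nu0**2 - eps*(nu0/(np.pi*R))*(num/den)*psi_int

def EB_equivalent_sphere(lam, R, eps, psi_int):
    # lemma: undeformed sphere of radius R - eps*R1, R1 = psi_int/(4 pi)
    return -nu0_sphere(lam, R - eps*psi_int/(4.0*np.pi))**2

def Phi_tilde_sphere(k, lam, R, eps, psi_int):
    # eq. (Phideformedspherescattering) to first order in eps
    J, H0, H1 = jv(0.5, k*R), hankel1(0.5, k*R), hankel1(1.5, k*R)
    return (1.0/lam - 0.125j*J*H0/R
            + (eps/(8.0*np.pi**2*R))*(-1.0/(2.0*R)
               + 0.5j*np.pi*k*J*H1)*psi_int)
\end{verbatim}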
Then we get the explicit expression for the scattering amplitude for a given deformation $\psi$: \begin{eqnarray} & & \tilde{f}(\mathbf{k} \rightarrow \mathbf{k}') = - \frac{1}{4\pi} \langle \mathbf{k}' | \tilde{\Sigma} \rangle (\tilde{\Phi}(E_k+i0))^{-1} \langle \tilde{\Sigma} | \mathbf{k} \rangle \nonumber \\ & & = - \frac{1}{4 \pi} \Bigg[ \left(1 + \frac{\epsilon}{R} \int_{0}^{\pi} \psi(\theta) \sin \theta d \theta \right) \bigg(\frac{\sin k R}{k R} -\frac{\epsilon}{R} \int_{0}^{\pi} e^{-ik R \cos(\theta-\theta')} \psi(\theta) \sin \theta d \theta \nonumber \\ & & \hspace{4cm} - \, \frac{i k \epsilon}{2} \int_{0}^{\pi} e^{-i k R \cos(\theta-\theta')} \cos(\theta-\theta') \psi(\theta) \sin \theta d \theta \bigg) + O(\epsilon^2) \Bigg] \nonumber \\ & & \times \, \Bigg[ \frac{1}{\lambda}- \frac{i}{8 R} J_{1/2}(kR) H_{1/2}^{(1)}(kR)+ \frac{\epsilon}{4\pi R} \left(-\frac{1}{2R} + \frac{i \pi k}{2} J_{1/2}(k R) H_{3/2}^{(1)}(k R) \right) \left(\int_{0}^{\pi} \psi(\theta) \sin \theta d \theta \right) \nonumber \\ & & +\, O(\epsilon^2) \Bigg]^{-1} \times \Bigg[ \left(1 + \frac{\epsilon}{R} \int_{0}^{\pi} \psi(\theta) \sin \theta d \theta \right) \bigg(\frac{\sin k R}{k R} -\frac{\epsilon}{R} \int_{0}^{\pi} e^{i k R \cos(\theta)} \psi(\theta) \sin \theta d \theta \nonumber \\ & & \hspace{5cm} + \, \frac{i k \epsilon}{2} \int_{0}^{\pi} e^{i k R \cos(\theta)} \cos(\theta) \sin \theta \psi(\theta) d \theta \bigg) + O(\epsilon^2) \Bigg] \;. \end{eqnarray} The differential cross sections as a function of $k$ for the spherical defect and the deformed spherical defect, for a particular deformation $\psi(\theta)=\sin \theta$, are plotted in Fig. \ref{fig:diffcrosssectiondeformedsphere}. \begin{figure} \centering \includegraphics{diffcrosssectiondeformedsphere.eps} \caption{Differential cross sections as a function of $k$ from a spherical defect and deformed spherical defect (red curve) for $\psi(\theta)=\sin \theta$, and $R=1$, $\lambda=100$, and $\epsilon=0.1$ units.} \label{fig:diffcrosssectiondeformedsphere} \end{figure} \section*{Appendix A: Trotter-Kato Theorem} \label{TrotterKatotheorem} This is a different version of the Trotter-Kato theorem \cite{Reed1972methods}, stated also in \cite{rajeevdimock}: \begin{mytheo*} Let $H_n$ be a sequence of self-adjoint operators with resolvents $R_n(E)=(H_n-E)^{-1}$ defined for all complex numbers $E$ except a closed proper subset $U$ of $\mathbb{R}$. Furthermore, assume that $R_n(E)$ converges strongly for some $E \notin U$ and this limit is invertible. Then, there exists a self-adjoint operator $H$ with resolvents $R(E)=(H-E)^{-1}$ such that $R_n(E)$ converges strongly to $R(E)$ for all complex numbers $E \notin U$. \end{mytheo*} The idea of the proof is essentially the same as for the original Trotter-Kato theorem. In our problem, we consider the family of operators $H_{\epsilon}$, where $\epsilon>0$ is the parameter and $U=\{E \in \mathbb{C}: \det \Phi(\epsilon, E) =0, \; \Real(E) \geq 0 \}$. We consider a sequence of positive values of $\epsilon$ such that the regularized resolvents converge strongly to $R(E)$ for some $E \notin U$; e.g., we can choose $E$ to be a sufficiently negative real number. \section*{Acknowledgments} O. T. Turgut would like to thank A. Michelangeli for many informative discussions on mathematical aspects of singular interactions in general as well as A. Mostafazadeh for his interest in these problems.
\section{Introduction} In this paper we will prove metastability results for the contact process on the configuration model with a power-law degree distribution, extending the main results of \cite{CD, MVY,MMVY} to the case when the exponent of the power-law is smaller than or equal to $2$. \vspace{0.2cm} The contact process is one of the most studied interacting particle systems, see in particular Liggett's book \cite{L}, and is also often interpreted as a model for the spread of a virus in a population or a network. Mathematically, it can be defined as follows: given a countable locally finite graph $G$ and $\lambda >0$, the contact process on $G$ with infection rate $\lambda$ is a continuous-time Markov process $(\xi_t)_{t\geq 0}$ on $\{0,1\}^V$, with $V$ the vertex set of $G$. The elements of $V$, also called sites, are regarded as individuals which are either infected (state $1$) or healthy (state $0$). By considering $\xi_t$ as a subset of $V$ via $\xi_t \equiv \{v: \xi_t(v)=1\}$, the transition rates are given by \begin{align*} \xi_t \rightarrow \xi_t \setminus \{v\} & \textrm{ for $v \in \xi_t$ at rate $1,$ and } \\ \xi_t \rightarrow \xi_t \cup \{v\} & \textrm{ for $v \not \in \xi_t$ at rate } \lambda \, \textrm{deg}_{\xi_t}(v), \end{align*} where $\textrm{deg}_{\xi_t}(v)$ denotes the number of edges between $v$ and another infected site (note that if $G$ is a simple graph, in the sense that there is only one edge between any pair of vertices, then $\textrm{deg}_{\xi_t}(v)$ is just the number of infected neighbors of $v$ at time $t$). \vspace{0.2cm} Since the empty configuration is an absorbing state (and the only one), a quantity of particular interest is the extinction time, defined by $$\tau_G = \inf \{t: \xi_t = \varnothing\}.$$ Exploiting the fact that the contact process is stochastically increasing in $\lambda$, one can show that some graphs exhibit a nontrivial phase transition regarding the finiteness of $\tau_G$. For instance on $\mathbb{Z}^d$, there exists a critical value $\lambda_c(d)>0$, such that for $\lambda\le \lambda_c(d)$, $\tau_{\mathbb{Z}^d}$ is a.s. finite (when $\xi_0$ is finite), whereas when $\lambda>\lambda_c(d)$, it is infinite with positive probability (even when starting from a single vertex), see \cite{L} Section I.2 for a proof of this and references. Here we will only consider finite graphs, in which case the extinction time is always almost surely finite. However, it is still interesting to understand its order of magnitude as a function of the size of the graph. For instance a striking phenomenon occurs on finite boxes $\llbracket 0,n \rrbracket^d$: one can show that with high probability (w.h.p.), if the process starts from full occupancy, the extinction time is of logarithmic order when $\lambda<\lambda_c(d)$, of polynomial order when $\lambda=\lambda_c(d)$ (at least in dimension one), and of exponential order when $\lambda>\lambda_c(d)$, see \cite{L} Section I.3 for a discussion on this and a complete list of references. In fact such a result seems intimately related to the fact that finite boxes converge to $\mathbb{Z}^d$ when $n$ tends to infinity, in the sense of Benjamini--Schramm local weak convergence of graphs \cite{BS}. While a rigorous connection between the two phenomena still remains conjectural at the moment, many recent examples have given substantial credit to this conjecture, see for instance \cite{CD, CMMV, MV, MMVY}. 
\vspace{0.2cm} The case of the configuration model (a definition will be given later) is particularly interesting in this regard, at least when the degree distribution has finite mean. Indeed in this case it is not difficult to see that when the number of vertices increases, the sequence of graphs converges toward a Galton--Watson tree. In \cite{CD} Chatterjee and Durrett have shown that when the degree distribution has a power law (with exponent larger than two), the extinction time grows faster than any stretched exponential (in the number of vertices), which can be interpreted as saying that the critical value is zero for these graphs (thereby invalidating some physicists' predictions). Since on the other hand one can show that the critical value on the limiting Galton--Watson tree is also zero (the process always has a positive probability to survive, for any $\lambda>0$), the conjecture mentioned above is satisfied for this class of examples. It is worth noting that the case of degree distributions with lighter tails than polynomial seems much harder (in particular understanding the case of Poisson distributions would be of great interest due to its connection with Erd\H{o}s-R\'enyi random graphs). But the configuration model is also interesting for another reason, highlighted in \cite{CD}: when the degree sequence has a power law, the contact process exhibits a metastable behaviour. This was first proved under a finite second moment hypothesis (equivalently for exponents larger than three) in \cite{CD}, and the result was later strengthened and extended to exponents larger than two in \cite{MVY, MMVY}. To be more precise, in \cite{CD} the authors proved that when the degree distribution has a power law with finite second moment, then $$\mathbb{P} \left( c\lambda^{1+(a-2)(2- \delta)} \leq \frac{|\xi_{\exp(\sqrt n)}|}{n} \leq C \lambda^{1+(a-2)(1- \delta)} \right) \rightarrow 1,$$ for some positive constants $c$ and $C$ (independent of $\lambda$), where $\xi$ denotes the contact process starting from full occupancy. In \cite{MMVY} the authors have shown that when the degree distribution has finite mean (and a power law), the extinction time is w.h.p. exponential in the size of the graph (when starting from full occupancy), and combined with the results of \cite{MVY}, one obtains that $$ \mathbb{P} \left( c \rho_{a}(\lambda) \leq \frac{|\xi_{t_n}|}{n} \leq C \rho_{a}(\lambda) \right) \rightarrow 1,$$ for any sequence $(t_n)$ satisfying $t_n \to \infty$ and $t_n \le \exp(cn)$, where \begin{displaymath} \rho_{a}(\lambda) = \left \{ \begin{array}{ll} \lambda^{\frac{1}{3-a}} & \textrm{ if } 2 < a \leq 5/2\\ \frac{\lambda ^{2a-3}}{\log ^{a-2} (\frac{1}{\lambda})} & \textrm{ if } 5/2 < a \leq 3\\ \frac{\lambda ^{2a-3}}{\log ^{2a-4} (\frac{1}{\lambda})} & \textrm{ if } a > 3. \end{array} \right. \end{displaymath} \noindent In this paper we complete this picture by studying the case of power laws with exponents $ a\in (1,2]$. To simplify the discussion and some proofs we have chosen to consider mainly two special choices of degree distribution. Namely we assume that it is given either by \begin{equation} \label{pnaj} p_{n,a}(j)= c_{n,a}\, j^{-a}\qquad \textrm{for }j=1,\ldots,n, \end{equation} for graphs of size $n$, or by \begin{equation} \label{paj} p_a(j) = c_{\infty,a}\, j^{-a} \qquad \textrm{for }j\ge 1, \end{equation} independently of the size of the graph, where $(c_{n,a})$ and $c_{\infty,a}$ are normalizing constants. 
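In both cases the normalizing constants are explicit: $c_{n,a}=\big(\sum_{j=1}^n j^{-a}\big)^{-1}$ and $c_{\infty,a}=1/\zeta(a)$, with $\zeta$ the Riemann zeta function. The following short Python sketch (the names are ours, and it is only an illustration) computes $c_{n,a}$, samples degrees from \eqref{pnaj} by inversion, and illustrates that the mean degree is unbounded in $n$ when $a\in(1,2]$:
\begin{verbatim}
import numpy as np

def sample_degrees(a, n, size, seed=0):
    # inverse-CDF sampling from p_{n,a}(j) = c_{n,a} j^{-a}, j = 1..n
    j = np.arange(1, n + 1, dtype=float)
    cdf = np.cumsum(j**(-a))
    cdf /= cdf[-1]
    u = np.random.default_rng(seed).random(size)
    return 1 + np.searchsorted(cdf, u)

a, n = 1.5, 10**6
j = np.arange(1, n + 1, dtype=float)
c_na = 1.0/np.sum(j**(-a))               # c_{n,a}; c_{infty,a} = 1/zeta(a)
mean_degree = c_na*np.sum(j**(1.0 - a))  # grows like n^{2-a} for a < 2
print(c_na, mean_degree)
\end{verbatim}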
However, at the end of the paper we also present straightforward extensions of our results to more general distributions, see Section \ref{secext} for more details. Our first main result in this setting is the following: \begin{theo} \label{td} For each $n$, let $G_n$ be the configuration model with $n$ vertices and degree distribution given either by \eqref{pnaj} or \eqref{paj} with $a\in (1,2]$. Consider the contact process $(\xi_t)_{t\ge 0}$ with infection rate $\lambda>0$ starting from full occupancy on $G_n$. Then there is some positive constant $c = c(\lambda)$, such that the following convergence in probability holds: \begin{equation} \label{etd} \frac{|\xi_{t_n}|}{n}\quad \mathop{\longrightarrow}^{(\mathbb{P} )}_{n\to\infty} \quad \rho_{a}(\lambda), \end{equation} for any sequence $(t_n)$ satisfying $ t_n \to \infty$ and $t_n \leq \exp(c n)$, where \begin{equation} \rho_{a}(\lambda) = \sum \limits_{j=1}^{\infty} \frac{j\lambda}{j\lambda+1} p_a(j). \end{equation} \end{theo} \noindent Note that as $\lambda \rightarrow 0$, \begin{displaymath} \rho_{a}(\lambda)\asymp \left \{ \begin{array}{ll} \lambda^{a-1} & \textrm{ if } 1 < a < 2 \\ \lambda \log \frac{1}{\lambda} & \textrm{ if } a =2, \end{array} \right. \end{displaymath} which in particular shows that the guess of Chatterjee and Durrett \cite{CD} that $\rho_a(\lambda)$ should be $\mathcal{O}(\lambda)$ was not correct. \vspace{0.2cm} Now let us make some comments on the proof of this result. One first remark is that one of the main ingredients in the approach of \cite{MVY} completely breaks down when the degree distribution has infinite mean (or when its mean is unbounded, as in the case \eqref{pnaj}), since in this case the sequence of graphs $(G_n)$ does not locally converge anymore. In particular we cannot transpose the analysis of the contact process on $G_n$ (starting from a single vertex) into an analysis on an infinite limit graph. So instead we have to work directly on the graph $G_n$. In fact we will show that it contains w.h.p. a certain number of disjoint star graphs (i.e. graphs with one central vertex and all the others connected to the central vertex), which are all connected to one another, and whose total size is of order $n$ (the size of $G_n$). It is well known that the contact process on a star graph remains active w.h.p. for a time exponential in the size of the graph. So our main contribution here is to show that when we connect disjoint star graphs together, the process survives w.h.p. for a time which is exponential in the total size of these graphs. To this end we use the machinery introduced in \cite{CD}, with their notion of lit stars. We refer to Proposition \ref{psta} and its proof for more details. Now it is interesting to notice that while this strategy works in all the cases we consider, the details of the arguments strongly depend on whether $a<2$ or $a=2$, and on the choice of the degree distribution. This explains why we found it interesting to present the proof for the two examples \eqref{pnaj} and \eqref{paj} (note that these distributions were also considered in \cite{VVHZ}, where it was already proved that the distance between two randomly chosen vertices is a.s. equal to either two or three). Then to obtain the asymptotic expression for the density \eqref{etd}, the point is to use the self-duality of the contact process. This allows us to reformulate the problem of the density of infected sites in terms of the survival of the process starting from a single vertex. 
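Before going on, let us note that the limiting density $\rho_a(\lambda)$ is elementary to evaluate numerically, and the asymptotics displayed above are easy to observe; a Python sketch (SciPy assumed, names ours), where the tail neglected by the truncation at $J$ is at most $\zeta(a)^{-1}\sum_{j>J} j^{-a}$:
\begin{verbatim}
import numpy as np
from scipy.special import zeta

def rho(lam, a, J=10**6):
    # rho_a(lambda) = sum_j (j lam)/(j lam + 1) p_a(j), truncated at J
    j = np.arange(1, J + 1, dtype=float)
    return np.sum((j*lam)/(j*lam + 1.0)*j**(-a))/zeta(a)

# the ratios below approach constants as lam -> 0:
# rho ~ lam^{a-1} for 1 < a < 2, and rho ~ lam*log(1/lam) for a = 2
for lam in (1e-1, 1e-2, 1e-3):
    print(lam, rho(lam, 1.5)/lam**0.5, rho(lam, 2.0)/(lam*np.log(1.0/lam)))
\end{verbatim}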
But starting from a single vertex, the process has a real chance to survive for a long time only if it infects one of its neighbors before extinction. Moreover, when it does, one can show that w.h.p. it immediately infects one of the star graphs mentioned above, and therefore the virus survives w.h.p. for a time at least $t_n$. The conclusion of the theorem follows once we observe that the probability to infect a neighbor before extinction starting from any vertex is exactly equal to $\rho_a(\lambda)$ in case \eqref{paj} and to \begin{equation} \label{lm} \rho_{n,a}(\lambda) := \sum \limits _{j=1}^n \frac{j \lambda}{j \lambda +1} p_{n,a}(j), \end{equation} in case \eqref{pnaj}, which converges to $ \rho_a(\lambda)$, as $n \rightarrow \infty$. \vspace{0.2cm} \noindent Our second result is often considered in the literature as another (weaker) expression of metastability: \begin{theo} \label{propexp} Assume that the degree distribution on $G_n$ is given either by \eqref{pnaj} or \eqref{paj} with $a\in (1,2]$, and let $\tau_n$ be the extinction time of the contact process with infection rate $\lambda>0$ starting from full occupancy. Then \begin{itemize} \item[(i)] the following convergence in law holds $$\frac{\tau_n}{\mathbb{E} (\tau_n)}\ \mathop{\longrightarrow}^{(\mathcal L)}_{n\to \infty} \ \mathcal{E}(1),$$ with $\mathcal{E}(1)$ an exponential random variable with mean one, \\ \item[(ii)] there exists a constant $C>0$, such that $\mathbb{E} (\tau_n) \le \exp(Cn)$, for all $n\ge 1$. \end{itemize} \end{theo} In particular this result shows that Theorem \ref{td} cannot be extended to sequences $(t_n)$ growing faster than exponentially. In fact one can prove (see Remark \ref{tn}) that Theorem \ref{td} holds true for any constant $c$ smaller than $\liminf (1/n) \log \mathbb{E} (\tau_n)$, and cannot be extended above this limit. This of course raises the question of whether the sequence $(1/n)\log \mathbb{E} (\tau_n)$ admits a limit or not. Such a result has been obtained in a number of contexts, for instance in \cite{MMVY} or on finite boxes $\llbracket 0,n\rrbracket^d$ (see \cite{L} Section I.3), but we could not obtain it in our setting. One reason, which for instance prevents us from applying the strategy of \cite{MMVY}, is that there does not seem to be a natural way to embed $G_n$ into $G_{n+1}$ (or another configuration model with larger size). \vspace{0.2cm} Our method for proving Theorem \ref{propexp} (i) is rather general and only requires a simple hypothesis on the maximal degree and the diameter of the graph, which is satisfied in most scale-free random graph models, like the configuration model with power law distribution having a finite mean (with the same hypothesis as in \cite{CD,MVY}), or the preferential attachment graph (see \cite{C}). We refer the reader to Proposition \ref{pcel} and Remark \ref{remexp} for more details. \vspace{0.2cm} Let us also stress the fact that (ii) would be well known if the graph had order $n$ edges, as is the case when the degrees have finite mean, but here it is not, so we have to use a more specific argument, see Section 6. \vspace{0.2cm} Now the paper is organized as follows. In the next section, we recall the well-known and very useful graphical construction of the contact process. We also give a definition of the configuration model, fix some notation, and prove preliminary results on the graph structure. In Section 3, we prove that $G_n$ contains w.h.p.
a subgraph, called a two-step star graph, which is made of several star graphs connected together and whose total size is comparable to the size of the whole graph. We refer to this section for a precise statement, which in fact depends on which case we consider ($a<2$ or $a=2$, and distribution \eqref{pnaj} or \eqref{paj}). In Section 4 we show that once a vertex (with high degree) of the two-step star graph is infected, the virus survives for an exponentially long time. Then we prove Theorems \ref{td} and \ref{propexp} in Sections 5 and 6 respectively. Finally in the last section we discuss several extensions of our results to more general degree distributions. \section{Preliminaries} \subsection{Graphical construction of the contact process.} We briefly recall here the graphical construction of the contact process (see Liggett's book \cite{L} for more details). Fix $\lambda>0$ and an oriented graph $G$ (recall that a non-oriented graph can also be seen as oriented by associating to each edge two oriented edges). Then assign independent Poisson point processes $\mathcal{N}_v$ of rate $1$ to each vertex $v \in V$ and $\mathcal{N}_{e}$ of rate $\lambda$ to each oriented edge $e$. Set also $\mathcal{N}_{(v,w)}:=\cup_{e : v\to w}\, \mathcal{N}_e$, for each ordered pair $(v,w)$ of vertices, where the notation $e: v\to w$ means that the oriented edge $e$ goes from $v$ to $w$. We say that there is an infection path from $(v,s)$ to $(w,t)$, and we denote it by \begin{eqnarray} \label{vswt} (v,s)\longleftrightarrow (w,t), \end{eqnarray} either if $s=t$ and $v=w$, or if $s<t$ and if there is a sequence of times $s=s_0< s_1<\ldots<s_l<s_{l+1}=t,$ and a sequence of vertices $v=v_0,v_1,\ldots,v_l=w$ such that for every $i=1,\ldots,l$ \begin{displaymath} \left \{ \begin{array}{ll} s_i \in \mathcal{N}_{(v_{i-1},v_i)} \quad \textrm{ and }\\ \mathcal{N}_{v_i} \cap [s_i, s_{i+1}] = \varnothing. \end{array} \right. \end{displaymath} Furthermore, for any two subsets $A$ and $B$ of $V_n$ and any two subsets $I$ and $J$ of $[0,\infty)$, we write $$A\times I \longleftrightarrow B\times J,$$ if there exists $v\in A$, $w\in B$, $s\in I$ and $t\in J$, such that \eqref{vswt} holds. Then for any $A\subset V_n$, the contact process with initial configuration $A$ is defined by $$\xi^A_t :=\left\{v \in V_n: A\times\{0\}\longleftrightarrow (v,t)\right\},$$ for all $t\ge 0$. It is well known that $(\xi^A_t)_{t \geq 0}$ has the same distribution as the process defined in the introduction. Just note that in our definition, the Poisson processes associated to edges forming loops play no role (we could in particular remove them), but this definition will be convenient at one point in the proof (when we use that the $Y_{n,v}$'s are i.i.d. in Subsection \ref{subsectionYnv}). We define next $\tau_n^A$ as the extinction time of the contact process starting from $A$. However, we will sometimes drop the superscript $A$ from the notation when it is clear from the context. We will also simply write $\xi^v_t$ or $\tau_n^v$ when $A=\{v\}$. \vspace{0.3cm} \noindent Finally we introduce the following related notation: \begin{equation} \label{sm} \sigma(v)= \inf \{s \geq 0: s \in \mathcal{N}_v \}, \end{equation} and \begin{equation} \label{sme} \sigma(e)= \inf \{s \geq 0: s \in \mathcal{N}_e \}, \end{equation} for any vertex $v$ and oriented edge $e$. \subsection{Configuration model and notation.} The configuration model is a well-known model of random graphs with prescribed degree distribution; see for instance \cite{V}.
In fact here we will consider a sequence $(G_n)$ of such graphs. To define it, start for each $n$ with a vertex set $V_n$ of cardinality $n$ and construct the edge set as follows. Consider a sequence of i.i.d. integer-valued random variables $(D_v)_{v\in V_n}$ (whose law might depend on $n$) and assume that $L_n =\sum_v D_v$ is even (if not, increase one of the $D_v$'s by $1$, which makes no difference in what follows). For each vertex $v$, start with $D_v$ half-edges (sometimes called stubs) incident to $v$. Then match all these stubs uniformly at random in pairs. Once paired, two stubs form an edge of the graph. Note that the random graph we obtain may contain multiple edges (i.e. edges between the same two vertices), or loops (edges whose two endpoints are the same vertex). In fact one can also define $G_n$ by matching the stubs sequentially. This equivalent construction will be used in particular in Lemmas \ref{lta1}, \ref{2a<2} and \ref{ltb1}, so let us describe it now. As with the previous construction we start with a sequence of degrees $(D_v)_{v\in V_n}$, and for each $v\in V_n$, $D_v$ half-edges emanating from $v$. Then we denote by $\mathcal{H}$ the set of all the half-edges. Select one of them $h_1$ arbitrarily and then choose a half-edge $h_2$ uniformly from $\mathcal{H} \setminus \{h_1\}$, and match $h_1$ and $h_2$ to form an edge. Next, select arbitrarily another half-edge $h_3$ from $\mathcal{H} \setminus \{h_1, h_2\}$ and match it to another half-edge $h_4$ uniformly chosen from $\mathcal{H} \setminus \{h_1, h_2, h_3\}$. Then continue this procedure until there are no more half-edges. It is possible to show that the two constructions of $G_n$ have the same law (see also the illustrative sketch further below). \vspace{0.2cm} Now we introduce some notation. We denote the indicator function of a set $E$ by ${\bf 1}(E)$. For any vertices $v$ and $w$ we write $v \sim w$ if there is an edge between them (in which case we say that they are neighbors or connected), and $v \not \sim w$ otherwise. We also denote by $s_v$ the number of half-edges forming loops attached to a vertex $v$. We call the size of a graph $G$ the cardinality of its set of vertices, and we denote it by $|G|$. \vspace{0.2cm} A graph in which all vertices have degree one, except one which is connected to all the others, is called a {\bf star graph}. The only vertex with degree larger than one is called the center of the star graph, or central vertex. We call {\bf two-step star graph} a graph formed by a family of disjoint star graphs, denoted by $S(v_i)_{1\le i\le k}$, centered respectively at vertices $(v_i)_{1\le i\le k}$, plus an additional vertex $v_0$ and edges between $v_0$ and all the $v_i$'s (or equivalently it is just a tree, which is of height $2$ when rooted at $v_0$). The notation ${\bf S(k; d_1,\dots,d_k)}$ will refer to the two-step star graph where $v_i$ has degree $d_i+1$ for all $i$ (which means that inside $S(v_i)$, $v_i$ has degree $d_i$, or that $S(v_i)$ has size $d_i+1$). These graphs will play a crucial role in our proof of Theorem \ref{td}. \vspace{0.2cm} Furthermore we denote by $\mathcal{B}(n,p)$ the binomial distribution with parameters $n$ and $p$. If $f$ and $g$ are two real functions, we write $f= \mathcal{O}(g)$ if there exists a constant $C>0$, such that $f(x) \leq C g(x)$ for all $x$; $f \asymp g$ if $f= \mathcal{O}(g)$ and $g= \mathcal{O}(f)$; $f=o(g)$ if $f(x)/g(x) \rightarrow 0$ as $x \rightarrow \infty$.
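To make the stub-matching procedure concrete, here is a minimal Python sketch (illustration only; the function name and the list-of-edges representation are ours). It implements a shuffle-and-pair version of the uniform matching, which has the same law as the two constructions described above, since a uniformly random permutation of the stubs, paired in consecutive slots, induces a uniformly distributed matching.
\begin{verbatim}
import random

def configuration_model(degrees):
    # one stub per unit of degree
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2 == 1:   # L_n must be even: increase one D_v by 1
        stubs.append(0)
    random.shuffle(stubs)
    # pair consecutive stubs; the result may contain loops and
    # multiple edges, as noted above
    return [(stubs[2*i], stubs[2*i + 1])
            for i in range(len(stubs) // 2)]
\end{verbatim}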
Finally for a sequence of random variables $(X_n)$ and a function $f:\mathbb{N} \to (0,\infty)$, we say that $X_n \asymp f(n)$ holds w.h.p. if there exist positive constants $c$ and $C,$ such that $\mathbb{P} (c f(n) \leq X_n \leq C f(n)) \rightarrow 1$, as $n\to \infty$. \subsection{Preliminary estimates on the graph structure} We first recall a large deviations result which we will use throughout this paper (see for instance \cite{DZ}): if $X \sim \mathcal{B}(n,p)$, then for all $c>0$, there exists $\theta>0$, such that \begin{equation} \label{ld} \mathbb{P} (|X-np| \geq cnp) \leq \exp(-\theta np ) \quad \textrm{ for all } n \in \mathbb{N} \textrm{ and } p \in [0,1]. \end{equation} \noindent Now we present a series of lemmas deriving basic estimates on the degree sequence and the graph structure. The first one is very elementary and applies to all the cases we will consider in this paper. \begin{lem} \label{lb} Assume that the degree sequence is given either by \eqref{pnaj} or \eqref{paj}, with $1< a \leq 2$. For $j\ge 1$, let $A_j:=\{v : D_v =j\}$ and $n_j= |A_j|$. Then there exist positive constants $c$ and $C$, such that $$ \mathbb{P} (n_j \in (c n j^{-a},C n j^{-a}) \textrm{ for all }j=1,\ldots,n^{1/2a}) = 1- o(1).$$ \end{lem} \begin{proof} Observe that we always have $ n_j \sim \mathcal{B} (n,p_j)$, for some $p_j\in (c_{\infty,a} j^{-a}, j^{-a})$, with $c_{\infty,a}$ as in \eqref{paj}. Thus the result directly follows from \eqref{ld} and a union bound. \end{proof} \noindent Our next results depend more substantially on the value of $a$ and the choice of the degree distribution. \begin{lem} \label{lta1} Assume that the degree distribution is given by \eqref{pnaj}, with $a \in(1,2)$. Let $E := \{ v\, :\, D_v \geq n/2 \}$. Let also $\kappa > 2-a$ and $\chi<1$ be some constants. Then the following assertions hold \begin{itemize} \item[(i)] $ L_n \asymp n^{3-a} $ w.h.p., \\ \item[(ii)] $ |E| \asymp n^{2-a} $ w.h.p., \\ \item[(iii)] $\mathbb{P} ( v\sim w \textrm{ for all $v$ and $w$ such that $D_v \geq n/2$ and $D_w \geq n^{\kappa}$}) = 1-o(1)$, \\ \item[(iv)] $\mathbb{P} (s_v\ge 1) = o(1)$, for any $v\in V_n$, \\ \item[(v)] $\mathbb{P} \left(\textrm{All neighbors of $v$ have degree larger than $n^\chi$} \right) = 1-o(1)$, for any $v\in V_n$. \end{itemize} \end{lem} \begin{proof} Let us start with Part (i). It follows from the definition \eqref{pnaj} that $$\mathbb{E} \left( D_v\right) \asymp n^{2-a} \quad \textrm{and} \quad \textrm{Var}(D_v) \asymp n^{3-a}. $$ The result follows by using Chebyshev's inequality. Part (ii) is similar to Lemma \ref{lb}. For Part (iii), let $v$ and $w$ be two vertices such that $D_v \geq n/2 $ and $ D_w \geq n^{\kappa}$. Then conditionally on $(D_z)_{z \in V_n}$, the probability that the first $n/8$ stubs of $v$ do not connect to $w$ is smaller than $(1- \frac{n^{\kappa}}{L_n-n/4})^{n/8}$. Hence, $$ \mathbb{P} \left(v \not \sim w \mid (D_z)_{z \in V_n}, \, L_n \in (c n^{3-a},C n^{3-a})\right) \leq \left(1- \frac{n^{\kappa}}{Cn^{3-a}-n/4} \right)^{n/8} = o(n^{-2}), $$ which proves (iii) by using (i) and a union bound. We now prove (iv). To this end, notice that conditionally on $D_v$ and $L_n$, $s_v$ is stochastically dominated by a binomial random variable with parameters $D_v$ and $D_v/(L_n-2D_v+2)$ (note in particular that since $D_z\ge 1$ for all $z$, the denominator in the last term is always positive).
Hence Markov's inequality shows that $$\mathbb{P} (s_v\ge 1\mid D_v,L_n) \le \frac{D_v^2}{L_n-2D_v+2}.$$ The result follows by using (i) and that for any fixed $\varepsilon>0$, $\mathbb{P} (D_v\ge n^\varepsilon) = o(1)$. It remains to prove (v). Denote the degrees of the neighbors of $v$ by $D_{v,i}$, $i = 1,\ldots,D_v$. It follows from the definition of the configuration model that for any $i\le D_v$ and $k\neq D_v$, $$\mathbb{P} (D_{v,i} =k \mid (D_z)_{z\in V_n} )= \frac{kn_k}{L_n-1},$$ where we recall that $n_k$ is the number of vertices of degree $k$. Therefore, \begin{align*} \mathbb{P} (D_{v,i} \le n^\chi \mid (D_z)_{z\in V_n} ) \le \frac{K_n}{L_n-1}, \end{align*} where $$K_n = \sum_{k\le n^\chi} k n_k.$$ Summing over $i$, we get $$\mathbb{P} (\exists i\le D_v : D_{v,i} \le n^\chi \mid (D_z)_{z\in V_n} ) \le \frac{K_nD_v}{L_n-1}.$$ Moreover, similarly to the proof of (i), we can see that w.h.p. $$K_n\asymp n ^{1+\chi(2-a)}.$$ Together with (i), and using again that $D_v\le n^\varepsilon$ w.h.p. for any fixed $\varepsilon>0$, we get (v). \end{proof} \vspace{0.3cm} \noindent Things drastically change when the degree distribution is given by \eqref{paj}. In this case $L_n$, as well as the $k$ maximal degrees, for any fixed $k$, are all of order $n^{1/(a-1)}$ (for the comparison with the previous case note that $1/(a-1)$ is always larger than $3-a$ when $a\in(1,2)$, which is consistent with the fact that the distribution \eqref{paj} stochastically dominates \eqref{pnaj}): \begin{lem} \label{2a<2} Assume that the degree distribution is given by \eqref{paj}, with $a\in(1,2)$. Denote by $(D_i)_{1\le i\le n}$ the sequence of degrees arranged in decreasing order (in particular $D_1$ is the maximal degree). Let also $\kappa > (2-a)/(a-1)$ and $\chi<1/(a-1)$ be some constants. Then the following assertions hold \begin{itemize} \item[(i)] there exist (a.s. positive and finite) random variables $(\gamma_i)_{i\ge 0}$, such that for any fixed $k\ge 1$, \begin{equation*} \left( \frac{L_n}{n^{1/(a-1)}}, \frac{D_1}{n^{1/(a-1)}},\ldots,\frac{D_k}{n^{1/(a-1)}} \right)\ \mathop{\longrightarrow}^{(\mathcal{L})}_{n\to \infty} \ (\gamma_0, \gamma_1,\ldots,\gamma_k). \end{equation*} \item[(ii)] For any $\varepsilon >0$, there exists a positive constant $\eta=\eta(\varepsilon)$, such that for any fixed $k\ge 1$, $$\liminf_{n\to \infty} \, \mathbb{P} \left(D_i/L_n \ge \eta\, i^{-1/(a-1)} \quad \textrm{for all } 1\le i\le k\right) \ge 1-\varepsilon,$$ and an integer $k=k(\varepsilon)$, such that $$\liminf_{n\to \infty} \, \mathbb{P} (D_1+\dots +D_k\ge L_n/2) \ge 1-\varepsilon.$$ \item[(iii)] $\mathbb{P} ( v\sim w \textrm{ for all $v$ and $w$ such that $D_v \geq n$ and $D_w \geq n^{\kappa}$}) = 1-o(1)$, \\ \item[(iv)] $\mathbb{P} (s_v\ge 1) = o(1)$, for any $v\in V_n$, \\ \item[(v)] $\mathbb{P} \left(\textrm{All neighbors of $v$ have degree larger than $n^\chi$} \right) = 1-o(1)$, for any $v\in V_n$. \end{itemize} \end{lem} \begin{proof} Part (i) is standard; we refer for instance to Lemma 2.1 in \cite{VVHZ}. More precisely let $(e_i)_{i \geq 1} $ be an i.i.d. sequence of exponential random variables with mean one and $\Gamma_i= e_1+\ldots+e_i$, for all $i\ge 1$ (in particular $\Gamma_i$ is a Gamma random variable with parameters $i$ and $1$). Then the result holds with $$\gamma_i = ((a-1)\Gamma_i/c_{\infty,a})^{-1/(a-1)},$$ for all $i\ge 1$, and $\gamma_0=\sum_i \gamma_i$ (which is indeed a.s. a convergent series). For (ii) note that $\Gamma_i/i\to 1$ a.s. as $i\to \infty$.
In particular for any $\varepsilon$, there exists $C>0$, such that $$\mathbb{P} (\Gamma_i\le Ci \textrm{ for all }i\ge 1)\ge 1-\varepsilon/2.$$ The first assertion follows with (i), using also that $\mathbb{P} (\gamma_0\le C)\ge 1-\varepsilon/2$, for $C$ large enough. The second one is an immediate corollary of (i) and the definition of $\gamma_0$ as the limit of the partial sum $\sum_{i\le k} \gamma_i$, as $k\to \infty$. Parts (iii)-(v) are similar to the previous case. \end{proof} \noindent We now give an analogous result for the case $a=2$, which we will not prove here since it is entirely similar to the case $a<2$ (just note that when the degree distribution is given by \eqref{paj}, one can use the elementary fact that w.h.p. all vertices have degree smaller than $n\log \log n$). \begin{lem} \label{ltb1} Assume that the degree distribution is given either by \eqref{pnaj} or \eqref{paj}, with $a=2$. Let $E': =\{v : D_v \geq n^{3/4}\}$. Then the following assertions hold \begin{itemize} \item[(i)] $L_n \asymp n \log n$ w.h.p., \\ \item[(ii)] $|E'|\asymp n^{1/4} \quad \textrm{and}\quad \sum_{v\in E'} D_v \asymp n\log n \quad \textrm{w.h.p.}$, \\ \item[(iii)] $\mathbb{P} (v\sim w \textrm{ for all $v$ and $w$ such that } D_v \geq n/\log n \textrm{ and } D_w \geq (\log n)^4) =1- o(1)$,\\ \item[(iv)] $\mathbb{P} (s_v\ge 1) = o(1)$, for any $v\in V_n$.\\ \item[(v)] $\mathbb{P} \left(\textrm{All neighbors of $v$ have degree larger than $(\log n)^4$} \right) = 1-o(1)$, for any $v\in V_n$. \end{itemize} \end{lem} \section{Existence of a large two-step star graph} In this section we will prove that the graph $G_n$ contains w.h.p. a large two-step star graph $S(k;d_1,\dots,d_k)$, the term large meaning that $d_1+\dots +d_k$ will be of order $n$ (the size of $G_n$), and all the $d_i$'s of order at least $\log n$. However, the precise values of $k$ and the $d_i$'s will depend on which case we consider (to be more precise, in the case of degree distribution given by \eqref{paj} with $a\in(1,2)$ we prove that for any $\varepsilon>0$, $G_n$ contains a large two-step star graph with probability at least $1-\varepsilon$, with $k$ and the $d_i$'s depending on $\varepsilon$. Nevertheless, the rest of the proof works mutatis mutandis). \subsection{Case $1<a<2$} \subsubsection{Bounded degree sequence} We assume here that the law of the degrees is given by \eqref{pnaj}. Recall that $E= \{v: D_v \geq n/2\}$ and $A_1 =\{v : D_v =1\}$. In addition for any vertex $v$, let us denote by $$d_1(v):= \sum_{w \in A_1} {\bf 1}(\{w \sim v\}),$$ the number of neighbors of $v$ in $A_1$. \begin{lem} \label{q} There exist positive constants $\beta $ and $\kappa$, such that \begin{equation} \label{eta2} \mathbb{P} \left( \# \{v \in E: d_1(v) \geq \beta n^{a-1}\} \geq \kappa\, n^{2-a}\right) = 1-o(1). \end{equation} \end{lem} \begin{proof} It follows from the definition of the configuration model that for any $w \in A_1$ and $v \in E$, \begin{align} \label{wv} \mathbb{P} (w \sim v \mid (D_z)_{z\in V_n}) & = \frac{D_v}{L_n-1}. \end{align} Similarly for any $v\in E$ and $w \neq w' \in A_1$, \begin{align} \label{c1} |\textrm{Cov}(w \sim v, w' \sim v\mid (D_z)) |& = \left| \frac{D_v(D_v-1)}{(L_n-1)(L_n-3)} -\left( \frac{D_v}{L_n-1}\right)^2\right| \notag \\ & = \mathcal O\left( \frac{D_v}{L_n^2}\right). \end{align} Define now the set $$\mathcal{A}_n := \left\{cn^{3-a}\le L_n\le Cn^{3-a}\right\}\cap \left\{|A_1|\ge c n\right\},$$ with $0<c\le C$, such that \begin{equation} \label{Gn} \mathbb{P}(\mathcal{A}_n)= 1-o(1).
\end{equation} Note that the existence of $c$ and $C$ is guaranteed by Lemmas \ref{lb} and \ref{lta1}. Set also $\beta=c/(4C)$. Then \eqref{wv} and \eqref{c1} show that on $\mathcal{A}_n$, \begin{equation*} \label{sumwv} \sum\limits_{w \in A_1} \mathbb{P} (w \sim v \mid (D_z)) \geq 2\beta n^{a-1}, \end{equation*} and $$\sum\limits_{w \neq w'\in A_1} \textrm{Cov}(w \sim v,w'\sim v \mid (D_z)) =o(n^{2a-2}).$$ Thus by using Chebyshev's inequality, we deduce that on $\mathcal{A}_n$, \begin{equation*} \mathbb{P} (d_1(v) \geq \beta n^{a-1} \mid (D_z)) = \mathbb{P} \left(\sum\limits_{w \in A_1} {\bf 1}( \{w \sim v \}) \geq \beta n^{a-1} \ \Big| \ (D_z)\right) = 1- o(1). \end{equation*} Hence for any $v \neq w \in E$, \begin{align*} \textrm{Cov}(d_1(v) \geq \beta n^{a-1}, d_1(w) \geq \beta n^{a-1}\mid (D_z)_{z\in V_n}) =o(1). \end{align*} Then by using Chebyshev's inequality again we obtain that on the event $\{|E|\ge 2\kappa n^{2-a}\}$, \begin{equation*} \mathbb{P} \left( \# \{v \in E: d_1(v) \geq \beta n^{a-1}\} \geq \kappa\, n^{2-a}\mid (D_z)_{z\in V_n}\right) = 1-o(1). \end{equation*} Then \eqref{eta2} follows by using \eqref{Gn}, Lemma \ref{lta1} (ii) and taking expectation. \end{proof} \noindent As a corollary we get the following result: \begin{prop} \label{stara<2} Assume that the law of the degree sequence is given by \eqref{pnaj} with $a\in(1,2)$. There exist positive constants $\beta $ and $\kappa$, such that w.h.p. $G_n$ contains as a subgraph a copy of $S(k;d_1,\dots, d_k)$, with $k= \kappa n^{2-a}$ and $d_i= \beta n^{a-1}$, for all $i\le k$. \end{prop} \begin{proof} This is a direct consequence of Lemma \ref{lta1} (iii) and Lemma \ref{q}. \end{proof} \subsubsection{Unbounded degree sequences} We assume here that the law of the degrees is given by \eqref{paj}. The proof of the next result is similar to the one of Lemma \ref{q}, so we omit it. \begin{lem} \label{viri} With the notation of Lemma \ref{2a<2}, let $(v_i)_{i\le n}$ be a reordering of the vertices of $G_n$, such that the degree of $v_i$ is $D_i$ for all $i$ (in particular $v_1$ is a vertex with maximal degree). Then for any fixed $i$, $$\mathbb{P} (d_1(v_i)\ge D_i n_1/(2L_n)) = 1-o(1).$$ \end{lem} \noindent As a consequence we get \begin{prop} Assume that the degree distribution is given by \eqref{paj}, with $a\in(1,2)$. There exists a constant $c>0$, such that for any $\varepsilon>0$, there exists $\eta=\eta(\varepsilon)>0$ and an integer $k=k(\varepsilon)$, such that for $n$ large enough, with probability at least $1-\varepsilon$, $G_n$ contains as a subgraph a copy of $S(k;d_1,\dots,d_k)$, with $d_i\ge \eta i^{-1/(a-1)}n$ for all $1\le i\le k$, and $d_1+\dots +d_k\ge cn$. \end{prop} \begin{proof} It follows from Lemma \ref{viri} that for any $i$, \begin{align*} \mathbb{P} (d_1(v_i)\ge D_i n_1/(2L_n)) = 1-o(1). \end{align*} Hence for any fixed $k$, \begin{align*} \mathbb{P} \left( d_1(v_1) + \ldots + d_1(v_k)\ge \frac{n_1(D_1+ \ldots+ D_k)} { 2 L_n} \right)= 1-o(1). \end{align*} Moreover, by Lemma \ref{lb} we have $\mathbb{P} (n_1 \in (cn, Cn)) =1-o(1)$. On the other hand, for any $\varepsilon >0$, by Lemma \ref{2a<2} (ii), there exist $\eta= \eta(\varepsilon)$ and $k=k(\varepsilon)$, such that \begin{align*} \mathbb{P} \left( \frac{D_1+ \ldots + D_k}{L_n } \geq \frac{1}{2} \right) & \geq 1- \varepsilon/4, \\ \mathbb{P} (D_i/ L_n \geq \eta i^{-1/(a-1)} \quad \forall \, i \leq k+1) & \geq 1- \varepsilon/4.
\end{align*} Therefore with probability at least $1- (3\varepsilon/4)$, for $n$ large enough and with $\eta$ and $k$ as above, \begin{align*} d_1(v_1) + \ldots + d_1(v_k) \geq cn/4, \\ d_1(v_i) \geq c \eta i^{-1/(a-1)} n/2 \, \, \forall \, i \leq k+1. \end{align*} Then by using an argument similar to the one in the proof of Lemma \ref{lta1} (iii), we can show that with probability larger than $1- (\varepsilon/4)$, $v_{k+1}$ and $v_i$ are connected for all $i \leq k$. The result follows. \end{proof} \subsection{Case $a=2$} In this case we can treat both distributions \eqref{pnaj} and \eqref{paj} in the same way. Recall that $E'=\{v : D_v \geq n^{3/4}\}$, and that $d_1(v)$ denotes the number of neighbors of a vertex $v$ in $A_1$. \begin{lem} \label{ex4} There exists a positive constant $\beta$, such that \begin{equation} \label{e17} \mathbb{P} \left(d_1(v) \ge \beta D_v/\log n \quad \textrm{for all }v\in E'\right) =1-o(1). \end{equation} \end{lem} \begin{proof} The proof is very close to the proof of Lemma \ref{q}. First, for any $v\in E'$ and $w\in A_1$, we have $$ \mathbb{P} (w \sim v\mid (D_z)) \asymp \frac{ D_v}{L_n},$$ and furthermore for any $w\neq w'\in A_1$, $$|\textrm{Cov}(w \sim v, w' \sim v\mid (D_z)) | = \mathcal{O}\left(\frac{D_v}{L_n^2} \right).$$ Then by using Chebyshev's inequality, we get that for any $v\in E'$, $$ \mathbb{P} \left(d_1(v) \leq \beta D_v n_1/ L_n \mid (D_z) \right) =\mathcal{O}\left(\frac{L_n}{n_1D_v}\right),$$ for some constant $\beta>0$. The desired result follows by using a union bound and then Lemmas \ref{lb} and \ref{ltb1} (i)-(ii). \end{proof} \noindent As a consequence we get \begin{prop} \label{stara2} Assume that the degree distribution is given either by \eqref{pnaj} or \eqref{paj} and that $a=2$. There exists a positive constant $\beta$ such that w.h.p. $G_n$ contains as a subgraph a copy of $S(k;d_1,\dots, d_k)$, with $k\asymp n^{1/4}$, $d_i\ge \beta n^{3/4}/\log n$ for all $i\le k$, and $d_1+\dots + d_k \asymp n$. \end{prop} \begin{proof} Just take for the $v_i$'s the elements of $E'$. Then use Lemma \ref{ltb1} (ii)-(iii) and Lemma \ref{ex4}. \end{proof} \section{Contact process on a two-step star graph} In this section we will study the contact process on a two-step star graph. Our main result is the following: \begin{prop} \label{psta} There exist positive constants $c$ and $C$, such that for any two-step star graph $G=S(k;d_1,\dots,d_k)$, satisfying $d_i \geq C \log n/\lambda^2$, for all $i\le k$, and $d_1+\dots+d_k = n$, \begin{align*} \mathbb{P} \left(\tau_n^{v_1} \geq \exp(c \lambda^2 n)\right) = 1- o(1), \end{align*} where $\tau_n^{v_1}$ is the extinction time of the contact process with infection parameter $\lambda\le 1$ starting from $v_1$ on $S(k;d_1,\dots,d_k)$. \end{prop} Note that since we are only concerned with the extinction time here, there is no restriction in assuming $\lambda \le 1$, as the contact process is stochastically monotone in $\lambda$ (see \cite{L}). So when $\lambda>1$ the same result holds; one just has to remove the $\lambda$ everywhere in the statement of the proposition. \vspace{0.2cm} Now of course an important step in the proof is to understand the behavior of the process on a single star graph. This has been studied for a long time; for instance it appears already in Pemantle \cite{P}, and later in \cite{BBCS, CD, MVY}. We will collect all the results we need in Lemma \ref{lst} below, but before that we give a new definition.
We say that a vertex $v$ is {\bf lit} (the term is taken from \cite{CD}) at some time $t$ if the proportion of its infected neighbors at time $t$ is larger than $\lambda/(16e)$ (note that in \cite{MMVY} the authors also use the term {\it infested} for a similar notion). \begin{lem} \label{lst} There exists a constant $c\in (0,1)$, such that if $(\xi_t)$ is the contact process with parameter $\lambda\le 1$ on a star graph $S$ with center $v$, satisfying $\lambda^2 |S|\geq 64 e^2$, then \vspace{0.15cm} \begin{itemize} \item[(i)] $\mathbb{P} (\xi_{\exp(c \lambda^2 |S|)} \neq \varnothing \mid v \textrm{ is lit at time } 0) \geq 1-\exp(-c \lambda^2 |S|)$, \\ \item[(ii)] $\mathbb{P} (\exists t>0 : v \textrm{ is lit at time }t\mid \xi_0(v)=1)\to 1\qquad \textrm{as }|S|\to \infty$. \\ \item[(iii)] $\mathbb{P} ( v \textrm{ is lit at time } 1 \mid \xi_0(v)=1)\geq (1- \exp(-c \lambda |S|))/e$,\\ \item[(iv)] $\mathbb{P} (v \textrm{ lit during }[\exp(c \lambda^2 |S|), 2\exp(c \lambda^2 |S|)]\mid v \textrm{ lit at time } 0)\ge 1- 2 \exp (-c \lambda ^2 |S|)$. \end{itemize} \end{lem} \begin{proof} Parts (i), (ii) and (iii) are exactly Lemma 3.1 in \cite{MVY}, and (iv) can be proved similarly, see for instance \cite{C} (similar results can be found in \cite{BBCS, CD, D, P}). \end{proof} \noindent \textit{Proof of Proposition \ref{psta}.} We first handle the easy case when there is some $1\le i \le k$, such that $\deg(v_i) \geq n/2$. First by Lemma \ref{lst} we know that w.h.p. the virus survives inside $S(v_1)$ for a time at least $\exp(c\lambda^2 d_1)$. Since by hypothesis $d_1$ diverges when $n$ tends to infinity, and since $v_1$ and $v_i$ are at distance at most two (both are connected to $v_0$), we deduce that w.h.p. $v_i$ will be infected before the extinction of the virus. The proposition follows by another use of Lemma \ref{lst}. \vspace{0.2cm} We now assume that $d_i \leq n/2$, for all $i$. First we need to introduce some more notation. For $s<t$ and $v,w \in S(v_i)$, we write \begin{equation} \label{iconnect} (v,s)\mathop{\longleftrightarrow}^{(i)} (w,t), \end{equation} if there exists an infection path entirely inside $S(v_i)$ joining $(v,s)$ and $(w,t)$. Similarly if $V$ and $W$ are two subsets of $G$, we write $$V\times \{s\} \mathop{\longleftrightarrow}^{(i)} W \times \{t\},$$ if there exists $v\in V\cap S(v_i)$ and $w\in W\cap S(v_i)$, such that \eqref{iconnect} holds. Now for $\ell \geq 0$ and $1\leq i \leq k$ define \begin{align*} E_{\ell , i}: = \left\{\xi_{\ell n^2} \times \{\ell n^2\} \mathop{\longleftrightarrow}^{(i)} S(v_i)\times \{(\ell +1)n^2\} \right\}. \end{align*} We claim that for any $\ell \geq 0$ and $1 \leq i \leq k$, we have \begin{align} \label{ps1} \mathbb{P} \big(E_{\ell,i} \cap \left(\cap_{j \neq i} E_{\ell+1,j}^c\right) \big) \leq \exp(- c \lambda^2 n), \end{align} for some constant $c>0$. To fix ideas we will prove the claim for $i=1$ (clearly by symmetry there is no loss of generality in assuming this) and to simplify notation we also assume that $\ell =0$ (the proof works the same for any $\ell$). Furthermore, in the whole proof the notation $c$ will stand for a positive constant independent of $\lambda$, whose value might change from line to line. \vspace{0.2cm} \noindent Before starting the proof we need one more definition. We denote by $(\xi'_t)_{t\ge 0}$ the contact process on $\overline S(v_1):=S(v_1)\cup \{v_0\}$, which is defined by using the same Poisson processes as $\xi$, but only on this subgraph.
In particular with $\xi'$, the vertex $v_0$ can only be infected by $v_1$, and thus the restriction of $\xi$ on $\overline S(v_1)$ dominates $\xi'$. We also assume that the starting configurations of $\xi'$ and of the restriction of $\xi$ on $\overline S(v_1)$ are the same. Now for any integer $m \leq n$, define $$G_m=\left\{ \xi'_t(v_0)=1 \textrm{ for all } t\in [3m+2,3m+3]\right\}.$$ Let also $\mathcal{F}_t=\sigma(\xi'_s,s\le t)$ be the natural filtration of the process $\xi'$. Then observe that for any vertex $w\in S(v_1)$, conditionally on $\mathcal{F}_{3m}$, and on the event $\{ \xi'_{3m}(w)=1\}$, we have \begin{eqnarray*} G_m &\supset& \{\mathcal{N}_w \cap [3m,3m+1] = \varnothing, \mathcal{N}_{(w,v_1)} \cap[3m,3m+1] \neq \varnothing, \mathcal{N}_{v_1} \cap[3m,3m+2] = \varnothing, \\ && \mathcal{N}_{(v_1,v_0)} \cap[3m+1,3m+2] \neq \varnothing, \mathcal{N}_{v_0} \cap[3m+1,3m+3] = \varnothing\}, \end{eqnarray*} at least if $w\neq v_1$. Moreover, the event on the right hand side has probability equal to $(1- e^{- \lambda})^2 e^{-5}$, which is larger than $c\lambda^2$, for some $c>0$, and a similar result holds if $w=v_1$. Therefore for any $m$ and any nonempty subset $A \subset S(v_1)$, \begin{equation*} \mathbb{P} (G_m^c \mid \mathcal{F}_{3m} )\, {\bf 1}(\xi'_{3m}=A) \leq (1-c\lambda^2){\bf 1}(\xi'_{3m}=A). \end{equation*} In other words, if we define $$H_m=\{\xi'_{3m} \cap S(v_1) \neq \varnothing\},$$ we get \begin{equation*} \label{es1} \mathbb{P} (G_m^c \mid \mathcal{F}_{3m})\, {\bf 1}(H_m) \leq 1-c\lambda^2, \end{equation*} for all $m\le n$. By using induction, it follows that \begin{align*} \mathbb{P} \left( \left(\bigcup_{m=0}^{n-1} G_m \right)^c \cap \left(\bigcap_{m=0}^{n-1} H_m\right)\right) \leq (1-c\lambda^2)^n. \end{align*} But by construction $$E_{0,1}\ \subset \ \bigcap_{m=0}^{n-1} H_m.$$ Therefore $$\mathbb{P} \left(E_{0,1}\cap\{\exists m\in [0,3n-1]\, :\, \xi'_t(v_0)=1 \textrm{ for all }t\in [m,m+1] \}^c \right) \le \exp(- c\lambda^2 n).$$ Then by repeating the argument in each interval $[3 Mn,3(M +1)n]$, for every $M\le n/3-1$, we get \begin{equation} \label{kM} \mathbb{P} \left(E_{0,1} , \, |\mathcal{M}| < n/3\right) \le \exp(-c\lambda^2 n ), \end{equation} where $$\mathcal{M}:= \left\{m \le n^2-1\, :\, \xi'_t(v_0)=1 \textrm{ for all }t\in [m,m+1] \right\} .$$ Now for each $2\le j\le k$ and $m\le n^2-1$, define \begin{eqnarray*} &&C_{m,j}:= \left\{\mathcal{N}_{(v_0,v_j)}\cap [m,m+1]\neq \varnothing,\, \mathcal{N}_{v_j}\cap [m,m+2]=\varnothing\right\}\\ & \cap &\left\{|\{w\in S(v_j): \mathcal{N}_{(v_j,w)}\cap[m+1,m+2]\neq \varnothing\textrm{ and }\mathcal{N}_w\cap[m+1,m+2] =\varnothing\}|> \frac{\lambda d_j}{16e}\right\}. \end{eqnarray*} Note that these events are independent of $\mathcal{M}$ and $E_{0,1}$, as they depend on different Poisson processes. Note also that by using \eqref{ld} \begin{eqnarray} \label{Cmj} \nonumber \mathbb{P} (C_{m,j}) & =& (1-e^{-\lambda})e^{-2} \times \mathbb{P} (\mathcal{B}(d_j, (1-e^{-\lambda})/e)\ge \lambda d_j/(16e))\\ &\ge & c\lambda, \end{eqnarray} and thus (since $C_{m,j}$ and $C_{m',j}$ are independent when $|m-m'|\ge 2$), $$\mathbb{P} \left(\bigcap_{m\in \mathcal{M}} C_{m,j}^c\ \Big| \ |\mathcal{M}|\ge n/3\right) \le \exp(-c\lambda n).$$ Moreover, by construction if $m\in \mathcal{M}$ and $C_{m,j}$ holds, then $v_j$ is lit at some time $t\in [m+1,m+2]$.
Therefore by using \eqref{kM}, \begin{eqnarray} \label{finalprop} \mathbb{P} \left(E_{0,1}\cap\{ \exists j\in\{2,\dots,k\}: v_j \textrm{ is never lit in }[0,n^2]\}\right) \le \exp(-c\lambda n). \end{eqnarray} Finally define $U_j = \exp(c\lambda^2 d_j)$, for all $j\le k$, with the constant $c$ as in Lemma \ref{lst}, and take $C$ large enough, so that the hypothesis $d_j\lambda^2\ge C\log n$ implies $U_j\ge 2n^2$. Then \eqref{finalprop} together with Lemma \ref{lst} (i) imply that $$\mathbb{P} \left(E_{0,1} \cap (\cap_{j\ge 2} E_{1,j}^c)\right)\le \exp(-c\lambda^2n) + \prod_{j\ge 2} U_j^{-1} \le 2\exp(-c\lambda^2n/2),$$ where for the last inequality we used that $d_2+\dots +d_k \ge n/2$. This concludes the proof of \eqref{ps1}. The proposition immediately follows, since by using Lemma \ref{lst}, we also know that $\mathbb{P} (E_{0,1})=1-o(1)$, when $v_1$ is infected initially (observe that $\exp(c\lambda^2d_1) \ge n^2$, if the constant $C$ in the hypothesis is large enough). \hfill $\square$ \section{Proof of Theorem \ref{td}} \label{sectionproof} The proof is the same in all the cases we considered, so to fix ideas we assume throughout this section that the degree distribution is given by \eqref{pnaj} with $a\in(1,2)$. The other cases are left to the reader. Let $(t_n)$ be as in the statement of Theorem \ref{td}. Define for $v\in V_n$, $$ X_{n,v}={\bf 1}(\{\xi^{v}_{t_n} \neq \varnothing\}).$$ The self-duality of the contact process (see (1.7) p. 35 in \cite{L}) implies that for any $\gamma >0$, \begin{equation*} \mathbb{P}\left( |\xi^{V_n}_{t_n}| > \gamma n \right) = \mathbb{P} \left( \sum_{v\in V_n} X_{n,v} > \gamma n \right) \end{equation*} and similarly for the reverse inequality. Hence, to prove that $|\xi^{V_n}_{t_n}|/n $ converges in probability to $ \rho_a(\lambda)$, we have to show that \begin{equation} \label{ek1} \mathbb{P}\left( \sum_{v\in V_n} X_{n,v} > (\rho_{n,a}(\lambda) + \varepsilon) n \right) \to 0\quad \textrm{as }n\to \infty \end{equation} and \begin{equation} \label{ek2} \mathbb{P}\left( \sum_{v\in V_n} X_{n,v} < (\rho_{n,a}(\lambda) - \varepsilon) n \right) \to 0\quad \textrm{as }n\to \infty \end{equation} for all $\varepsilon>0$ (recall that $\rho_{n,a}(\lambda)$ converges to $ \rho_a(\lambda)$, as $n \rightarrow \infty$). We will prove these two statements in the next two subsections. \subsection{Proof of \eqref{ek1}} \label{subsectionYnv} This part is quite elementary. The idea is to say that if the virus survives for a time $t_n$ starting from some vertex $v$, then $v$ has to infect one of its neighbors before $\sigma(v)$ (recall the definition \eqref{sm}), unless $\sigma(v) \ge t_n$, but this last event has $o(1)$ probability so we can ignore it. Now the probability that $v$ infects a neighbor before $\sigma(v)$ is bounded by the probability that one of the Poisson point processes associated to the edges emanating from $v$ has a point before $\sigma(v)$ (actually it is exactly equal to this if there is no loop attached to $v$). Then having observed that the latter event has probability exactly equal to $\rho_{n,a}(\lambda)$, we get the desired upper bound, at least in expectation. The true upper bound will follow using Chebyshev's inequality and the domination of the $X_{n,v}$'s by suitable i.i.d. random variables. \vspace{0.2cm} \noindent Now let us write this proof more formally.
Set $Y_{n,v} = {\bf 1}(C_{n,v})$, with (recall \eqref{sme}) $$C_{n,v}= \left\{ \min_{e:v\to \cdot} \sigma(e) <\sigma(v) \right\},$$ where the notation $e:v\to \cdot$ means that $e$ is an (oriented) edge emanating from $v$ (possibly forming a loop). By construction the $(Y_{n,v})_{v\in V_n}$ are i.i.d. random variables, and moreover, the above discussion shows that for all $v$, \begin{equation} \label{XYnv} X_{n,v} \leq Y_{n,v} + {\bf 1}(\{ \sigma(v) >t_n \}). \end{equation} Now we have \begin{align} \label{Cnv} \mathbb{E} (Y_{n,v})=\mathbb{P} (C_{n,v})&= \sum_{j=1}^n \mathbb{P} (C_{n,v}\mid D_v=j)\mathbb{P} (D_v=j) \notag \\ & = \sum_{j=1}^n \frac{j \lambda}{j \lambda +1} p_{n,a}(j) = \rho_{n,a}(\lambda). \end{align} Therefore it follows from Chebyshev's inequality that \begin{equation*} \label{Ynv} \mathbb{P} \left( \sum_v Y_{n,v} > (\rho_{n,a}(\lambda) + \varepsilon/2) n \right) =o(1), \end{equation*} for any fixed $\varepsilon>0$. On the other hand $\mathbb{P} (\sigma(v) >t_n)=e^{-t_n} =o(1)$. Thus by using Markov's inequality we get $$\mathbb{P} \left( \sum_v {\bf 1}(\{\sigma(v)>t_n \}) > \varepsilon n/2 \right) =o(1).$$ Then \eqref{ek1} follows with \eqref{XYnv}. \subsection{Proof of \eqref{ek2}} This part is more complicated and requires the results obtained so far in Sections 2, 3 and 4. First define $Z_{n,v}= {\bf 1}(A_{n,v}\cap B_{n,v})$, for $v\in V_n$, where $$A_{n,v}= \{ v \textrm{ infects one of its neighbors before } \sigma(v)\},$$ and $B_{n,v}= \{\xi_{t_n}^v\neq \varnothing\}$. Remember that $X_{n,v} = {\bf 1}(B_{n,v})$, which in particular gives $Z_{n,v} \leq X_{n,v}$. Therefore the desired lower bound follows from the next lemma and Chebyshev's inequality. \begin{lem} \label{d4} For any $v\neq w \in V_n$, \begin{itemize} \item[(i)] $\mathbb{E} (Z_{n,v} )\geq \rho_{n,a}(\lambda) - o(1)$. \\ \item[(ii)] $\textrm{Cov} (Z_{n,v}, Z_{n,w}) =o(1)$. \end{itemize} \end{lem} \begin{proof} We claim that \begin{equation} \label{ABnv} \mathbb{P} (B_{n,v} \mid A_{n,v}) = 1-o(1). \end{equation} To see this first use that w.h.p. there is a large two-step star graph in $G_n$ (given by Proposition \ref{stara<2}). Then use Lemma \ref{lta1} (iii) and (v) to see that w.h.p. all neighbors of $v$ have large degree and are connected to all the $v_i$'s of the two-step star graph (recall that by construction $D_{v_i}\ge n/2$, for all $i$). Note that in the case $a=2$, this is not exactly true, but nevertheless the neighbors of $v$ and the $v_i$'s are still w.h.p. at distance at most two, since they are all connected to the set of vertices $z$ satisfying $D_z\ge n/\log n$ (and w.h.p. this set is nonempty). Now if a neighbor, say $w$, of $v$ is infected and has large degree, then Lemma \ref{lst} shows that w.h.p. the virus will survive in the star graph formed by $w$ and its neighbors for a long time. But if in addition $w$ and $v_1$ are connected (or more generally if they are at distance at most two), then $v_1$ will be infected as well w.h.p. before extinction of the process. Then Proposition \ref{psta} gives \eqref{ABnv}. On the other hand observe that $$\{s_v=0\} \cap C_{n,v}\, \subset \, A_{n,v}.$$ Therefore \eqref{Cnv} and Lemma \ref{lta1} (iv) give Part (i) of the lemma. The second part follows easily by using that we also have $A_{n,v} \subset C_{n,v}$, and that the $C_{n,v}$'s are independent. \end{proof} \section{Proof of Theorem \ref{propexp}} We first prove a lower bound on the probability that the extinction time is smaller than $n^2$. 
Together with the following lemma, we will get the assertion (ii) of the theorem: \begin{lem} \label{mintaun} For every $s>0$, we have $$\mathbb{P} (\tau_n\le s)\le \frac{s}{\mathbb{E} (\tau_n)}.$$ \end{lem} This lemma is a direct consequence of the Markov property and the attractiveness of the contact process, see for instance Lemma 4.5 in \cite{MMVY}. \vspace{0.2cm} For simplicity we assume that $\lambda \le 1$, and leave to the reader the task of slightly modifying the values of some constants in the case $\lambda>1$. We also assume first that the degree distribution is given by \eqref{pnaj}. Let $\bar{n}_a$ be the number of vertices having degree larger than $n^{1/2a}$. Then $\bar{n}_a \sim \mathcal{B}(n, \bar{p}_a)$, where $\bar{p}_a = \sum_{j > n^{1/2a}} p_{n,a}(j) \asymp n^{(1-a)/2a}$. Hence, as for Lemma \ref{lb}, there exists a constant $K>0$, such that \begin{equation*} \label{tt} \mathbb{P} \left(\bar{n}_a \le K n^{(1+a)/2a}\right) = 1-o(1). \end{equation*} In fact thanks to Lemma \ref{lb}, we can even assume that \begin{equation} \label{En} \mathbb{P} (\mathcal E_n) = 1-o(1), \end{equation} where $$\mathcal{E}_n := \left\{n_j \le K n j^{-a} \textrm{ for all } j \le n^{1/2a}\right\} \cap \left\{\bar{n}_a \le K n^{(1+a)/2a}\right\}.$$ Now if a vertex has degree $j$, the probability that it becomes healthy before spreading infection to another vertex is at least equal to $1/(1+ j \lambda)$ (it is in fact exactly equal to this if there is no loop attached to this vertex). Since this happens independently for all vertices, we have that a.s. for $n$ large enough, on $\mathcal{E}_n$, \begin{align*} \mathbb{P}(\tau_n \leq \min_v \sigma(v) \mid (D_v)_{v\in V_n}) & \geq (1/(1+ \lambda n))^{\bar{n}_{a}} \prod \limits_{j=1}^{ \, \,n^{1/2a}}(1/(1+ \lambda j))^{n_j} \\ & \geq (2 \lambda n)^{-\bar{n}_{a}} \prod \limits_{j=1}^{1/\lambda} 2^{-n_j}\prod \limits_{j=1/\lambda}^{\, \, n^{1/2a}} (2 \lambda j)^{-n_j} \\ & \geq (2\lambda)^{-n} n^{-\bar{n}_{a}} \prod \limits_{j=1/\lambda}^{\,\,n^{1/2a}} j^{-n_j}\\ & \geq \exp \left(-n \left( \log( 2 \lambda) + K \sum\limits_{j=1/\lambda}^{n^{1/2a}} j^{-a} \log j \right) - \bar{n}_{a} \log n \right) \\ & \geq \exp(-Cn/4), \end{align*} for some constant $C=C(\lambda)>0$. Now for each vertex $v$, $\sigma(v)$ is an exponential random variable with mean $1$. Hence, a.s. for $n$ large enough and on $\mathcal{E}_n$, $$ \mathbb{P} (\tau_n \leq n^2 \mid (D_v)_{v\in V_n}) \geq e^{-Cn/4} - \mathbb{P} (\exists v : \sigma(v) \geq n^2) \geq e^{-Cn/2}. $$ The same can be proved in the case when the degree distribution is given by \eqref{paj}. One just has to use that w.h.p. all the degrees are bounded by $n^{2/(a-1)}$, but this does not seriously affect the proof. Together with \eqref{En}, it follows that $$\mathbb{P} (\tau_n \leq n^2)\ge \exp(-Cn)(1-o(1)),$$ and as we already mentioned above, with Lemma \ref{mintaun} we get the assertion (ii) of the theorem. \vspace{0.2cm} \noindent We now prove (i). This will be a consequence of a more general result: \begin{prop} \label{pcel} Let $(G_n^0)$ be a sequence of connected graphs, such that $|G_n^0|\le n$, for all $n$. Let $\tau_n$ denote the extinction time of the contact process on $G_n^0$ starting from full occupancy. Assume that \begin{align} \label{nas} \frac{D_{n,\max}}{d_n \vee \log n} \rightarrow \infty, \end{align} with $D_{n,\max}$ the maximum degree and $d_n$ the diameter of $G_n^0$.
Then \begin{align*} \frac{\tau_n}{\mathbb{E} (\tau_n)}\quad \mathop{\longrightarrow}^{(\mathcal{L})}_{n\to \infty} \quad \mathcal{E}(1), \end{align*} where $\mathcal{E}(1)$ is an exponential random variable with mean one. \end{prop} \begin{proof} According to Proposition 1.2 in \cite{M} and Lemma \ref{mintaun} above it suffices to show that there exists a sequence $(a_n)$, such that $a_n=o(\mathbb{E} (\tau_n))$ and \begin{eqnarray} \label{xivxi} \sup_{v\in V_n}\, \mathbb{P} (\xi^v_{a_n} \neq \xi_{a_n}, \xi^v_{a_n} \neq \varnothing) = o(1), \end{eqnarray} where $(\xi_t)_{t\ge 0}$ denotes the process starting from full occupancy. Set $\bar{\lambda}= \lambda \wedge 1$. Using Lemma \ref{lst}, we get \begin{align} \label{taunDnmax} \mathbb{E} (\tau_n) \geq \exp(c \bar{\lambda}^2 D_{n,\max}), \end{align} with $c$ as in this lemma. Using next \eqref{nas}, we can find a sequence $(\varphi_n)$ tending to infinity, such that \begin{align} \label{Dnmaxdn} \frac{D_{n,\max}}{(\log n \vee d_n)\varphi_n} \rightarrow \infty. \end{align} Now define \begin{align*} b_n= \exp(c \bar{\lambda}^2 (\log n \vee d_n) \varphi_n ) \quad \textrm{and}\quad a_n=4b_n+1. \end{align*} Then \eqref{taunDnmax} and \eqref{Dnmaxdn} show that $a_n=o(\mathbb{E} (\tau_n))$, so it now remains to prove \eqref{xivxi} for this choice of $(a_n)$. To this end it is convenient to introduce the dual contact process. Given a positive real $t$ and a subset $A$ of the vertex set $V_n$ of $G_n$, the dual process $(\hat{\xi}^{A,t}_s)_{s\le t}$ is defined by \[\hat{\xi}^{A,t}_s = \{ v\in V_n : (v,t-s)\longleftrightarrow A \times \{ t \} \},\] for all $s\le t$. It follows from the graphical construction that for any $v$, \begin{eqnarray} \label{cl2} &&\nonumber \mathbb{P} (\xi^v_{a_n} \neq \xi_{a_n}, \xi^v_{a_n} \neq \varnothing)\\ &=& \mathbb{P} (\exists w\in V_n : \xi^v_{a_n}(w) = 0,\, \xi^v_{a_n} \neq \varnothing,\, \hat{\xi}^{w,a_n}_{a_n} \neq \varnothing) \notag\\ &\le & \sum_{w\in V_n} \mathbb{P} \left(\xi^v_{a_n} \neq \varnothing,\, \hat{\xi}^{w,a_n}_{a_n} \neq \varnothing, \textrm{ and } \hat{\xi}^{w,a_n}_{a_n-t} \cap \xi^v_t = \varnothing \textrm{ for all } t\le a_n\right). \end{eqnarray} Let us now prove that the last sum above tends to $0$ when $n\to \infty$. Set $$\beta_n = [\varphi_n (d_n \vee \log n)],$$ and let $u$ be a vertex with degree larger than $\beta_n$. Then let $S(u)$ be a star graph of size $\beta_n$ centered at $u$. Now we slightly change the definition of a lit vertex, and say that $u$ is lit if the number of its infected neighbors \textit{in $S(u)$} is larger than $\bar{\lambda} \beta_n/(16e)$. We first claim that \begin{align} \mathbb{P}(\xi^v_{b_n} \neq \varnothing, u \textrm{ is not lit before } b_n ) = o(1/n). \label{vbn1} \end{align} To see this, define $K_n=[b_n/d_n]$ and for any $0 \leq k \leq K_n-1$ \[A_k:=\{\xi^v_{kd_n}\neq \varnothing\},\] and \[B_k:= \left\{ \xi_{kd_n}^v\times\{kd_n\} \longleftrightarrow (u,(k+1)d_n-1) \right\} \cap \{u \textrm{ is lit at time } (k+1)d_n\}.\] Note that \begin{align} \label{inc.bn} \{\xi^v_{b_n} \neq \varnothing, u \textrm{ is not lit before } b_n\} \ \subset \ \bigcap_{k=0}^{K_n-1} A_k \cap B_k^c.
\end{align} Moreover, by using an argument similar to the one for \eqref{Cmj}, we obtain \begin{eqnarray*} \mathbb{P} \left((z,t)\longleftrightarrow (z',t+d_n-1)\right) \ge \exp(-C d_n) \quad \textrm{for any $z, z'\in V_n$ and $t\ge 0$}, \end{eqnarray*} for some constant $C>0$ (in fact this is not true if $d_n=1$; but in this case one can just consider time intervals of length $d_n+1$ instead of $d_n$). On the other hand, Lemma \ref{lst} (iii) implies that if $u$ is infected at time $t$ then it is lit at time $t+1$ with probability larger than $1/3$, if $n$ is large enough. Therefore for any $k\le K_n-1$, $$\mathbb{P} (B_k^c\mid \mathcal{G}_k){\bf 1}(A_k) \le 1-\exp(-Cd_n)/3,$$ with $\mathcal{G}_k$ the sigma-field generated by all the Poisson processes introduced in the graphical construction in the time interval $[0, kd_n]$. Iterating this, we get \begin{eqnarray*} \mathbb{P} \left(\bigcap_{k=0}^{K_n-1} A_k \cap B_k^c\right) &\le & (1-\exp(-Cd_n)/3)^{K_n-1} = o(1/n), \end{eqnarray*} where the last equality follows from the definition of $b_n$. Together with \eqref{inc.bn} this proves our claim \eqref{vbn1}. Then by using Lemma \ref{lst} (iv) we get \begin{align} \label{vb} \mathbb{P}(\xi^v_{b_n} \neq \varnothing, u \textrm{ is not lit at time } 2b_n ) = o(1/n). \end{align} Therefore, if we define \begin{align*} \mathcal{A}(v)&=\{\xi^v_{b_n} \neq \varnothing, u \textrm{ is lit at time $2b_n$} \}, \end{align*} we get $$ \mathbb{P}(\mathcal{A}(v)^c, \xi^v_{b_n} \neq \varnothing)=o(1/n). $$ Likewise, if we define \begin{align*} \hat{\mathcal{A}}(w)&= \{\hat{\xi}^{w,4b_n+1}_{b_n} \neq \varnothing, \exists \, U \subset S(u): |U| \geq \frac{\bar{\lambda}}{16e}\beta_n\textrm{ and } (x,2b_n+1) \leftrightarrow (w,4b_n+1) \, \forall \, x \in U \}, \end{align*} then $$ \mathbb{P}(\hat{\mathcal{A}}(w)^c, \hat{\xi}^{w, 4b_n+1}_{b_n} \neq \varnothing)=o(1/n). $$ Moreover, $\mathcal{A}(v)$ and $\hat{\mathcal{A}}(w)$ are independent for all $v$, $w$. Now the result will follow if we can show that for any $A,B \subset S(u)$ with $|A|, |B|$ larger than $\bar{\lambda}\beta_n/(16e)$ \begin{eqnarray} \label{aub} \mathbb{P}(A \times \{2b_n\} \mathop{\longleftrightarrow}^{S(u)} B \times \{2b_n+1\} ) = 1-o(1/n), \end{eqnarray} where the notation $$A \times \{2b_n\} \mathop{\longleftrightarrow}^{S(u)} B \times \{2b_n+1\}$$ means that there is an infection path inside $S(u)$ from a vertex in $A$ at time $2b_n$ to a vertex in $B$ at time $2b_n+1$. To prove \eqref{aub}, define \begin{align*} \bar{A} &=\{x \in A \setminus \{u\}: \mathcal{N}_{x} \cap [2b_n,2b_n+1] = \varnothing\},\\ \bar{B} &=\{y \in B \setminus \{u\}: \mathcal{N}_{y} \cap [2b_n,2b_n+1] = \varnothing\}. \end{align*} Since for any $x$, $$\mathbb{P}(\mathcal{N}_{x}\cap [2b_n,2b_n+1] = \varnothing)= e^{-1},$$ standard large deviations results show that $|\bar{A}|$ and $|\bar B|$ are larger than $e^{-1} \bar{\lambda} \beta_n/(32e)$, with probability $1-o(1/n)$. Now let $$\mathcal{E}= \{|\bar{A}| \geq e^{-1} \bar{\lambda} \beta_n/(32e)\} \cap \{|\bar{B}| \geq e^{-1} \bar{\lambda} \beta_n/(32e)\}.
$$ Set \begin{align*} \varepsilon_n = \frac{1}{(\log n) \sqrt{\varphi_n}} \quad \textrm{and} \quad J_n = \left[ \frac{(\log n) \sqrt{\varphi_n}}{2}\right], \end{align*} and define for $0 \leq j \leq J_n-1$ \begin{eqnarray*} C_j& =& \{\mathcal{N}_u \cap [2b_n+2j \varepsilon_n ,2b_n+(2j+2) \varepsilon_n ] = \varnothing\} \\ &\cap& \{ \exists x \in \bar{A}: \mathcal{N}_{(x,u)} \cap [2b_n+2j \varepsilon_n ,2b_n+(2j+1) \varepsilon_n ] \neq \varnothing\} \\ &\cap& \{ \exists y \in \bar{B}: \mathcal{N}_{(u,y)} \cap [2b_n+(2j+1) \varepsilon_n ,2b_n+(2j+2) \varepsilon_n ] \neq \varnothing\}. \end{eqnarray*} Observe that \begin{eqnarray} \label{unionCj} \bigcup_{j=0}^{J_n-1} C_j \subset \Big\{A \times \{2b_n\} \mathop{\longleftrightarrow}^{S(u)} B \times \{2b_n+1\}\Big\}. \end{eqnarray} Moreover, conditionally on $\bar A$ and $\bar B$, the events $(C_j)$ are independent, and \begin{align*} \mathbb{P}(C_j \mid \bar A, \bar B) & = e^{-2 \varepsilon_n} \mathbb{P}(\mathcal{B}(|\bar A|, 1- e^{-\varepsilon_n}) \geq 1)\times \mathbb{P}(\mathcal{B}(|\bar B|, 1- e^{-\varepsilon_n}) \geq 1)\\ &\ge 1/2, \end{align*} on the event $\mathcal{E}$, if $n$ is large enough. Therefore \begin{eqnarray*} \mathbb{P} \left(\mathcal{E}, \, \bigcap_{j=0}^{J_n-1} C_j^c\right) &\le & 2^{-J_n} = o(1/n). \end{eqnarray*} This together with \eqref{unionCj} implies \eqref{aub}, and concludes the proof of the proposition. \end{proof} \begin{rema} \label{remexp} \emph{This proposition applies to various examples, for instance to the configuration model with degree distribution satisfying $p(1)=p(2)=0$, and $$p(k) \sim c k^{-a}\qquad \textrm{as }k\to\infty,$$ for some constants $c>0$ and $a>2$. This is the degree distribution considered in \cite{CD, MMVY}. In this case it is known that w.h.p. the graph is connected and has diameter $\mathcal{O}(\log n)$, see \cite[Lemma 1.2]{CD}, and since the maximal degree is at least polynomial, the proposition applies here. It also applies to the preferential attachment graph model considered by Berger et al.\ \cite{BBCS}, see \cite{C}. } \end{rema} \begin{rema} \label{tn} \emph{Assume that on a sequence of graphs $(G_n)$, one can prove that w.h.p. $\tau_n\ge \varphi(n)$, for some function $\varphi(n)$, and that at the same time we can prove \eqref{xivxi} for some $a_n\le \varphi(n)$. Then observe that if \eqref{etd} holds with $t_n=a_n$, then by using the self-duality, we can see that the same holds as well with $t_n=\varphi(n)$. In particular, in our setting, by using Theorem \ref{propexp}, we deduce that \eqref{etd} holds with $t_n = \exp(c n)$, for any $c<c_{\textrm{crit}}:=\liminf (1/n)\log \mathbb{E} (\tau_n)$, but (using again Theorem \ref{propexp}) it does not when $c > c_{\textrm{crit}}$. This argument also explains why the combination of the results in \cite{MVY} and \cite{MMVY} gives the statement that was mentioned in the introduction for the case $a>2$. } \end{rema} Now to complete the proof of Theorem \ref{propexp} (i), it remains to show that the hypothesis of the proposition is indeed satisfied in our case, namely for the maximal connected component -- call it $G_n^0$ -- of the configuration model $G_n$. It amounts to showing first that the size of all the other connected components is much smaller, to ensure that w.h.p. the extinction time on $G_n$ and on $G_n^0$ coincide. Remember that from Theorem \ref{td} we know that on $G_n$ it is w.h.p. larger than $\exp(cn)$. At the same time we will show that the diameter of $G_n^0$ is $o(n)$.
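Parenthetically, these connectivity claims are easy to explore numerically. The following sketch is for illustration only: it relies on the networkx package and on a degree sampler such as the hypothetical one sketched in the introduction, and it merely reports the size and diameter of the largest connected component of a sampled $G_n$.
\begin{verbatim}
# Sanity check (illustration only).  networkx requires the degree
# sequence to have an even sum, so we adjust one degree if needed.
import networkx as nx

def largest_component_stats(degrees):
    if sum(degrees) % 2 == 1:
        degrees[0] += 1
    G = nx.Graph(nx.configuration_model(degrees))  # collapse multi-edges
    G.remove_edges_from(nx.selfloop_edges(G))      # drop loops
    core = G.subgraph(max(nx.connected_components(G), key=len))
    return core.number_of_nodes(), nx.diameter(core)
\end{verbatim}
We now turn to the proofs of these claims.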
Since we could not find a reference, we provide a short proof here (in fact much more is true, see below). For $v\in V_n$, we denote by $\mathcal{C}(v)$ the connected component of $G_n$ containing $v$, and by $||\mathcal{C}(v)||$ its number of edges. We also define $$d'_n := \max_{v\notin G_n^0} \ ||\mathcal{C}(v)||.$$ \begin{lem} \label{dmg} Let $G_n$ be the configuration model with $n$ vertices and degree distribution given either by \eqref{pnaj} or \eqref{paj}, with $a\in (1,2]$. Let $d_n=\textrm{diam}(G_n^0)$ be the maximal distance between pairs of vertices in $G_n^0$. Then there exists a positive constant $C$, such that w.h.p. \begin{eqnarray*} \max(d_n,d'_n) \leq \left\{ \begin{array}{ll} C & \textrm{when } 1<a<2 \\ 4 \log n/ \log \log n &\textrm{when } a=2.\\ \end{array} \right. \end{eqnarray*} \end{lem} \begin{proof} We only prove the result for $a=2$ here, the case $a<2$ being entirely similar. To fix ideas we also assume that the degree distribution is given by \eqref{pnaj}, but the proof works as well with \eqref{paj}. Set $$F=\left\{v: D_v\ge (\log n)^4\right\}.$$ Lemma \ref{ltb1} (iii) shows that w.h.p. all the elements of $F$ are in the same connected component, and Lemma \ref{ltb1} (v) then shows that w.h.p. this component has size $n(1-o(1))$, in particular it is the maximal connected component. In conclusion we get \begin{equation} \label{F} \mathbb{P} (F\subset G_n^0) = 1-o(1). \end{equation} Now let $$R_n: = \sum_{v\notin F} D_v.$$ By construction, the probability that a given stub incident to some vertex $v\notin F$ is matched with a stub incident to a vertex lying outside $F$ is at most $R_n/(L_n-1)$. By iterating this argument, we get $$\mathbb{P} \left(d(v,F)>k \textrm{ and }||\mathcal{C}(v)|| > k \mid (D_w)_{w \in V_n}\right) \leq \frac{R_n}{L_n-1} \frac{R_n}{L_n-3}\cdots \frac{R_n}{L_n-2k+1},$$ for any $k$, where $d(v,F)$ denotes the graph distance between $v$ and $F$ (which by convention we take to be infinite when there is no element of $F$ in $\mathcal{C}(v)$). Then it follows from Lemma \ref{ltb1} (i) and the fact that $R_n\asymp n\log \log n$, that $$\mathbb{P} \left(d(v,F) > k_n \textrm{ and }||\mathcal{C}(v)|| > k_n \right) \leq \left(\frac{C \log \log n}{ \log n} \right)^{ 2\log n / \log \log n-1} = o(n^{-1}),$$ for some constant $C>0$, with $k_n=2\log n/( \log \log n) -1$. This proves the lemma, using a union bound and \eqref{F}. \end{proof} To complete the proof of Part (i) of the theorem, we just need to remember that on any graph with $k$ edges, and for any $t\ge 1$, the extinction time is bounded by $2t$ with probability at least $1-(1-\exp(-Ck))^t$ (since on each time interval of length $1$ it has probability at least $\exp(-Ck)$ to die out, for some constant $C>0$, independently of the past). Therefore the previous lemma shows that w.h.p. the extinction time on $G_n^0$ and on $G_n$ are equal, as was announced just above the previous lemma. Then Part (i) of the theorem follows with Proposition \ref{pcel}. \section{Extension to more general degree distributions} \label{secext} We present here some rather straightforward extensions of our results to more general degree distributions.
A first one, which was also considered in \cite{VVHZ}, is to take distributions which interpolate between \eqref{pnaj} and \eqref{paj}: for any fixed $\alpha \in [1,\infty]$, define $$p_{n,a,\alpha}(j):= c_{n,a,\alpha} \, j^{-a} \qquad \textrm{for all }1\le j\le n^\alpha,$$ where $(c_{n,a,\alpha})$ are normalizing constants, and with the convention that the case $\alpha = \infty$ corresponds to the distribution given by \eqref{paj}. It turns out that if $a <2$ and $\alpha < 1/(a-1)$, one can use exactly the same proof as in the case $\alpha =1$. When $\alpha > 1/(a-1)$, using that w.h.p. all vertices have degree smaller than $n^{1/(a-1)} \log \log n$, one can use the same proof as in the case $\alpha = \infty$. The case $\alpha = 1/(a-1)$ is more complicated, and as in \cite{VVHZ}, a proof would require a more careful analysis. When $a =2$, using that w.h.p. all vertices have degree smaller than $n \log \log n$, one can see that the same proof applies for any $\alpha>1$. \vspace{0.2cm} Another extension is to assume that there exist positive constants $c$ and $C$, and some fixed $m\ge 1$, such that for any vertex $v$, $$c j^{-a} \leq \mathbb{P} (D_v=j) \leq C j^{-a} \qquad \textrm{for }m \le j\le n^\alpha,$$ say with $\alpha = 1$, but it would work with $\alpha=\infty$ as well. The only minor change in this case is in the proof of Lemma \ref{q}, where one can argue as follows: just replace the set $A_1$ by the set of vertices in $A_m$ whose first $m-1$ stubs are not connected to any of the vertices in $E$. By definition these vertices have at most one neighbor in $E$, and moreover it is not difficult to see that this set also has w.h.p. a size of order $n$. Then the rest of the proof applies, mutatis mutandis. All other arguments in the proof of Theorem \ref{td} remain unchanged. Therefore in this case we obtain that: $$\frac{|\xi^{V_n}_{t_n}|}{n} - \rho_{n,a}(\lambda) \xrightarrow{ \,\, (\mathbb{P} ) \, \, \, } 0,$$ with $\rho_{n,a}(\lambda)$ as in \eqref{lm}. Theorem \ref{propexp} also remains valid in this setting. \vspace{0.2cm} \noindent \textbf{Acknowledgments:} We thank Daniel Valesin for pointing out a gap in the proof of a previous version of Proposition 6.2.
\section{INTRODUCTION} \IEEEPARstart{R}{obots} performing manipulation tasks rely on models of their bodies, and their success is largely determined by the accuracy of these models. However, inaccuracies creep in in many ways: in the assembly process, through mechanical elasticity, or simply because of cheaply designed components. Therefore, the actual model parameters of every individual robot have to be found by means of a calibration procedure, usually relying on external metrology systems. For kinematic calibration, such apparatuses can measure one or more of the components of the end-effector pose employing mechanical, visual, or laser systems (see~\cite{Hollerbach2016} for a survey). Different arrangements have different accuracy, requirements on the environment, and cost, and the corresponding conditions have to be available every time recalibration is to be performed. Current trends in the robotics industry make classical calibration procedures less practical: with the advent of the so-called ``collaborative robots'', for example, the machines are becoming cheaper, lightweight, and compliant, and they are deployed in more versatile ways according to the needs of customized small-batch production, rather than being fixed in a single production line for their entire lifetime. All these factors increase the need for calibration to be performed more frequently. At the same time, the machines, including home and service robots, often come with richer sets of powerful sensory devices that are affordable and not difficult to operate. Both these trends speak for alternative solutions to the self-calibration problem that are more ``self-contained'' and can be performed autonomously by the robot. Hollerbach et al.~\cite{Hollerbach2016} classify calibration methods into \textit{open-loop}---where one or more of the components of the end-effector pose is measured employing mechanical, visual, or laser systems---and \textit{closed-loop}---where physical constraints on the end-effector position or orientation can substitute for measurements. Observing the end-effector---or in general any other points on the kinematic chain---using a camera falls into the open-loop calibration family, although components of the end-effector pose can be observed only indirectly through projection into the camera frame. Self-touch configurations employing two arms of a humanoid robot could be framed as a constraint, and hence treated as closed-loop, if only contact measurements (e.g., from force/torque sensors) were available. In this work, we follow up on \cite{Roncone_ICRA_2014} and emulate sensitive skin measurements, which provide the position of contact (and hence fit more naturally with open-loop calibration). Our work is a simulation study that draws on calibration in the real world---like different approaches to kinematic calibration of the iCub humanoid robot relying on self-observation \cite{Fanello2014,Vicente2016} and self-touch \cite{Roncone_ICRA_2014}. Using the model of the robot with identical parameters, but exploiting the fact that we have complete knowledge of the system and the capacity to emulate different levels of model perturbation and measurement noise, our goal is to gain insight into the pros and cons of different optimization problem formulations. In particular, we study how the calibration performance depends on the type and number of intersecting kinematic chains, the number of parameters calibrated, the number of robot configurations, and the measurement noise.
An accompanying video is available at \url{https://youtu.be/zP3c7Eq8yVk} and the dataset at~\cite{ProjectWeb}. This article is structured as follows. Related work is reviewed in the next section, followed by Materials and Methods, Data Acquisition and Description, and Simulation Results. We close with a Discussion and Conclusion. \section{RELATED WORK} We focus on humanoid robots and humanoid-like setups with many Degrees of Freedom (DoF), with two arms that can possibly self-touch, equipped with cameras and tactile or inertial sensors. These are challenging setups for calibration, but they create new opportunities for automated self-contained calibration based on closing kinematic loops by touch (self-contact) and vision. Most often, the loops are closed through self-observation of the end-effector using cameras located in the robot head (an \textit{open-loop calibration} method per \cite{Hollerbach2016}). Hersch et al.~\cite{Hersch2008} and Martinez-Cantin et al.~\cite{Martinez-Cantin2010} present online methods to calibrate humanoid torso kinematics relying on gradient descent and recursive least squares estimation, respectively. The iCub humanoid was employed in \cite{Fanello2014,Vicente2016}. Vicente et al.~\cite{Vicente2016} used a model of the hand's appearance to estimate its 6D pose and used that information to calibrate the joint offsets. Fanello et al.~\cite{Fanello2014} had the robot observe its fingertip and learned essentially a single transformation only, to account for the discrepancy between the forward kinematics of the arm and the projection of the finger into the cameras. Next to cameras, inertial sensors also contain information that can be exploited for calibration. Kinematic calibration was shown exploiting 3-axis accelerometers embedded in artificial skin modules distributed on the robot body \cite{Mittendorfer2012,Dean2018}, or in the control boards of the iCub \cite{Guedelha2016} or the CMU/Sarcos \cite{Yamane2011}. The advent of robotic skin technologies \cite{Bartolozzi2016,Dahiya2013} opens up the possibility of a new family of approaches, whereby the chain is closed through contact as in closed-loop calibration, but the contact position can be extracted from the tactile array. Roncone et al.~\cite{Roncone_ICRA_2014} showed this on the iCub robot, which performs autonomous self-touch using a finger with a sensitive fingertip to touch the skin-equipped forearm of the contralateral arm; Li et al.~\cite{QiangLi2015} employed a dual KUKA arm setup with a sensorized ``finger'' and a tactile array on the other manipulator. Forward kinematics together with skin calibration provide the contact position, which can then be used for robot kinematic calibration. In this sense, the skin provides a pose measurement rather than a constraint, and as such, this may fall under \textit{open-loop calibration}. In this way, one arm of a humanoid can be used to calibrate the other. Khusainov et al.~\cite{Khusainov2017} exploit this principle using an industrial manipulator to calibrate the legs of a humanoid robot. Another variant is exploiting the sensitive fingertips to touch a known external surface \cite{Zenha2018}. Birbach et al.~\cite{Birbach2015} were, to our knowledge, the only ones to employ truly ``multisensorial'' or ``multimodal'' calibration.
Using the humanoid robot Justin observing its wrist, error functions comparing the wrist's position from forward kinematics with its projection into the left and right camera images, the Kinect image, and the Kinect disparity, together with an inertial term, were aggregated into a single cost function to be minimized. It is claimed that while pair-wise calibration can lead to inconsistencies, calibrating everything together in a ``mutually supportive way'' is most efficient. In this work, we compare calibration through self-observation (with projection into cameras) and calibration through self-touch, and the effect of their synergy. Our work makes a unique contribution, also compared to \cite{Birbach2015} who, first, employ essentially only ``hand-eye'' kinematic chains terminating in different vision-like sensors in the robot head, and, second, consider only the case where all chains are combined together using a single cost function. \section{MATERIALS AND METHODS} \subsection{iCub robot kinematic model and camera parameters} In this work, we use the upper body of the iCub humanoid robot (see Fig.~\ref{fig:kinModel}) and its kinematic model expressed in the Denavit-Hartenberg convention, where every link $i$ is described by 4 parameters: $\{a_i, d_i, \alpha_i, o_i\}$. In this platform, all joints are revolute. We will consider several kinematic chains: all start in a single inertial or base frame---denoted the iCub \textit{Root} Reference Frame here. For every chain, the DH parameters uniquely define a chain of transformation matrices from the inertial frame to the end-effector. The position and orientation of the end-effector in the \textit{Root} frame is thus given by ${\boldsymbol{T}}_n^{Root} = A_1(q_1)\cdots A_n(q_n)$, where the homogeneous transformation matrices $A_i$ can be constructed from the DH representation and $q_i$ are the current joint angles of the robot actuators (a minimal sketch of this computation is given below). The links are schematically illustrated in Fig.~\ref{fig:kinModel}. iCub kinematics version 1 was used \cite{icubWiki} with the following modification: the \textit{Root} was moved from the waist area to the third torso joint, which is the new inertial frame for our purposes. \begin{figure}[thpb] \centering \framebox{\parbox{3.2in}{ \includegraphics[width = 95 pt]{NewiCubRefFramesUpperBodyC.png} \includegraphics[width = 125 pt]{iCubSelfTouchFig2_chains.png}}} \caption{iCub upper body and schematic illustration of the kinematic chains considered. All chains originate in a common \textit{Root}, which is located at the third torso joint. The left and right arm chains are drawn in green and blue respectively. The eye chains have a common Root-to-head chain part marked in red. The right panel illustrates the self-calibration by connecting different chains---self-touch and self-observation. White lines denote projection into the eyes/cameras.} \label{fig:kinModel} \end{figure} The four chains under consideration are: \begin{enumerate} \item Left Arm (LA). DH parameters in Table~\ref{tab:LA_DH}. Short names to denote the links/joints: Root-to-LAshoulder, LA Shoulder Pitch, LA Shoulder Roll, LA Shoulder Yaw, LA Elbow, LA Wrist Prosup (for pronosupination), LA Wrist Pitch, LA Wrist Yaw. \item Right Arm (RA). DH parameters analogous to LA (see~\cite{icubWiki}). Link/joint names: Root-to-RAshoulder, RA Shoulder Pitch, RA Shoulder Roll, RA Shoulder Yaw, RA Elbow, RA Wrist Prosup, RA Wrist Pitch, RA Wrist Yaw. \item Left Eye (LEye). DH parameters in Table~\ref{tab:LEye_DH}.
Link/joint names: Root-to-neck, Neck Pitch, Neck Roll, Neck Yaw, Eyes Tilt, Left Eye Pan. \item Right Eye (REye). DH parameters differing from LEye as given in Table~\ref{tab:REye_DH}. Link/joint names: Root-to-neck, Neck Pitch, Neck Roll, Neck Yaw, Eyes Tilt, Right Eye Pan. \end{enumerate} Links or parameters not subject to calibration are shown shaded in grey in the corresponding tables. The first link always originates in the Root frame and is fixed in all chains (the torso joint is not moving), so it is excluded from calibration. The $\alpha$ parameter of the last link in the arm chains is also not calibrated: it is not observable, since we observe only the position, not the orientation, of the end-effectors. The right arm chain is further extended with a fixed transform from the end-effector in the palm to the tip of the index finger---not subject to calibration. The eye chains differ in the last link only. \begin{table}[htpb] \centering \begin{tabular}{c|cccc} \hline Link(i) & a(i) [mm] & d(i) [mm] &$\alpha$ [rad] & $o$ [rad]\\ \hline \rowcolor{Gray} 1 & 23.36 & 143.3 & $\pi/2$ & $105\pi/180$\\ 2 & 0 & 107.74 & $-\pi/2$ & $\pi/2$\\ 3 & 0 & 0 & $\pi/2$ & $-\pi/2$\\ 4 & 15 & 152.28 & $-\pi/2$ & $75\pi/180$\\ 5 & -15 & 0 & $\pi/2$ & 0\\ 6 & 0 & 137.3 & $\pi/2$ & $-\pi/2$\\ 7 & 0 & 0 & $\pi/2$ & $\pi/2$\\ 8 & 62.5 & -16 & \cellcolor{Gray}0 & 0\\ \end{tabular} \caption{DH parameters ($a, d, \alpha$ and offsets $o$) describing all links in the Left Arm kinematic chain.} \label{tab:LA_DH} \end{table} \begin{table}[htpb] \centering \begin{tabular}{c|cccc} \hline Link(i) & a(i) [mm] & d(i) [mm] &$\alpha$ [rad] & $o$ [rad]\\ \hline \rowcolor{Gray} 1 & 2.31 & -193.3 & $-\pi/2$ & $\pi/4$\\ 2 & 33 & 0 & $\pi/2$ & $\pi/4$\\ 3 & 0 & 1 & $-\pi/2$ & $\pi/4$\\ 4 & -54 & 82.5 & $-\pi/2$ & $\pi/4$\\ 5 & 0 & -34 & $-\pi/2$ & $0$\\ 6 & 0 & 0 & $\pi/2$ & $-\pi/4$\\ \end{tabular} \caption{DH parameters -- Left Eye kinematic chain.} \label{tab:LEye_DH} \end{table} \begin{table}[htpb] \centering \begin{tabular}{c|cccc} \hline Link(i) & a(i) [mm] & d(i) [mm] &$\alpha$ [rad] & $o$ [rad]\\ \hline 5 & 0 & 34 & $\pi/2$ & $-\pi/4$\\ 6 & 0 & 0 & $-\pi/2$ & $0$\\ \end{tabular} \caption{DH parameters -- Right Eye kinematic chain. Links 1--4 are shared with the Left Eye kinematic chain.} \label{tab:REye_DH} \end{table} The camera intrinsic parameters were taken from the real robot cameras and were not subject to calibration: resolution $320 \times 240$, focal lengths $f_x=257.34$ and $f_y=257.34$, principal point coordinate $c_y=120$. \subsection{Optimization problem formulation} \label{sec:optim} By calibration we mean estimation of the parameter vector ${\boldsymbol{\phi}} =\{ [a_1,...,a_n], [d_1,...,d_n], [\alpha_1,...,\alpha_n], [o_1,...,o_n]\}$, where $N = \{1,\dots,n \}$ is the set of indices identifying the individual links; $a$, $d$ and $\alpha$ are the first three parameters of the DH formulation, and $o$ is the offset that specifies the positioning of the encoders on the joints with respect to the DH representation. We often estimate only a subset of these parameters, assuming that the others are known. This subset can for example consist of a subset of links $N' \subset N$ (e.g., only parameters of one arm are to be calibrated) or a subset of the parameters (e.g., only offsets $o$ are to be calibrated---sometimes dubbed ``daily calibration'' \cite{Nickels2003}).
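As a concrete illustration of the forward-kinematics map ${\boldsymbol{T}}_n^{Root} = A_1(q_1)\cdots A_n(q_n)$ introduced above, the following minimal Python sketch (ours, for illustration only; it is not the implementation used in the experiments, and \texttt{LA\_DH} and \texttt{q\_encoders} are placeholder names) builds the homogeneous transform of one link from its DH parameters and composes a chain; the joint encoder reading $q_i$ enters the $i$\textsuperscript{th} transform as $\theta_i = q_i + o_i$.
\begin{verbatim}
import numpy as np

def dh_matrix(a, d, alpha, theta):
    # Homogeneous transform A_i of one link in the DH convention:
    # Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def forward_kinematics(dh_params, q):
    # dh_params: list of (a, d, alpha, o) per link; q: joint angles
    T = np.eye(4)
    for (a, d, alpha, o), qi in zip(dh_params, q):
        T = T @ dh_matrix(a, d, alpha, qi + o)
    return T

# End-effector position in the Root frame = translation part of T:
# p = forward_kinematics(LA_DH, q_encoders)[:3, 3]
\end{verbatim}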
The estimation of the parameter vector $\boldsymbol{\phi}$ is done by optimizing a given objective function: \begin{equation} \label{eq:optim} {\boldsymbol{\phi}}^* = \operatornamewithlimits{argmin}_{\boldsymbol{\phi}} \sum_{m=1}^M || {\boldsymbol{p}}_m^r - {\boldsymbol{p}}_m^e ({\boldsymbol{\phi}}, {\boldsymbol{\Theta}}_m)||^2, \end{equation} where $M$ is the number of robot configurations and corresponding end-effector positions used for calibration (hereafter often referred to as ``poses'' for short), ${\boldsymbol{p}}_m^r$ is the real (observed) end-effector position, and ${\boldsymbol{p}}_m^e$ is the estimated end-effector position computed using the forward kinematics function for a given parameter estimate ${\boldsymbol{\phi}}$ and the joint angles from joint encoders ${\boldsymbol{\Theta}}_m$. For chains involving cameras, the reprojection error is used instead, as described in the next section. \subsection{Kinematic chain calibration} We study different combinations of intersecting chains and their performance in calibrating one another. \subsubsection{Two arms chain (LA-RA)} This corresponds to the self-touch scenario, with touch occurring directly at the end-effectors (the right arm end-effector being shifted from the palm to the tip of the index finger using a fixed transform). The newly established kinematic chain for the upper body includes both arms, while the head and eyes are excluded. To optimize the parameters describing this chain, we minimize the distance between the estimated 3D positions of the left and right arm end-effectors. In this case, the parameter vector ${\boldsymbol{\phi}}$ consists of the following parameters: ${\boldsymbol{\phi}} = \{{\boldsymbol{\phi}}^r,{\boldsymbol{\phi}}^l\}$, where ${\boldsymbol{\phi}}^r$ and $\boldsymbol{\phi}^l$ are the parameters corresponding to the robot right and left arm, respectively. The objective function to be optimized is \begin{equation} \label{eq:lara} {\boldsymbol{\phi}}^* = \operatornamewithlimits{argmin}_{\boldsymbol{\phi}}\sum_{m=1}^M||{\boldsymbol{X}}_{m}^{r,R} ({\boldsymbol{\phi}}^r, {\boldsymbol{\Theta}}_m^r) - {\boldsymbol{X}}_{m}^{l,R} ({\boldsymbol{\phi}}^l, {\boldsymbol{\Theta}}_m^l)||^2 \end{equation} where $M$ is the number of poses used for calibration, and ${\boldsymbol{X}}_{m}^{r,R}$ and ${\boldsymbol{X}}_{m}^{l,R}$ are the $m$\textsuperscript{th} estimated end-effector positions in the Root frame for the right and left arm respectively, computed using a given parameter estimate $\boldsymbol{\phi}$ and the joint angles from joint encoders ${\boldsymbol{\Theta}}_m$. \subsubsection{Hand to eye chains (LA-LEye, LA-REye, RA-LEye, RA-REye)} To predict the position of the end-effector in each of the robot cameras (similar to \cite{Birbach2015}), the estimated end-effector position ${\boldsymbol{X}}^{Root}$ is given by the current hypothetical robot calibration of the parameter vector ${\boldsymbol{\phi}}$ and is computed via forward kinematics. ${\boldsymbol{X}}^{Root}$ is then mapped to left camera coordinates (${\boldsymbol{X}}^{LEye}$) using a transformation matrix ${\boldsymbol{T}}_{Root}^{LEye}$. Then we use a pinhole camera model to transform the 3D point (${\boldsymbol{X}}^{LEye}$) into image coordinates (${\boldsymbol{X}}^{img}$): \begin{equation} \begin{pmatrix} X^{img}_x \\ X^{img}_y \end{pmatrix} = \begin{pmatrix} f_x X^{LEye}_x/X^{LEye}_z \\ f_y X^{LEye}_y/X^{LEye}_z \end{pmatrix}, \end{equation} where $f_x$, $f_y$ are the focal lengths of the camera. Radial distortion of the cameras was not considered.
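The following sketch (again ours and simplified; \texttt{forward\_kinematics} is the helper from the previous sketch, while the pose records and the Root-to-eye transform are illustrative placeholders) shows how the self-touch residuals of Eq.~(\ref{eq:lara}) and the pinhole reprojection residuals can be stacked into one vector for a non-linear least-squares solver; the Levenberg-Marquardt method used later in the text corresponds, e.g., to \texttt{method="lm"} in SciPy.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def project(X_eye, fx=257.34, fy=257.34):
    # pinhole model above: 3D point in the camera frame -> pixels
    return np.array([fx * X_eye[0] / X_eye[2],
                     fy * X_eye[1] / X_eye[2]])

def make_residuals(poses, fk_left, fk_right, root_to_left_eye):
    # poses: iterable of (q_left, q_right, q_head, u_left) records
    def residuals(phi):
        res = []
        for q_l, q_r, q_h, u_left in poses:
            X_l = fk_left(phi, q_l)     # left end-effector, Root frame
            X_r = fk_right(phi, q_r)    # right fingertip, Root frame
            res.extend(X_r - X_l)       # self-touch 3D term
            T = root_to_left_eye(phi, q_h)
            X_le = (T @ np.append(X_l, 1.0))[:3]
            res.extend(project(X_le) - u_left)  # reprojection term
        return np.asarray(res)
    return residuals

# With a perturbed initial estimate phi0:
# sol = least_squares(make_residuals(data, fkL, fkR, T_le), phi0,
#                     method="lm")
\end{verbatim}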
This approach does not require information from both eyes and enables us to estimate only one side of the robot body (e.g., the parameters of the left arm and left eye). For example, the estimated parameter vector $\boldsymbol{\phi}$ in the case of the kinematic chain connecting the left arm and left eye consists of the following parameters: ${\boldsymbol{\phi}} = \{{\boldsymbol{\phi}}^l,{\boldsymbol{\phi}}^{le}\}$, where ${\boldsymbol{\phi}}^l$ and ${\boldsymbol{\phi}}^{le}$ are the parameters corresponding to the robot left arm and to the left eye, respectively. The objective function is then defined as: \begin{equation} \label{eq:laley} {\boldsymbol{\phi}}^* = \operatornamewithlimits{argmin}_{\boldsymbol{\phi}} \sum_{m=1}^M || {\boldsymbol{X}}^{l,img}_m({\boldsymbol{\phi}}^l,{\boldsymbol{\phi}}^{le}) - {\boldsymbol{u}}^L_m||^2, \end{equation} where ${\boldsymbol{X}}^{l,img}_m$ is the $m$\textsuperscript{th} 2D position of the estimated left arm end-effector projected to left eye image coordinates and ${\boldsymbol{u}}^L_m$ is the $m$\textsuperscript{th} 2D position of the observed left arm end-effector in the left camera. For two arms and two eyes we get four possible combined chains: left/right arm to right/left eye. Since the results are similar due to symmetry, we present in the experimental section results only for the Left arm -- Left eye (LA-LEye) chain. \subsubsection{Combining multiple chains (LA-RA-LEye, LA-RA-LEye-REye)} In order to estimate all kinematic parameters of the robot, we can take advantage of combining some or all of the above-mentioned kinematic chains. For example, in the case that we combine the self-touch chain (LA-RA) with the projections of both arms into both eyes, obtaining the combined LA-RA-LREye chain, the estimated parameter vector $\boldsymbol{\phi}$ consists of the following parameters: ${\boldsymbol{\phi}} = \{{\boldsymbol{\phi}}^r,{\boldsymbol{\phi}}^l,{\boldsymbol{\phi}}^{re},{\boldsymbol{\phi}}^{le}\}$, where ${\boldsymbol{\phi}}^l$, ${\boldsymbol{\phi}}^r$, ${\boldsymbol{\phi}}^{re}$, and ${\boldsymbol{\phi}}^{le}$ are the parameters corresponding to the left arm, right arm, right eye, and left eye, respectively. The objective function is in this case defined as: \begin{equation} \label{eq:laralreye} \begin{split} {\boldsymbol{\phi}}^* =& \operatornamewithlimits{argmin}_{\boldsymbol{\phi}} \sum_{m=1}^M \{\mu\cdot||{ {\boldsymbol{X}}}_{m}^{r,R} ({\boldsymbol{\phi}}^r, {\boldsymbol{\Theta}}_m^r) - {\boldsymbol{X}}_{m}^{l,R} ({\boldsymbol{\phi}}^l, {\boldsymbol{\Theta}}_m^l)||+\\ & || {\boldsymbol{X}}^{l_L,I}_m({\boldsymbol{\phi}}^l,{\boldsymbol{\phi}}^{le}) - {\boldsymbol{u}}^{l_L}_m|| + || {\boldsymbol{X}}^{r_L,I}_m({\boldsymbol{\phi}}^r,{\boldsymbol{\phi}}^{le}) - {\boldsymbol{u}}^{r_L}_m|| +\\ & || {\boldsymbol{X}}^{l_R,I}_m({\boldsymbol{\phi}}^l,{\boldsymbol{\phi}}^{re}) - {\boldsymbol{u}}^{l_R}_m|| + || {\boldsymbol{X}}^{r_R,I}_m({\boldsymbol{\phi}}^r,{\boldsymbol{\phi}}^{re}) - {\boldsymbol{u}}^{r_R}_m||\}^2,\\ \end{split} \end{equation} where $M$ is the number of poses (configurations) used for calibration, and ${\boldsymbol{X}}_{m}^{r,R}$ and ${\boldsymbol{X}}_{m}^{l, R}$ are the $m$\textsuperscript{th} estimated end-effector positions in the Root frame for the right and left arm, respectively. These are computed using a given parameter estimate $\boldsymbol{\phi}$ and the joint angles from joint encoders ${\boldsymbol{\Theta}}_m$.
Values ${\boldsymbol{X}}^{l_L,I}_m$ and ${\boldsymbol{X}}^{r_L,I}_m$ are the $m$\textsuperscript{th} positions of the estimated left and right arm end-effectors projected to left eye image coordinates, respectively, and ${\boldsymbol{u}}^{l_L}_m$ and ${\boldsymbol{u}}^{r_L}_m$ are the $m$\textsuperscript{th} 2D positions (pixel coordinates) of the left and right arm end-effectors observed in the left eye/camera (the variables ${\boldsymbol{X}}^{l_R,I}_m$, ${\boldsymbol{X}}^{r_R,I}_m$, ${\boldsymbol{u}}^{l_R}_m$ and ${\boldsymbol{u}}^{r_R}_m$ correspond analogously to the right eye). Since the cost function contains both 3D and reprojection errors, the distances in space were multiplied by a coefficient $\mu$ determined from the intrinsic parameters of the cameras and the distance $d$ of the end-effector from the eye: $\mu = 320\,\mathrm{px}/(d \cdot \pi/3)$. \subsection{Non-linear least squares optimization} The objective functions (Eqs.~\ref{eq:optim}--\ref{eq:laralreye}) defined for the optimization problem described in Section~\ref{sec:optim} are of the least-squares form and can therefore be minimized by the Levenberg-Marquardt algorithm for non-linear least squares optimization (we used the MATLAB implementation of the algorithm, the same as in \cite{Birbach2015}). This iterative local algorithm minimizes a non-linear objective function by linearizing it at the current estimate in every iteration. It interpolates between the Gauss-Newton method and gradient descent, combining the advantages of both. \subsection{Error metrics} For comparing the results achieved for individual settings, we make use of the following error metrics: \subsubsection{Cartesian error between poses (position)} The Cartesian position error $E_{c}$ between two generic poses A and B, where ${\boldsymbol{P_A}} = [x_A, y_A, z_A]$ and ${\boldsymbol{P_B}} = [x_B, y_B, z_B]$ are the 3D Cartesian positions of the end-effector, is defined as: \begin{equation} \begin{split} E_c = \sqrt{(x_A-x_B)^2+(y_A-y_B)^2+(z_A-z_B)^2}. \end{split} \end{equation} We evaluate the Cartesian error over the set of $N$ testing poses, which are selected as described in Section~\ref{sec:ttdata}. \subsubsection{Quality of estimated parameters} For each estimated parameter $\phi_i$ we compute the mean difference ($e_i$) of the estimated parameter $\phi_i^e$ from the target parameter value $\phi_i^t$ (averaged over $R$ repetitions of the experiment): \begin{equation} e_i = {{\sum_{r=1}^R{ |\phi_{i,r}^e-\phi_i^t|}}\over{R}}, \end{equation} as well as the standard deviation of the estimate. \section{Data acquisition and description} \subsection{Pose set generation} \label{subsec:pose_set_gen} With the goal of comparing different calibration methods on a humanoid robot, we chose a dataset where the two arms of the robot are in contact---thereby physically closing the kinematic chain through self-touch. At the same time, the robot gazes at the contact point (self-observation). The points were chosen from a cubic volume in front of the robot. For each target, using the Cartesian solver and controller \cite{Pattacini2010}, the iCub moves the left hand, with the end-effector in the palm, to the specified point. Then it moves the right hand, with the end-effector at the tip of the index finger, to the same point, with the additional constraint that the finger can be at most $50^\circ$ away from the direction perpendicular to the palm.
5055 points and corresponding joint configurations were thus generated, with the difference between the left and right end-effector positions in every configuration being at most $0.01$~mm---see Fig.~\ref{fig:poses_vis}, right. The gaze controller \cite{Roncone2016gaze} was used to command the neck and eyes of the robot to gaze at the same target (code and video can be accessed at \cite{github-datasetGenerator}). The full dataset thus consists of 5055 data vectors ${\boldsymbol{X}}_i = [{\boldsymbol{X}}^{target}_i, {\boldsymbol{X}}^{RA}_i, {\boldsymbol{X}}^{LA}_i, {\boldsymbol{\Theta}}_i]$ composed of the target point coordinates (${\boldsymbol{X}}^{target}_i \in \mathbb{R}^3$), the corresponding right arm and left arm end-effector positions (${\boldsymbol{X}}^{RA} \in \mathbb{R}^3$, ${\boldsymbol{X}}^{LA} \in \mathbb{R}^3$), and the joint angles $\boldsymbol{\Theta}_i$ for every joint of the torso, arms, neck, and eyes (${\boldsymbol{\Theta}}_i \in \mathbb{R}^{20}$). Note that the solvers work with a given tolerance and hence $ X^{target}_i \neq X^{RA}_i \neq X^{LA}_i$. This way of dataset generation draws on previous work \cite{Roncone_ICRA_2014} and is hence feasible on the real robot, provided sufficient quality of the initial model. Li et al.~\cite{QiangLi2015} provide an alternative control method: ``tactile servoing''. The robot could also be manipulated into the desired configurations while in gravity compensation mode. \subsection{Training and testing dataset} \label{sec:ttdata} We had 5055 configurations with $|\boldsymbol{X}^{RA}_i - \boldsymbol{X}^{LA}_i| < 0.01$~mm. This $0.01$~mm error will at the same time constitute the lower bound on the maximum achievable calibration accuracy using the closure of the kinematic chain through self-touch. For the case of loop closure through the cameras, we employ the neck and eye joint values obtained from the solver in the simulator, but reproject the end-effector positions directly and accurately into the cameras simulated in MATLAB. The 5055 data points were further divided into training and testing datasets in the following way: $N$ out of 4755 poses are used as a training set on which the optimization process is performed (with a subset of 10, 20, 50, or 1000 poses chosen at random in different experiments) and 300 poses are used for testing purposes. Fig.~\ref{fig:poses_vis}, left, shows the distribution of joint values for individual joints in the dataset---this may impact the identifiability of individual parameters. \begin{figure}[thpb] \centering \framebox{\parbox{3.3in}{\includegraphics[width=240 pt]{Fig2_all.png}}} \caption{Dataset visualization -- 5055 configurations. (left) Distribution of joint values. (right) End-effector positions. Red -- left arm; Green -- right arm. } \label{fig:poses_vis} \end{figure} \subsection{Measurement error} \label{sec:MeasurementError} Measurement noise with a Gaussian distribution was added, motivated by the sensory accuracy of the real robot. Since the distance between individual taxels on the real iCub sensitive skin is around 5 mm, we decided to use Gaussian noise with zero mean and $\sigma^2 = 5$~mm as a baseline for touch. For the cameras, we introduce a 5 px error (Gaussian noise with zero mean and $\sigma^2 = 5$~px), inspired by the setup in \cite{Fanello2014}, where the iCub detects its fingertip in the camera frame. These errors are used in all experiments in the Simulation results section unless stated otherwise.
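For concreteness, this noise injection can be sketched as follows (our illustration; array shapes and names are placeholders, and the variances are the baselines above):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def add_measurement_noise(X_touch, u_pix, var_touch=5.0, var_cam=5.0):
    # zero-mean Gaussian noise: var_touch on the 3D contact positions
    # (mm^2), var_cam on the pixel observations (px^2)
    X_noisy = X_touch + rng.normal(0.0, np.sqrt(var_touch), X_touch.shape)
    u_noisy = u_pix + rng.normal(0.0, np.sqrt(var_cam), u_pix.shape)
    return X_noisy, u_noisy
\end{verbatim}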
In Section~\ref{sec:measError} we evaluate how changing the size of these measurement errors affects the resulting end-effector position accuracy for the individual chains. \subsection{Perturbation of the initial parameter estimates} To evaluate the dependence of the optimization performance on the quality of the initial estimates of the parameters, we perturbed all estimated parameters by a \textit{perturbation factor} $p \in \{2,5,10,20\}$. We perturbed all initial offset values $o_i$ as follows: \begin{equation} o^{new}_i = \tfrac{1}{100}\, p \cdot \mathrm{uniform}[-1,1] + o_i \: [\mathrm{rad}]. \end{equation} It is reasonable to expect that the remaining DH parameters ($\alpha$, $a$, and $d$) will in general be more accurate, as they can be extracted from CAD models and there is no moving part and no encoder involved. Therefore, their perturbation was chosen as follows: \begin{equation} \begin{split} &\alpha: \alpha^{new}_i = \tfrac{1}{1000}\, p \cdot \mathrm{uniform}[-1,1]+\alpha_i \: [\mathrm{rad}],\\ &a, d: \Phi^{new}_i = 0.1\, p \cdot \mathrm{uniform}[-1,1]+\Phi_i \: [\mathrm{mm}].\\ \end{split} \end{equation} \section{SIMULATION RESULTS} In this section we show the calibration results. We evaluated our approach using both the error of the end-effector position---the cost function optimized (or the distance in the camera frame for projections into the eyes)---and the error in individual parameters (vs. their correct values). We compared the kinematic chains used for calibration, the number of free parameters estimated by the optimization process, different perturbation factors on individual parameters, the number of training poses (data points), as well as the measurement noise levels. Performance is always evaluated on the testing dataset. \subsection{Results for different chain combinations and number of training poses} Fig.~\ref{fig:pertDeg} (top) shows the performance in terms of end-effector position estimation when the DH parameters of the left arm (LA) chain are calibrated, utilizing different kinematic chain combinations: ``self-observation'' from a single camera (LALEye) and ``self-touch'' only (LARA) are outperformed by ``stereo self-observation'' (LALREye), and all the chains together provide the best results (LARALREye). Clearly, more training poses (50 vs. 20) improve calibration results; 1000 poses should be sufficient to reach an optimal value and serve as a lower bound on the error. The effect of the initial parameter perturbation factor is also shown; for all perturbation levels, the performance is stable (low error variance). \begin{figure}[htb] \centering \includegraphics[width=240 pt]{Fig3_measurement_error_top.pdf} \includegraphics[width = 240pt]{Fig3_bottom_finger_plus100n.pdf} \caption{End-effector position error after optimization---averaged over 10 repetitions. (Top) Left Arm chain calibration (full DH) using different chain combinations, different initial perturbation factors (2, 5, 10, 20) and training on 20 (left), 50 (middle), and 1000 poses (right -- pert. factor 5 only). (Bottom) Performance of different parameter sets subject to calibration -- the LARALREye chain was used for calibration of the parameters. Free parameters (being calibrated) in a given chain are denoted. E.g., LALEye denotes that all 51 DH parameters of the left arm and left eye (including the head) are calibrated, and the rest of the DH parameters (e.g., right arm) are considered to be known.} \label{fig:pertDeg} \end{figure} In Fig.~\ref{fig:pertDeg} (bottom), only the largest ``multi-chain'' LARALREye is employed for training, but the chains whose parameters are subject to calibration are varied.
The error of end-effector position estimation increases with the number of parameters estimated; however, even if the parameters of all chains (86 DH parameters) are perturbed and subject to calibration simultaneously, an end-effector error of around 2 (1)~mm can be achieved with 50 (100) poses. To investigate the distribution of errors for individual chains, we examined the error residuals for every testing pose. For a higher number of training poses, the error residuals have zero mean and a Gaussian distribution. For a lower number of poses (especially for higher perturbation), the residuals are bigger and skewed, and the resulting calibration also depends strongly on the initialization. In Fig.~\ref{fig:chains}, the end-effector error residuals for perturbation factor $p=10$ are shown for their $x$ and $z$ coordinates (other 2D projections were qualitatively similar)---for different chains and different numbers of training poses. \begin{figure}[thpb] \centering \framebox{\parbox{3.3in}{\includegraphics[width=240 pt]{residuals_10pertError_5MeasErr_all.png}}} \caption{Error residuals -- Left Arm (LA) chain calibration using the LARA, LALREye and LARALREye chains. Results visualized on 300 testing poses for each of 10 repetitions of the optimization, with random parameter initialization (3000 points in total per chain shown). (Left) 10 training poses; (Middle) 20 training poses; (Right) 50 training poses. Perturbation factor 10 and measurement errors of 5 mm for skin and 5 px for cameras were considered.} \label{fig:chains} \end{figure} \subsection{Observability analysis of individual chains} We conducted an observability analysis using the Singular Value Decomposition (SVD) of the identification Jacobian matrix $J = [J_1,...,J_n]$, where $n$ is the number of configurations in the training pose set and $J_n(i,j)=\left[\partial (X^r_i-X^e_i) \over \partial{\phi_j}\right]$, $\phi_j$ is the parameter $j$ to be estimated, and $(X^r_i-X^e_i)$ denotes the error between the real/observed ($X^r$) and estimated ($X^e$) value of the $i$\textsuperscript{th} coordinate in the given chain.\footnote{ E.g., for LALEye, $X$ corresponds to 2 errors: the errors on the coordinates $u$ and $v$ of the reprojection of the end-effector position into the cameras; for the LARA chain, $X$ corresponds to 3 numbers: the distances in the x, y and z coordinates between the right ($X^{r,R}$) and left arm ($X^{l,R}$) end-effector 3D positions.} The Jacobian matrix represents the sensitivity of the end-effector positions, or their camera reprojections, to changes of the individual parameters. Using the SVD, we obtain a vector of singular values $\sigma_i$. A comparison of the obtained singular values for individual chains for the task of estimating all DH parameters of the left arm (using the same training pose set) can be seen in Fig.~\ref{fig:observability}. We also evaluated the observability indices $O_1$~\cite{borm1989} and $O_4$~\cite{nahvi1996} (the performance of observability indices for industrial robot calibration was evaluated by Joubair~\cite{joubair2016}). The $O_1$ index is defined as: $O_1 = {(\sigma_1 \sigma_2 \cdots \sigma_m)^{1/m} \over {\sqrt{n}}}$, where $m$ is the number of independent parameters to be identified, $\sigma_i$ is the $i$\textsuperscript{th} singular value, and $n$ is the number of calibration configurations. The index $O_4$ is defined as: $O_4 = {\sigma_m^2 \over \sigma_1}$. See Fig.~\ref{fig:observability} (bottom panels). The LALEye chain for 10 poses has very low observability, caused by a rank-deficient Jacobian (we have 24 parameters to estimate but only 20 equations).
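A minimal sketch of this analysis (ours; computing the identification Jacobian by finite differences on a stacked residual function, such as the one sketched earlier, is an assumption about the implementation, and at least as many residual entries as parameters are assumed):
\begin{verbatim}
import numpy as np

def observability(residual_fn, phi, n_poses, eps=1e-6):
    # finite-difference identification Jacobian at the estimate phi
    r0 = residual_fn(phi)
    J = np.empty((r0.size, phi.size))
    for j in range(phi.size):
        step = np.zeros_like(phi)
        step[j] = eps
        J[:, j] = (residual_fn(phi + step) - r0) / eps
    sigma = np.linalg.svd(J, compute_uv=False)  # descending order
    m = phi.size               # independent parameters to identify
    # product in log-space for stability; rank-deficient J gives O1 = 0
    O1 = np.exp(np.log(sigma[:m]).mean()) / np.sqrt(n_poses)
    O4 = sigma[m - 1] ** 2 / sigma[0]
    return sigma, O1, O4
\end{verbatim}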
The highest observability is achieved in all cases for the largest chain, LARALREye, where the information from touch and both cameras was used. \begin{figure}[thpb] \centering \includegraphics[width=230pt]{Fig5_Top.png} \includegraphics[width=120pt]{Fig5_botLeft_n.pdf} \includegraphics[width=120pt]{Fig5_botRight_n.pdf} \caption{Observability -- Left Arm (LA) chain calibration (full DH) using different chain combinations. (Top) Singular values of the identification Jacobian for different chains used for calibration. Evaluation is performed over the same pose set for every chain. Red, green, turquoise, and blue lines denote 10, 20, 50, and 1000 poses in the training set, respectively. (Bottom left) Observability index $O_1$~\cite{borm1989}. (Bottom right) Observability index $O_4$~\cite{nahvi1996}.} \label{fig:observability} \end{figure} \subsection{Evaluation of error based on measurement noise} \label{sec:measError} We evaluated the effect of measurement noise in the individual sensors (touch, cameras) on the end-effector position error on the testing dataset---see Fig.~\ref{fig:MeasurementError}. With the same error in pixels on the cameras and in mm on the ``touch sensor'' (first two columns -- 2px/2mm, 5px/5mm), the LALREye chain (both eyes, no touch) and LARALREye (both eyes and touch) have the smallest final end-effector errors, with the ``multi-chain'' being even smaller. When the error on the cameras increases ($5E2T$, $10E2T$, $10E5T$), the camera chains (LALEye, LALREye) are affected, whereas the performance of the chain with touch (LARALREye) is not degraded. Conversely, more error on ``touch'' ($2E5T$, $2E10T$, $5E10T$) impacts the ``touch only'' chain (LARA), but LARALREye remains robust. \begin{figure}[thpb] \centering \framebox{\parbox{3.2in}{\includegraphics[width=230 pt]{Fig6_finger2.pdf}}} \caption{End-effector position accuracy for different combinations of measurement noise on the cameras and the ``touch sensor''. Different chains employed to estimate the DH parameters of the left arm (50 training poses, error evaluated over 300 testing poses, averaged over 10 repetitions). X-axis labels read as follows: first number -- error on the cameras (``Eyes'') in pixels; second number -- error on the touch sensor in mm (i.e., 5E2T denotes that we introduced zero-mean Gaussian error with 5 px and 2 mm variance to the cameras and touch, respectively).} \label{fig:MeasurementError} \end{figure} \subsection{Quality of DH parameter estimates} To get further insight and take advantage of the simulation study, where we have access to the ground truth values of all parameters, we also studied whether the optimization based on the end-effector error also leads to correct estimates of all DH parameters---focusing on the left arm (LA) chain. Fig.~\ref{fig:parsLARA} shows the results for all estimated parameters when the LA-RA (``self-touch'') chain was used for calibration, using different numbers of training poses. The errors on the length parameters (top panel) are on average distributed between approximately 1 and 10 mm. For the angular quantities, the errors are in the $0.1$ to $1^\circ$ range for the proximal joints. \begin{figure}[thpb] \centering \includegraphics[width=230 pt]{Fig7_AandDLARA.pdf} \includegraphics[width=230 pt]{Fig7_alpha_and_off_LARA.pdf} \caption{Quality of DH parameter estimation for the LA chain using the LA-RA chain. Errors on individual parameters after optimization for different numbers of poses: (Top) $a$ and $d$ parameters; (Bottom) $\alpha$ and \textit{offsets}.
Averaged over 10 repetitions, perturbation factor 5, measurement noise 5 px on cameras and 5 mm on touch.} \label{fig:parsLARA} \end{figure} Finally, having shown above that the ``self-touch and self-observation'' (LARALREye) chain slightly outperforms the ``stereo self-observation''-only chain (LALREye) (Fig.~\ref{fig:pertDeg} top, Fig.~\ref{fig:MeasurementError}), also in observability (Fig.~\ref{fig:observability}), in Fig.~\ref{fig:DHparametersEst} we can observe a similar trend in the estimated parameters of the LA chain against their ground-truth values. The parameter estimates obtained from LARALREye are significantly better for $d$ for all joints except the wristPr and elbow, and for $a$ for all shoulder joints. The other parameter estimates are comparable. The wrist joint calibration seems to be sensitive to the selection of training poses and will need further study. \begin{figure} \centering \includegraphics[width=240 pt]{Fig8__top.pdf} \includegraphics[width=240 pt]{Fig8_bottom.pdf} \caption{Absolute error of estimated DH parameters of the LA chain after optimization (50 training poses, perturbation factor 5, measurement noise 5 px on cameras and 5 mm on touch). (Top) $a$ and $d$ parameters. (Bottom) $\alpha$ and \textit{offsets}.} \label{fig:DHparametersEst} \end{figure} \section{Discussion and Conclusion} We quantitatively and systematically investigated the potential of automatic self-contained kinematic calibration (DH parameters including camera extrinsic parameters) of a humanoid robot employing different kinematic chains---in particular relying on self-observation and self-touch. The parameters varied were: (i) the type and number of intersecting kinematic chains used for calibration, (ii) the parameters and chains subject to optimization, (iii) the amount of initial perturbation of the kinematic parameters, (iv) the number of poses/configurations used for optimization, and (v) the amount of measurement noise in the end-effector positions / cameras. We also tracked the computation time, and while the details differ depending on the settings (chain calibrated, number of poses, etc.), a typical optimization run would not take more than tens of seconds on an older laptop PC. In addition to results w.r.t. the cost function itself (error on the end-effector or camera reprojection), a number of additional analyses were performed, including error residuals, errors on estimated parameters compared to ground truth, and an observability analysis. While some results were expected (such as improvement when more configurations are added, or poor performance when using self-observation from a single camera), the most notable findings are: (1) calibrating parameters of a single chain (e.g., one arm) by employing multiple kinematic chains (``self-observation'' and ``self-touch'') is superior in terms of optimization results (Fig.~\ref{fig:pertDeg} top) as well as observability (Fig.~\ref{fig:observability}); (2) when using multi-chain calibration, fewer poses suffice to get performance similar to when, e.g., only observation from a single camera is used (Fig.~\ref{fig:pertDeg} top); (3) the parameters of all chains (here 86 DH parameters) can be subject to calibration simultaneously, and with 50 (100) poses, an end-effector error of around 2 (1) mm can be achieved (Fig.~\ref{fig:pertDeg} bottom); (4) adding noise to a sensory modality degrades the performance of all calibrations employing the chains relying on this information (Fig.~\ref{fig:MeasurementError}). The last point is interesting to discuss in relation to Birbach et al.
\cite{Birbach2015}, who put forth the hypothesis that calibrating multiple chains simultaneously is superior to pairwise sequential calibration. Our results support this, provided that the measurement noise is small. If, instead, a certain modality is noisy, it may be beneficial to first employ chains that rely on more accurate measurements and then calibrate the ``noisy chain'' in a second step. We have only reported results from simulation; however, we claim that this was the right tool for this type of investigation. At the same time, our setup and choice of parameters drew on experiments performed on the real robot---self-touch \cite{Roncone_ICRA_2014} and self-observation \cite{Fanello2014,Vicente2016} in particular---which makes the results grounded in a real setting and should inform future experimentation on the iCub. The method to combine chains and analyze the results presented here can be transferred to other platforms as well. There are several aspects that we want to investigate further in the future. First, we note that while we did control for the angle between the palm and the contralateral finger for self-touch in the dataset generation, we did not monitor whether the contact point would also be visible. Additional analyses revealed that the contact point would not be occluded, and hence be visible by both cameras, in 35\% of the poses, and by one of the cameras in 53\%. We recomputed the observability with this subset of the dataset only and found no decrease. In the future, configurations with occlusions should be excluded from dataset generation. Second, we found that around 50 configurations (data points) suffice for a reasonable calibration. Finding an optimal subset of not more than 10 configurations would be desirable, such that recalibration can be performed rapidly. Here, clever pose selection will be necessary to guarantee adequate and stable performance. Third, the information from the two cameras can be used to reproject the observed position of the end-effector from the image coordinates of both eyes (pixels $(u,v)$) to 3D space ($X^{eye}$) (similarly to \cite{Fanello2014,Hirschmuller2008})---leading to yet another formulation of the optimization problem. Fourth, our investigation can be extended by also considering the contribution of inertial sensors---in the robot head \cite{Birbach2015} or distributed on the robot body \cite{Guedelha2016,Mittendorfer2012}. Fifth, the present method can be compared with filtering approaches \cite{Vicente2016,Zenha2018} or with methods that make fewer assumptions about the initial model available (e.g., \cite{Lanillos2018}). Finally, the self-touch scenario can also be turned around: from using a tactile array to calibrate kinematics \cite{Roncone_ICRA_2014,QiangLi2015} to calibrating the skin itself \cite{Albini2017}. \section*{ACKNOWLEDGMENT} We thank Alessandro Roncone for assistance with the models of the iCub robot in MATLAB and the source files leading to Fig.~\ref{fig:kinModel} left, and Ugo Pattacini for discussions, tips, and assistance with the use of the Cartesian solvers leading to the generation of the self-touch configurations. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} In what follows, $\mathbb{K}\xspace$ is an exact field, and $\polRing$ denotes the set of polynomials in $y$ whose coefficients are power series in $x$ over $\field$. \myparagraph{Problem and main result} Given a polynomial in $\polRing$, we are interested in computing its power series roots to some precision, as defined below. \begin{definition} \label{dfn:root} Let $\pol \in \Kx[y]$ and $\prc \in \ZZ_{>0}$. A power series $\rt \in \K[\mkern-4.2mu[ x ]\mkern-4mu]$ is called a \emph{root of $\pol$ to precision $\prc$} if $\pol(\rt) = 0 \bmod x^\prc$; the set of all such roots is denoted by $\rtset$. \end{definition} Our main problem (\cref{pbm:series_root}) asks, given $\pol$ and $\prc$, to compute a finite representation of $\rtset$; the fact that such a representation exists is explained below (\cref{thm:modular_roots_partition}). Throughout the paper, we count operations in $\mathbb{K}\xspace$ at unit cost, and we use the soft-O notation $\Osoft(\cdot)$ to give asymptotic bounds with hidden polylogarithmic factors. \begin{center} \fbox{ \begin{minipage}{8cm} \begin{problem} \label{pbm:series_root} ~\\ \emph{Input:} \begin{itemize} \setlength{\itemsep}{0pt} \item a precision $\prc \in \ZZp$, \item a polynomial $\pol \in \polRing$ known at precision $\prc$. \end{itemize} \emph{Output:} \begin{itemize} \item a (finite) list of pairs $(\rtpol_i, \rtxpt_i)_{1 \le i \le \nrts} \subset \mathbb{K}\xspace[x] \times \NN$ such that $\rtset \,=\, \bigcup_{1\le i\le \nrts} (\rtpol_i + x^{\rtxpt_i} \K[\mkern-4.2mu[ x ]\mkern-4mu])$ \end{itemize} \end{problem} \end{minipage} } \end{center} \medskip An algorithm solving this problem must involve finding roots of polynomials in $\mathbb{K}\xspace[y]$. The existence and complexity of root-finding algorithms for univariate polynomials over $\mathbb{K}\xspace$ depend on the nature of $\mathbb{K}\xspace$. In this paper, we assume that $\mathbb{K}\xspace$ is such that we can find the roots in $\mathbb{K}\xspace$ of a degree $n$ polynomial in $\mathbb{K}\xspace[y]$ in time $\mathsf{R_\K}\xspace(n)$, for some function $\mathsf{R_\K}\xspace: \NN \to \mathbb{R}$; the underlying algorithm may be deterministic or randomized. For instance, if $\mathbb{K}\xspace=\F_q$, we can take $\mathsf{R_\K}\xspace(n) \in \Osoft(n)$ using either a Las Vegas algorithm (in which case the runtime can be more precisely stated as $\Osoft(n\log(q))$~\cite[Cor.\,14.16]{von_zur_gathen_modern_2013}), or a deterministic one (with for instance a runtime $\Osoft(nk^2\sqrt{p})$, where we write $q=p^k$, $p$ prime~\cite{Shoup91}). We now state our main result: we separate the cost of the root-finding part of the algorithm, which may be randomized, from that of the rest of the algorithm, which is deterministic. \begin{theorem} \label{thm:series_root} There is an algorithm which solves \cref{pbm:series_root} using $\Osoft(\prc\pdeg)$ deterministic operations in $\field$, together with an extra $O(\prc\mathsf{R_\K}\xspace(\pdeg))$ operations, where $\pdeg = \deg(\pol)$. \end{theorem} A cost in $\Osoft(\prc\pdeg)$ is essentially optimal for \cref{pbm:series_root}. Indeed, if $Q=(y-f_1)\cdots (y-f_n)$, for some power series $f_1,\dots,f_n$ such that $f_i-f_j$ is a unit for all $i \ne j$, then the roots of $Q$ to precision $d$ are exactly the power series of the form $f_i + x^d g$, for some $i$ and some $g \in \K[\mkern-4.2mu[ x ]\mkern-4mu]$. In this case, solving \cref{pbm:series_root} involves computing all $f_i \bmod x^d$, which amounts to $\prc\pdeg$ elements in $\mathbb{K}\xspace$.
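As a small illustration of the output format (an example we add for concreteness): assume $\field$ has characteristic different from $2$, and let $\pol = y^2 - x^2$ and $\prc \ge 2$. Writing $\pol(\rt) = (\rt-x)(\rt+x)$, the two factors differ by $2x$, so at most one of them can have valuation greater than $1$; hence $\pol(\rt) = 0 \bmod x^\prc$ holds if and only if $\rt - x$ or $\rt + x$ has valuation at least $\prc-1$. A valid output is thus the list $(\rtpol_1, \rtxpt_1) = (x, \prc-1)$, $(\rtpol_2, \rtxpt_2) = (-x, \prc-1)$, describing $$\rtset = \big(x + x^{\prc-1}\K[\mkern-4.2mu[ x ]\mkern-4mu]\big) \cup \big(-x + x^{\prc-1}\K[\mkern-4.2mu[ x ]\mkern-4mu]\big).$$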
\myparagraph{Previous work} When the discriminant of $Q \in \K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ has $x$-valuation zero, or equivalently, when all $y$-roots of $Q_{|x=0}$ are simple (as in the example above), our problem admits an obvious solution: first, compute all $y$-roots of $Q_{|x=0}$ in $\mathbb{K}\xspace$, say $y_1,\dots,y_\ell$, for some $\ell \le n$, where $n = \deg Q$. Then, apply Newton iteration to each of these roots to lift them to power series roots $f_1,\dots,f_\ell$ of precision $d$; to go from precision say $d/2$ to $d$, Newton iteration replaces $f_i$ by \[ f_i - \frac{Q(f_i)}{Q'(f_i)} \bmod x^{d} \ , \] where $Q' \in \K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ is the formal derivative of $Q$. The bottleneck of this approach is the evaluation of all $Q(f_i)$ and $Q'(f_i)$. Using an algorithm for fast multi-point evaluation in the ring of univariate polynomials over $\K[\mkern-4.2mu[ x ]\mkern-4mu]/\idealGen{x^{d}}$, these evaluations can both be done in $\Osoft(dn)$ operations in $\mathbb{K}\xspace$. Taking all steps into account, we obtain the roots $f_1,\dots,f_\ell$ modulo $x^d$ using $\Osoft(dn)$ operations in $\mathbb{K}\xspace$; this is essentially optimal, as we pointed out above. In this case, the total time for root-finding is $\mathsf{R_\K}\xspace(n)$ (a small code sketch of this Newton iteration is given below). Thus, the non-trivial cases of Problem~\ref{pbm:series_root} arise when $Q_{|x=0}$ has multiple roots. In this case, leaving aside the cost of root-finding, which is handled in a non-uniform way in previous work, we are not aware of an algorithm with a cost similar to ours. The best cost bounds known to us are $\Osoft(n^2d)$, obtained in~\cite{alekhnovich_linear_2005}, with this cost estimate shown in~\cite{nielsen_sub-quadratic_2015}, and $\Osoft(nd^2)$, obtained in~\cite{berthomieu_polynomial_2013}. When $Q_{|x=0}$ has multiple roots, a natural generalization of our problem consists in computing Puiseux series solutions of $Q$. It is then customary to consider a two-stage computation: first, compute sufficiently many terms of the power series / Puiseux series solutions in order to be able to {\em separate} the branches, then switch to another algorithm to compute many terms efficiently. Most algorithms for the first stage compute the so-called singular parts of rational Puiseux expansions~\cite{Duval89} of the solutions. They are inspired by what we will call the {\em Newton-Puiseux} algorithm, that is, Newton's algorithmic proof that the field of Puiseux series $\mathbb{K}\xspace\langle\mkern-4.2mu\langle x \rangle\mkern-4mu\rangle$ is algebraically closed when $\mathbb{K}\xspace$ is algebraically closed of characteristic zero~\cite{Newton1736,Walker78}. In the case of Puiseux series roots, one starts by reading off the leading exponent $\gamma$ of a possible solution from the Newton polygon of the input equation $Q \in \mathbb{K}\xspace\langle\mkern-4.2mu\langle x \rangle\mkern-4mu\rangle [y]$. The algorithm then considers $\hat{Q} = Q(x^\gamma y)/x^s \in \mathbb{K}\xspace\langle\mkern-4.2mu\langle x \rangle\mkern-4mu\rangle[y]$, where $s$ is the valuation at $x$ of $Q(x^\gamma y)$. If $y_1,\ldots,y_\ell$ are the $y$-roots of ${\hat{Q}}_{|x=0}$, then these give the $x^\gamma$ terms of the Puiseux series roots of $Q$. For each $i$ we then replace $Q$ with $Q(x^\gamma (y_i + y)) / x^{s'}$, where $s'$ is the valuation at $x$ of $Q(x^\gamma(y_i + y))$. This allows us to compute the terms of the solutions one by one.
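To make the Newton iteration mentioned above concrete, here is a small self-contained Python sketch of ours, using naive quadratic arithmetic on truncated series with exact rational coefficients; an actual implementation would rely on fast truncated multiplication and fast series inversion to reach the quasi-linear costs discussed in this paper.
\begin{verbatim}
from fractions import Fraction

def mul(a, b, d):
    # product of two truncated series (coefficient lists), mod x^d
    c = [Fraction(0)] * d
    for i, ai in enumerate(a[:d]):
        for j, bj in enumerate(b[:d - i]):
            c[i + j] += ai * bj
    return c

def inv(a, d):
    # inverse of a series with a[0] != 0, mod x^d (naive)
    b = [Fraction(1) / a[0]]
    for k in range(1, d):
        s = sum((a[i] * b[k - i]
                 for i in range(1, min(k, len(a) - 1) + 1)), Fraction(0))
        b.append(-b[0] * s)
    return b

def peval(Q, f, d):
    # evaluate Q(f) mod x^d by Horner's rule in y; Q: list of series
    r = [Fraction(0)] * d
    for coeff in reversed(Q):
        r = mul(r, f, d)
        for i, ci in enumerate(coeff[:d]):
            r[i] += ci
    return r

def newton_root(Q, y0, d):
    # lift a simple root y0 of Q(0, y) to f with Q(f) = 0 mod x^d
    dQ = [[c * k for c in Q[k]] for k in range(1, len(Q))]  # dQ/dy
    f, prec = [Fraction(y0)], 1
    while prec < d:
        prec = min(2 * prec, d)
        corr = mul(peval(Q, f, prec), inv(peval(dQ, f, prec), prec), prec)
        f = [(f[i] if i < len(f) else Fraction(0)) - corr[i]
             for i in range(prec)]
    return f

# Example: Q = y^2 - (1 + x); lifting y0 = 1 gives sqrt(1 + x):
# newton_root([[-1, -1], [0], [1]], 1, 5)
#   -> [1, 1/2, -1/8, 1/16, -5/128]
\end{verbatim}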
The best algorithms to date~\cite{PoRy11,PoRy15} use an expected number of $\Osoft(n^2 \nu + n^3 +n^2 \log(q))$ operations in $\mathbb{K}\xspace$, if $\mathbb{K}\xspace = \F_q$ and where $\nu$ is the valuation of the discriminant of $Q$. These algorithms are randomized of the Las Vegas type, since they rely on Las Vegas root-finding in $\F_q[y]$. In the second stage, given the singular parts of the solutions, it becomes possible, for instance, to apply Newton iteration, as in~\cite{KuTr78}. If $Q$ is actually in $\mathbb{K}\xspace[x][y]$, one may alternatively derive from it a linear recurrence with polynomial coefficients satisfied by the coefficients of the solutions we are looking for; this allows us to compute them at precision $d$ using $O(dn)$ operations, that is, in time genuinely linear in $n,d$~\cite{ChCh86a,Chch86b} (keeping in mind that in both cases, we may need to know about $\nu$ terms of the solutions before being able to switch to the faster algorithm). We will discuss a similar observation in the context of our algorithm, in \cref{sec:alg:roots}. Using ideas akin to the Newton-Puiseux algorithm, Berthomieu, Lecerf, and Quintin gave in~\cite{berthomieu_polynomial_2013} an algorithm that computes roots of polynomials in $L[y]$, for a wide class of local rings $L$. In the particular case $L=\F_q\llbracket x\rrbracket$ with $q=p^k$, the expected runtime of their algorithm is $\Osoft(n d^2 + n \log(q)+ nd \log(k)/p)$ operations in~$\F_q$. Let us finally mention algorithms for polynomial factorization over local fields. Using the Montes algorithm~\cite{Montes99}, it is proved in~\cite{BaNaSt13} that one can compute a so-called OM-factorization of a degree $n$ polynomial $Q$ in $\F_q\langle\mkern-4.2mu\langle x \rangle\mkern-4mu\rangle[y]$ at precision $d$ using $\Osoft(n^2\nu+n \nu^2 + n\nu\log(q))$ operations, where $\nu$ is the valuation of the discriminant of $Q$; the relation to \emph{basic root sets}, defined below, remains to be elucidated. Sudan's and Guruswami-Sudan's algorithms for the list-decoding of Reed-Solomon codes~\cite{sudan_decoding_1997,guruswami_improved_1999} have inspired a large body of work, some of which is directly related to Problem~\ref{pbm:series_root}. These algorithms operate in two stages: the first stage finds a polynomial in $\mathbb{K}\xspace[x,y]$ satisfying certain constraints; the second one finds its factors of the form $y-f(x)$, for $f$ in $\mathbb{K}\xspace[x]$. The Newton-Puiseux algorithm can easily be adapted to compute such factors; in this context, it becomes essentially what is known as the Roth-Ruckenstein algorithm~\cite{roth_efficient_2000}; its cost is in $O(d^2n^2)$, omitting the work for univariate root-finding. In the context of Sudan's and Guruswami-Sudan's algorithms, we may actually be able to use Newton iteration directly, by exploiting the fact that we are looking for {\em polynomial} roots. Instead of computing power series solutions (that is, the Taylor expansions of these polynomial roots at the origin), one can as well start from another expansion point $x_0$ in $\mathbb{K}\xspace$; if the discriminant of $Q$ does not vanish at $x_0$, Newton iteration applies. If $\mathbb{K}\xspace$ is finite, one cannot exclude the possibility that all $x_0$ in $\mathbb{K}\xspace$ are roots of the discriminant of $Q$; if needed, one may then look for $x_0$ in an extension of $\mathbb{K}\xspace$ of small degree. Augot and Pecquet showed in~\cite{augot_hensel_2000} that in the cases appearing in Sudan's algorithm, there is always a suitable $x_0$ in $\mathbb{K}\xspace$.
However, for example for the Wu list decoding algorithm \cite{wu_new_2008} or for the list-decoding of certain algebraic geometry codes \cite{nielsen_sub-quadratic_2015}, one does seek truncated power series roots. In this case, one may use Alekhnovich's algorithm~\cite[App.]{alekhnovich_linear_2005}, which is a divide and conquer variant of the Roth-Ruckenstein algorithm. It solves Problem~\ref{pbm:series_root} using $n^{O(1)} \Osoft(d)$ operations in $\mathbb{K}\xspace$ plus calls to univariate root-finding; the refined analysis in~\cite{nielsen_sub-quadratic_2015} gives the runtime $\Osoft(n^2 d + nd\log q)$. \myparagraph{Outline} We start by describing the structure of the set of roots in \cref{sec:structure_roots}. We will see in particular how $\rtset$ can be described recursively as the finite union of sets of roots at a lower precision for shifts of $\pol$, that is, polynomials of the form $\pol(\rtpol+x^{\rtxpt}y)$. From this, we will be able to derive a divide-and-conquer algorithm which is essentially Alekhnovich's. The reason why the runtime of this algorithm is quadratic in $n$ is the growth of the (sum of the) degrees of these shifts. With the aim of controlling this degree growth, we conclude \cref{sec:structure_roots} with the definition of so-called \emph{reduced root sets}, for which we establish useful degree properties. In \cref{sec:affine_factors}, we detail a fast algorithm for the computation of \emph{affine factors}, which are polynomials having the same roots as the shifts but which can be computed more efficiently thanks to the degree properties of our reduced root sets. Finally, in \cref{sec:alg:roots}, we incorporate this into the divide and conquer approach, leading to our fast power series roots algorithm. \section{Structure of the set of roots} \label{sec:structure_roots} Recall the notation of \cref{pbm:series_root}. In the following analysis, we assume that $\pol$ is known to arbitrary precision, i.e.~$\pol \in \Kx[y]$. For convenience, we also define for any $d \leq 0$ that $\rtset = \K[\mkern-4.2mu[ x ]\mkern-4mu]$. First, we introduce basic notation. \begin{itemize} \item $v_x: \Kx[y]\setminus\{0\} \rightarrow \NN$ denotes the valuation at $x$, that is, $v_x(\pol)$ is the largest integer $k$ such that $x^k$ divides $\pol$, for any nonzero $\pol \in \Kx[y]$. \item For $\pol \in \Kx[y]$, we write $\polz$ for the univariate polynomial in $\mathbb{K}\xspace[y]$ obtained by replacing $x$ by $0$ in $\pol$. \item We denote by $\trcSer = \K[\mkern-4.2mu[ x ]\mkern-4mu]/\idealGen{x^\prc}$ the ring of power series in $x$ over $\field$ truncated at precision $\prc$. \item To avoid confusion, $\deg(\cdot)$ stands for the degree of some polynomial in $y$ over $\mathbb{K}\xspace$, over $\K[\mkern-4.2mu[ x ]\mkern-4mu]$, or over $\trcSer$, whereas the degree of polynomials in $\mathbb{K}\xspace[x]$ is denoted using $\deg_x(\cdot)$. \end{itemize} The next lemma follows from the above definitions, and shows that we can focus on the case $v_x(\pol)=0$. \begin{lemma} \label{lem:modular_roots_valuation} Let $\pol \in \Kx[y]$ be nonzero and let $\prc \in \ZZ_{>0}$. If $\polz = 0$, then $\rtset = \rtset[x^{-\val}\pol][\,\prc-\val]$, where $\val = v_x(\pol)$. \end{lemma} Now, we will focus on a compact way of representing root sets, and we will see that $\rtset$ always admits such a representation even though it is usually an infinite set.
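As a simple illustration (our example): for $\pol = y - x$ and any precision $\prc \in \ZZ_{>0}$, the set of roots $\rtset$ is exactly $x + x^{\prc}\K[\mkern-4.2mu[ x ]\mkern-4mu]$, an infinite set which is nonetheless described compactly by the single pair $(x, \prc)$, formed by a polynomial and a truncation index.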
Similar representations are also behind the correctness and the efficiency of the algorithms of Roth-Ruckenstein \cite{roth_efficient_2000}, of Alekhnovich \cite[App.]{alekhnovich_linear_2005}, and of Berthomieu-Lecerf-Quintin \cite[Sec.\,2.2]{berthomieu_polynomial_2013}. To support the divide-and-conquer structure of our algorithm, we further describe how these representations compose. \begin{definition}\label{def:basic_root_set} Let $\pol \in \Kx[y]$ be nonzero and let $\prc \in \ZZ_{>0}$. A \emph{basic root set} of $\pol$ to precision $d$ is a finite set of pairs $(f_i, t_i)_{1\le i\le \ell}$, each in $\mathbb{K}\xspace[x] \times \ZZ_{\geq 0}$, such that: \begin{itemize} \item $v_x(Q(f_i + x^{t_i} y)) \geq d$ for $1\le i\le\ell$, \item we have the identity \[ \rtset = \bigcup_{1\le i \le \ell} \left \{ f_i + x^{t_i} \K[\mkern-4.2mu[ x ]\mkern-4mu] \right \}. \] \end{itemize} For $d \le 0$, we define the unique basic root set of $\pol$ to precision $d$ as being $\{(0,0)\}$; note that it satisfies both conditions above. \end{definition} We remark that the first restriction on being a basic root set is key: for instance, $Q = y^2 + y \in \F_2[\mkern-4.2mu[ x ]\mkern-4mu][y]$ has $\rtset[Q][1] = \F_2[\mkern-4.2mu[ x ]\mkern-4mu]$. But $\{(0,0)\}$ is \emph{not} a basic root set because it does not satisfy the first property; rather, a basic root set is given by expanding the first coefficient: $\{ (0,1), (1,1) \}$. At precision $d=1$, one can easily build a basic root set of $Q$ which has small cardinality: \begin{lemma} \label{lem:root_set_1} Let $\pol \in \Kx[y]$ be such that $\polz \neq 0$, and let $y_1,\ldots,y_\ell$ be the roots of $\pol_{|x=0}$. Then, $(y_i,1)_{1 \le i \le \ell}$ is a basic root set of $Q$ to precision $1$. \end{lemma} \begin{proof} Take $i$ in $\{1,\dots,\ell\}$ and write the Taylor expansion of $Q(y_i+xy)$ as $Q(y_i+xy)=Q(y_i) + xR_i(y)$, for some $R_i\in\Kx[y]$. Since both terms in the sum have valuation at least $1$, we obtain that $s_i=v_x(Q(y_i+xy))$ is at least $1$. Furthermore, we remark that \begin{align*} \rtset[\pol][1] & = \{f \in \K[\mkern-4.2mu[ x ]\mkern-4mu] \mid \pol(f)=0 \bmod x\} \\ & = \{f \in \K[\mkern-4.2mu[ x ]\mkern-4mu] \mid \polz(f_0)=0\}, \end{align*} where $f_0$ is the constant coefficient of $f$. Thus, $\rtset[\pol][1]$ is the set of $f\in\K[\mkern-4.2mu[ x ]\mkern-4mu]$ whose constant coefficient is in $\{y_1,\dots,y_\ell\}$. \end{proof} \begin{proposition} \label{prop:roots_dc} Let $\pol \in \Kx[y]$ be such that $\polz \neq 0$ and let $ \prc', \prc$ be in $\ZZ_{\ge 0}$, with $\prc'\le \prc$. Suppose that $\pol$ admits a basic root set $(f_i, t_i)_{1 \leq i \leq \ell}$ to precision $\prc'$. Suppose furthermore that, for $1 \le i \le \ell$, $\pol(f_i + x^{t_i} y)/x^{s_i}$ admits a basic root set $(f_{i,j}, t_{i,j})_{1 \leq j \leq \ell_i}$ to precision $\prc - s_i$, where $s_i = v_x(\pol(f_i + x^{t_i} y))$. Then, a basic root set of $\pol$ to precision $\prc$ is given by \[ (f_i + f_{i,j}x^{t_i}, t_i + t_{i,j})_{1 \leq j \leq \ell_i, 1 \leq i \leq \ell} \ . \] \end{proposition} \begin{proof} For $1\le i \le \ell$, let $Q_i = Q(f_i + x^{t_i} y)/x^{s_i}$. Then, for all $i,j$, from the definition of basic root sets, we have \begin{align*} v_x\Big(Q(f_i + f_{i,j}x^{t_i} + x^{t_i + t_{i,j}}y)\Big) &= v_x\Big(\big(x^{s_i} Q_i\big)_{|y = f_{i,j} + x^{t_{i,j}}y}\Big) \\ &\geq s_i + (\prc - s_i) = \prc.
\end{align*} This proves that the first property of \cref{def:basic_root_set} holds. For the second property, we prove both inclusions leading to the identity $\rtset = \cup_{i,j} \{ f_i + x^{t_i}f_{i,j} + x^{t_i + t_{i,j}} \K[\mkern-4.2mu[ x ]\mkern-4mu] \}$. First, consider some $f \in \rtset$; since $d' \le d$, $f$ is in $ \rtset[\pol][d']$, so we can write $f = f_i + x^{t_i} g$, for some $i$ in $\{1,\dots,\ell\}$ and $g$ in $\K[\mkern-4.2mu[ x ]\mkern-4mu]$. Then, $\pol(f) = x^{\val_i}\pol_i(g) = 0 \bmod x^d$, and so $g \in \rtset[\pol_i][d-\val_i]$. This implies that $g \in f_{i,j} + x^{t_{i,j}}\K[\mkern-4.2mu[ x ]\mkern-4mu]$ for some $j$. Now consider a power series $g \in \rtset[\pol_i][d-\val_i]$ for some $i$. This means that $\pol_i(g) = 0 \bmod x^{\max(0, d-\val_i)}$, so that $Q(f_i + x^{t_i} g) = x^{s_i}\pol_i(g) = 0 \bmod x^d$, and therefore $f_i + x^{t_i} g$ is in $\rtset$. \end{proof} We now deduce, by induction on $d$, that any $\pol \in \Kx[y]$ admits a finite basic root set to precision $d$ for any $d \in \ZZ_{\geq 0}$. By \cref{lem:modular_roots_valuation} we can reduce to the case where $v_x(\pol) = 0$ and $\pol_{|x=0} \neq 0$. The claim is readily seen to be true for $d\le 0$ (take $\{(0,0)\}$) and $d=1$ (\cref{lem:root_set_1}). Suppose the claim holds for all $d' < d$, for some $d \ge 2$; we can then apply this property to $d-1$, obtaining a basic root set $(f_i, t_i)_{1 \leq i \leq \ell}$ of $\pol$ to precision $d-1$. We know that, with the notation of \cref{prop:roots_dc}, $s_i \ge d-1$ holds for all $i$, so in particular $s_i \ge 1$, and thus $d-s_i < d$. Then, applying again the induction property to each of $(Q_i,d-s_i)_i$, the conclusion of \cref{prop:roots_dc} establishes our claim. These results can be used to build basic root sets recursively, by either applying \cref{lem:root_set_1} iteratively or using \cref{prop:roots_dc} in a divide-and-conquer fashion with \cref{lem:root_set_1} applied at the leaves. As discussed in \cref{sec:intro}, this recursive approach is similar to the Newton-Puiseux algorithm. These iterative and divide-and-conquer solutions to \cref{pbm:series_root} are known in coding theory as the Roth-Ruckenstein algorithm \cite{roth_efficient_2000} and the Alekhnovich algorithm \cite[App.]{alekhnovich_linear_2005}. Below, we describe the latter algorithm in detail (\cref{alg:dnc_roots}), since our new algorithm runs along the same lines (\cref{alg:roots}). We will not further discuss the correctness or complexity of \cref{alg:dnc_roots}, but rather refer to \cite[App.]{alekhnovich_linear_2005} or \cite[App.\,A]{nielsen_sub-quadratic_2015}.
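To illustrate the recursive construction on a small example of our own, suppose $\mathrm{char}(\mathbb{K}\xspace) \neq 2$ and take $\pol = y^2 - x^2$ with target precision $\prc = 3$. At precision $1$, \cref{lem:root_set_1} yields the basic root set $\{(0,1)\}$, since $\polz = y^2$ has the single root $0$. The corresponding shift is $\pol(xy) = x^2(y^2 - 1)$, so $s_1 = 2$ and $\pol_1 = y^2 - 1$; a basic root set of $\pol_1$ to precision $\prc - s_1 = 1$ is $\{(1,1),(-1,1)\}$. Composing as in \cref{prop:roots_dc} gives the basic root set $\{(x,2),(-x,2)\}$ of $\pol$ to precision $3$, in accordance with the fact that the roots of $\pol$ to precision $3$ are exactly \[ \left(x + x^2 \K[\mkern-4.2mu[ x ]\mkern-4mu]\right) \cup \left(-x + x^2 \K[\mkern-4.2mu[ x ]\mkern-4mu]\right). \]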
\begin{algorithm}[h] \caption{\textbf{:} \algoname{DnCSeriesRoots} \cite{alekhnovich_linear_2005}} \label{alg:dnc_roots} \begin{algorithmic}[1] % \Require{$\prc \in \ZZp$ and $\pol \in \trcSerPol$ with $\polz \neq 0$.} \Ensure{A basic root set of $\pol$ to precision $\prc$.} \If{$d = 1$} \State $(y_i)_{1 \le i \le \nrts} \leftarrow$ roots of $\polz \in \field[y]$ \State \Return $(y_i, 1)_{1\le i\le\ell}$ \Else \State\label{alek:5} $(f_i, t_i)_{1\le i\le \nrts} \leftarrow \algoname{DnCSeriesRoots}(\pol \bmod x^{\ceil{\prc/2}},\ceil{\prc/2})$ \State $(\pol_i)_{1\le i\le \nrts} \leftarrow (\pol(f_i + x^{t_i}y) \bmod x^d)_{1\le i\le \nrts}$ \State $(\val_i)_{1\le i\le \nrts} \leftarrow (v_x(\pol_i))_{1\le i\le \nrts}$ \For{$1 \le i \le \nrts$} \If{$\val_i \ge \prc$} \State $(\rtpol_{i,1},\rtxpt_{i,1}) \leftarrow (0,0)$ and $\nrts_i \leftarrow 1$ \Else \State $(\rtpol_{i,j},\rtxpt_{i,j})_{1\le j\le \nrts_i} \leftarrow \algoname{DnCSeriesRoots}(x^{-\val_i}\pol_i, \prc-\val_i)$ \EndIf \EndFor \State \Return $(\rtpol_i + x^{\rtxpt_i} \rtpol_{i,j}, \rtxpt_i + \rtxpt_{i,j})_{1\le j\le \nrts_i, 1\le i \le \nrts}$. \EndIf \end{algorithmic} \end{algorithm} The next step is to prove that there are special, small basic root sets, and that these also compose in a way similar to \cref{prop:roots_dc}. In order to formulate this, we first introduce a generalization of root multiplicity to our setting. \begin{definition} Let $(f, t) \in \mathbb{K}\xspace[x] \times \ZZp$ be such that $f$ is nonzero and $f=g+ f_{t-1}x^{t-1}$ for some $g\in\mathbb{K}\xspace[x]$ with $\deg_x(g) < t-1$. For $\pol \in \Kx[y]\setminus\{0\}$, we consider the polynomial of valuation zero \[ R=Q(g + x^{t-1}y)/x^{v_x(Q(g + x^{t-1}y))} \in \Kx[y]. \] Then, the \emph{root multiplicity of $(f,t)$ in $Q$} is the root multiplicity of $f_{t-1}$ in $R_{|x=0} \in \mathbb{K}\xspace[y]$. \end{definition} Note that if $f_{t-1}$ is not a root of $R_{|x=0}$, the root multiplicity of $(f,t)$ is 0. Also, if $t=1$, so that $f=f_0$ is in $\mathbb{K}\xspace$, and if $\pol_{|x=0} \ne 0$, the root multiplicity of $(f_0,1)$ is simply the multiplicity of $f_0$ in $\pol_{|x=0}$. \begin{definition} \label{def:reduced_root} Let $\pol \in \Kx[y]$ be such that $\polz \neq 0$ and let $d$ be in $\ZZ$. Suppose that $(f_i, t_i)_{1 \leq i \leq \ell}$ is a basic root set of $Q$ to precision $d$. Then, we say that $(f_i, t_i)_{1 \leq i \leq \ell}$ is a \emph{reduced root set} if the following holds: \begin{itemize}[leftmargin=0.5cm] \item either $d \leq 0$, \item or $d> 0$, and all the $f_i$'s are nonzero, and the following points are all satisfied, where for $1 \le i \le \ell$, we write $s_i = v_x(Q(f_i + x^{t_i}y))$, $Q_i = Q(f_i + x^{t_i}y)/x^{s_i}$, and we write $m_i$ for the root multiplicity of $(f_i, t_i)$ in $Q$: \begin{enumerate} \item $m_i \geq 1$ for $1\le i\le \ell$, \item $\deg({Q_i}_{|x=0}) \leq m_i$ for $1\le i \le\ell$, and \item $\sum_{1\le i\le\ell} m_i \;\leq \deg(Q_{|x=0})$. \end{enumerate} \end{itemize} \end{definition} It follows from the restrictions \emph{(1)} and \emph{(3)} that $\ell \leq \deg(Q_{|x=0})$. Mimicking the structure of the first half of the section, we now prove the existence of reduced root sets for $d = 1$ and then give a composition property. The next lemma is inspired by~\cite[Lem.\,1.1]{alekhnovich_linear_2005}. \begin{lemma} \label{lem:onestep_roots_partition} Let $\pol \in \Kx[y]$ be such that $\polz \neq 0$.
The basic root set of $\pol$ to precision $1$ defined in \cref{lem:root_set_1} is reduced. \end{lemma} \begin{proof} Let $y_1,\ldots,y_\ell$ be the roots of $\pol_{|x=0}$, and, for $1\le i\le\ell$, let $\val_i = v_x(Q(y_i+xy))$, $\pol_i = Q(y_i+xy)/x^{\val_i}$, and let $m_i$ be the root multiplicity of $y_i$ in $\pol_{|x=0}$. The inequalities $m_i \ge 1$, for $1\le i \le\ell$, and $\sum_i m_i \leq \deg(Q_{|x=0})$ are clear. Consider now a fixed index $i$; it remains to prove that $\deg({\pol_i}_{|x=0}) \leq m_i$. There are $P \in \mathbb{K}\xspace[y]$ and $R \in \Kx[y]$ such that $P(y_i) \neq 0$ and $\pol = (y - y_i)^{\mult_i}P(y) + x R$. Then \[ x^{\val_i} \pol_i = \pol(y_i + xy) = (xy)^{\mult_i}P(y_i + xy) + x R(y_i + xy) \, . \] The right-hand side reveals the following: \begin{itemize} \item Any monomial $x^\alpha y^\beta$ in $x^{\val_i}\pol_i$ satisfies $\alpha \ge \beta$, and hence $\deg({\pol_i}_{|x=0}) \le \val_i$. \item $x^{\val_i} \pol_i$ contains the term $(xy)^{\mult_i} P(y_i)$, since this appears in $(xy)^{\mult_i}P(y_i + xy)$ and it cannot be cancelled by a term in $xR(y_i + xy)$, since all monomials there have greater $x$-degree than $y$-degree. \end{itemize} These two points imply $\deg({\pol_i}_{|x=0}) \leq s_i \leq m_i$. \end{proof} The following theorem is exactly the statement of \cref{prop:roots_dc} except that ``basic'' has been replaced by ``reduced''. \begin{theorem} \label{thm:modular_roots_partition} Let $\pol \in \Kx[y]$ be such that $\polz \neq 0$ and let $ \prc', \prc$ be in $\ZZ_{\ge 0}$, with $\prc'\le \prc$. Suppose that $\pol$ admits a reduced root set $(f_i, t_i)_{1 \leq i \leq \ell}$ to precision $\prc'$. For $i = 1,\ldots,\ell$, suppose furthermore that $\pol(f_i + x^{t_i} y)/x^{s_i}$ admits a reduced root set $(f_{i,j}, t_{i,j})_{1 \leq j \leq \ell_i}$ to precision $\prc - s_i$, where $s_i = v_x(\pol(f_i + x^{t_i} y))$. Then a reduced root set of $\pol$ to precision $\prc$ is given by \[ (f_i + f_{i,j}x^{t_i}, t_i + t_{i,j})_{1 \leq j \leq \ell_i, 1 \leq i \leq \ell} \ . \] \end{theorem} \begin{proof} By \cref{prop:roots_dc} it is clear that the specified set is a basic root set, and it remains to verify the additional restrictions of \cref{def:reduced_root}. Introduce for each $i,j$ \[ Q_{i,j} = Q(f_i + f_{i,j}x^{t_i} + x^{t_i + t_{i,j}}y)/x^{s_i + s_{i,j}} = Q_i(f_{i,j} + x^{t_{i,j}}y)/x^{s_{i,j}} \ , \] where $\pol_i = \pol(f_i + x^{t_i} y)/x^{s_i}$ and $s_{i,j} = v_x(Q_i(f_{i,j} + x^{t_{i,j}}y))$. Consider first for some $i$ the case $d - s_i \leq 0$. Then $\ell_i = 1$ and $(f_{i,1}, t_{i,1}) = (0,0)$, and so the root multiplicity $m_{i,1}$ of $(f_i + f_{i,1}x^{t_i}, t_{i} + t_{i,1})$ in $Q$ is $m_i$, which is positive by assumption. Also $Q_{i,1} = Q_i$, so $\deg({Q_{i,1}}_{|x=0}) = \deg({Q_{i}}_{|x=0})$, which is at most $m_i = m_{i,1}$ by assumption. Finally, $\sum_j m_{i,j} = m_{i,1} = m_i$. We will use the latter fact momentarily to prove the third item of the reduced root definition. Consider next an $i$ where $d - s_i > 0$. In this case $t_{i,j} > 0$ for all $1 \leq j \leq \ell_i$, and the root multiplicity of $(f_i + f_{i,j}x^{t_i}, t_i + t_{i,j})$ in $Q$ equals the root multiplicity $m_{i,j}$ of $(f_{i,j}, t_{i,j})$ in $Q_i$, which is positive by assumption. The assumptions also ensure that $\deg({Q_{i,j}}_{|x=0}) \leq m_{i,j}$, and $\sum_j m_{i,j} \leq \deg({Q_i}_{|x=0}) \leq m_i$. Thus, the first two restrictions on being a reduced root set are satisfied for each element.
All that remains is the third restriction: but using our previous observations, we have $\sum_i \sum_j m_{i,j} \leq \sum_i m_i$, and this is at most $\deg(Q_{|x=0})$ by assumption. \end{proof} To solve \cref{pbm:series_root} we will compute a reduced root set using \cref{lem:onestep_roots_partition} and \cref{thm:modular_roots_partition}. Note that it follows that a reduced root set is essentially unique: apart from possible redundant elements among the $f_i$, non-uniqueness would only be due to unnecessarily expanding a coefficient in a root $(f,t)$, that is, replacing that root by the $|\mathbb{K}\xspace|$ roots $(f + ax^t, t+1)_{a \in \mathbb{K}\xspace}$. Of course this could only be an issue if $\mathbb{K}\xspace$ is finite and if $\deg(\pol_{|x=0})$ is very large. Our algorithm, as well as previous ones, computes this ``minimal'' set of reduced roots. According to \cref{thm:modular_roots_partition}, the total number of field elements required to represent this minimal set cannot exceed $ \prc \deg(Q_{|x=0}) \le \prc\deg(Q)$. \section{Affine factors of the shifts} \label{sec:affine_factors} Appendix~A of~\cite{nielsen_sub-quadratic_2015} gives a careful complexity analysis of \cref{alg:dnc_roots}, and proves that it runs in time $\Osoft(d n^2 + d n \mathsf{R_\K}\xspace)$, where $n=\deg(\pol)$. The main reason why the cost is quadratic in $\deg(\pol)$ is that all the shifted polynomials $\pol_i = x^{-\val_i}\pol(\rtpol_i + x^{\rtxpt_i}y)$ can have large degree, namely up to $\deg(\pol)$. Thus, merely representing the $\pol_i$'s may use a number of field elements quadratic in $\deg(\pol)$. Nonetheless, we are actually not interested in these shifts themselves, but only in their reduced root sets. The number of these roots is well controlled: the shifts have altogether a reduced root set of at most $\deg(\polz)$ elements. Indeed, by definition, we know that $\deg({\pol_i}_{|x=0})$ is at most the multiplicity $\mult_i$ of the root $(\rtpol_i,\rtxpt_i)$, and the sum of these multiplicities is at most $\deg(\polz)$. The difficulty we face now is that we want to efficiently compute reduced root sets of the shifts without fully computing these shifts. To achieve this, we compute for each shift $\pol_i$ a factor of it which has the same roots and whose degree is $\deg({\pol_i}_{|x=0}) \le \mult_i$, {\em without entirely computing $\pol_i$ itself}. We design a fast algorithm for computing these factors, by using ideas from \cite[Algo.\,Q]{Musser75}, into which we also incorporate fast modular reduction techniques so as to carefully control the quantity of information we process concerning the shifts. The next result formalizes the factorization we will rely on. It is a direct consequence of the Weierstrass preparation theorem for multivariate power series \cite[VII.\S1.~Cor.\,1 of Thm.\,5]{ZarSam60}. \begin{theorem} \label{thm:affine_factor} Let $\pol \in \K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ be such that $\polz \neq 0$. Then, there exist unique $\aff, \coaff \in \K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ such that $\pol = \aff \coaff$, $\aff$ is monic and $\coaff_{|x=0} \in \field\setminus\{0\}$. \end{theorem} In the case at hand, one may as well derive existence and uniqueness of $A$ and $B$ (together with a slow algorithm to compute them) by writing their unknown coefficients as $A=a_0(y)+ x a_1(y) + \cdots$ and $B=b_0+xb_1(y)+\cdots$, with $b_0$ in $\mathbb{K}\xspace\setminus\{0\}$ and all $a_i$'s ($i \ge 1$) of degree less than that of $a_0$.
Extracting coefficients of $x^0,x^1,\dots$, we deduce that the relation $Q=AB$ defines the $a_i$'s and $b_i$'s uniquely. In what follows, $\aff$ is called the \emph{affine factor} of $\pol$. Remark that if we start from $\pol$ in $\trcSerPol$, we can still define its affine factor as a polynomial in $\trcSerPol$, by reducing modulo $x^d$ the affine factor of an arbitrary lift of $\pol$ to $\K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ (the construction above shows that the result is independent of the lift). Our algorithm will compute the affine factors $(\aff_i)_{1\le i\le \nrts}$ of the shifts $(\pol_i)_{1\le i\le \nrts}$ at some prescribed precision $d$ in $x$, having as input $\pol$ and the shifting elements $(\rtpol_i + x^{\rtxpt_i}y)_{1\le i\le \nrts}$. A factorization $\pol_i = \aff_i \coaff_i$ can be computed modulo any power $x^d$ of $x$ from the knowledge of $\pol_i$ by means of Hensel lifting \cite[Algo.\,Q]{Musser75}, doubling the precision at each iteration. However, the above-mentioned degree bounds indicate that neither the shifts $(\pol_i)_i$ nor the cofactors $(\coaff_i)_i$ may be computed modulo $x^d$ in time quasi-linear in $\deg(\pol)$ and $\prc$: the key of our algorithm is to show how to compute the affine factors $\aff_i$ at precision $d$ directly from $Q$ within the prescribed time bounds. (Hensel lifting factorization techniques were also used in~\cite{berthomieu_polynomial_2013}, but in a context without the degree constraints that prevent us from computing the shifts $\pol_i$.) Hereafter, $A \quo B$ and $A \rem B$ denote the quotient and the remainder in the division of the polynomial $A$ by the monic polynomial $B$. The input of the algorithm is the polynomial $Q$ known modulo $x^d$; as output, we compute the affine factors $A_i$ of the shifts at respective precisions $d-s_i$, together with the valuation $s_i$; if $s_i \ge d$, we detect it and return $(0,d)$. The initialization consists in computing the affine factors of the $x$-constant polynomials $({\pol_i}_{|x=0})_{1\le i\le \nrts}$. If these polynomials are known, this is straightforward: the affine factor of ${\pol_i}_{|x=0}$ is itself divided by its leading coefficient, which is a nonzero constant from $\field$. It turns out that computing these polynomials is not an issue; remark that the sum of their degrees is at most $\mult_1 + \cdots + \mult_\nrts \le \deg(\pol)$. Explicitly, we first compute the remainders $(\pol(\rtpol_i + x^{\rtxpt_i} y) \rem y^{\mult_i+1})_i$ via fast modular reduction techniques; then, we can both retrieve the valuations $(\val_i)_i = (v_x(\pol(\rtpol_i + x^{\rtxpt_i} y)))_i$ (or, more precisely, $\val^*_i=\min(\val_i, d)$), and, when $\val_i < d$, the $x$-constant terms of $\pol_i=x^{-\val_i} \pol(\rtpol_i + x^{\rtxpt_i} y)$ to carry out the initialization step (\cref{line:init} to \cref{line:endinit} in \cref{alg:affine_factors}). Before continuing to describe the algorithm, we detail one of its main building blocks (\cref{alg:shifted_remaindering}): the fast computation of simultaneous shifted remainders via multiple modular reduction. \begin{algorithm}[h] \caption{\textbf{:} \algoname{ShiftedRem}} \label{alg:shifted_remaindering} \begin{algorithmic}[1] % \Require a commutative ring $\ring$, a polynomial $\pol \in \ring[y]$, and triples $(\aff_i,\rt_i,r_i)_{1\le i\le \nrts} \in \ring[y] \times \ring \times \ring$, with the $\aff_i$'s monic. \Ensure the remainders $\pol(\rt_i + r_i y) \rem \aff_i$ for $1\le i \le \nrts$.
\State $(\bar\aff_i)_{1\le i\le \nrts} \leftarrow (\sum_{0\le j\le\delta_i} r_i^{\delta_i-j} a_{i,j} y^j)_{1\le i\le\nrts}$ \\ \hfill where $(\aff_i)_{1\le i\le \nrts} = (\sum_{0\le j\le\delta_i} a_{i,j} y^j)_{1\le i\le\nrts}$ with $a_{i,\delta_i}=1$ \State $(\hat\aff_i)_{1\le i\le \nrts} \leftarrow (\bar\aff_i(y-\rt_i))_{1\le i\le \nrts}$ \State $(\hat\rmd_i)_{1\le i\le \nrts} \leftarrow$ $(\pol \rem \hat\aff_i)_{1 \le i \le \nrts}$ \State $(\rmd_i)_{1\le i\le\nrts} \leftarrow (\hat \rmd_i(\rt_i+r_i y))_{1 \le i \le \nrts}$ \State \Return $(\rmd_i)_{1\le i\le\nrts}$ \end{algorithmic} \end{algorithm} \begin{proposition} \label{prop:shifted_remaindering} \cref{alg:shifted_remaindering} is correct and uses \[ \Osoft{(\deg(\pol) + \deg(\aff_1 \cdots \aff_\nrts))} \] operations in $\ring$. \end{proposition} \begin{proof} Let $i \in \{1,\ldots,\nrts\}$. Since $\hat\aff_i$ is monic, the remainder $\hat\rmd_i = \pol \rem \hat\aff_i$ is well-defined, and $\pol = P_i \hat\aff_i + \hat\rmd_i$ with $\deg(\hat\rmd_i) < \deg(\hat\aff_i)$ and $P_i \in \ring[y]$. Then, we have \begin{align*} \pol(\rt_i + r_i y) & = P_i(\rt_i + r_i y) \hat\aff_i(\rt_i + r_i y) + \hat\rmd_i(\rt_i + r_i y) \\ & = P_i(\rt_i + r_i y) \bar\aff_i(r_i y) + \rmd_i(y) \\ & = P_i(\rt_i + r_i y) r_i^{\delta_i} \aff_i(y) + \rmd_i(y), \end{align*} which ensures $\rmd_i = \pol(\rt_i + r_i y) \rem \aff_i(y)$, hence the correctness. Concerning the cost bound, the polynomial $\bar\aff_i$ is computed using at most $2\delta_i$ multiplications in $\ring$, where $\delta_i = \deg(\aff_i)$, and then $\hat\aff_i$ is computed by fast shifting using $\Osoft(\delta_i)$ operations in $\ring$ \cite[Thm.\,9.15]{von_zur_gathen_modern_2013}. The conclusion follows, since fast remaindering can be used to compute all remainders $(\hat\rmd_1,\ldots,\hat\rmd_\nrts)$ simultaneously in $\Osoft{(\deg(\pol) + \delta_1 + \cdots + \delta_\nrts)}$ operations in $\ring$. Indeed, we start by computing the subproduct tree in $\Osoft(\delta_1+\cdots+\delta_\nrts)$ operations \cite[Lem.\,10.4]{von_zur_gathen_modern_2013}, which gives us in particular the product $\hat\aff_1 \cdots \hat\aff_\nrts$. Then, we compute the remainder $\hat\rmd = \pol \rem \hat\aff_1 \cdots \hat\aff_\nrts$, which can be done in $\Osoft{(\deg(\pol) + \delta_1 + \cdots + \delta_\nrts)}$ operations in $\ring$ using fast division \cite[Thm.\,9.6]{von_zur_gathen_modern_2013}. Finally, the sought $\hat\rmd_i = \hat\rmd \bmod \hat\aff_i$ are computed by going down the subproduct tree, which costs $\Osoft{(\delta_1 + \cdots + \delta_\nrts)}$ operations in $\ring$ \cite[Cor.\,10.17]{von_zur_gathen_modern_2013}. \end{proof} \begin{algorithm}[t] \caption{\textbf{:} \algoname{AffineFacOfShifts}} \label{alg:affine_factors} \begin{algorithmic}[1] % \Require a precision $\prc\in\ZZp$, a polynomial $\pol \in \trcSerPol$ such that $\polz \neq 0$, and triples $(\rtpol_i,\rtxpt_i,\mult_i)_{1\le i\le \nrts} \subset \trcSer \times \ZZp \times \ZZp$. \Ensure $(\aff_i,\val_i)_{1 \le i \le \nrts}$ with $(\aff_i,\val_i)= (0,\prc)$ if $\pol(\rtpol_i + x^{\rtxpt_i} y) = 0$ in $\trcSerPol[\prc]$, and otherwise $\val_i = v_x(\pol(\rtpol_i + x^{\rtxpt_i} y)) < \prc$ and $\aff_i \in \trcSerPol[\prc-\val_i]$ is the affine factor of $Q_i=x^{-\val_i}\pol(\rtpol_i + x^{\rtxpt_i} y)$ at precision $d-s_i$. \Assume $\mult_i$ is such that $\aff_i = 0$ or $\deg(\aff_i) \le \mult_i$, for $1 \le i \le \nrts$.
\State \label{line:init} $\alive \leftarrow (1,\ldots,\nrts)$ \hfill /* \emph{list of not yet computed factors} */ \State \label{line:init_shiftedrem} $(\rmd_i)_{1 \le i\le \nrts} \leftarrow \algoname{ShiftedRem}(\trcSer,\pol,(y^{\mult_i+1},\rtpol_i,x^{\rtxpt_i})_{1 \le i\le \nrts})$ \State /* \emph{Process trivial affine factors} */ \For{$1 \le i \le \nrts$ such that $\rmd_i = 0$} \State $(\aff_i,\val_i) \leftarrow (0,\prc)$, and remove $i$ from $\alive$ \EndFor \State /* \emph{Set valuations and compute affine factors $\bmod \:x$} */ \For{$i \in \alive$} \State $\val_i \leftarrow v_x(\rmd_i)$ \State $\bar\rmd_i \in \mathbb{K}\xspace[y] \,\leftarrow (x^{-\val_i}\rmd_i)_{|x=0}$ \State $\coaffi_i \in \field\setminus\{0\} \,\leftarrow $ inverse of the leading coefficient of $\bar\rmd_i$ \State $\aff_i \in \trcSerPol[1] \,\leftarrow \coaffi_i \bar\rmd_i$ \EndFor \label{line:endinit} \State /* \emph{Each iteration doubles the precision} */ \For{$1 \le k \le \lceil\log_2(\prc)\rceil$} \For{$i \in \alive$ such that $\prc-\val_i \le 2^{k-1}$} \State remove $i$ from $\alive$ \EndFor \State $K \leftarrow 2^{k-1} ; \;\; (\delta_i)_{i\in\alive} \leftarrow (\min(K,\prc-\val_i - K))_{i\in\alive}$ \State \rule[0.03cm]{0pt}{\baselineskip} /* \emph{Lift the affine factors $(\aff_i)_i$ to precisions $\delta_i+K$} */ \State \label{line:rem_a} $(\rmd_i)_{i\in\alive} \leftarrow \algoname{ShiftedRem}(\trcSer,\pol,(\bar\aff_i,\rtpol_i,x^{\rtxpt_i})_{i\in\alive})$, \\ \hspace{1.1cm} \rule[-0.15cm]{0pt}{\baselineskip} where $\bar\aff_i$ is $\aff_i$ lifted into $\trcSerPol$ \State \label{line:lift_a_start} $(\aff_{i\top} \in \trcSerPol[\delta_i])_{i\in\alive} \leftarrow ((x^{-\val_i-K} \rmd_i \coaffi_i) \rem \aff_i)_{i\in\alive}$, \\ \hspace{1.1cm} \rule[-0.15cm]{0pt}{\baselineskip} with $x^{-\val_i-K} \rmd_i$, $\coaffi_i$, and $\aff_i$ truncated at precision $\delta_i$ \State \label{line:lift_a_end} $(\aff_i \in \trcSerPol[\delta_i+K])_{i\in\alive} \;\leftarrow (\aff_i + x^{K} \aff_{i\top})_{i\in\alive}$ \State \rule[0.05cm]{0pt}{\baselineskip} /* \emph{Find the cofactor inverses $(\coaffi_i)_i$ at precisions $\delta_i+K$} */ \State \label{line:rem_aa} $(S_i)_{i\in\alive} \leftarrow \algoname{ShiftedRem}(\trcSer,\pol,(\bar\aff_i^2,\rtpol_i,x^{\rtxpt_i})_{i\in\alive})$, \\ \hspace{1.1cm} \rule[-0.15cm]{0pt}{\baselineskip} where $\bar\aff_i$ is $\aff_i$ lifted in $\trcSerPol$ \State \label{line:lift_c} $(\coaffi_i \in \trcSerPol[\delta_i+K])_{i\in\alive} \leftarrow (((x^{-\val_i} S_i) \quo \aff_i)^{-1} \rem \aff_i)_{i\in\alive}$, \\ \hspace{1.1cm} \rule[-0.1cm]{0pt}{\baselineskip} with $x^{-\val_i} S_i$ and $\aff_i$ truncated at precision $\delta_i+K$ \EndFor \State \Return $(\aff_i,\val_i)_{1\le i\le\nrts}$ \end{algorithmic} \end{algorithm} Now, let us describe how we implement the Hensel lifting strategy to compute the sought affine factors without fully computing the shifts. In addition to the affine factors, we will make use of partial information on the inverse of the cofactor: we compute this inverse modulo the affine factor. Let $1 \le i \le \nrts$ and assume that we have computed, at precision $K$, \begin{itemize} \item the affine factor $\aff_i \in \trcSerPol[K]$ of $\pol_i \bmod x^{K}$, \item $\coaffi_i = \coaff_i^{-1} \rem \aff_i \in \trcSerPol[K]$, where $\coaff_i\in \trcSerPol[K]$ denotes the cofactor such that $\aff_i \coaff_i = \pol_i \bmod x^{K}$.
\end{itemize} Note that $\coaff_i$ is invertible as a polynomial of $\trcSerPol[K]$ since by definition ${\coaff_i}_{|x=0} \in \field\setminus\{0\}$. Thus, our requirement is that the inverse of $\coaff_i$ coincides with $\coaffi_i$ when working modulo $\aff_i$. Now, we want to find similar polynomials when we increase the precision to $2K$. The main point concerning efficiency is that we will be able to do this by only considering computations modulo the affine factors $\aff_i$ and their squares; remember that we control the sum of their degrees. In the algorithm, we increase for each $i$ the precision from $K$ to $K+\delta_i$, which is taken as the minimum of $2K$ and $\prc-\val_i$: in the latter case, this is the last iteration which affects $\aff_i$, since it will be known at the wanted precision $\prc-\val_i$. First, we use fast remaindering to get $R_i= \pol(\rtpol_i + x^{\rtxpt_i} y) \rem \aff_i$ at precision $d$ in $x$, simultaneously for all $i$ (see \cref{line:rem_a}); this gives us $ \pol_i \rem \aff_i=x^{-\val_i}R_i\rem \aff_i $ at precision $d-\val_i$, and thus at precision at least $K+\delta_i$. Since $\aff_i$ is the affine factor of $\pol_i$ at precision $K$, $\pol_i \rem \aff_i$ is divisible by~$x^K$. We then look for $\aff_{i\top} \in \trcSerPol[\delta_i]$ such that $\hat\aff_i = \aff_i + x^K \aff_{i\top}$ is the affine factor of $\pol_i$ at precision $K+\delta_i$; to ensure that $\hat\aff_i$ is still monic, we require that $\deg(\aff_{i\top}) < \deg(\aff_i)$. Thus, we can determine $\aff_{i\top}$ by working modulo $\aff_i$: having \[ (\aff_i + x^K \aff_{i\top}) (\coaff_i + x^K \coaff_{i\top}) = \pol_i , \] at precision $K+\delta_i$, for some $\coaff_{i\top}\in \trcSerPol[\delta_i]$, implies that the identity \[ \aff_{i\top} \coaff_i = x^{-K} \pol_i \] holds modulo $\aff_i$ and at precision $\delta_i$. Multiplying by $\coaffi_i = \coaff_i^{-1}$ on both sides yields \[ \aff_{i\top} = (x^{-K} \pol_i \coaffi_i) \rem \aff_i = (x^{-K-\val_i} \rmd_i \coaffi_i) \rem \aff_i \ . \] Therefore, \cref{line:lift_a_start} and \cref{line:lift_a_end} correctly lift the affine factor of $\pol_i$ from precision $K$ to precision $K+\delta_i$. \medskip From now on, we work at precision $K+\delta_i$, and, as in the pseudo-code, we denote by $\aff_i$ the affine factor obtained through the lifting step above (that is, $\aff_i \leftarrow \hat\aff_i$). Besides, let $\coaffi_i$ now denote the cofactor inverse at precision $K+\delta_i$: $\coaffi_i = \coaff_i^{-1} \rem \aff_i$, where $\coaff_i \in \trcSerPol[K+\delta_i]$ is the cofactor such that $\pol_i = \aff_i \coaff_i$. Our goal is to compute $\coaffi_i$, without computing $\coaff_i$ itself but only $\coaff_i \rem \aff_i$. We remark that the remainder $S_i = \pol(\rtpol_i+x^{\rtxpt_i}y) \rem \aff_i^2$ (as in \cref{line:rem_aa}) is such that $x^{-\val_i} S_i = \pol_i \rem \aff_i^2 = \aff_i (\coaff_i \rem \aff_i)$; $x^{-\val_i} S_i$ is known at precision $d-s_i \ge K+\delta_i$. Thus, \[ (x^{-\val_i} S_i) \quo \aff_i = \coaff_i \rem \aff_i \ , \] and therefore $\coaffi_i$ can be obtained as \[ \coaffi_i = \coaff_i^{-1} \rem \aff_i = ((x^{-\val_i} S_i) \quo \aff_i)^{-1} \rem \aff_i. \] This shows that \cref{line:lift_c} correctly computes $\coaffi_i$ at precision $K+\delta_i$. \begin{proposition} \label{prop:compl_AffineFactorsOfShifts} \cref{alg:affine_factors} is correct and uses \[ \Osoft\big(\prc(\deg(\pol) + \mult_1 + \cdots + \mult_\nrts)\big) \] operations in $\field$. \end{proposition} \begin{proof} The correctness follows from the above discussion.
Concerning the cost bound, we will use the following degree properties. Since $\aff_i$ is monic, we have the degree bound $\deg(\aff_i) = \deg({\aff_i}_{|x=0}) \le \mult_i$ for all $i$ and at any iteration of the loop; and since $\coaffi_i$ is always computed modulo $\aff_i$, we also have $\deg(\coaffi_i) < \mult_i$. The cost of the initialization (\cref{line:init} to \cref{line:endinit}) is dominated by the computation of shifted remainders at \cref{line:init_shiftedrem}, which costs $\Osoft(\prc(\deg(\pol) + \mult_1 + \cdots + \mult_\nrts))$ operations in $\field$ according to \cref{prop:shifted_remaindering}. The same cost bound holds for each call to \algoname{ShiftedRem} at \cref{line:rem_a} or \cref{line:rem_aa}, since we have $\deg(\aff_i) \le \mult_i$ and $\deg(\aff_i^2) \le 2\mult_i$. At both \cref{line:lift_a_start} and \cref{line:lift_c}, the degrees of $\rmd_i$, $\coaffi_i$, and $\aff_i$ are at most $\mult_i$; besides, we have $\delta_i \le \prc$ and $\delta_i+K \le 2\prc$. Thus, the quotient and remainder computations use $\Osoft(\prc (\mult_1+\cdots+\mult_\nrts))$ operations in $\field$ according to \cite[Thm.\,9.6]{von_zur_gathen_modern_2013}. Finally, at \cref{line:lift_c} we are performing the inversion of the polynomial $((x^{-\val_i} S_i) \quo \aff_i)$ modulo $\aff_i$; it is invertible in $\trcSerPol[\delta_i+K]/(\aff_i)$ since its $x$-constant coefficient is a nonzero field element. As a consequence, this inversion can be done in $\Osoft((\delta_i+K) \deg(\aff_i))$ field operations using Newton iteration \cite[Thm.\,9.4]{von_zur_gathen_modern_2013}, and altogether \cref{line:lift_c} costs $\Osoft(\prc (\mult_1+\cdots+\mult_\nrts))$ operations in $\field$. Summing these cost bounds over the $\lceil\log_2(\prc)\rceil$ iterations yields the announced total cost bound. \end{proof} \section{Fast series roots algorithm} \label{sec:alg:roots} In this section, we describe our fast algorithm for solving \cref{pbm:series_root}. As explained above, it follows the divide and conquer strategy of \cref{alg:dnc_roots}, with the main modification being that we incorporate the fast computation of the affine factors of the shifts (\cref{alg:affine_factors}). This leads to better efficiency by yielding more control over the degrees of the polynomials that are passed as arguments to the recursive calls. Besides, we also propagate through the recursive calls the multiplicities of the roots, which are then used as input to \cref{alg:affine_factors} to specify the list of degree upper bounds for the affine factors. We start with a lemma which states that taking affine factors preserves reduced root sets. \begin{lemma}\label{lem:root_set_aff_fact} Let $Q$ be in $\K[\mkern-4.2mu[ x ]\mkern-4mu][y]$, with $Q_{|x=0} \ne 0$, and let $A \in \K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ be its affine factor. Then, any reduced root set of $A$ at precision $d$ is a reduced root set of $Q$ at precision $d$. \end{lemma} \begin{proof} The claim follows from the factorization $Q=AB$, with $B _{|x=0} \in \mathbb{K}\xspace\setminus\{0\}$. Indeed, as a result, $B(f)$ is a unit in $\K[\mkern-4.2mu[ x ]\mkern-4mu]$ for any $f$ in $\K[\mkern-4.2mu[ x ]\mkern-4mu]$, hence $\rtset = \rtset[\aff]$ for any $d$; similarly, for any $(f,t)$, $Q(f+x^t y)$ and $A(f+x^t y)$ have the same valuation, say $s$, and $Q(f+x^t y)/x^s$ and $A(f+x^t y)/x^s$ differ by the factor $B(f+x^t y)$, whose reduction at $x=0$ is the nonzero constant $B_{|x=0}$. In particular, if $\{(f_i,t_i)\}_i$ is a basic root set of $A$, it is a basic root set of $Q$, and the multiplicities of $(f_i,t_i)$ in $A$ and $Q$ are the same.
This implies that if $\{(f_i,t_i)\}_i$ is in fact a reduced root set of $A$, it remains so for $Q$. \end{proof} We continue with a procedure that operates on polynomials in $\K[\mkern-4.2mu[ x ]\mkern-4mu][y]$, without applying any truncation with respect to $x$: as such, this is not an algorithm over $\mathbb{K}\xspace$, as it defines objects that are power series in $x$, but it is straightforward to prove that it outputs a reduced root set. Remark that this procedure uses affine factors at ``full precision'', that is, in $\K[\mkern-4.2mu[ x ]\mkern-4mu][y]$, so \cref{alg:affine_factors} is not used yet. \begin{algorithm}[h] \caption{\textbf{:} \algoname{SeriesRoots\infty}} \label{alg:roots} \begin{algorithmic}[1] % \Require{$\prc \in \ZZp$ and $\pol \in \K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ such that $\polz \neq 0$.} \Ensure{List of triples $(\rtpol_i, \rtxpt_i, \mult_i)_{1 \le i \le \nrts} \subset \mathbb{K}\xspace[x] \times \NN \times \ZZp$ formed by a reduced root set of $\pol$ to precision $\prc$ with multiplicities.} \If{$d = 1$} \State $(y_i,\mult_i)_{1\le i\le\nrts} \leftarrow $ roots with multiplicity of $Q_{|x=0} \in \field[y]$ \State \Return $(y_i, 1, \mult_i)_{1 \le i \le \ell}$ \Else \State $(f_i, t_i, \mult_i)_{1 \le i \le \nrts} \leftarrow \algoname{SeriesRoots\infty}(Q, \ceil{d/2})$ \State $(s_i)_{1 \le i \le \nrts} \leftarrow (v_x(Q(f_i+x^{t_i} y)))_{1 \le i \le \nrts}$ \State $(\aff_i)_{1 \le i \le \nrts} \leftarrow ({\rm AffineFactor}(Q(f_i+x^{t_i} y)/x^{s_i}))_{1 \le i \le \nrts}$ \For{$1 \le i \le \nrts$} \If{$\val_i \ge \prc$} \State $(\rtpol_{i,1},\rtxpt_{i,1},\mult_{i,1}) \leftarrow (0,0,\mult_i)$ and $\nrts_i \leftarrow 1$ \Else \State $(\rtpol_{i,j},\rtxpt_{i,j},\mult_{i,j})_{1\le j\le \nrts_i} \leftarrow \algoname{SeriesRoots\infty}(\aff_i, \prc-\val_i)$ \label{line:roots_recursive2} \EndIf \EndFor \State \Return $(\rtpol_i + x^{\rtxpt_i} \rtpol_{i,j}, \rtxpt_i + \rtxpt_{i,j},\mult_{i,j})_{1\le j\le \nrts_i, 1\le i \le \nrts}$. \EndIf \end{algorithmic} \end{algorithm} \begin{proposition} \label{prop:alg:roots} \cref{alg:roots} is correct. \end{proposition} \begin{proof} We prove this by induction on $d \ge 1$. By \cref{lem:onestep_roots_partition}, the algorithm is correct for the induction base case $d = 1$. Take $d > 1$, and assume that the algorithm is correct for all $d' < d$. Then, we obtain a reduced root set $(f_i, t_i)$ from the first recursive call, so in particular the valuations $s_i$ are at least equal to $\lceil d/2 \rceil \ge 1$. This shows that $d-s_i < d$, so the second recursive call is made at a lower precision, and the procedure terminates. By induction, in all cases, $(\rtpol_{i,j},\rtxpt_{i,j})_{1\le j\le \nrts_i}$ is a reduced root set of $Q_i$ to precision $d-s_i$: this is obvious when $s_i \ge d$, and follows from \cref{lem:root_set_aff_fact} when $s_i < d$. \cref{thm:modular_roots_partition} implies that $(\rtpol_i + x^{\rtxpt_i} \rtpol_{i,j}, \rtxpt_i + \rtxpt_{i,j})_{1\le j\le \nrts_i, 1\le i \le \nrts}$ is a reduced root set of $\pol$ to precision $\prc$. That the integers $m_{i,j}$ are the associated multiplicities is verified as in the proof of that theorem. \end{proof} Next, we describe a similar algorithm, where we maintain the input polynomial with $x$-degree less than $d$ (when this is the case, we say that it is {\em reduced modulo $x^d$}).
To differentiate this version from the previous one and to facilitate the correctness proof, we add a superscript ${}^*$ to the objects handled here when they differ from their counterparts in \cref{alg:roots}. Remark that we do not claim that the output forms a reduced root set of $Q^*$, merely a basic root set; we also do not claim that the $m_i$'s in the output are the corresponding multiplicities. \begin{algorithm}[h] \caption{\textbf{:} \algoname{SeriesRootsTrc}} \label{alg:rootsTrc} \begin{algorithmic}[1] % \Require{$\prc \in \ZZp$ and $\pol^* \in \K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ reduced modulo $x^d$ such that $\pol^*_{|x=0} \neq 0$.} \Ensure{List of triples $(\rtpol_i, \rtxpt_i, \mult_i)_{1 \le i \le \nrts} \subset \mathbb{K}\xspace[x] \times \NN \times \ZZp$ formed by a basic root set of $\pol^*$ to precision $\prc$.} \If{$d = 1$} \State $(y_i,\mult_i)_{1\le i\le\nrts} \leftarrow $ roots with multiplicity of $Q^*_{|x=0} \in \field[y]$ \State \Return $(y_i, 1, \mult_i)_{1 \le i \le \ell}$ \Else \State $(f_i, t_i, \mult_i)_{1 \le i \le \nrts} \leftarrow \algoname{SeriesRootsTrc}(Q^* \rem x^{\ceil{d/2}}, \ceil{d/2})$ \State $(\aff^*_i, \val^*_i)_{1 \le i \le \nrts} \leftarrow \algoname{AffineFacOfShifts}(Q^*, d, (\rtpol_i,\rtxpt_i,\mult_i)_{1\le i\le\nrts})$ \For{$1 \le i \le \nrts$} \If{$\val^*_i = \prc$} \State $(\rtpol_{i,1},\rtxpt_{i,1},\mult_{i,1}) \leftarrow (0,0,\mult_i)$ and $\nrts_i \leftarrow 1$ \Else \State $(\rtpol_{i,j},\rtxpt_{i,j},\mult_{i,j})_{1\le j\le \nrts_i} \leftarrow \algoname{SeriesRootsTrc}(\aff^*_i, \prc-\val^*_i)$ \label{line:roots_recursive2Trc} \EndIf \EndFor \State \Return $(\rtpol_i + x^{\rtxpt_i} \rtpol_{i,j}, \rtxpt_i + \rtxpt_{i,j},\mult_{i,j})_{1\le j\le \nrts_i, 1\le i \le \nrts}$. \EndIf \end{algorithmic} \end{algorithm} \begin{proposition} \label{prop:alg:rootstrunc} \cref{alg:rootsTrc} is correct. \end{proposition} \begin{proof} We claim that for $d > 0$ and any $Q$ and $Q^*$ in $\K[\mkern-4.2mu[ x ]\mkern-4mu][y]$ such that $Q^* = Q \rem x^d$, the outputs of \algoname{SeriesRoots\infty}$(Q,d)$ and \algoname{SeriesRootsTrc}$(Q^*,d)$ are the same. Before proving this claim, remark that it implies the correctness of \cref{alg:rootsTrc}: we know that this output is a reduced, and thus basic, root set of $Q$ to precision $d$. Since $Q$ and $Q^*$ are equal modulo $x^d$, one easily verifies that this output is thus a basic root set of $Q^*$ to precision $d$ as well. We prove the claim by induction on $d$. If $d=1$, the result is clear, as we compute the same thing on both sides. For $d > 1$, since $Q^* \rem x^{\lceil d/2\rceil}=Q \rem x^{\lceil d/2\rceil}$, the induction assumption shows that $(f_i, t_i, \mult_i)_{1 \le i \le \nrts} $ as computed in either \algoname{SeriesRoots\infty} or \algoname{SeriesRootsTrc} are the same. The affine factors of the shifts of $Q$ and $Q^*$ differ, but they coincide at the precision we need. Indeed, the equality $Q=Q^* \bmod x^d$ implies that for all $i$, $Q(f_i + x^{t_i}y)= Q^*(f_i + x^{t_i}y) \bmod x^d$. In particular, if $s_i < d$, these two polynomials have the same valuation $s_i$, and $Q(f_i + x^{t_i}y)/x^{s_i}=Q^*(f_i + x^{t_i}y)/x^{s_i} \bmod x^{d-s_i}$, which implies that their affine factors are the same modulo $x^{d-s_i}$. If $s_i \ge d$, then $Q^*(f_i + x^{t_i}y)$ vanishes modulo $x^d$.
Remark that the assumption of \cref{alg:affine_factors} is satisfied: for all $i$, $m_i$ is the multiplicity of $(f_i,t_i)$ in $Q$; the definition of a reduced root set then implies that $\deg({Q_i}_{|x=0}) \le m_i$, so that the same degree bound holds for the affine factors of $Q^*(f_i + x^{t_i}y)/x^{s_i}$. As a result, for $i$ such that $s_i \ge d$, \cref{alg:affine_factors} returns $(0,s_i^*)=(0,d)$, whereas if $s_i < d$, it returns $(A_i^*,s_i)$, where $A_i^*$ is the truncation modulo $x^{d-s_i}$ of the affine factor $A_i$ of $Q_i$. In the first case, the triples $(\rtpol_{i,1},\rtxpt_{i,1},\mult_{i,1})$ are the same in both algorithms; in the second case, this is also true, by induction assumption. Our claim follows. \end{proof} \begin{proof}[Proof of \cref{thm:series_root}] To conclude the proof of \cref{thm:series_root}, it remains to estimate the cost of \cref{alg:rootsTrc}. Let $T(n, d)$ denote the cost of \cref{alg:rootsTrc} on input $d$ and $\pol$ of degree $n = \deg(Q)$. If $d = 1$, then $T(n, 1) = \mathsf{R_\K}\xspace(n)$. Otherwise, the cost is given by the following recursion: \[ T(n, d) = T(n, d/2) + S(n, d, (n_1,\ldots,n_\ell)) + \sum_{i=1}^{\ell} T(n_i, d-s_i) \ , \] where $S(n,d,(n_1,\ldots,n_\ell))$ is the cost of \algoname{AffineFacOfShifts} and $n_i = \deg(A^*_i)$. The degrees of the polynomials $A^*_i$, in \cref{alg:rootsTrc}, and $A_i$, in \cref{alg:roots}, are the same, except for those cases where $s_i \ge d$ and $A^*_i$ is actually zero. By definition of a reduced root set, we have $$\sum_i \deg(A_i) \leq \deg(Q_{|x=0}) \leq n,$$ which thus implies $\sum_i n_i \le n$, and $S(n, d, (n_1,\ldots,n_\ell)) \in \Osoft(dn)$. Note also that $s_i \geq d/2$ by the correctness of $\algoname{SeriesRootsTrc}$. Since $T(n,d)$ is at least linear in $n$, we then get $\sum_i T(n_i, d-s_i) \leq T(n, d/2)$. This gives the upper bound \[ T(n,d) \le 2 T(n,d/2) + \Osoft(nd), \] from which we deduce that $T(n,d) = \Osoft(nd) + O(d \mathsf{R_\K}\xspace(n))$. \end{proof} Finally, we point out an optimization, which is not necessary to establish our main result, but useful in practice: once the affine factor of a shift has degree $1$, there is no need to continue the recursion (the affine factor being monic, we can just read off its root from its constant coefficient). This is the analogue of the situation described in the introduction, when we know enough terms of the solution to make it possible to apply Newton iteration without further branching. \begin{acks} The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no. 609405 (COFUNDPostdocDTU). \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:intro} Null hypothesis significance testing has been the default route to establish the validity of scientific claims for generations. Throughout this time, the $p$ value has largely been the tool of choice for measuring the evidence against a null hypothesis. When properly applied and interpreted, the $p$ value increases the rigor of the conclusions drawn from data \citep{benjamini2021asa}. Despite their ubiquity, the standard practice involving $p$ values suffers from several shortcomings. Two salient shortcomings are that $p$ values as a post-experimental evidence assessment are widely misunderstood \citep{wasserstein2016asa, benjamini2021asa} and that the default threshold for significance ($p < 0.05$) may not be stringent enough \citep{berger1987testing, benjamin2019three}. \subsection{The traditional fragility index} Towards resolving the two shortcomings of $p$ values, \citet{walsh2014statistical} proposed the \emph{fragility index}, a measure which extends a variant proposed by \citet{feinstein1990unit}. \citeauthor{walsh2014statistical} and \citeauthor{feinstein1990unit} were focused on analysing $2 \times 2$ contingency tables which resulted from clinical trials. The (traditional) fragility index is formally defined in Definition~\ref{def:walshfi} \citep{baer2021incidence}. \begin{definition} \label{def:walshfi} Consider data represented by a $2\times 2$ contingency table where the rows indicate intervention (treatment or control) arms and the columns indicate outcome (event or nonevent) status. The \emph{fragility index} is the minimum number of cases whose outcomes must be modified to reverse statistical significance. \end{definition} \noindent Note that this definition separates the algorithm from the implicit statistical method in \citet{walsh2014statistical}, as argued by \citet{lin2020factors} and \citet{baer2021incidence}. The definition relies on a concept of statistical significance. In nearly all applications, this has been taken to correspond to the $p$ value from Fisher's exact test being less than the default cutoff $0.05$. The definition then considers alternative contingency tables wherein each case can have a modified outcome. Although \citet{walsh2014statistical} defined the fragility index only for initially significant statistical tests, the measure was quickly and modestly extended to initially nonsignificant tests as well, via the so-called \emph{reverse fragility index} \citep{kipp2017vignette, khan2020application}. While the fragility index was initially motivated by clinical applications, \citet{baer2021incidence} and \citet{baer2021samplesize} suggest that it should be viewed as a trustworthy post-experimental evidence assessment that can be applied across the sciences. Towards this, we use the terminology \emph{case} in place of \emph{patient} throughout the article. In many applications decisions are made on the basis of statistical significance determined by a critical threshold \citep{benjamini2021asa, mayo2021statistics}. However, statistical significance is neither necessary nor sufficient for reaching a finding of material significance \citep{poole1987confidence, goodman1999toward, goodman2008dirty, matrixx, greenland2016statistical}. The intention of the fragility index is to provide a measure of the fragility of a statistical determination. The fragility index is an interpretable supplement to traditional measures of evidence like the $p$ value, expressed in terms of ``case units'' instead of probability units.
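To make Definition~\ref{def:walshfi} concrete, the following is a minimal, illustrative sketch in Python; it is not the \texttt{FragilityTools} implementation referenced later. Significance is determined by Fisher's exact test at the $0.05$ threshold, the search over net outcome modifications in each arm is our own simplification, and all function and variable names are ours.
\begin{verbatim}
# Illustrative sketch of the (signed) fragility index for a 2x2 table.
# Rows are intervention arms; columns are (event, nonevent) counts.
from scipy.stats import fisher_exact

def fragility_index(table, alpha=0.05):
    (a, b), (c, d) = table
    significant = fisher_exact(table)[1] < alpha
    n = a + b + c + d
    # Modifying two cases of the same arm in opposite directions cancels
    # out, so it suffices to search over net changes d1, d2 per arm with
    # |d1| + |d2| = k, increasing k until significance reverses.
    for k in range(1, n + 1):
        for d1 in range(-k, k + 1):
            for d2 in {k - abs(d1), abs(d1) - k}:
                aa, bb = a + d1, b - d1  # d1 arm-1 outcomes flipped
                cc, dd = c + d2, d - d2  # d2 arm-2 outcomes flipped
                if min(aa, bb, cc, dd) < 0:
                    continue
                p_mod = fisher_exact([[aa, bb], [cc, dd]])[1]
                if (p_mod < alpha) != significant:
                    # Sign convention of Definition 2 below
                    return k if significant else -k
    return None
\end{verbatim}
Applied to the simulated table of Section~\ref{sec:methods} below, this sketch recovers the fragility index of $-7$ discussed there.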
The fragility index has been used to reanalyse the principal results in fields across medicine \citep{holek2020fragility}. Researchers in some fields have realized that the statistical significance of practice-changing studies has hinged on the outcome of a particular patient (or case). This is especially troubling when that patient plausibly could have had a different outcome. When researchers consider the value of the fragility index to determine whether a study's conclusion is fragile, their action can be viewed as performing an informal statistical hypothesis test \citep{baer2021samplesize}. To make this clear, we must first update Definition~\ref{def:walshfi}: consider the fragility index to be the \emph{signed} count of outcome modifications. Positive fragility indices correspond to initially significant tests, and negative fragility indices correspond to initially insignificant statistical tests. In a sense, this turns reverse fragility indices into negative fragility indices. This update is summarized in Definition~\ref{def:walshfi2}. \begin{definition} \label{def:walshfi2} Throughout this article, we consider the fragility index to be a signed variant of Definition~\ref{def:walshfi}, so that positive fragility indices correspond to initially significant tests and negative fragility indices correspond to initially insignificant tests. \end{definition} With this improved measure, the fragility index can be neatly treated as a test statistic \citep{lehmann2006testing}. Suppose that we determine statistical significance through $p$ values. The fragility index being positive is equivalent to the $p$ value being less than the significance threshold. Therefore the fragility index provides an alternative test statistic for the same rejection region as that offered by a $p$ value \citep{baer2021samplesize}. This relationship is visualized in Figure~\ref{fig:interval}. However, researchers commonly use the fragility index to go a step further. Indeed, the standard use case of the fragility index is to determine whether the fragility index of a statistically significant test is not ``too low'', for otherwise the trial's statistical conclusion would hinge on a small number of cases. This procedure amounts to comparing the fragility index to a positive threshold rather than merely $0$ and hence produces a more stringent statistical test with a lower type I error rate. The need for a more stringent statistical test is further motivated by a lack of reproducibility: reversals of statistical conclusions can be surprisingly common in the medical literature \citep{herrera2019meta}. \begin{figure} \centering \tikzset{every picture/.style={scale=0.7}}% \input{interval.tikz} \caption{An intuitive depiction of the relationship between fragility indices (on top) and $p$ values (on bottom) when the $p$ value significance threshold is $0.05$. The scale depends on e.g. the sample size and effect size.} \label{fig:interval} \end{figure} The concept underlying the fragility index is largely the same as that underlying the $p$ value. Both consider hypothetical outcomes from the same clinical trial. In one case, $p$ values rely on alternative case outcomes and their distributional impact on test statistics; in the other, fragility indices directly explore alternative case outcomes. Critiques of the fragility index have been reviewed by \citet{baer2021incidence}.
As pointed out in the editorial that accompanied the ASA President's Task Force Statement on Statistical Significance and Replicability \citep{kafadar2021statistical}, ``[a] misuse of a tool ought not to lead to complete elimination of the tool from the toolbox, but rather to the development of better ways of communicating when it should, and should not, be used.'' We recommend Section~2 in \citet{baer2021incidence} for an elaborated development of the fragility index. Beyond the formalism of treating the fragility index as a test statistic, the fragility index can be neatly used in a sensitivity analysis. \subsection{The sample breakdown point} The theoretical basis of statistical science offers several general strategies for dealing with uncertainty in assessing significance \citep{benjamini2021asa}; in this article we will connect some of them to important concepts from robust statistics. \citet{baer2021incidence} review an interesting connection between breakdown points from robust statistics and the fragility index quotient, i.e. the fragility index divided by the sample size \citep{ahmed2016does}. We start by briefly reviewing the breakdown point in non-asymptotic settings. The breakdown point of an estimator is informally defined to be the smallest portion of distributional contamination such that a statistic diverges, i.e. breaks down \citep{hampel1968contributions, hampel2011robust, davies2005breakdown}. The distributional contamination can arise in several forms, and here we consider contamination in the form of observation replacement. A principal purpose of the breakdown point is to study the sensitivity of estimators to outliers. Breakdown points have analogously been defined for tests, and we formalize the definition in that context. Several variants of test breakdown points have been developed \citep{rieder1982qualitative, simpson1989hellinger, jolliffe1993influence}; here we exclusively focus on measures where breaking down means statistical significance reverses. Measures connected to testing breakdown points are compared in Table~\ref{tab:bdp_comparison}, and in the remainder of this section we will explain the contents of the table. \citet{ylvisaker1977test} first studied the notion of breakdown for testing and defined the resistance of a test as the smallest fraction of observations that can always determine the test decision regardless of the other observations in the sample. Define $Z$ to be a data sample from $n$ cases and $I$ to be a subset of the cases $\{1, \dots, n\}$. The maximum resistance (MR) is formally defined as the minimum fraction $|I|/n$ such that for all ($\forall$) samples $Z$, there exist ($\exists$) cases $I$ for which $\exists$ a modified sample $Z^{\textrm{mod}}$ which reverses significance, where $Z^{\textrm{mod}}$ only differs from the original sample $Z$ for cases in $I$ \citep{coakley1994maximum}. This is indicated in the first column of Table~\ref{tab:bdp_comparison}. The maximum resistance tells how robust a test decision is for the least favorable sample and can be viewed as a least upper bound across all samples. \citet{coakley1992breakdown} proposed the expected resistance (ER) of a test as a measure of the robustness of its decision through an average across samples with some specified distribution.
\begin{table} \begin{tabular}{l||l|l|l|l|l|l} & MR & ER & S-BDP/FI & GFI-SL & SFI & SGFI-SL \\ \hline Sample $Z$ & $\forall$ & Average & Given & Given & Given & Given \\ Selected cases $I$ & $\exists$ & $\exists$ & $\exists$ & $\exists$ & $\exists$ typical & $\exists$ typical \\ Modified sample $Z^\textrm{mod}$ & $\exists$ & $\exists$ & $\exists$ & $\exists$ plausible & $\exists$ & $\exists$ plausible \end{tabular} \caption{A comparison of methods. The columns represent maximum resistance (MR), expected resistance (ER), sample breakdown point (S-BDP), fragility index (FI), generalized fragility index with the sufficiently likely construction (GFI-SL), stochastic fragility index (SFI), and stochastic generalized fragility index with the sufficiently likely construction (SGFI-SL), with cell entries there exists ($\exists$) and for all ($\forall$).} \label{tab:bdp_comparison} \end{table} The maximum and expected resistances are defined across samples rather than for a given sample and are designed to study the abstract properties of statistical tests. On the other hand, in this article we are interested in studying a given sample $Z$ \citep{donoho1983notion}. \citet{zhang1996sample} introduced the first sample breakdown point (S-BDP) for testing, defined as the minimum $|I|/n$ such that $\,\exists$ cases $I$ for which $\exists$ a modified sample $Z^{\textrm{mod}}$ which reverses significance, where $Z^{\textrm{mod}}$ only differs from the original sample $Z$ for cases in $I$, for a given sample $Z$. Notice that when the sample $Z$ can be represented as a $2 \times 2$ table, this is precisely the fragility index \citep{walsh2014statistical} in Definition~\ref{def:walshfi} divided by the sample size. In our view the fragility index is intended to have a different purpose than breakdown points. Users of the fragility index tend to be interested in the impact of minor perturbations of the data on rejection decisions, for which the sample breakdown point is not suitable. When researchers report a small fragility index, they want the corresponding modifications to plausibly have occurred and not rely on extreme outliers. \citet{baer2021incidence} update the fragility index to explicitly be based on only likely modifications (which correspond to minor perturbations) through the generalized fragility indices (GFIs) with the sufficiently likely construction, as we review in Section~\ref{sec:methods:stochgen}. Each of the measures discussed so far has a commonality. They merely ensure the existence of selected cases $I$ which contribute to reversing significance, as seen in the second row of Table~\ref{tab:bdp_comparison}. \citet{donoho1983notion} suggest that this can be a shortcoming and discuss a measure that they call a \emph{stochastic sample breakdown point} which randomly modifies outcomes. The methods we propose, the stochastic fragility index (SFI) and the stochastic generalized fragility indices (SGFIs), rely on typical cases and are similar in spirit. \subsection{Motivation and roadmap} As seen in Table~\ref{tab:bdp_comparison}, the fragility index relies on two components which characterize the measure. The first is modifying outcomes. \citet{baer2021incidence} thoroughly studied this through the sufficiently likely construction. The second is selecting cases whose outcomes are to be modified. According to Definitions~\ref{def:walshfi} and \ref{def:walshfi2}, the selected cases for the fragility index are the most extreme possible.
We will see that these cases are atypical, which can hamper their interpretation. In Section~\ref{sec:methods} we introduce methods to generalize and improve the selection of cases. In Section~\ref{sec:examples} we give real data examples which help develop intuition for the methods. Finally we summarise and conclude in Section~\ref{sec:conc}. \section{Methods} \label{sec:methods} We start with a motivating example. Consider the data on the left in Table~\ref{tab:sim_motivating} as arising from a simulated clinical trial. With significance threshold $0.05$, the original data is statistically nonsignificant and the modified data on the right is statistically significant. The tables help show that the fragility index is $-7$: there exist seven cases for which statistical significance would be reversed had their outcomes been different. \begin{table} \centering \begin{tabular}{l||l|l} & Event & Nonevent \\ \hline\hline Treatment & 20 & 380 \\ \hline Control & 15 & 385 \end{tabular} \quad \begin{tabular}{l||l|l} & Event & Nonevent \\ \hline\hline Treatment & 20 & 380 \\ \hline Control & 8 & 392 \end{tabular} \caption{(Left) Simulated summary statistics; (Right) The modified data which reverses statistical significance} \label{tab:sim_motivating} \end{table} Because so little information is presented for each case (only the treatment arm and a dichotomous outcome), we can readily interpret these seven cases. Each received the control and had an event. Therefore we can refine our earlier fragility index interpretation: seven cases who were in the control arm and experienced an event having a different outcome would have reversed significance. However, interest may not lie in these particular cases. There are only $15$ such cases out of $800$ study participants, resulting in the modified cases being atypical enough that they represent only $\frac{15}{800} \times 100\% \approx 1.9\%$ of the study. A user of the fragility index may reasonably be interested in exploring the impact of typical study participants having alternative outcomes. Additionally, the cases may not be so readily interpretable in studies with more complicated data types.
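To make the example concrete, the following minimal Python sketch recovers the signed fragility index of Table~\ref{tab:sim_motivating} by brute force. It is our illustrative simplification rather than the implementation in \texttt{FragilityTools}: it assumes Fisher's exact test and searches only over outcome flips in the control arm, as in the discussion above.
\begin{verbatim}
from scipy.stats import fisher_exact

def signed_fi(table, alpha=0.05):
    """Signed fragility index of a 2x2 table [[t_e, t_n], [c_e, c_n]],
    searching only over outcome flips in the control arm."""
    (t_e, t_n), (c_e, c_n) = table
    _, p0 = fisher_exact(table)
    sig0 = p0 < alpha                      # initial significance
    n_ctrl = c_e + c_n
    for k in range(1, n_ctrl + 1):
        # flip k control events to nonevents, or k nonevents to events
        for ce in (c_e - k, c_e + k):
            if 0 <= ce <= n_ctrl:
                _, p = fisher_exact([[t_e, t_n], [ce, n_ctrl - ce]])
                if (p < alpha) != sig0:    # significance reversed
                    return k if sig0 else -k
    return float("inf")                    # no reversal possible

# Initially nonsignificant; flipping 7 control-arm events to
# nonevents reverses significance, so the output should be -7.
print(signed_fi([[20, 380], [15, 385]]))
\end{verbatim}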
In this section we introduce a method which relies on typical cases that we call the stochastic fragility indices. The method will resolve the interpretation issue motivated above, and we will plainly see through an example in Section~\ref{sec:examples:pres} that the stochastic fragility indices take into account the rarity of the modified cases. This is further illustrated in Section~\ref{sec:methods:interp}. Note that the motivation underlying the stochastic fragility indices was described as an interesting direction to pursue in \citet{baer2021incidence}. Afterwards, we review the generalized fragility indices introduced in \citet{baer2021incidence} and extend the stochastic fragility indices to that setting. A second example in Section~\ref{sec:examples:adverse} illustrates that the study participants whose outcomes are modified in the generalized fragility index can be acutely atypical when additional covariates are analysed. Finally, we introduce an accompanying algorithm in Section~\ref{sec:methods:alg} that is implemented in the open source \texttt{R} package \texttt{FragilityTools} \citep{baer2020fragility}. \subsection{The stochastic fragility indices} \label{sec:methods:stoch} In this section we define a method for $2 \times 2$ contingency tables which relies on typical cases. We first present a revealing characterization of the fragility index and then leverage it to define the stochastic fragility indices. By construction, the fragility index only guarantees the existence of cases for which significance would be reversed had their outcomes been different. We can see this by Definition~\ref{def:walshfi} since the minimum could be achieved by only one collection of cases. Note that for $2 \times 2$ tables, more than one such collection often exists. The motivating example based on Table~\ref{tab:sim_motivating} illustrates this: any 7 cases could be chosen to reverse significance among the 15 control arm cases who experienced an event. When the fragility index equals $k$, there exists a collection of $k$ cases for which significance would reverse had their outcomes been different. There are however several possible collections of cases; when there are $n$ cases, there are $\binom{n}{k}$ collections. For example, when $n=800$ and $k=7$, there are more than $4 \times 10^{16}$ case collections. Of all these collections, the one collection of cases guaranteed to have outcomes which can reverse significance can be unusual. We now introduce a fragility measure which does not necessarily rely on atypical cases. \begin{definition} Define the \emph{stochastic fragility index} $\mathrm{SFI}_r$ with threshold $r \in [0,1)$ as the minimum $k$ such that more than $r \times 100\%$ of case collections with cardinality $k$ have the reversibility property, that statistical significance can reverse had the cases in the collection had different outcomes. \end{definition} Consider the stochastic fragility index to be signed according to the initial significance of the statistical test, as in Definition~\ref{def:walshfi2}. The stochastic fragility indices can ensure that a substantial portion of possible case collections can reverse significance and hence are not forced to rely on atypical cases. When $r=0$, the stochastic fragility index reduces to the fragility index in Definition~\ref{def:walshfi}. The same holds when $r < 1/\binom{n}{\mathit{FI}}$, where $\mathit{FI}$ is the traditional fragility index. The stochastic fragility index is undefined when $r=1$. For convenience, we will abuse notation and write $\mathit{SFI}_1$ for the limit of $\mathit{SFI}_{r}$ as $r$ approaches $1$ from below. In this case, the stochastic fragility index ensures that all case collections with cardinality $\mathrm{SFI}_r$ have the reversibility property. We consider this to be a conservative value. Roughly speaking, in addition to relying on atypical cases, the measure will also rely on atypical cases at the opposite extreme. When $r=1/2$, more than half of the possible case collections have the reversibility property. That is, most combinations of patients have possible outcomes which can reverse significance. We consider this to be a highly interpretable choice and treat it as the default. The stochastic fragility index generalizes the fragility index to ensure that a particular pattern or collection of cases alone does not determine the fragility index result, analogously to the relationship between the stochastic sample breakdown point and the sample breakdown point. The stochastic fragility index can equivalently be defined as a quantile with respect to the discrete uniform distribution on the set of case collections with a given cardinality, as is explored in Section~\ref{sec:methods:alg}.
With this view, the relationship between maximum resistance and expected resistance roughly corresponds to the relationship between fragility indices and stochastic fragility indices. \subsection{Interpreting the stochastic fragility indices} \label{sec:methods:interp} Interpretability is the beating heart of fragility measures. In this section, we study the interpretation of stochastic fragility indices for various choices of $r$. We focus on the case that $\mathit{SFI}_r = 1$ since it is particularly intuitive: case collections reduce to merely cases. We consider each of the following possible interpretations. The statistical test would not have been significant if: \begin{enumerate} \item a particular case had a different outcome, \item a typical case had a different outcome, or \item any single case had a different outcome. \end{enumerate} When $r=0$ so that the stochastic fragility index is simply the traditional fragility index, the correct interpretation is the first. By construction, the fragility index only ensures the existence of cases which can reverse significance. When $r=1$, the correct interpretation of the stochastic fragility index is the third. Significance would reverse if any case had a different outcome, and the word ``single'' refers to the multiplicity of cases and not to any particular case. When $r=1/2$, we consider the correct interpretation of the stochastic fragility index to be the second. By definition, more than half of the cases in the study have the property that statistical significance would reverse had their outcome (alone) been different. In our view, more than half of the cases in a study cannot be atypical, so one of those cases must be typical. Interpretations for $\mathit{SFI}_r>1$ are analogous except with case collections instead of individual cases. An example in Section~\ref{sec:examples:pres} illustrates an interesting connection between case collections and the proportion of individual cases with a desirable property. \subsection{The stochastic generalized fragility indices} \label{sec:methods:stochgen} The generalized fragility indices directly extend the scope of the fragility index in Definition~\ref{def:walshfi} to arbitrary data types and tests \citep{baer2021incidence}. Let $Z$ be a data frame where the rows represent cases $(1, \dots, n)$ and the columns represent measurements. For the $2 \times 2$ contingency tables described earlier, this data frame $Z$ stores the same data but in a long format. Let the function $m$ be the so-called outcome modifier which inputs a row of $Z$ and outputs the set of \emph{permitted} modifications. Writing $\mathcal{R}$ as the rejection region, the generalized fragility indices are formally defined as \begin{align} \label{def:genfi} \min \,\,\,\, & \| Z - Z^{\mathrm{mod}} \|_{\#} \\ \text{such that} \,\,\,\, & Z \in \mathcal{R} \,\oplus\, Z^{\textrm{mod}} \in \mathcal{R} \nonumber \\ & Z^{\textrm{mod}}_{i,} \in m (Z_{i,}) \text{ for all } i=1,\dots,n \nonumber \end{align} where $\oplus$ is the exclusive-or denoting that $Z$ or $Z^\textrm{mod}$ is in the rejection region (but not both) and $\| \cdot \|_{\#}$ is the norm which counts the number of nonzero rows, i.e. the number of cases with modified values for their measurements. This definition can be interpreted as a projection of the data, making clear the extreme nature of the generalized fragility index. As in Definition~\ref{def:walshfi2}, consider the generalized fragility indices to be signed according to the initial significance.
We can readily see that the generalized fragility indices do indeed generalize the fragility index for $2 \times 2$ tables. The outcome modifier $m$ needs to be chosen to define a generalized fragility index. We will choose $m$ according to the sufficiently likely construction \citep{baer2021incidence}, which only permits outcome modifications that have probability at least $q$ for some user supplied $q\in[0,1]$. Thus the modifier $m=m_q$. We use the notation $\mathit{GFI}_q$ for these fragility measures. The sufficiently likely construction alleviates an issue with the traditional fragility index which strains its interpretation. In this section, we marry the stochastic fragility indices and the generalized fragility indices to define a fragility measure which both relies on typical cases and permits only plausible modifications. The method will depend on two parameters: the threshold $r$ which controls how typical the cases who reverse significance must be and the sufficiently likely threshold $q$ which controls the plausibility of the modifications. \begin{definition} \label{def:sgfi} Define the \emph{stochastic generalized fragility indices} $\mathrm{SGFI}_{r,q}$ with thresholds $r \in [0,1)$ and $q\in[0,1]$ as the minimum $k$ such that more than $r \times 100\%$ of case collections with cardinality $k$ have the permitted reversibility property, that statistical significance can reverse had the cases in the collection had different permitted outcomes. \end{definition} \noindent Recall that an outcome modification is permitted if it is returned by the modifier $m_q$. Consider the stochastic generalized fragility index to be signed according to the initial significance of the statistical test, as in Definition~\ref{def:walshfi2}. The stochastic generalized fragility indices are monotonically nondecreasing in absolute value in both $r$ and $q$. For data which can be stored in a $2\times 2$ table, notice that a stochastic generalized fragility index with $q=0$, so that any outcome modification is permitted, is simply a stochastic fragility index, i.e. $\mathrm{SGFI}_{r,0} = \mathrm{SFI}_r$. \subsection{An algorithm} \label{sec:methods:alg} We now describe an algorithm to approximately calculate a stochastic generalized fragility index and hence also a stochastic fragility index. The calculation relies on a different but equivalent presentation of Definition~\ref{def:sgfi}. Let $E^{(q)}_k$ denote whether a uniformly random collection of $k$ cases has a permitted outcome modification which reverses statistical significance. Here the selection of the case collection is random but each case measurement is fixed. With this notation, the stochastic generalized fragility index is simply the minimum integer $k$ such that $\mathbb{P}[E^{(q)}_k] > r$, and hence is a quantile. Thus the value $\mathit{SGFI}_{r,q}$ is the ceiling of the smallest root of $f(k) := \mathbb{P} [E^{(q)}_k] - r$. The function $f$ is nondecreasing since having more cases available to receive modifications necessarily increases the probability of reversal. Thus the roots of $f$ are a connected set; for simplicity we will henceforth consider that the root is unique. Note this has not been an issue in practice, except when $r=1$ so that $\mathit{SGFI}_{1,q}$ and any larger count have full probability of reversing significance. Suppose that we can observe noisy estimates $\hat{f}(k) = \hat{\mathbb{E}}[E^{(q)}_k] - r$ of the target function $f$ for any subsample size $k$.
Then, the Polyak-Ruppert averaging algorithm from the stochastic approximation literature can be used to find the root of $f$ with high probability guarantees \citep{ruppert1988efficient, polyak1992acceleration}. This procedure is displayed in Algorithm~\ref{alg:sgfi}. Note, this same algorithm was used to develop a fragility index based sample size calculation \citep{baer2021samplesize}. We can readily observe a noisy estimate $\hat{f}$ through the following approach. Write $E^{(q)}_k = R^{(q)}(S_k)$ where $S_k$ is a uniformly random sample of $k$ cases and $R^{(q)}$ is a deterministic function which is True if the cases $S_k$ have permitted outcome modifications which reverse significance and False otherwise. If $R^{(q)}$ were readily available and computable, we could choose $\hat{f}(k) = \frac{1}{B} \sum_{b = 1}^B R^{(q)} (S_{k, b}) - r$ for i.i.d. random case samples $S_{k,b}$ with $b=1, \dots, B$. This is summarised in Algorithm~\ref{alg:reverseprob}. The function $R^{(q)}$ that determines whether significance is reversible can readily be approximated through the greedy algorithm presented in \citet{baer2021incidence}. We will run that algorithm with the outcomes fixed for the cases not in the random sample $S_{k, b}$ to determine whether the fragility index is finite or not, i.e. whether significance reversal is possible. \begin{algorithm} \caption{Algorithm to calculate a stochastic generalized fragility index} \label{alg:sgfi} \begin{algorithmic}[1] \Procedure{SGFI Calculator}{$q,r,\alpha,\text{function } pValue, B$} \State $\hat{f}(k) \gets \text{ProbabilityReversal} (k; \alpha, pValue, q, B)-r$ \State $\mathit{SGFI} \gets \text{FindRoot}(\hat{f})$ \Comment{Get the root of $\mathbb{E}[\hat{f}]$ using Polyak-Ruppert averaging} \State \textbf{return} $\mathit{SGFI}$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Monte Carlo estimate of the probability of reversing statistical significance} \label{alg:reverseprob} \begin{algorithmic}[1] \Procedure{ProbabilityReversal}{$k; \alpha, pValue, q, B$} \State $\mathit{RevCount} \gets 0$ \For{\texttt{iter} in 1,\dots,$B$} \State $S_k \gets \mathit{Sample}(k)$ \Comment{Randomly sample $k$ cases} \State $\mathit{GFI} \gets \mathit{GFIAlgorithm}(q, \alpha, pValue, S_k)$ \Comment{\parbox[t]{.4\linewidth}{Get GFI subject to modifications being permitted only for the cases $S_k$}} \State $\mathit{RevCount} \mathrel{+}= I(\mathit{GFI} < \infty)$ \Comment{Increment if $\mathit{GFI}$ is finite} \EndFor \State \textbf{return} $\mathit{RevCount}/B$ \Comment{The proportion of iterations that significance reversed} \EndProcedure \end{algorithmic} \end{algorithm} In summary, we approximately calculate a stochastic generalized fragility index through a stochastic root finding algorithm, Monte Carlo estimates, and a greedy approximation.
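As a self-contained illustration of this pipeline, the following Python sketch mirrors Algorithm~\ref{alg:reverseprob} for the simulated table of Section~\ref{sec:methods}, with two simplifications that are ours alone: the reversibility check exploits the $2 \times 2$ structure (and the fact that the example is initially nonsignificant) instead of the general greedy algorithm, and a naive upward scan over $k$ replaces Polyak-Ruppert root finding.
\begin{verbatim}
import random
from scipy.stats import fisher_exact

def reverses(table, selected, alpha=0.05):
    """R^(0) for an initially nonsignificant 2x2 table: can flipping
    the outcomes of the selected cases make it significant?  Only the
    selected counts per cell matter, and the most extreme modification
    flips every selected case in the direction that pushes the two
    arms further apart."""
    (t_e, t_n), (c_e, c_n) = table
    cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
    n = [selected.count(cell) for cell in cells]
    extreme = [[t_e + n[1], t_n - n[1]],   # treatment nonevents -> events
               [c_e - n[2], c_n + n[2]]]   # control events -> nonevents
    _, p = fisher_exact(extreme)
    return p < alpha

def probability_reversal(table, k, B=200, alpha=0.05):
    """Monte Carlo estimate of P[E_k], as in Algorithm 2."""
    cases = []
    for i, row in enumerate(table):
        for j, count in enumerate(row):
            cases += [(i, j)] * count
    hits = sum(reverses(table, random.sample(cases, k), alpha)
               for _ in range(B))
    return hits / B

random.seed(0)
table, r, k = [[20, 380], [15, 385]], 0.5, 1
while probability_reversal(table, k) <= r:   # naive scan over k
    k += 1
print(k)   # Monte Carlo estimate of |SFI_{1/2}| for Table 2
\end{verbatim}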
\section{Examples} \label{sec:examples} In this section we review two interesting examples of the fragility measures defined earlier. The first example offers intuition for the stochastic fragility indices by connecting the proportion of cases with a desirable property to the case collections in the stochastic generalized fragility index. The second example illustrates a typical data analysis that makes use of each fragility measure presented. Part of the example illustrates that the cases chosen by a generalized fragility index to have their outcome modified are particularly atypical in the presence of a continuous covariate. Each example can be reproduced using scripts in the \texttt{R} package \texttt{FragilityTools} \citep{baer2020fragility}. \subsection{Presidential election} \label{sec:examples:pres} We feel that fragility measures are applicable beyond clinical trials and statistical hypothesis testing. For example, a generalized fragility index can be used to formalize a critique of the electoral college in United States presidential elections \citep{dixon1950electoral}. In the 2000 presidential race of Bush versus Gore, Bush won the election despite a final tally of 50,999,897 votes for Gore and 50,456,002 votes for Bush. Additionally 92,875,537 eligible voters did not vote for either; for convenience we call these nonvoters, despite some voting for a third party. Note that in practice the decision of who will be the US President can depend on more than just votes due to the possibility of ballot recounting, judicial review, faithless electors, etc. In this section, we study this example and make an interesting connection to the generalized fragility indices and stochastic generalized fragility indices. A particularly interpretable representation for the stochastic fragility index (with $r=0.5$) is found. \subsubsection{Generalized fragility index} More detailed data by state reveals that Gore would instead have won the election had 538 nonvoters in Florida voted for Gore \citep{florida2000}. This number was widely broadcast at the time \citep{purdum2000counting}. Even though this example does not involve a statistical test, it demonstrates fragility of a decision through outcome modifications and hence is a kind of fragility measure. To formalize the connection to a generalized fragility index, we now make the elements in Section~\ref{sec:methods:stochgen} more concrete. Let $Z$ be a data frame with 2 columns denoting the State and the vote (either `Bush', `Gore', or `Neither') and with a row for each eligible American voter. Let the outcome modifier $m$ be unrestricted among nonvoters but fully restricted among voters so that nonvoters can have their vote modified to `Bush', `Gore', or `Neither' but the vote for those who already committed to Bush or Gore cannot be modified. Note, this modifier $m$ is not chosen according to the sufficiently likely construction. Finally, let the decision $\mathcal{R}$ indicate whether Bush won the election or Gore won the election. This generalized fragility index is thus the smallest count of vote modifications to nonvoters necessary to reverse the outcome of the US election. We now show that the circumstances of the election outcome lead to a helpful simplification. According to the vote margins in each state, Florida, New Hampshire, and Nevada are the only red states which could flip to blue states with a moderate amount of additional Gore votes. Any one of these three states going blue would have made Gore win the election; however, the vote margins in New Hampshire and Nevada were much larger than Florida's, so those states required many more additional Gore votes to turn blue. Therefore, we may reasonably make the simplifying assumption that a moderate sized collection of eligible voters in the US can only reverse the US Presidential race by reversing the result of the Florida race. Thus the generalized fragility index is the smallest count of vote modifications to \emph{Florida} nonvoters necessary to reverse the outcome of the US election, which is the $538$ number cited earlier.
\subsubsection{Stochastic generalized fragility index} In this section we turn our attention to the stochastic generalized fragility indices. We focus on the case $r=1/2$ as it is a reasonable choice and produces a particularly intuitive representation. Since $538$ Floridian nonvoters would need to vote for Gore to reverse the result of the Florida race, we seek the lowest count $\textit{SGFI}_{1/2}$ such that a random collection of $\textit{SGFI}_{1/2}$ eligible American voters is more likely than not to include $538$ Floridian nonvoters. Due to the hypergeometric distribution modelling this count, we seek the lowest value $\textit{SGFI}_{1/2}$ such that \begin{equation} \frac{1}{2} < \mathbb{P} \left[ 538 \leq \mathrm{HyperGeometric}\left( 194331526, 2693686, \textit{SGFI}_{1/2} \right) \right], \end{equation} where $194{,}331{,}526$ was the number of eligible American voters and $2{,}693{,}686$ was the number of Florida nonvoters \citep{florida2000}. Thus $\textit{SGFI}_{1/2}$ approximately satisfies that $\mathrm{median} \left( \mathit{HG} \right) = 538$, where $\mathit{HG} = \mathrm{HyperGeometric} \left( 194331526, 2693686, \textit{SGFI}_{1/2} \right)$. Due to the large values of the first two parameters, this hypergeometric distribution is approximately equal to a Binomial distribution with parameters $\textit{SGFI}_{1/2}$ and $\hat{p}_{\text{FL}}$, where $\hat{p}_{\text{FL}}=2693686/194331526$ is the empirical probability of selecting a Floridian nonvoter among all American eligible voters \citep{blitzstein2019introduction}. Thus the Binomial distribution mean $\textit{SGFI}_{1/2} \hat{p}_{\text{FL}}$ is approximately equal to the Hypergeometric median, and we have that \begin{equation} \textit{SGFI}_{1/2} \approx \frac{\mathit{GFI}}{\hat{p}_{\text{FL}}} \approx \frac{538}{\hat{p}_{\text{FL}}} \approx 38814. \end{equation} Therefore $38814$ American eligible voters need to be selected to ensure that a typical collection will include enough nonvoters to overturn the US election. These tens of thousands of random \emph{American eligible voters} revealed by the stochastic generalized fragility index are in sharp contrast to the $538$ \emph{Floridian nonvoters} revealed by the generalized fragility index. The former is representative of all Americans but the latter exclusively concerns Floridians, despite the generalized fragility index nominally involving all Americans. The stochastic generalized fragility index is simply an up-weighted generalized fragility index, directly taking into account the rarity $\hat{p}_\text{FL}$ of the eligible voters who must be selected to reverse the result of the election. The representation is reminiscent of the RIR (Robustness of an Inference to Replacement) method but the denominator probabilities are distinct \citep{frank2021hypothetical}. Recall that this representation hinged on Florida being the only state of interest for reversing the election due to the electoral college; an analogous property will not generally hold for statistical tests, as we explore in the next section.
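This back-of-the-envelope calculation is easy to verify numerically; the following sketch (ours, using \texttt{scipy}) finds the smallest number of draws whose hypergeometric reversal probability exceeds $1/2$ and compares it with the binomial approximation $538/\hat{p}_{\text{FL}}$.
\begin{verbatim}
from scipy.stats import hypergeom

N_voters, n_fl = 194_331_526, 2_693_686  # eligible voters, FL nonvoters
p_fl = n_fl / N_voters

def prob_enough(draws, need=538):
    # P[at least `need` Florida nonvoters among `draws` random voters]
    return hypergeom.sf(need - 1, N_voters, n_fl, draws)

lo, hi = 1, 100_000   # bisect for the smallest count with prob > 1/2
while lo < hi:
    mid = (lo + hi) // 2
    lo, hi = (lo, mid) if prob_enough(mid) > 0.5 else (mid + 1, hi)
print(lo, round(538 / p_fl))  # both approximately 38814
\end{verbatim}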
\subsection{Modelling an adverse event} \label{sec:examples:adverse} The NHEFS, an observational study, and corresponding data set were relied on and analyzed throughout the causal inference textbook by \citet{hernan2020causal}. (The acronym stands for \emph{National Health and Nutrition Examination Survey Data I Epidemiologic Follow-up Study}.) The data set is a sample of 1629 cigarette smokers aged 25--74 years who had a baseline visit in the 1970s and a follow-up a decade later. The purpose of the study was to investigate the relationships between clinical, nutritional, and behavioral factors and several adverse events. In this section, we will study the relationship of smoking cessation between baseline and 1982 (the exposure) and death by 1992 (the endpoint). \begin{table} \centering \begin{tabular}{l||l|l} & Death & Survival \\ \hline\hline Quit smoking & 102 & 326 \\ \hline Continued smoking & 216 & 985 \end{tabular} \caption{Summary statistics from NHEFS} \label{tab:nhefs} \end{table} \subsubsection{Fragility indices for $2\times 2$ tables} An early analysis of the relationship could leverage the data in Table~\ref{tab:nhefs}. The odds ratio for smoking cessation is $1.43$, indicating that quitting smoking may increase the risk of death. Fisher's exact test for whether smoking cessation is associated with death has $p$ value $0.01$. Taking the significance cutoff as the default $0.05$, we would conclude that smoking cessation is associated with an increased risk of death. The fragility index is $6$, revealing that a few particular cases would need to have modified outcomes to reverse significance. The incidence fragility indices reveal that the outcome modifications were rather likely: any $q\in[0, 0.76)$ gives the same value \citep{baer2021incidence}. The $6$ cases whose outcomes were modified each quit smoking and died. Such cases are atypical in the study, comprising only $6\%$ of the study participants. The stochastic fragility indices ensure that cases across the study can contribute to significance reversing. Here we find $\mathit{SFI}_{1/2} = 22$, showing that a typical collection of 22 cases having outcome modifications would reverse statistical significance. Notice that the representation in the previous section does not hold (i.e. $22 \not\approx \frac{6}{0.06}$) since more than just cases who quit smoking and died can contribute to reversing significance. \subsubsection{Generalized fragility indices} The results of the previous section could naively be interpreted as suggesting that smoking cessation is harmful. However, determining a causal relationship requires controlling for confounders. Possible confounders include years smoked, sex, race, weight, etc. It is important to control for confounders because the association described previously may be spurious if the distribution of years smoked differs between study arms. For the purpose of better illustrating the stochastic fragility indices, we will treat years smoked as the only confounder. The arguments in this section will still hold in the presence of more confounders, but the visualizations will be more complicated. The $p$ value for whether smoking cessation is associated with death controlling for years smoked is $p=0.41$ in a logistic regression, with adjusted odds ratio $1.13$. This is initially insignificant with the usual threshold $0.05$. The traditional fragility index and the incidence fragility indices do not allow confounders so we now consider generalized fragility indices. The generalized fragility index permitting any modification (i.e. having $q=0$) is $-10$ so that at least ten cases must have their death status modified to reverse significance. This generalized fragility index conceals the years smoked of the ten selected cases.
Clinicians may imagine that these cases are typical and so are not consistently in especially poor or excellent condition. However, the generalized fragility index seeks to reverse significance with the fewest outcome modifications and hence relies on atypical cases for which their outcome modification can have an extreme impact. The top-left pane in Figure~\ref{fig:hnefs_gfi} shows the distribution of years smoked for the selected cases relative to all of the cases in the study. The selected cases (in blue) each quit smoking and died, as for the traditional fragility index in the previous subsection. The selected cases also each have low years smoked since modifying these cases' outcomes most alters the $p$ value. \begin{figure} \centering \includegraphics[scale=.7]{hnefs_2by2.eps} \caption{Histograms for the confounder, number of years smoked. The grey bins correspond to all cases in the study (min=1, max=64), and the colored bins correspond to cases selected to have their outcome modified by the generalized fragility index.} \label{fig:hnefs_gfi} \end{figure} The remaining panes show the same for different values of $q$. As $q$ grows larger than $0$, the selected cases have increasingly higher years smoked, since the cases with lower years smoked no longer have a permitted outcome modification. When $q=0.57$ the years smoked of the selected cases reaches the right tail of the distribution of all the cases. Then, at $q=0.6$, cases who did not quit smoking and survived (in blue) begin to have their outcome modified. The years smoked of these selected cases is large. As $q$ grows further, the selected cases have lower years smoked. When $q=0.9$, the years smoked of the selected cases reaches the left tail of the distribution of all the cases. Each example illustrates that the most atypical cases with permitted modifications will be selected for their outcome to be modified. \subsubsection{Stochastic generalized fragility indices} The stochastic generalized fragility indices resolve this shortcoming by ensuring that outcome modifications of typical cases can reverse significance. Figure~\ref{fig:nhefs_sgfi} visualizes the stochastic generalized fragility indices corresponding to the test controlling for the years smoked confounder. Notice that the stochastic generalized fragility index is monotonically decreasing in both $r$ and $q$. When $q=0.9$ so that only very likely outcome modifications are permitted, the generalized fragility index is $-30$ (indicated in red at the top of the figure). When the stochastic threshold $r$ grows beyond $0$ to $0.25, 0.5,$ or $0.75$, the stochastic generalized fragility indices are much larger in absolute value: they are $-1458$, $-1517$, and $-1569$, respectively. Therefore, for example, $1517$ cases must be selected to ensure that typical (i.e. more than half of) case collections can reverse significance with only very likely modifications (i.e. modifications which have likelihood at least $0.9$). In general the choices $r=0.25, 0.5,$ and $0.75$ produce similar stochastic generalized fragility indices for each value of the sufficiently likely threshold $q$. Notice that these values are large because few cases have permitted outcome modifications which can contribute towards reversing significance when $q$ is large.
\begin{figure} \centering \includegraphics[scale=.8]{qr_plot.eps} \caption{The stochastic generalized fragility indices for various choices of stochastic threshold $r$ and sufficiently likely threshold $q$.} \label{fig:nhefs_sgfi} \end{figure} \section{Conclusion} \label{sec:conc} We believe there is a promising future for statistics based on case counts. They are broadly interpretable to medical researchers and others from varied backgrounds. The fragility index is an interesting addition to a statistician's toolkit, as are the other fragility measures developed here. The stochastic generalized fragility indices complete the foundational methodological development of the fragility index. Together with the sufficiently likely construction that permits only plausible modifications, they ensure that fragility measures are not driven by atypical cases and that the modifications are realistic. They study and improve both the case selection and the modification selection which define fragility measures. Recall that the patients selected by a stochastic generalized fragility index were made to be typical by forcing a portion of the possible case collections to have permitted outcomes which can reverse significance. Through the Bush v. Gore example we saw that the stochastic generalized fragility indices take into account the rarity of cases whose outcomes must be modified to reverse significance in the generalized fragility index. Next, through the adverse event example, we saw that the cases whose outcomes are modified in a generalized fragility index can be highly unusual relative to the remaining cases in a study when there are additional explanatory variables. All fragility measures rely on choosing permitted outcome modifications which will have the largest impact on significance. Put differently, they all rely on an adversarial choice of the outcome modification \citep{lowd2005adversarial}. In view of Table~\ref{tab:bdp_comparison}, this is due to the ``$\exists$'' in each entry of the last row. Future work which introduces a new category of measures that deviates from this could be interesting and may bridge the gap between fragility measures and $p$ values. Randomly choosing outcome modifications may work for fragility indices with initially significant tests but not in general. \section{Acknowledgements} The authors gratefully thank Apurva Dixit and Derrick Tam for developing an early version of the stochastic fragility indices, alongside SEF. \bibliographystyle{apalike}
\section{Introduction} \label{sec:introduction} \input{introduction} \section{Related work} \label{sec:relwork} \input{relwork} \section{Methodology} \label{sec:methodology} \input{methodology} \section{Results} \label{sec:results} \input{results} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \subsubsection*{Acknowledgments} This work was supported by the Air Force Research Laboratory, 711th Human Performance Wing, Airman Systems Directorate with funding provided through Oak Ridge Institute for Science and Education (ORISE). Our work has also been supported by the Ohio Federal Research Network project \textit{Human-Centered Big Data}. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the author(s) and do not necessarily reflect the views of the Ohio Federal Research Network. The authors would also like to thank Matthew Piekenbrock for discussions on multiscale MAPPER and hierarchical clustering that were useful in preparing the discussion of THD and comparisons with other techniques in Section~\ref{sec:methodology}. \bibliographystyle{plainnat} \subsection*{Comparison to transparent supervised models} It should be noted that Figure~\ref{fig:vnenhl_summary} does not appear all that different from a decision tree. Each split in the THD is based on a set of feature properties that differentiate one group from another, which is not unlike a decision tree that makes classification decisions by learning a hierarchy of heuristics to bin data. Moreover, decision trees are inherently transparent in the sense that each path down a tree from root to leaf describes a series of conditions explaining why data is classified. The key difference between using splits of a THD rather than a decision tree to provide explanations is that THD is an entirely {\em unsupervised} technique; in constructing a THD the target feature RiskPerformance is never used. A decision tree, in contrast, is a {\em supervised} approach where the target feature is used directly during learning. When this training data is collected from an organization's past credit award decisions, the decision tree essentially learns a model describing how and why a firm awards credit to applicants. The learned model thus incorporates any potential historical biases or priorities of the organization the training data is from. In taking an unsupervised approach, the THD becomes {\em decoupled} from the organization or institution that issues credit: splits in a THD are based on distinguishing features between sets of past applications conditioned on whether they successfully paid their loan. Thus, the THD can lead to automatic loan decisions based solely on the merits of the applicant, instead of a combination of applicant merit and historical firm behavior. Moreover, a THD is theoretically grounded by exploiting the shape and structure of the underlying manifold of applicant data, whose shape and structural characteristics are more likely to be shared across applicants for all forms of credit besides HELOC. Insights from a THD are thus more likely to be transferable across domains (e.g., to support decisions for other lines of credit besides HELOC), compared to decision tree heuristics that are (over)fitted to a single, specific dataset. We further note that the THD requires no \emph{a priori} information about the meaning or importance of each feature.
Since these explanations are independent of any machine learning model used in classification, they could be used to supplement and explain decisions made by the algorithm. For example, a linear regression may assign weights of larger magnitude to features that were found to correlate with RiskPerformance in THD groups, such as the percentage of trades never delinquent and the number of trades with balance. Finally, these explanations could also be used to understand a \emph{misclassification} made by a classifier. The classifier may be weighting the wrong features, i.e. features that correlate with RiskPerformance in a different THD group than the one to which the point being classified belongs. Another possibility is that the data point being classified is an outlier: it is in a THD group but has unusual features for that group. THD provides a framework for identifying such points automatically.
\section{Introduction} The observations that we used in this work are first-light data from the new instrument SPINOR (Spectro-Polarimeter for INfrared and Optical Regions, \citeNP{SNEP+05a}). Still under development, SPINOR can already be used for high-resolution full spectro-polarimetry at virtually any combination of 3 spectral regions in the 400 - 1000~nm range. The particular dataset that we report on was acquired on 16 June 2004 at 15:16 UT. We observed two photospheric Fe I lines (at 849.7 and 853.8 nm) and two chromospheric lines of the Ca II infrared triplet (at 849.8 and 854.2 nm) in active region NOAA 0634 at a time of particularly good seeing. The spectrograph slit was scanned over that region to construct a three-dimensional datacube, in such a way that for each (x,y,$\lambda$) point we have the four Stokes parameters $I,Q,U$ and $V$. We made use of the new adaptive optics system (\citeNP{RHR+03}) of the Dunn Solar Telescope. Combined with the excellent atmospheric conditions at the time of the observations, we achieved a spatial resolution as good as 0.6" (note that this figure varies slightly in the scanning direction due to temporal changes in the seeing conditions), which is among the best attained thus far in this kind of observations. The sunspot subject to detailed analysis is rather irregular (see Fig~\ref{fig:lines3D}) and exhibits two distinct umbral cores adjacent to the main umbra. One of them, above the main umbra in the figure, is surrounded by its own penumbra. The other umbral core is almost merged with the main umbra and is seen to its left, separated by a faint light bridge. The interpretation of the data was done using the Stokes inversion code developed by \citeN{SNTBRC00a} for spectral lines formed in non-LTE. The code infers the depth stratification of the temperature, line-of-sight velocity and magnetic field vector that yields the best fit to a particular set of Stokes spectra. The photospheric Fe blends in the wings of the Ca lines are also computed by the code, providing a fully connected picture of the whole atmosphere from the low photosphere to the chromosphere. Each spatial point in the dataset is analyzed independently of the rest. In order to ensure proper convergence and minimize the risk of the algorithm settling in secondary minima, each inversion is repeated 10 times with randomized initializations. The best solution is picked as representative of the atmospheric conditions in the spatial location under study. The non-LTE inversions are very computationally intensive. However, this kind of analysis is necessary for accurate vector magnetometry (\citeNP{LMPS94}; \citeNP{SN02}) because radiative transfer and magneto-optical effects give rise to very complex dependences of the observables on the atmospheric conditions. We employed a scheme by which the non-LTE inversion code is efficiently run in parallel on several networked workstations. The hardware employed includes three dedicated and three shared (mostly off-hours) Intel P4 processors running Linux kernels. On average, we used the equivalent of 5 processors running at a clockspeed of 2.7 GHz. The total wall-clock time for the entire analysis was 29 days. This included a first inversion of a larger area (some 200x150 pixels) and a second pass at the area framed by a rectangle in Figure 1, of nearly 150x110 pixels. In the second pass, the profiles were averaged over a 3x3 pixel (1.1"x0.66") box before inverting, with the aim of improving the signal-to-noise ratio.
The noise level was thus reduced to approximately 1.5$\times$10$^{-4}$ in units of the quiet Sun continuum intensity. The detailed analysis outlined above produced a 3D reconstruction of the full magnetic field vector in the active region. Figure~\ref{fig:lines3D} shows the magnetic structure in the large sunspot, after deprojecting it to disk center and with the magnetic field transformed to the solar reference frame. The 180-degree ambiguity in the (observer frame) azimuth was resolved by picking the value that results in a more radial field when converted to the solar coordinate frame. As a consistency test, we computed the divergence of the field across the entire map using boxes of 1.6~Mm in each dimension. It was found that the divergence is always small compared to $B/l$ (the magnitude of the field divided by the length of the box), with an average absolute value of 1.8\% of $B/l$. This argument, as well as the spatial coherence of the results obtained (recall that each pixel was inverted independently), strongly supports the reliability of our findings.
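The following short sketch (ours, in Python with \texttt{numpy}) illustrates the kind of finite-difference test described above. The field values are random placeholders standing in for the reconstructed field components, so the printed ratio will not be small here; for the actual solenoidal reconstruction it averaged 1.8\%.
\begin{verbatim}
import numpy as np

# Finite-difference check of div(B) against B/l on a regular grid.
# Placeholder field components; the actual test used the inverted
# field with boxes of side l = 1.6 Mm in each dimension.
rng = np.random.default_rng(1)
shape = (32, 32, 8)                      # (nx, ny, nz) grid points
l = 1.6e8                                # box side in cm (1.6 Mm)
Bx, By, Bz = (rng.normal(size=shape) for _ in range(3))

divB = (np.gradient(Bx, l, axis=0) +
        np.gradient(By, l, axis=1) +
        np.gradient(Bz, l, axis=2))
Bmag = np.sqrt(Bx**2 + By**2 + Bz**2)

rel = np.abs(divB) / (Bmag / l)          # |div B| relative to B/l
print(rel.mean())
\end{verbatim}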
The east side of the sunspot (left of the figure) is reminiscent of what is typically found in numerical simulations of a simple sunspot, with field lines bending outwards in the penumbra. This area is partly force-free, as can be deduced from Fig~\ref{fig:angles} (recall that the condition for a force-free field is that $\nabla \times \vec B$ must be parallel or anti-parallel to $\vec B$), particularly in the chromosphere, but not entirely. The west side (right of the figure), on the other hand, exhibits a very different structure and the field is mostly non-force-free. Photospheric magnetograms of the region (not shown in the figures) reveal an intricate pattern of flux near the sunspot on that side, suggesting a complex topology that would manifest itself also in the sunspot structure, perturbing the field away from the classical picture of a near-potential configuration. This idea is further supported by H$_{\alpha}$ images from the same day (e.g., Active Region Monitor at NASA Goddard Space Flight Center's Solar Data Analysis Center: http://www.solarmonitor.org/20040616/0634.html) showing considerable activity on the west side of this sunspot. To avoid confusion with the terminology, which is sometimes used with different meanings in the literature, we adopt the following definitions in this work. We denote by ``twist'' the deviation of the field from the radial direction and by ``torsion'' the vertical gradient of the azimuth\footnote{The term azimuth here is defined as the angle between the (measured) magnetic field vector $\vec B$ and the solar East-West direction, measured counter-clockwise from the solar West.}. In this context, the concept of torsion is probably better defined from a mathematical point of view because the radial direction is not well defined in cases like the present one. Furthermore, while twist and torsion (as defined here) refer essentially to the same concept in a rigid object, this is not necessarily the case in non-rigid systems such as the magnetic field lines. Imagine, for example, a magnetic field that does not vary along the flux-rope axis but whose azimuth does not follow the radial direction. This configuration would have twist but not torsion. A detailed representation of the magnetic torsion is provided in Fig~\ref{fig:torsion}. This figure shows that the torsion is negative over most of the sunspot, but there are also two large areas of positive torsion. The first one corresponds roughly to the upper-right quadrant. It may be associated with the direction connecting the main umbra to the umbral core located towards the north-west. The torsion in this area decreases in magnitude as we move up into the chromosphere. The other positive-torsion area is roughly the lower-left quadrant and, contrary to the first one, its magnitude increases with height. The results presented in this paper reveal a very complex topology of sunspot fields. The fact that most of the spot departs from the force-free regime is quite surprising and has some important implications. While the force-free approximation is a convenient way to estimate the magnetic field structure (whether by means of extrapolations or using it to make empirical twist determinations), our observations do not support its applicability in general. Empirical determinations of twist or electric currents from 2D maps in complex regions seem particularly questionable. In fact, such investigations have even more caveats because one can only obtain one of the components of the curl vector and also because the ``measurement heights'' at different pixels have variations as large as 500 km in a sunspot. In summary, there are no convenient shortcuts: the actual 3D structure of the field must be measured via full Stokes spectro-polarimetry and proper non-LTE modelling. Finally, it is important to note that the non-force-freeness of the field provides potential energy that may become available for chromospheric/coronal heating by means of magnetic reconnection into a lower energy state. The coexistence of opposite-sign torsions is a rather surprising finding since, to the author's best knowledge, it has not been predicted or proposed in any previous theoretical models. The physical mechanism that twists the field is yet to be established. Three different processes have been proposed: a) the dynamo itself generates twisted fields (\citeNP{CCN04}); b) Coriolis forces twist the flux-tubes during their ascent (\citeNP{FFL+00}; \citeNP{FG00}); c) turbulent convective buffeting twists the flux-tubes during their travel through the convection zone (\citeNP{LFP98}). The mixed-twist scenario of our observations seems to be difficult to explain by the first two possibilities, which should produce flux-ropes of the same sign twist in a given active region. The convective buffeting scenario, on the other hand, is essentially a turbulent process and would naturally produce either sign twist in a more or less stochastic manner. Therefore, while the three mechanisms are probably contributing with different relative importance, our data indicates that convective buffeting is probably the dominant one. However, convective buffeting cannot be the only mechanism for producing twist, since the statistics of active regions clearly indicate that there is also a systematic trend in the twist (see, for example, \citeNP{PCM95}). \clearpage
\section{Introduction and motivation} \label{sec:intro} \subsection{Extreme mass ratio inspirals of spinning bodies} \label{sec:emri} Extreme mass-ratio inspirals (EMRIs) are stellar-mass compact objects (of mass $\mu$) which orbit a massive black hole (mass $M$) and inspiral due to the backreaction of gravitational-wave (GW) emission. They are predicted to be a key source of low-frequency gravitational waves, which will be targeted by the planned space-based Laser Interferometer Space Antenna (LISA) \cite{eLISA2013,Barausse2020}. The mass ratios of EMRI systems are small; $\varepsilon \equiv \mu/M$ lies in the range $10^{-7}\text{--}10^{-4}$. This means that the smaller object makes $\mathcal{O}(1/\varepsilon) \sim 10^4 \text{--} 10^7$ orbits during inspiral. By matching phase with theoretical model waveforms (``templates'') over those many thousands or millions of orbits, it is expected that EMRI GWs will make possible very precise measurements. Some of the science goals of EMRI measurements are to precisely determine the properties of the EMRI's black hole and its inspiraling companion \cite{Babak2017}, to probe that black hole's astrophysical environment \cite{Kocsis2011, Barausse2014, Derdzinski2019, Bonga2019}, and to robustly test the Kerr nature of the black hole spacetime \cite{Collins2004, Glampedakis2006,Barack2007, Vigeland2010, Gair2013}. An EMRI's mass ratio means that these systems can be treated perturbatively. This facilitates the development of useful theoretical models, since the system can be described using techniques from black hole perturbation theory --- we treat the binary as general relativity's exact Kerr solution \cite{Kerr1963}, and add a perturbation which describes the smaller body. In addition to accurately describing systems with extreme mass ratios, applications of perturbation theory play a role in helping to understand intermediate mass ratio and even comparable mass binaries \cite{LeTiec2011, Nakano2011, LeTiec2013, vandeMeent2020, Rifat2020}. Especially as the ground-based detectors uncover systems with very unequal mass components \cite{GW190814_2020, GW190412_2020}, there is great interest and potential in combining perturbation theory with numerical relativity \cite{Lousto2010} and analytic strong-field approaches \cite{Buonanno1999, Buonanno2000, Damour2008, Nagar2011, Balmelli2015_2, Khalil2020}. At zeroth order in the mass ratio $\varepsilon$, the small body travels along a geodesic of the background spacetime of the massive black hole with four-momentum $p^{\alpha}$, obeying \begin{equation} \frac{Dp^{\alpha}}{d\tau} = 0\;,\label{eq:geodesic} \end{equation} where $D/d\tau$ is the covariant derivative computed along the orbit and $\tau$ is proper time. When finite mass ratio and finite size effects are taken into account, the right-hand side of Eq.\ (\ref{eq:geodesic}) is replaced by a force $f^\alpha$. An example of such a force is the gravitational self force, which describes the small body's interaction with its own spacetime curvature \cite{Pound2012, Isoyama2014, vandeMeent2015, Pound2015, Pound2017, Barack2019, Pound2020}. The self force encodes the backreaction which drives GW-driven inspiral, as well as conservative effects that shift orbital properties relative to the geodesic. In this paper, we examine the force that arises due to the coupling of the background curvature with the spin of the small body, the spin-curvature force $f_{S}^{\alpha}$.
The equation governing the small body's motion becomes \begin{equation} \frac{Dp^{\alpha}}{d\tau} =f_{S}^{\alpha}\equiv-\frac{1}{2}{R^\alpha}_{\,\nu\lambda\sigma}u^{\nu}S^{\lambda\sigma}\;. \label{eq:scf} \end{equation} This is one of the Mathisson-Papapetrou equations, and will be discussed in detail in Section \ref{sec:mpd}. Here ${R^\alpha}_{\,\nu\lambda\sigma}$ is the Riemann curvature tensor of the background spacetime, and $u^{\nu}$ is the 4-velocity associated with the smaller body's orbital motion. The tensor $S^{\lambda\sigma}$ describes the spin of the orbiting body. If that body is a Kerr black hole, $S^{\lambda\sigma}\propto s\mu^2$ where $s$ is a dimensionless spin parameter with $s\leq1$. The spin-curvature force thus affects the orbiting body's motion at next-to-leading order in the mass ratio, just like many important self force effects \cite{Pound2015,Barack2019,Pound2021}. \subsection{Past work} \label{sec:pastwork} A great deal of work, both numerical and analytic, has gone into developing models for the dynamics of and gravitational waves produced by systems containing spinning members. Two limiting approaches have been used extensively for analytic modeling of such systems: the post-Newtonian (PN) approximation, formally good when members of the binary are widely separated and orbital speeds are small compared to the speed of light, and the extreme mass-ratio limit described in Sec.\ \ref{sec:emri}. The effective-one-body (EOB) framework synthesizes elements from post-Newtonian, extreme-mass-ratio, and numerical relativity results in order to construct a useful prescription for modeling inspirals across a wide parameter space. The dynamics of comparable mass binaries with spinning components has been explored in many post-Newtonian studies \cite{Kesden2015, Gerosa2015, Gerosa2015_2, Cho2019, Mould2020, Tanay2021_1,Tanay2021_2}; complementary to this, binaries with spinning members have been investigated extensively in numerical relativity simulations \cite{Lousto2010_2, Hemberger2013, Boyle2014, Ossokine2015, Lousto2015, Lousto2016}. Considerable work has also been undertaken to develop EOB models that include spin and quantify their reliability \cite{Damour2001, Damour2008, Nagar2011, Balmelli2013, Balmelli2015, Balmelli2015_2, Khalil2020}; a comparison of spinning effective one body Hamiltonians can be found in Ref.\ \cite{Rettegno2020}. In addition, studies of the relativistic three-body problem correspond to the spinning two-body problem in certain regimes. For example, in hierarchical triple systems, there can be a correspondence between the orbital angular momentum of the so-called ``inner'' binary (a two-body system which itself orbits a massive black hole) and the spin of a test body. This correspondence holds if the separation of the inner binary is much smaller than the curvature scale associated with the black hole about which the inner binary orbits \cite{Lim2020}. A number of studies have examined the motion of spinning bodies orbiting black holes. Many of these studies have focused either on numerical treatment of the Papapetrou equations (for example, Refs.\ \cite{Semerak1999, Plyatsko2011, Li2019}), or on constrained orbital geometries such as nearly circular or nearly equatorial orbits. For example, Ref.\ \cite{Hinderer2013} finds analytic expressions for the radial, meridional, and spin precession frequencies, including terms quadratic in spin for the limit of nearly circular, nearly equatorial orbits (see in particular Sec.\ IV B of \cite{Hinderer2013}).
Treating the system to first order in the small body's spin has astrophysical relevance in the context of EMRIs. A scheme of this type was outlined in Ref.\ \cite{Chicone2005} and elucidated further in Refs.\ \cite{Singh2008, Singh2008_2}. Spinning-body orbits have been computed to first order in spin using similar frameworks in Refs.\ \cite{Mashhoon2006, Bini2011_1, Bini2011_2}. A useful effective potential approach presented in Refs.\ \cite{1976Tod, Saijo1998, Hackmann2014} describes equatorial orbits when the spin of the small body is aligned with the orbit. This method has been employed to compute corrections to orbital frequencies and explore resonance effects for equatorial orbits \cite{Abramowicz1979, Calvani1980, Mukherjee2019}. Corrections to the innermost stable circular orbit (ISCO) location of spinning-body motion have also been calculated \cite{Suzuki1998, Favata2011, Jefremov2015, Tsupko2016, Zhang2019, Zhang2019_2}. Another thread to this research is the use of a canonical Hamiltonian framework to describe the motion of a spinning body \cite{Tauber1988}. An explicit Hamiltonian for the Newton-Wigner supplementary condition was presented to linear order in spin in Ref.\ \cite{Barausse2009}, and later extended to quadratic order by Vines et al.\ \cite{Vines2016}. This canonical Hamiltonian picture provides the basis for certain spinning EOB models \cite{Barausse2010, Barausse2011}. Witzany et al.\ presented an overview of Hamiltonians for several commonly used spin supplementary conditions, including the Tulczyjew-Dixon condition, in Ref.\ \cite{Witzany2019}. A Hamilton-Jacobi formulation of spinning-body motion, which exploits the separability of parallel transport in order to determine the turning points analytically, is also known and can be used to compute corrections to the orbital frequencies \cite{Witzany2019_2}. A covariant Hamiltonian formalism has also been used to describe spinning-body motion \cite{Ambrosi2015, Ambrosi2016}. This approach is used in Ref.\ \cite{Saravanan2021} to describe circular orbits of spinning bodies in Kerr without truncating higher order spin terms, as well as to study non-planar bound orbits in a Schwarzschild background. Post-Newtonian analyses long ago indicated that spinning binaries exhibit chaotic dynamics \cite{Levin2000,Cornish2002,Levin2006}. The integrability of eccentric, spinning black hole binaries up to second post-Newtonian order was demonstrated in Ref.\ \cite{Tanay2021_1}, with action angle variables presented explicitly in Ref.\ \cite{Tanay2021_2}. In the extreme mass ratio limit, numerical studies in both Schwarzschild \cite{Suzuki1997} and Kerr \cite{Hartl2003, Hartl2003_2} backgrounds found evidence for chaotic motion. However, the linear-in-spin Hamilton-Jacobi analysis of Witzany \cite{Witzany2019_2} found that the equations of motion ``almost'' separate --- the librational motion in the radial and polar directions is coupled only by the way in which the libration region varies over an orbit. As such, Witzany shows that the equations of motion are amenable to computing important quantities such as frequencies associated with the orbits of spinning bodies. This analysis indicates that terms beyond linear in spin are necessary in order for orbits to exhibit chaos. Indeed, numerical studies have shown that prolonged resonances leading to chaotic motion can be attributed to terms that are second order in spin \cite{Zelenka2020}.
Non-integrability and the possibility of chaotic dynamics in the orbits of spinning bodies have received particular attention due to the implications of this for gravitational wave detection \cite{Lukes2021}. However, even if the motion remains perfectly predictable, it is crucial to understand and quantify the effect a small body's spin has on the dynamics of black hole orbits and the gravitational waves produced in spinning-body EMRI systems. The measurability of the secondary spin and its influence on EMRI parameter estimation has been assessed in previous studies \cite{Burko2015, Huerta2012, Piovano2020, Piovano2021}. Quasi-circular equatorial orbits with the spin of the small body aligned with the orbit provide a useful limit that has been studied extensively, and is often used to verify new methods for calculating gravitational wave fluxes \cite{Han2010, Harms2016, Nagar2019, Piovano2020_2, Akcay2020_2}. Gravitational-wave fluxes from equatorial orbits with aligned spin \cite{Saijo1998,Skoupy2021} and quasi-circular orbits with misaligned spin \cite{Tanaka1996} have also been well studied. Warburton and collaborators investigated the gravitational wave emission of a spinning body with misaligned spin orbiting a non-rotating black hole in Ref.\ \cite{Warburton2017}. The impact of different spin supplementary conditions on gravitational wave fluxes has been explored for both Schwarzschild \cite{Harms2016_2} and Kerr \cite{Lukes2017} black holes. Finally, as we were completing this analysis, Mathews et al.\ presented a detailed examination of the impact of a spinning secondary on the self force \cite{mathews2021selfforce}, focused on the simplest case (Schwarzschild background, spin parallel to orbit, circular configuration). \subsection{This work: Synopsis of our formulation} \label{subsubsec:freqdom} In this work, we examine orbits under the influence of the spin-curvature force $f^\alpha_S$. Because our focus is on extreme mass-ratio systems, we truncate all spin effects at leading order in the small body's spin. Under the assumption that the small body is itself a Kerr black hole (an astrophysically plausible assumption for EMRI systems), the small body's spin has a magnitude that scales with its mass squared. Terms beyond linear in spin thus scale very steeply with the system's mass ratio. At this order, a closed-form description of the spin precession is known \cite{vandeMeent2020}, amounting to parallel transport of a vector along a Kerr geodesic. With the precessional dynamics of the small body's spin in hand, we can straightforwardly compute the spin-curvature force. From this, we find the spinning-body trajectory $[r(t), \theta(t), \phi(t)]$ consistent with the spin-curvature force by solving Eq.\ (\ref{eq:scf}). Following Ref.\ \cite{vandeMeent2020}, we characterize the small body's spin using a set of quantities $\{S^1, S^2, S^3\}$ which represent the components of its spin vector projected onto three legs of a tetrad used in the closed-form analysis of its precession (see Sec.\ \ref{sec:ParallelTransport}). (A fourth component $S^0$, corresponding to the remaining leg of the tetrad, is constrained to be zero by the spin supplementary condition discussed in Sec.\ \ref{sec:ssc}.) We write its magnitude $S=\sqrt{S_\parallel^2+S_\perp^2}$, where $S_{\parallel}=S^3$ describes the component normal to the orbital plane, and $S_\perp=\sqrt{(S^1)^2+(S^2)^2}$ describes its magnitude within this plane.
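Concretely, the two in-plane components can be packaged using an angle $\phi_s$ that fixes the orientation of the spin within the orbital plane (we use exactly this parameterization in Sec.\ \ref{subsec:circeqmisalign}): \begin{equation} S^1 = S_\perp\cos\phi_s\;,\qquad S^2 = S_\perp\sin\phi_s\;. \end{equation}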
If $S_\perp \ne 0$, then components of the spin vector oscillate in the orbital plane with a frequency $\Omega_s$, describing a precession of the spin vector along its orbit; this frequency is described in more detail in Sec.\ \ref{sec:ParallelTransport}, and computed in Ref.\ \cite{vandeMeent2020}. At leading order in spin, the quantities $S_\perp$ and $S_{\parallel}$ (and thus $S$) are constants of motion along the spinning body's orbit. Because we consider the small body's spin to be a small parameter, the spinning-body orbits we examine are ``close to'' geodesic orbits (in a sense made more precise later). We begin our discussion of spinning-body orbits by examining how we parameterize bound Kerr geodesics. The radial motion of bound geodesics is typically described using a semi-latus rectum $p$ and an eccentricity $e$, such that the orbit oscillates between apoastron at $r_1 = pM/(1-e)$ and periastron at $r_2 = pM/(1+e)$. The polar angle $\theta$ of a bound orbit oscillates such that $-\sin{I} \le \cos\theta \le \sin{I}$. Using these bounds, we write these motions \begin{align} \hat r & =\frac{p M}{1 + e\cos\hat\chi_r}\;, \ \ \cos\hat\theta = \sin I\cos\hat\chi_\theta\;. \label{eq:geodparam1} \end{align} Here and throughout this paper, we use a ``hat'' accent (e.g.\ $\hat r$) to denote a quantity which is evaluated on a geodesic. The definitions (\ref{eq:geodparam1}) introduce the angles $\hat\chi_r$ and $\hat\chi_\theta$, which are generalizations of ``true anomaly'' angles often used in discussions of orbits in Newtonian gravity. The libration range of the geodesics does not change over an orbit, so that $p$, $e$ and $I$ are all constants of motion. Geodesics can be equivalently characterized by another set of constants of motion: $\hat{E}$, $\hat{L}_z$ and $\hat{Q}$, which denote a geodesic's energy, axial angular momentum and Carter constant respectively. These quantities are discussed in more detail in Sec.\ \ref{sec:kerrgeodesics}. Spinning-body orbits cannot in general be parameterized in the same way as geodesics using Eq.\ (\ref{eq:geodparam1}). For the ``nearly equatorial'' cases that we consider in this paper, we find the following parameterization robustly describes these orbits: \begin{align} r & =\frac{p M}{1 + e\cos\chi_r}\;, \ \ \theta = \frac{\pi}{2}+\delta\vartheta_S\;.\label{eq:thetaparamfirst} \end{align} This radial motion has turning points at $r = pM/(1 \pm e)$, exactly as for geodesic orbits. However, the anomaly angle $\chi_r$ is not the same as the anomaly angle $\hat\chi_r$ which describes geodesic motion. We elaborate on the difference between these angles in Sec.\ \ref{sec:slightlyecc}. The polar angle deviates from the equatorial plane by $\delta\vartheta_S$, a quantity with an amplitude $\mathcal{O}(S_\perp)$ which oscillates at harmonics of the frequency $\Omega_s$. If $S_\perp = 0$, so that the small body's spin is aligned or anti-aligned with the orbital angular momentum, then $\delta\vartheta_S = 0$. Aligned and anti-aligned orbits can be purely equatorial. For generic orbits, we find that the libration regions in both $r$ and $\theta$ must be modified to include oscillations at precession frequency $\Omega_s$. We defer the details of how this is handled to our companion analysis, Ref.\ \cite{Paper2}, which examines generic orbits of spinning bodies with generic spin-orbit configuration. 
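As a quick consistency check on the parameterization (\ref{eq:geodparam1}), note that differentiating the radial map gives \begin{equation} \frac{d\hat r}{d\lambda} = \frac{pMe\sin\hat\chi_r}{\left(1 + e\cos\hat\chi_r\right)^2}\,\frac{d\hat\chi_r}{d\lambda}\;, \end{equation} which vanishes at $\hat\chi_r = 0$ (periastron, $\hat r = r_2$) and at $\hat\chi_r = \pi$ (apoastron, $\hat r = r_1$) even though $d\hat\chi_r/d\lambda$ remains finite there. The anomaly angles thus accumulate smoothly through the turning points, which is what makes them better variables for describing the motion than $\hat r$ and $\hat\theta$ themselves.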
\subsection{Organization of this paper} In the remainder of this paper, we present our method for precisely computing bound orbits of spinning bodies orbiting black holes. We begin by outlining characteristics of geodesics around a Kerr black hole in Sec.\ \ref{sec:kerrgeodesics}. We discuss the constants of motion, 4-velocities, and turning points associated with bound Kerr geodesics in Secs.\ \ref{subsec:kerrmetric} and \ref{subsec:fourvelocities_param}. In Sec.\ \ref{subsec:geodesicsfreqdom}, we present a frequency-domain description of motion in a Kerr spacetime that is particularly useful in our examination of spinning-body orbits. In Sec.\ \ref{sec:mpd}, we move on to the equations of motion for a body when its spin couples to spacetime curvature. In Sec.\ \ref{sec:leadingorder}, we focus on the leading-order-in-spin limit, which is the most relevant to the astrophysical systems we study. In this limit, the spin vector is parallel transported along the worldline. Given this, we discuss parallel transport along Kerr geodesics in some detail in Sec.\ \ref{sec:ParallelTransport}. We begin our detailed study of bound spinning-body motion by examining several simple cases. In Sec.\ \ref{sec:simpleorbits}, we examine orbits which are circular and either equatorial or nearly equatorial, for which we can obtain closed-form analytic solutions. This simple case allows us to establish the general principles of the framework we use throughout the paper, as well as to compare with previously known results. We present the circular, nearly equatorial case in detail and for general black hole spin. In Sec.\ \ref{sec:slightlyecc}, we extend these circular cases by expanding in eccentricity in order to study slightly eccentric, nearly equatorial orbits. For general Kerr, we develop closed-form solutions to first order in eccentricity. We also present these solutions to second order in eccentricity for the Schwarzschild limit. Finally, in Sec.\ \ref{sec:spinbodyfreqdom}, we use a frequency-domain treatment to compute orbits with arbitrary eccentricity and with the small body's spin arbitrarily oriented. The frequency-domain expansion allows us to examine orbits with arbitrary eccentricity, provided we include enough harmonics in our expansion. We calculate how the spin-curvature coupling shifts the orbital frequencies $\Omega_r$ and $\Omega_\phi$ from their geodesic expectations (using the fact that the parameterization for nearly equatorial spinning-body orbits is very similar to the parameterization of equatorial geodesic orbits), as well as how the coupling shifts the constants of motion $E^S$, $L_z^S$ and $Q^S$. Section \ref{sec:summary} concludes with a summary of our results, and an outline of plans for future work that uses the orbits of spinning bodies. We also briefly remark on results we present in our companion paper \cite{Paper2}, which describes how to extend this framework to model fully generic orbits (i.e., orbits of arbitrary eccentricity and inclination) with generic orientation of the small body's spin. \section{Kerr Geodesics} \label{sec:kerrgeodesics} Because we describe orbits of spinning bodies as perturbations of the orbits of non-spinning bodies, we begin by briefly reviewing the properties of Kerr geodesics.
This content has been discussed at great length elsewhere \cite{Schmidt2002, Kraniotis2004, DrascoHughes2004, Hackmann2008, Levin2008, Levin2009, FujitaHikida2009, Hackmann2010, Warburton2013, Rana2019}; here we provide a brief synopsis in order for the paper to be self-contained, and to introduce important notation and conventions. \subsection{Kerr metric and constants of motion} \label{subsec:kerrmetric} The metric for a Kerr black hole with mass $M$ and spin parameter $a$ in Boyer-Lindquist coordinates $t$, $r$, $\theta$, $\phi$ \cite{Boyer1967} reads \begin{align} ds^2 & =-\left(1-\frac{2Mr}{\Sigma}\right)\,dt^2+\frac{\Sigma}{\Delta}\,dr^2-\frac{4Mar\sin^2\theta}{\Sigma}dt\,d\phi\nonumber\\ &+\Sigma\,d\theta^2 +\frac{\left(r^2+a^2\right)^2-a^2\Delta\sin^2\theta}{\Sigma}\sin^2\theta\,d\phi^2,\label{eq:kerrmetric} \end{align} where \begin{equation} \Delta =r^2-2Mr+a^2\;,\qquad \Sigma =r^2+a^2\cos^2\theta\;. \end{equation} (Here and throughout we use geometrized units, with $G = 1 = c$.) Four constants of motion characterize Kerr geodesics. The first is the rest mass $\mu$ of the orbiting body. It is determined by requiring $\hat p^\alpha = \mu \hat u^\alpha$ (where $\hat p^\alpha$ is the geodesic's 4-momentum, and $\hat{u}^\alpha$ its 4-velocity; recall we use the hat accent to denote quantities defined along geodesics) and by requiring the norm of the 4-velocity to be $-1$. The Kerr metric (\ref{eq:kerrmetric}) is independent of the coordinates $t$ and $\phi$, implying that the spacetime possesses two Killing vectors $\xi_{t}^{\alpha}$ and $\xi_{\phi}^{\alpha}$, corresponding to time translation and axial symmetries respectively. These Killing vectors yield two more constants of the motion, the energy per unit mass $\hat E$ and axial angular momentum per unit mass $\hat L_z$: \begin{align} \hat E & =-\xi_{t}^{\alpha}\hat u_{\alpha}= -\hat u_{t}\;,\\ \hat L_z & =\xi_{\phi}^{\alpha}\hat u_{\alpha} = \hat u_{\phi}\;. \end{align} Note that we have normalized these quantities by the mass $\mu$ of the orbiting body. The Kerr metric also admits an anti-symmetric Killing-Yano tensor \cite{Penrose1973}, given by \cite{Tanaka1996} \begin{equation} \mathcal{F}_{\mu\nu}=a\cos\theta\left(\bar e_{\mu}^{1}\bar e_{\nu}^{0}-\bar e_{\mu}^{0}\bar e_{\nu}^{1}\right)+r\left(\bar e_{\mu}^2 \bar e_{\nu}^{3}-\bar e_{\mu}^{3}\bar e_{\nu}^2\right)\;, \end{equation} where \begin{align} \bar e_{\mu}^0 & =\left[\sqrt{\frac{\Delta}{\Sigma}},0,0,-a\sin^2\theta\sqrt{\frac{\Delta}{\Sigma}}\right],\\ \bar e_{\mu}^1 & =\left[0,\sqrt{\frac{\Sigma}{\Delta}},0,0\right],\\ \bar e_{\mu}^2 & =\left[0,0,\sqrt{\Sigma},0\right],\\ \bar e_{\mu}^3 & =\left[-\frac{a\sin\theta}{\sqrt{\Sigma}},0,0,\frac{\left(r^2+a^2\right)\sin\theta}{\sqrt{\Sigma}}\right]. \end{align} This tensor has the defining property \begin{equation} \nabla_{\gamma}\mathcal{F}_{\alpha\beta}+\nabla_{\beta}\mathcal{F}_{\alpha\gamma}=0\;. \label{eq:KillingYanoDerivs} \end{equation} Let us define the vector \begin{equation} \hat{\mathcal{L}}^\nu=\mathcal{F}^{\mu\nu}\hat u_{\mu}\;. \label{eq:orbangmomdef} \end{equation} We will call this the orbital angular momentum 4-vector, since it has the dimensions of orbital angular momentum (per unit mass of the orbiting body), and reduces to the orbital angular momentum in the Schwarzschild limit. Notice that in Refs.\ \cite{Witzany2019_2} and \cite{vandeMeent2019}, this vector is defined with the index contracted on the second index of $\mathcal{F}^{\mu\nu}$.
Because of the Killing-Yano tensor's antisymmetry, this results in an overall sign difference. With the definition (\ref{eq:orbangmomdef}), equatorial orbits have $\hat{\mathcal{L}}^\theta \propto -\hat L_z$. This is a sensible correspondence, since (by right-hand rule) one expects the angular momentum of a prograde equatorial orbit (for which $\hat L_z > 0$) to point opposite to the direction of increasing polar angle $\theta$. We have found that this sign swap is needed to establish correspondence between our results and important examples of past literature. In particular, past work which examined equatorial orbits of bodies with spin aligned with the large black hole's spin and with the orbital angular momentum typically designates the small body's spin as pointing along the ``$z$ direction.'' This correspondence requires the ``$z$ direction'' (i.e., parallel to the large black hole's spin) to point in the direction of decreasing $\theta$ at the equatorial plane. From the antisymmetry of $\mathcal{F}^{\mu\nu}$ we see that \begin{equation} \hat{\mathcal{L}}^\mu \hat u_\mu = 0\;. \label{eq:orbangmomconstraint} \end{equation} Further, using Eq.\ (\ref{eq:KillingYanoDerivs}), it is straightforward to show that $\hat{\mathcal{L}}^\mu$ is parallel-transported along geodesics: \begin{equation} \frac{D\hat{\mathcal{L}}^\beta}{d\tau} \equiv \hat u^{\alpha}\nabla_{\alpha}\hat{\mathcal{L}}^{\beta} = 0\;. \end{equation} It is also not hard to show that the square of this vector \begin{equation} \hat K = \hat{{\cal L}}^{\mu}\hat{{\cal L}}_{\mu} \label{eq:carter1} \end{equation} is conserved, i.e.~that \begin{equation} \frac{D\hat K}{d\tau} \equiv \hat u^\alpha\nabla_\alpha \hat K = 0\;. \end{equation} Carter \cite{Carter1968} first demonstrated the existence of a fourth conserved constant for Kerr geodesic motion. This constant arises from a Killing tensor $K_{\mu\nu}$, which can be thought of as the ``square'' of $\mathcal{F}_{\mu\nu}$, \begin{equation} K_{\mu\nu}=\mathcal{F}_{\mu\alpha}{\mathcal{F}_\nu}^{\alpha}\;. \end{equation} The corresponding constant \begin{equation} \hat K = K_{\alpha\beta}\hat u^{\alpha}\hat u^{\beta} \end{equation} is identical to the $\hat K$ defined in (\ref{eq:carter1}), and is usually called the ``Carter constant.'' For many analyses, it is particularly convenient to combine $\hat K$, $\hat E$, and $\hat L_z$ into a related conserved quantity $\hat Q$ given by \begin{align} \hat Q &= \hat K - \left(\hat L_z-a\hat E\right)^2 \label{eq:Qdef} \\ &= \hat p_{\theta}^2 + a^2\cos^2{\hat\theta}\left(1 - \hat E^2\right)+\cot^2{\hat\theta}\,\hat L_z^2\;. \end{align} Confusingly, $\hat Q$ is also often called the Carter constant; we will use both $\hat K$ and $\hat Q$ from time to time in our analysis. The constant $\hat Q$ is particularly useful for discussing geodesics, so we focus on this version of the Carter constant in the remainder of this section. \subsection{4-velocities, turning points, and parameterization} \label{subsec:fourvelocities_param} Carter first showed that the existence of these conserved quantities permits the geodesic equations to be separated in Boyer-Lindquist coordinates \cite{Carter1968}.
These separated equations are given by \begin{align} \Sigma^2\left(\frac{d{\hat r}}{d\tau}\right)^2 &= [\hat E({\hat r}^2+a^2)-a\hat L_z]^2\nonumber\\ & \qquad-\Delta[{\hat r}^2+(\hat L_z-a\hat E)^2+\hat Q]\nonumber\\ & \equiv R({\hat r})\;,\label{eq:geodr}\\ \Sigma^2\left(\frac{d{\hat\theta}}{d\tau}\right)^2 &= \hat Q-\cot^2{\hat\theta} \hat L_z^2-a^2\cos^2{\hat\theta}(1-\hat E^2)\nonumber\\ & \equiv\Theta({\hat\theta})\;,\label{eq:geodtheta}\\ \Sigma\frac{d {\hat\phi}}{d\tau} &= \csc^2{\hat\theta} \hat L_z + a\hat E\left(\frac{{\hat r}^2 + a^2}{\Delta} - 1\right) - \frac{a^2\hat L_z}{\Delta}\nonumber\\ & \equiv\Phi({\hat r},{\hat\theta})\;,\label{eq:geodphi}\\ \Sigma\frac{d {\hat t}}{d\tau} &= \hat E\left(\frac{({\hat r}^2 + a^2)^2}{\Delta} - a^2\sin^2{\hat\theta}\right)\nonumber\\ &\qquad + a \hat L_z\left(1 - \frac{{\hat r}^2 + a^2}{\Delta}\right)\nonumber\\ & \equiv T({\hat r},{\hat\theta})\;.\label{eq:geodt} \end{align} Because these are evaluated strictly along geodesic orbits, we parameterize them using the coordinates $(\hat r, \hat\theta, \hat\phi, \hat t)$ of such an orbit. Equations (\ref{eq:geodr}) -- (\ref{eq:geodt}) are parameterized using proper time $\tau$ along the orbit. As written, these equations are not completely separated: the factor $\Sigma = {\hat r}^2 + a^2\cos^2{\hat\theta}$ couples the radial and polar motions. By introducing a new time parameter $\lambda$, commonly called ``Mino time'' and defined by $d\lambda=d\tau/\Sigma$ \cite{Mino2003}, the radial and polar equations of motion decouple, yielding \begin{align} \left(\frac{d{\hat r}}{d\lambda}\right)^2 &= R({\hat r})\;,\qquad \left(\frac{d{\hat\theta}}{d\lambda}\right)^2=\Theta({\hat\theta})\;, \nonumber\\ \frac{d {\hat\phi}}{d\lambda} &= \Phi({\hat r},{{\hat\theta}})\;,\qquad \frac{d {\hat t}}{d\lambda}=T({\hat r},{\hat\theta})\;. \label{eq:geods_mino} \end{align} Mino-time $\lambda$ is a very convenient parameterization for describing the strong-field dynamics of Kerr black hole orbits. By using $d\hat t/d\lambda$, it is not difficult to convert from $\lambda$ to Boyer-Lindquist time $t$, which naturally describes quantities as measured by a distant observer. To understand the turning points of bound geodesics and the parameterization that we use, begin by carefully examining the functions $R({\hat r})$ and $\Theta({\hat\theta})$. For bound orbits, $R({\hat r})$ can be written \begin{equation} R({\hat r})=(1-\hat E^2)(r_{1}-{\hat r})({\hat r}-r_{2})({\hat r}-r_{3})({\hat r}-r_{4})\;, \end{equation} where the roots are ordered such that $r_4 \le r_3 \le r_2 \le {\hat r} \le r_1$. The roots $r_1$ and $r_2$ are turning points of the motion. Likewise, $\Theta({\hat\theta})$ can be written \begin{equation} \Theta({\hat\theta})=\frac{a^2}{\sin^2{\hat\theta}}\left(1 - \hat E^2\right)\left(z_{+} - \cos^2{\hat\theta}\right)\left(z_{-} - \cos^2{\hat\theta}\right)\;, \end{equation} where we have introduced ${\hat z} \equiv \cos^2{\hat\theta}$. These roots are ordered such that $0 \le z_- \le 1 \le z_+$; turning points of the motion occur where ${\hat z} = z_-$. This occurs when ${\hat\theta} = \theta_-$ and ${\hat\theta} = \pi - \theta_-$, defined by $\cos^2\theta_- = z_-$. Bound geodesics are thus confined to a torus, bounded in radius by $r_2 \le {\hat r} \le r_1$ and in polar angle by $\theta_- \le {\hat\theta} \le (\pi - \theta_-)$. 
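These bounds are easy to check numerically. Expanding Eq.\ (\ref{eq:geodr}) in powers of $\hat r$ gives a quartic whose coefficients involve only $(\hat E, \hat L_z, \hat Q, a)$; the sketch below (Python with {\tt numpy}; the function name and interface are our own illustration, not part of any published package) returns the ordered roots $r_4 \le r_3 \le r_2 \le r_1$:

\begin{verbatim}
import numpy as np

def radial_roots(E, Lz, Q, a, M=1.0):
    """Roots r4 <= r3 <= r2 <= r1 of the radial potential R(r),
    Eq. (geodr), expanded as a quartic in r (geometrized units)."""
    c4 = E**2 - 1.0
    c3 = 2.0 * M
    c2 = a**2 * (E**2 - 1.0) - Lz**2 - Q
    c1 = 2.0 * M * ((Lz - a * E)**2 + Q)
    c0 = -(a**2) * Q
    # For bound orbits (E < 1) all four roots are real; discard the
    # numerically tiny imaginary parts returned by np.roots.
    return np.sort(np.roots([c4, c3, c2, c1, c0]).real)
\end{verbatim}

The two largest roots returned by this procedure are the turning points $r_2$ and $r_1$ discussed above.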
We can build these bounds into the orbiting body's motion by defining \begin{align} {\hat r} & =\frac{p M}{1 + e\cos\hat\chi_r}\;, \label{eq:rdef}\\ \cos{\hat\theta} &= \sin I\cos\hat\chi_\theta\;. \label{eq:thdef} \end{align} The angles $\hat\chi_r$ and $\hat\chi_\theta$ are relativistic generalizations of the ``true anomaly'' angles often used in Newtonian orbital dynamics; these angles increase monotonically over an orbit. The parameters $p$ and $e$ are the orbit's semi-latus rectum and eccentricity, respectively; in the Newtonian limit, they correspond to the equivalent parameters which define a Keplerian ellipse. By inspection, one can see that \begin{equation} r_1 = \frac{pM}{1 - e}\;,\qquad r_2 = \frac{pM}{1 + e}\;. \end{equation} The angle $I$ defines the inclination of the orbit; it is related to the angle $\theta_-$ according to \begin{equation} I = \pi/2 - \mbox{sgn}(\hat L_z)\theta_-\;. \end{equation} This angle automatically encodes a notion of prograde ($\hat L_z > 0$, $I < 90^\circ$) and retrograde ($\hat L_z < 0$, $I > 90^\circ$) orbits. Equatorial orbits ($\theta_- = 90^\circ$) have $I = 0^\circ$ (prograde) or $I = 180^\circ$ (retrograde). Up to initial conditions, an orbit can be specified by either the set of constants of the motion ($\hat E$, $\hat L_z$, $\hat Q$) or the quantities ($p$, $e$, $I$) which determine the orbit's geometry (being careful to choose values which do not go inside the ``last stable orbit,'' the locus of parameter space inside which bound orbits are unstable and rapidly plunge into the black hole; see \cite{Stein2020} for discussion). In this analysis, we use ($p$, $e$, $I$), and then use expressions given in Refs.\ \cite{FujitaHikida2009, vandeMeent2019} (see also App.\ A of Ref.\ \cite{Hughesetal2021}) to determine $\hat E$, $\hat L_z$, and $\hat Q$. Once these parameters are known, we can use closed-form expressions for the solutions to the geodesic equations (\ref{eq:geodr}--\ref{eq:geodt}), formulated in terms of elliptic functions \cite{FujitaHikida2009}. We also use solutions for bound geodesic trajectories as functions of Mino-time, ${\hat r(\lambda)}$ and ${\hat z(\lambda)}$, in the simplified form given by van de Meent \cite{vandeMeent2019}. Formulae for computing geodesic trajectories are implemented in the {\tt KerrGeodesics} {\it Mathematica} package of the Black Hole Perturbation Toolkit (hereafter ``the Toolkit'') \cite{Kerrgeodesics}. \subsection{Frequency-domain description of geodesic motion} \label{subsec:geodesicsfreqdom} Bound Kerr geodesics are triperiodic, with three frequencies describing their radial, polar, and azimuthal motions. Denote by $\hat\Lambda_{r}$, $\hat\Lambda_{\theta}$, and $\hat\Lambda_{\phi}$ the radial, polar, and axial Mino-time periods (i.e., the interval of Mino time it takes for the orbit to move from $r_1$ to $r_2$ back to $r_1$; the interval to move from $\theta_-$ to $\pi - \theta_-$ back to $\theta_-$; and the interval to move through $2\pi$ radians of axial angle). Denote by $\hat\Upsilon_{r}$, $\hat\Upsilon_{\theta}$, and $\hat\Upsilon_{\phi}$ the corresponding frequencies, with $\hat\Upsilon_x = 2\pi/\hat\Lambda_x$. These frequencies were first derived in this form in Ref.\ \cite{DrascoHughes2004}; we use the closed-form expressions for them given in Ref.\ \cite{FujitaHikida2009} and coded into the {\tt KerrGeodesics} package of the Toolkit \cite{Kerrgeodesics}.
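These periods also give a direct numerical route to the orbit averages that appear repeatedly below. As a minimal sketch (again in Python with {\tt numpy}; the interface and sampler functions are our own illustration), the average of a function $f(\hat r, \hat\theta)$ over the orbital torus --- the $f_{00}$ component of the Fourier expansion introduced below --- can be evaluated on a product grid in the two Mino-time phases:

\begin{verbatim}
import numpy as np

def orbit_average(f, r_of_lam, th_of_lam, Lam_r, Lam_th, N=256):
    """Average of f(r, theta) over one radial and one polar Mino-time
    period; f, r_of_lam and th_of_lam must be vectorized callables."""
    lam_r = np.linspace(0.0, Lam_r, N, endpoint=False)
    lam_th = np.linspace(0.0, Lam_th, N, endpoint=False)
    rr, tt = np.meshgrid(r_of_lam(lam_r), th_of_lam(lam_th),
                         indexing="ij")
    # The mean over a uniform grid approximates the normalized double
    # integral; for smooth periodic integrands it converges rapidly.
    return np.mean(f(rr, tt))
\end{verbatim}

Applied to the functions $T(\hat r,\hat\theta)$ and $\Phi(\hat r,\hat\theta)$ of Eq.\ (\ref{eq:geods_mino}), this procedure reproduces $\hat\Gamma$ and $\hat\Upsilon_\phi$, as formalized in Eqs.\ (\ref{eq:Gamma_def}) and (\ref{eq:Upsphi_def}) below.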
From these Mino-time expressions, we can find their Boyer-Lindquist coordinate-time analogues using a factor $\hat\Gamma$, the orbit-averaged rate relating an interval of Mino-time $\lambda$ to an interval of coordinate time $t$. Let $\hat T_x$ be the coordinate time orbital period for motion in coordinate $x$, and let $\hat\Omega_x = 2\pi/\hat T_x$ be the corresponding frequency. Then, \begin{equation} \hat\Omega_{r,\theta,\phi}=\frac{\hat\Upsilon_{r,\theta,\phi}}{\hat\Gamma}\;,\qquad \hat T_{r,\theta,\phi} = \hat\Gamma\,\hat\Lambda_{r,\theta,\phi}\;. \end{equation} Expressions for $\hat\Gamma$ (and thus for $\hat\Omega_{r,\theta,\phi}$) are also provided in Ref.\ \cite{FujitaHikida2009} and encoded in the {\tt KerrGeodesics} package of the Toolkit \cite{Kerrgeodesics}. The Mino-time frequencies are particularly useful for our purposes because they make possible Fourier expansions of functions evaluated along Kerr orbits. Let $f(\lambda)=f\left[{\hat r(\lambda)},{\hat\theta(\lambda)}\right]$ be a function of ${\hat r(\lambda)}$ and ${\hat \theta(\lambda)}$. As shown in Ref.\ \cite{DrascoHughes2004}, we can write \begin{equation} f = \sum_{k=-\infty}^\infty\sum_{n = -\infty}^\infty f_{kn}e^{-i\left(k\hat\Upsilon_{\theta} + n\hat\Upsilon_{r}\right)\lambda}\;, \end{equation} where the Fourier coefficient $f_{kn}$ is given by \begin{widetext} \begin{equation} f_{kn} = \frac{1}{\hat\Lambda_{r}\hat\Lambda_{\theta}}\int_{0}^{\hat\Lambda_{r}}\int_{0}^{\hat \Lambda_{\theta}} f\left[{\hat r(\lambda_{r})},{\hat\theta(\lambda_{\theta})}\right] e^{ik\hat\Upsilon_{\theta}\lambda_\theta} e^{in\hat\Upsilon_{r}\lambda_r}d\lambda_{\theta}d\lambda_{r}\;. \end{equation} \end{widetext} The component $f_{00}$ represents the orbit-average of the function $f[{\hat r(\lambda)}, {\hat \theta(\lambda)}]$. It is worth noting that the quantities $\hat\Upsilon_\phi$ and $\hat\Gamma$ are orbit averages of the functions $\Phi( {\hat r}, {\hat \theta})$ and $T( {\hat r}, {\hat \theta})$ defined in Eq.\ (\ref{eq:geods_mino}): \begin{align} \hat\Upsilon_\phi &= \frac{1}{\hat\Lambda_r\hat\Lambda_\theta}\int_0^{\hat\Lambda_r}\int_0^{\hat\Lambda_\theta} \Phi[{\hat r(\lambda_r)}, {\hat \theta(\lambda_\theta)}]d\lambda_r\,d\lambda_\theta\;, \label{eq:Upsphi_def}\\ \hat\Gamma &= \frac{1}{\hat\Lambda_r\hat\Lambda_\theta}\int_0^{\hat\Lambda_r}\int_0^{\hat\Lambda_\theta} T[{\hat r(\lambda_r)}, {\hat \theta(\lambda_\theta)}]d\lambda_r\,d\lambda_\theta\;. \label{eq:Gamma_def} \end{align} We will use a variant of these definitions to compute $\Upsilon_\phi$ and $\Gamma$ along orbits of spinning bodies. \section{The motion of a spinning body} \label{sec:mpd} Strictly speaking, geodesics describe only the motion of zero-mass point particles. Any mass deforms the spacetime, pushing its trajectory away from the geodesic; any structure beyond a point can couple to spacetime curvature, also pushing its trajectory away from the geodesic. The leading example of such structure is the body's spin. We now consider the orbital motion of a pointlike body endowed with spin angular momentum. \subsection{Spin-curvature coupling} \label{sec:scc} A small spinning body moving in a curved spacetime precesses as it moves along its trajectory, and couples to the curvature of the background spacetime.
The equations governing this precession and motion are known as the Mathisson-Papapetrou equations \cite{Papapetrou1951, Mathisson2010,Mathisson2010G_2,Dixon1970}, and are given by \begin{align} \frac{Dp^{\alpha}}{d\tau} & =-\frac{1}{2}{R^\alpha}_{\,\nu\lambda\sigma}u^{\nu}S^{\lambda\sigma}\;,\label{eq:mp1}\\ \frac{DS^{\alpha\beta}}{d\tau} & =p^{\alpha}u^{\beta}-p^{\beta}u^{\alpha}\;.\label{eq:mp2} \end{align} In these equations, the operator $D/d\tau$ denotes a covariant derivative along the small body's worldline, ${R^\alpha}_{\,\nu\lambda\sigma}$ is the Riemann curvature of the spacetime in which the small body orbits, $S^{\lambda\sigma}$ is the small body's spin tensor (about which we say more below), $p^\alpha$ is the small body's 4-momentum, and $u^\nu = dx^\nu/d\tau$ is its 4-velocity. In general, a spinning body's 4-momentum and 4-velocity are not parallel to each other, but are related by \begin{equation} p^{\alpha}=\mu u^{\alpha}-u_{\gamma}\frac{DS^{\alpha\gamma}}{d\tau}\;.\label{eq:momvel} \end{equation} Including additional structure on the small body leads to more complicated equations of motion. For example, the small body's quadrupole moment couples to the gradient of curvature \cite{Bini2008,Bini2014,Steinhoff2010} and introduces additional torque terms \cite{Rudiger1981}. The Mathisson-Papapetrou equations represent the ``pole-dipole'' approximation, in which the small body is treated as a monopolar point mass supplemented with a dipolar spin. For each spacetime Killing vector $\xi^{\alpha}$ there is a constant of motion along the spinning body's worldline given by \begin{equation} \mathcal{C}=p_{\alpha}\xi^{\alpha}-\frac{1}{2}S^{\alpha\beta}\nabla_{\beta}\xi_{\alpha}\;. \end{equation} Using this, one finds that the conserved energy and axial angular momentum per unit mass for a spinning body moving in a Kerr spacetime are given by \begin{align} E^S & = -u_t+\frac{1}{2\mu}\partial_{\beta}g_{t\alpha}S^{\alpha\beta} \label{eq:Espin},\\ L_z^S & = u_{\phi}-\frac{1}{2\mu}\partial_{\beta}g_{\phi\alpha}S^{\alpha\beta}.\label{eq:Lspin} \end{align} There is no Carter constant for a spinning body, though (as we discuss below) there is a generalization of the Carter constant which is conserved to linear order in the small body's spin. \subsection{Spin supplementary conditions} \label{sec:ssc} Equations (\ref{eq:mp1}) and (\ref{eq:mp2}) do not completely specify the evolution of all degrees of freedom in the orbit of a spinning body; we must impose an additional constraint in order to close the system of equations. This constraint is called the Spin Supplementary Condition (SSC), and can be regarded as fixing internal degrees of freedom associated with the extended structure of the small body. In the non-relativistic limit, the center of mass can be identified as the natural place for the worldline to pass through the extended body. However, the center of mass is observer dependent in relativistic dynamics. The role of the SSC is thus to select one of the infinitely many worldlines passing through the small body. Since there is in general no natural choice for the worldline, the SSC is intrinsically arbitrary. An excellent discussion of the physical meaning of the SSC can be found in Ref.\ \cite{Costa2014}; comparisons of different SSCs and investigation of their equivalence can be found in Refs.\ \cite{Lukes2014, Kyrian2007, Mikoczi2017, Lukes2017_2, Timogiannis2021}.
An SSC commonly used in studies of gravitational wave sources is due to Tulczyjew \cite{Tulczyjew1959}, and is given by \begin{equation} p_{\alpha}S^{\alpha\beta}=0\;.\label{eq:TD} \end{equation} Using Eq.\ (\ref{eq:TD}), we find that the relationship (\ref{eq:momvel}) between the 4-velocity and the 4-momentum becomes \begin{equation} u^{\mu}=\frac{\mathcal{M}}{\mu^2}\left(p^{\mu}+\frac{2S^{\mu\nu}R_{\nu\rho\sigma\tau}p^{\rho}S^{\sigma\tau}}{4\mu^2+R_{\alpha\beta\gamma\delta}S^{\alpha\beta}S^{\gamma\delta}}\right)\;, \end{equation} where \begin{align} \mu & \equiv\sqrt{-p_{\alpha}p^{\alpha}}\;,\\ \mathcal{M} & \equiv-p_{\alpha}u^{\alpha}\;. \label{eq:mathcalM} \end{align} These relationships tell us that $p^\alpha = \mu u^\alpha + \mathcal{O}(S^2)$, and $\mu = \mathcal{M} + \mathcal{O}(S^2)$, a result we will exploit shortly. The spin tensor is antisymmetric, which facilitates defining the spin vector \cite{Kyrian2007} \begin{equation} S^{\mu}=-\frac{1}{2\mu}{\epsilon^{\mu\nu}}_{\alpha\beta}p_{\nu}S^{\alpha\beta}, \label{eq:spinvec} \end{equation} where \begin{equation} \epsilon_{\alpha\beta\gamma\delta}=\sqrt{-g}[\alpha\beta\gamma\delta] \end{equation} and where $\sqrt{-g}$ is the metric determinant, reducing to $\Sigma\sin\theta$ for Kerr, and $[\alpha\beta\gamma\delta]$ is the totally antisymmetric symbol. By combining these results, one can show that the magnitude of the spin is another constant of the motion, given by \begin{equation} S^2=S^{\alpha}S_{\alpha}=\frac{1}{2}S_{\alpha\beta}S^{\alpha\beta}\;.\label{eq:smag} \end{equation} \subsection{Leading order in small body's spin} \label{sec:leadingorder} The magnitude $S$ of the small body's spin can be defined using a dimensionless spin parameter $s$: \begin{equation} S=s\mu^2\;. \end{equation} If the small body is itself a Kerr black hole, then $0 \le s \le 1$, which tells us that $S \le \mu^2$. Linear-in-spin effects are thus effectively quadratic in the system's mass ratio, affecting a system's dynamics at the same formal order as important self force effects \cite{Pound2015,Barack2019,Pound2021}. The next order in spin scales with the fourth power of the system's mass ratio, which is practically negligible at extreme mass ratios. A linear-in-spin analysis is thus formally interesting as well as of astrophysical relevance. As such, we focus on the linear-in-spin limit, neglecting terms in all of our equations that are $\mathcal{O}(S^2)$ or higher. In this limit, the Mathisson-Papapetrou equations (\ref{eq:mp1}) -- (\ref{eq:mp2}) and the Tulczyjew SSC (\ref{eq:TD}) take a particularly useful form. Revisiting various relations in Secs.\ \ref{sec:scc} and \ref{sec:ssc} but dropping all terms beyond linear in $S$, the momentum--velocity relation (\ref{eq:momvel}) reduces to \begin{equation} p^{\alpha}=\mu u^{\alpha}\;. \label{eq:momvelfirstorder} \end{equation} The orbit's 4-velocity and 4-momentum are parallel at this order. With this, the Mathisson-Papapetrou equations can be written \begin{align} \frac{Du^{\alpha}}{d\tau} & =-\frac{1}{2\mu}{R^\alpha}_{\,\nu\lambda\sigma}u^{\nu}S^{\lambda\sigma}\;, \label{eq:mp1linear}\\ \frac{DS^{\alpha\beta}}{d\tau} &= 0\;.\label{eq:mp2linear} \end{align} The second of these equations tells us that the spin tensor is parallel transported along the worldline at this order.
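To make these scalings concrete, recall $\varepsilon \equiv \mu/M$ and write \begin{equation} S = s\mu^2 = s\,\varepsilon\,(\mu M)\;. \end{equation} Measured against the natural orbital angular momentum scale $\mu M$, the spin is thus suppressed by one power of the mass ratio, and quadratic-in-spin terms by two: for an EMRI with $\varepsilon = 10^{-5}$ and a maximally spinning secondary ($s = 1$), the linear-in-spin terms we keep enter at relative order $10^{-5}$ (comparable to self force effects), while the $\mathcal{O}(S^2)$ terms we neglect enter at relative order $10^{-10}$.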
Linearizing in $S$, Eq.\ (\ref{eq:spinvec}) becomes \begin{equation} S^{\mu}=-\frac{1}{2}{\epsilon^{\mu\nu}}_{\alpha\beta}\hat{u}_{\nu}S^{\alpha\beta}\;,\label{eq:spinveclinear} \end{equation} or equivalently, \begin{equation} S^{\alpha\beta}=\epsilon^{\alpha\beta\mu\nu}\hat{u}_{\mu}S_{\nu}\;.\label{eq:spintenslinear} \end{equation} Using these linear-in-spin forms, the SSC (\ref{eq:TD}) becomes \begin{equation} \hat{u}_{\alpha}S^{\alpha\beta}=0\;,\label{eq:TDlinear} \end{equation} or \begin{equation} \hat{u}_{\alpha}S^{\alpha}=0\;.\label{eq:TDlinear2} \end{equation} Equation (\ref{eq:TDlinear2}) helps us understand the meaning of the SSC, at least in a linear-in-spin analysis: it tells us that in a freely-falling frame that moves with the geodesic whose 4-velocity is $\hat u^\alpha$, the small body's spin is purely spatial. Combining Eqs.\ (\ref{eq:mp2linear}) and (\ref{eq:spintenslinear}), we find \begin{equation} \frac{DS^{\mu}}{d\tau}=0\;, \label{eq:mp2linear2} \end{equation} so the spin vector is also parallel transported along the worldline at this order. \subsection{Parallel transport in Kerr} \label{sec:ParallelTransport} Since the small body's spin vector is parallel transported along its orbit, as described by Eq.\ (\ref{eq:mp2linear2}), let us examine such parallel transport in detail. Past work \cite{Ruangsri2016} showed how to build a solution describing this transport using a frequency-domain expansion, demonstrating that an additional frequency emerges which characterizes the timescale associated with the spin's precession. Van de Meent \cite{vandeMeent2019} has since produced an elegant closed-form tetrad-based solution for describing the parallel transport of vectors along Kerr geodesics, following methods first developed by Marck \cite{Marck1983, Marck1983_2, Kamran1986}; see also work by Bini and collaborators, which explores and clarifies the geometrical properties of Marck's procedure \cite{Bini2008,Bini2019,Bini2017}, as well as Mashhoon and collaborators \cite{Mashhoon2006,Chicone2006}. Following Ref.\ \cite{vandeMeent2019}, we summarize the procedure for constructing this tetrad and describe how to use it to characterize a spinning body moving along its orbit. We write the tetrad $\{e_{0\alpha}(\lambda), \tilde{e}_{1\alpha}(\lambda), \tilde{e}_{2\alpha}(\lambda), e_{3\alpha}(\lambda)\}$. Take its first leg, $e_{0\alpha}(\lambda)$, to be the geodesic's 4-velocity; take its last leg, $e_{3\alpha}(\lambda)$, to be the (normalized) orbital angular momentum 4-vector defined in Eq.\ (\ref{eq:orbangmomdef}). Our tetrad so far consists of the vectors \begin{equation} e_{0\alpha}(\lambda) = \hat u_\alpha(\lambda)\;,\qquad e_{3\alpha}(\lambda) = \frac{1}{\sqrt{\hat K}}\hat{\mathcal{L}}_\alpha(\lambda)\;, \label{eq:tetradleg03} \end{equation} where $\hat{\mathcal{L}}_\alpha(\lambda)$ is the orbital angular momentum 4-vector along the geodesic with 4-velocity $\hat u_\alpha(\lambda)$. By the properties of $\hat u^\alpha(\lambda)$, $\hat{\mathcal{L}}^\alpha(\lambda)$, and $\hat K$, these tetrad legs are orthogonal to each other and parallel transported along $\hat u^\alpha(\lambda)$. We then construct $\tilde{e}_{1\alpha}(\lambda)$ and $\tilde{e}_{2\alpha}(\lambda)$ by choosing two vectors which lie in the plane orthogonal to $e_{0\alpha}(\lambda)$ and $e_{3\alpha}(\lambda)$; see Ref.\ \cite{vandeMeent2019}, Eqs.\ (50) and (51), for explicit formulas. The resulting tetrad is in general not parallel transported.
However, by defining \begin{align} e_{1\alpha}(\lambda) &= \cos\psi_p(\lambda)\,\tilde{e}_{1\alpha}(\lambda) + \sin\psi_p(\lambda)\,\tilde{e}_{2\alpha}(\lambda)\;, \label{eq:tetradleg1}\\ e_{2\alpha}(\lambda) &= -\sin\psi_p(\lambda)\,\tilde{e}_{1\alpha}(\lambda) + \cos\psi_p(\lambda)\,\tilde{e}_{2\alpha}(\lambda)\;, \label{eq:tetradleg2} \end{align} and requiring that the precession phase $\psi_p(\lambda)$ satisfies \begin{equation} \frac{d\psi_p}{d\lambda}=\sqrt{\hat{K}}\left(\frac{(r^2+a^2)\hat{E}-a\hat{L}_z}{\hat{K}+r^2}+a\frac{\hat{L}_z-a(1-z^2)\hat{E}}{\hat{K}-a^2z^2}\right) \label{eq:precphaseeqn} \end{equation} we obtain a tetrad $\{e_{0\alpha}(\lambda), e_{1\alpha}(\lambda), e_{2\alpha}(\lambda), e_{3\alpha}(\lambda)\}$ that is parallel transported along the geodesic \cite{Marck1983, Marck1983_2, vandeMeent2019}. Van de Meent further finds a closed form solution to Eq.\ (\ref{eq:precphaseeqn}) of the form \begin{equation} \psi_p(\lambda) = \Upsilon_s\lambda + \psi_r(\hat\Upsilon_r\lambda) + \psi_\theta(\hat\Upsilon_\theta\lambda)\;, \label{eq:precphasesol} \end{equation} where $\Upsilon_s$ (denoted $\Upsilon_\psi$ in Ref.\ \cite{vandeMeent2019}) is the frequency (conjugate to Mino-time) describing the precession of this tetrad along the orbit; the functions $\psi_r(\hat\Upsilon_r\lambda)$ and $\psi_\theta(\hat\Upsilon_\theta\lambda)$ are phases associated with the orbit's radial and polar motions. We define the Mino-time precession period as $\Lambda_s = 2\pi/\Upsilon_s$. This solution makes setting the spin of the small body easy: we write the small body's spin vector \begin{equation} S_\alpha = S^0 e_{0\alpha}(\lambda) + S^1 e_{1\alpha}(\lambda) + S^2 e_{2\alpha}(\lambda) + S^3 e_{3\alpha}(\lambda)\;, \label{eq:spinvectetrad} \end{equation} where $\{S^0, S^1, S^2, S^3\}$ are all constants with the dimension of angular momentum. The requirement that $\hat u^\alpha S_\alpha = 0$ means that $S^0 = 0$ for all configurations. The component $S^3 \equiv S_{\parallel}$ describes the part of the small body's spin parallel or antiparallel to the orbital angular momentum, normal to the orbital plane; $S^1$ and $S^2$ define components perpendicular to the orbital angular momentum, in the orbital plane. A spin vector with $S^1 = S^2 = 0$ does not precess, and so its motion has no frequency components at harmonics of the spin-precession frequency $\Upsilon_s$. By contrast, when $S^1$ or $S^2$ are non-zero, the small body's spin precesses over an orbit, and harmonics of the frequency $\Upsilon_s$ appear in a frequency-domain description of the small body's orbit. Code for computing these tetrad legs is implemented as part of the {\tt KerrGeodesics} package in the Toolkit \cite{Kerrgeodesics}. \subsection{Spin deviation from geodesic trajectory} \label{sec:SpinDev} As argued in Sec.\ \ref{sec:leadingorder}, our focus is on computing orbits to linear order in the small body's spin. For the configurations that we study, the spin is a small parameter, and these trajectories can be regarded as perturbative deviations from bound Kerr geodesics. We discuss the nature of an orbit's ``spin shift'' in detail later as we analyze specific orbit and spin configurations.
In general, the small body's trajectory can be written in the form \begin{equation} x^\alpha(\lambda) = \hat x^\alpha(\lambda) + \delta x_S^\alpha(\lambda)\;, \label{eq:trajectoryshift} \end{equation} where $\hat x^\alpha(\lambda)$ is the coordinate-space trajectory of an appropriately chosen geodesic, and $\delta x_S^\alpha(\lambda)$ is the $\mathcal{O}(S)$ shift due to the spin. Similarly, we write the small body's 4-velocity \begin{equation} u^{\alpha}=\hat{u}^{\alpha}+u_{S}^{\alpha}\;, \label{eq:4vellinear} \end{equation} where $\hat u^\alpha$ solves the geodesic equation, and $u_S^\alpha=\mathcal{O}(S)$. One important point to note is that $\hat{x}^\alpha(\lambda)$ will in general have different periods than $x^\alpha(\lambda)$: the periods $\Lambda_{r,\theta,\phi}$ which characterize bound orbits of spinning bodies differ from the geodesic periods $\hat\Lambda_{r,\theta,\phi}$ by $\mathcal{O}(S)$. As such, a naive definition of $\delta x^\alpha_S$ necessarily contains unbounded, secularly growing terms. Such terms ruin the perturbative expansion that we use. For this reason, we do not use the explicit form Eq.\ (\ref{eq:trajectoryshift}) directly when we compute spinning-body orbits in Secs. \ref{sec:slightlyecc} and \ref{sec:spinbodyfreqdom}. We instead characterize these orbits using amplitude-phase variables. Doing so, the frequency shift is incorporated into the parameterization; see Eq.\ (\ref{eq:rparam}) or (\ref{eq:rparam2}) and nearby text. Once we have solved for the frequency shift and phase variables, we can then compute $\delta x^\alpha_S$. These quantities are particularly useful for finding the concomitant ``spin shifts'' to constants of motion, which we describe below. In Appendix \ref{sec:secularterms}, we provide the explicit form of $\delta x^\alpha_S$ in terms of variables that we use in this work, as well as further discussion of the secular terms. As the orbit evolves, we must preserve the norm of its 4-velocity. Using Eq.\ (\ref{eq:4vellinear}), demanding that $\hat u^\alpha\hat u_\alpha = -1$, and enforcing $u^{\alpha}u_{\alpha}=-1$ yields the constraint \begin{equation} \hat{u}^\alpha u^S_\alpha+\hat{u}_\alpha u_S^\alpha=0\;.\label{eq:udotu} \end{equation} Writing $u_\alpha=g_{\alpha \beta}u^\beta$, and noting that $g_{\alpha \beta}$ is evaluated along the spinning-body orbits for which $r=\hat{r}+\delta r_S$ and $\theta=\hat{\theta}+\delta\vartheta_S$, the spin-corrected covariant 4-velocity has the form \begin{equation} u^S_\alpha=g_{\alpha\beta} u^\beta_S+\delta r_S \partial_r g_{\alpha\beta}\hat u^\beta +\delta \theta_S \partial_\theta g_{\alpha\beta}\hat u^\beta\;. \end{equation} This allows us to write constraint (\ref{eq:udotu}) entirely in terms of the contravariant spin-correction to the 4-velocity, viz., \begin{equation} 2g_{\alpha\beta}\hat u^\alpha u^\beta_S + \delta r_S \partial_r g_{\alpha\beta}\hat u^\alpha\hat u^\beta +\delta \theta_S \partial_\theta g_{\alpha\beta}\hat u^\alpha\hat u^\beta=0\;.\label{eq:udotucontra} \end{equation} We use this constraint throughout our analysis. We also define the leading order in spin corrections to the energy $\delta E^S$ and axial angular momentum $\delta L_z^S$ due to the spin using (\ref{eq:Espin}) and (\ref{eq:Lspin}): \begin{align} E^{S}&=\hat E + \delta E^S\;, \label{eq:deltaEspin}\\ L_{z}^{S}&=\hat{L}_z + \delta L_z^S\;.\label{eq:deltaLspin} \end{align} As mentioned in Sec.\ \ref{sec:scc}, an analogue to the Carter constant is preserved at linear order in spin.
Normalizing by the orbiting body's rest mass squared, it is given by \cite{Rudiger1981} \begin{equation} K^S=K_{\alpha\beta}u^\alpha u^\beta+\delta\mathcal{C}^S, \label{eq:Kspin} \end{equation} where \begin{equation} \delta\mathcal{C}^S= -\frac{2}{\mu}\hat u^{\mu}S^{\rho\sigma}\left( {\mathcal{F}^\nu}_{\sigma}\nabla_{\nu}\mathcal{F}_{\mu \rho } - {\mathcal{F}_\mu}^\nu\nabla_{\nu}\mathcal{F}_{\rho\sigma}\right)\;. \label{eq:Cspin} \end{equation} We define the first order in spin correction to $K$ by \begin{equation} K^{S}=\hat{K}+\delta K^S\;, \label{eq:deltaKspin} \end{equation} where $\hat K$ is the Carter constant along the geodesic whose 4-velocity is $\hat u^\alpha$, and $\delta K^S$ is $\mathcal{O}(S)$. Combining Eqs.\ (\ref{eq:trajectoryshift}), (\ref{eq:4vellinear}) and (\ref{eq:Kspin}) with the definition (\ref{eq:deltaKspin}) and truncating at linear order in $S$, we find \begin{align} \delta K^S &= 2K_{\alpha\beta}\hat u^\alpha u^\beta_S + \delta r_S \partial_r K_{\alpha\beta}\hat u^\alpha\hat u^\beta + \delta \theta_S \partial_\theta K_{\alpha\beta}\hat u^\alpha\hat u^\beta \nonumber\\ & + \delta\mathcal{C}^S\;. \label{eq:deltaKspin2} \end{align} The first line of Eq.\ (\ref{eq:deltaKspin2}) includes two terms which are due to the shift of the small body's orbit that we find when examining spinning-body orbits. Applying Eq.\ (\ref{eq:Qdef}), we then find the first-order shift in $Q$: \begin{equation} \delta Q^S = \delta K^S - 2(\hat L_z - a\hat E)(\delta L^S_z - a\delta E^S)\;. \label{eq:deltaQspin} \end{equation} For nearly equatorial orbits with polar motion defined by $\theta=\pi/2+\delta\vartheta_S$ in Eq.\ (\ref{eq:thetaparamfirst}), $\delta\vartheta_S$ and $\delta\theta_S$ may be used interchangeably (which we do throughout this paper). However, in general, $\delta\vartheta_S$ corresponds only to the corrections to the \textit{libration region} of the polar motion, while $\delta\theta_S$ denotes the entire spin-perturbation associated with $\theta$, as defined in Eq.\ (\ref{eq:trajectoryshift}). This distinction becomes important in our companion study \cite{Paper2}. \subsection{General characteristics of spinning-body orbits} \label{sec:spinningbodyorbitoverview} In the remainder of this paper, we examine several examples of the orbits of spinning bodies about Kerr black holes. Before exploring these specific cases in detail, we briefly lay out and summarize general characteristics of the orbits that we find. Consider first an orbit that would be equatorial if the orbiting body were non-spinning. If this body's spin is normal to the equatorial plane (i.e., parallel or antiparallel to both the orbital angular momentum and the large black hole's spin), then its orbit is quite simple. Just as in the geodesic case, we can use the parameterization $r = pM/(1 + e\cos\chi_r)$. The radial turning points are fixed for the duration of the orbit at $pM/(1 \pm e)$, and the orbit's dynamics maps onto a true anomaly angle $\chi_r$. This true anomaly differs from the true anomaly that describes geodesics, $\hat\chi_r$; details of this difference are presented in Sec.\ \ref{sec:slightlyecc}. The orbit's radial frequency is shifted compared to the geodesic by an amount $\mathcal{O}(S)$; we write the radial frequency $\Upsilon_r = \hat\Upsilon_r + \Upsilon^S_r$. This case is discussed in quantitative detail in Secs.\ \ref{sec:secondorderine} and \ref{sec:eqplanealign}, with the special case of circular equatorial orbits presented in Sec.\ \ref{subsec:circeqalign}.
Consider next such an orbit but with the spin misaligned with respect to the orbital plane. This misalignment introduces $\mathcal{O}(S)$ oscillations centered about the equatorial plane: the polar motion acquires a correction $\delta \vartheta_S$ whose Fourier expansion is at harmonics of the spin frequency $\Upsilon_s$ and the radial frequency $\Upsilon_r = \hat{\Upsilon}_r + \Upsilon_r^S$. The radial motion, however, remains exactly as it was in the spin-aligned case. We discuss this case in detail in Secs.\ \ref{sec:leadingorderine} and \ref{sec:ecceqprecess}; an explicit analytic solution for circular, nearly equatorial motion is calculated in Sec.\ \ref{subsec:circeqmisalign}. We focus on these equatorial and nearly equatorial cases in this paper. For orbits that are not ``nearly equatorial'', the parameterization becomes rather more complicated. In particular, the ``geodesic-like'' parameterization of the nearly equatorial case must be modified, adding a spin-induced contribution to the orbit's libration region in both the radial and polar motions. This holds even if the spin-vector is aligned with the orbital angular momentum. We discuss these more complicated cases in a companion paper \cite{Paper2}. \section{Spinning-body orbits I:\\ Circular, nearly equatorial orbits} \label{sec:simpleorbits} We begin our study of spinning-body orbits by examining several simple cases for which we can find closed-form, fully analytic solutions. These cases allow us to introduce the main principles we use to describe and parameterize our solutions, and provide limiting examples which can be compared against other results in the literature. We begin with the simplest possible orbit: a circular orbit of radius $r$, confined to the equatorial plane ($I = 0^\circ$ or $I = 180^\circ$). Many of the results we find are derived in Ref.\ \cite{Tanaka1996}, which focuses on circular orbits of spinning bodies, as well as elsewhere in the literature. The results we present in Sec.\ \ref{subsec:circeqalign} can also be obtained using the effective potential derived in Ref.\ \cite{Saijo1998} (see also Refs.\ \cite{1976Tod} and \cite{Hackmann2014}). To facilitate the comparison to this literature, we discuss the method of Ref.\ \cite{Saijo1998} in detail in Appendix \ref{sec:Saijocomparison}. \subsection{Aligned spin} \label{subsec:circeqalign} Start with the small body's spin parallel or antiparallel to the orbit: we set the spin components $S^1 = S^2 = 0$, and set $S^3 =s_\parallel \mu^2$, with $-1 \le s_\parallel \le 1$. The small body's spin is parallel to the orbit if $s_\parallel > 0$, and antiparallel if $s_\parallel < 0$. The geodesic integrals of motion are \begin{align} \hat E &= \frac{1 - 2v^2 \pm qv^3}{\sqrt{1 - 3v^2 \pm 2qv^3}}\;, \label{eq:Ecirceq}\\ \hat L_z &= \pm \sqrt{rM}\frac{1 \mp 2qv^3 + q^2v^4}{\sqrt{1 - 3v^2 \pm 2qv^3}}\;, \label{eq:Lzcirceq}\\ \hat Q &= 0\;. \end{align} We have introduced $v = \sqrt{M/r}$ (equivalently $r = M/v^2$) and $q = a/M$. Where there is a choice, the upper sign is for prograde orbits ($I = 0^\circ$) and the lower is for retrograde ($I = 180^\circ$). The small body's background 4-velocity is given by $\hat u_\alpha = (-\hat E, 0, 0, \hat L_z)$. The small body's spin 4-vector is given by \begin{equation} S_\alpha =s_\parallel\mu^2 e_{3\alpha} = (0, 0, \mp r s_\parallel \mu^2, 0)\;.
\end{equation} This result comes from the fact that for an equatorial circular orbit \cite{vandeMeent2019}, \begin{align} e_{3\alpha} &= \left(0,0,-r\frac{(\hat L_z - a\hat E)}{|\hat L_z - a\hat E|},0\right) \nonumber\\ &= \left(0,0,\mp r,0\right)\;. \end{align} If the orbit is prograde and $s_\parallel > 0$, or the orbit is retrograde and $s_\parallel < 0$, then the small body's spin points in the direction of decreasing $\theta$; the spin points in the direction of increasing $\theta$ if the relative signs are reversed. Let us examine Eq.\ (\ref{eq:mp1linear}) for this case. Using Eq.\ (\ref{eq:4vellinear}), we start by expanding the covariant derivative: \begin{align} \frac{Du^\alpha}{d\tau} &= (\hat u^\beta + u^\beta_S)\nabla_\beta\left(\hat u^\alpha + u^\alpha_S\right) \nonumber\\ &= \frac{d\hat u^\alpha}{d\tau} + \frac{du^\alpha_S}{d\tau} + {\Gamma^\alpha}_{\beta\gamma}\hat u^\beta\hat u^\gamma + 2{\Gamma^\alpha}_{\beta\gamma}\hat u^\beta u^\gamma_S + \mathcal{O}(S^2) \nonumber\\ &= \frac{du^\alpha_S}{d\tau} + 2{\Gamma^\alpha}_{\beta\gamma}\hat u^\beta u^\gamma_S\;. \label{eq:expand4velderiv} \end{align} Here, ${\Gamma^\alpha}_{\beta\gamma}$ is the Christoffel connection for the Kerr geometry evaluated along the orbit. In going from the second line to the third line, we used the fact that $\hat u^\alpha$ solves the geodesic equation, and we linearized in $S$. We also used the fact that, for this orbit, the spinning body remains confined to the equatorial plane $\theta = \pi/2$ at radius $r$. For the misaligned case we consider next, the orbit oscillates in the polar direction, and there is a correction term that involves $\partial_\theta{\Gamma^\alpha}_{\beta\gamma}$. Requiring the spinning body's orbit to be circular and equatorial means that \begin{equation} u^r_S = u^\theta_S = 0\;. \end{equation} Further, the requirement that $u^\alpha_S\hat u_\alpha = 0$ tells us that \begin{equation} u^t_S = \frac{\hat L_z}{\hat E}u^\phi_S\;. \label{eq:utScirceq} \end{equation} The only unique component we must determine is thus $u^\phi_S$. Note that we must have $du^\phi_S/d\tau = 0$. If we observe the system in a frame that co-rotates with the orbit, it appears static; the symmetries of the spin-curvature coupling in this case do not introduce any dynamics. Combining Eqs.\ (\ref{eq:mp1linear}) and (\ref{eq:expand4velderiv}) with $u^r_S = u^\theta_S = 0 = du^\phi_S/d\tau$, we find that the equation governing the spin correction to the small body's orbital velocity is \begin{equation} 2{\Gamma^r}_{\beta\gamma}\hat u^\beta u^\gamma_S = -\frac{1}{2\mu}{R^r}_{\nu\lambda\sigma}\hat u^\nu S^{\lambda\sigma}\;; \label{eq:mp2circeqparallel} \end{equation} all other components of this equation vanish. Expanding the right-hand and left-hand sides of (\ref{eq:mp2circeqparallel}), we find \begin{widetext} \begin{align} 2{\Gamma^r}_{\beta\gamma}\hat u^\beta u^\gamma_S &= \mp\frac{2v\sqrt{1 - 3v^2 \pm 2qv^3}(1 - 2v^2 + q^2v^4)u^\phi_S}{1 - 2v^2 \pm qv^3}\;, \\ -\frac{1}{2\mu}{R^r}_{\nu\lambda\sigma}\hat u^\nu S^{\lambda\sigma} &= \frac{3s_\parallel\mu}{M^2}\frac{v^7(1 \mp q v)(1 - 2v^2 + q^2v^4)}{1 - 3v^2 \pm 2qv^3}\;. \end{align} \end{widetext} Using this to evaluate Eq.\ (\ref{eq:mp2circeqparallel}) yields \begin{equation} u^\phi_S = \mp\frac{3s_\parallel\mu}{2M^2}\frac{v^6(1 \mp q v)(1 - 2v^2 \pm q v^3)}{(1 - 3v^2 \pm 2qv^3)^{3/2}}\;. \label{eq:uphiScirceqalign} \end{equation} Using Eq.\ (\ref{eq:utScirceq}), this in turn yields a simple result for $u^t_S$.
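It is a useful sanity check to take the Schwarzschild limit of this result. Setting $q = 0$ in Eq.\ (\ref{eq:uphiScirceqalign}) gives \begin{equation} u^\phi_S = \mp\frac{3s_\parallel\mu}{2M^2}\frac{v^6\left(1 - 2v^2\right)}{\left(1 - 3v^2\right)^{3/2}}\;, \end{equation} which is manifestly $\mathcal{O}(S)$ and which, like the background quantities (\ref{eq:Ecirceq}) and (\ref{eq:Lzcirceq}), diverges as the orbit approaches the light ring at $v^2 = 1/3$ ($r = 3M$).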
An observationally important aspect of this solution is its influence on the system's orbital frequency. Using \begin{equation} \Omega_\phi = \frac{u^\phi}{u^t} = \frac{\hat u^\phi + u^\phi_S}{\hat u^t + u^t_S}\;,\label{eq:Omegaphicirc} \end{equation} expanding in $S$, using $\hat\Omega_\phi = \hat u^\phi/\hat u^t$, and finally defining $\Omega_\phi = \hat\Omega_\phi + \delta\Omega_\phi$, we find the correction to the frequency due to the spin-curvature force: \begin{equation} \delta\Omega_\phi = \hat\Omega_\phi\left(\frac{u^\phi_S}{\hat u^\phi} - \frac{u^t_S}{\hat u^t}\right)\;. \label{eq:deltaOmPhicirceq} \end{equation} For circular and equatorial orbits, \begin{equation} \hat\Omega_\phi = \pm\frac{v^3}{M(1 \pm q v^3)}\;. \label{eq:OmPhicirceq} \end{equation} Combining these various results, we find the shift to the axial frequency: \begin{equation} \delta\Omega_\phi = \mp\frac{3s_\parallel}{2M}\frac{\mu}{M}\frac{(1 \mp qv)}{(1 \pm q v^3)^2}v^6\;. \label{eq:deltaOmPhicirceqalign} \end{equation} This agrees exactly with Eq.\ (4.26) in Ref.\ \cite{Tanaka1996}. The orbiting body's energy, axial angular momentum, and Carter constant are also shifted. Combining Eqs.\ (\ref{eq:Espin}), (\ref{eq:Lspin}), (\ref{eq:deltaEspin}), and (\ref{eq:deltaLspin}) with the results in this section and using Eqs.\ (\ref{eq:deltaKspin2}) and (\ref{eq:deltaQspin}), we find \begin{widetext} \begin{align} \delta E^S &= -\frac{s_\parallel}{2}\frac{\mu}{M}\frac{(1 \mp qv)(1 \mp 4qv^3 + 3q^2v^4)}{(1 - 3v^2 \pm 2qv^3)^{3/2}}v^5\;, \label{eq:deltaEScirceqaligned}\\ \delta L^S_z &= \pm\frac{s_\parallel\mu}{2}\frac{(2 - 13v^2 + 18v^4) \pm 3q(3 - 7v^2)v^3 + 2q^2(1 + 2v^2)v^6 \pm q^3(3 - 7v^2)v^7 + 3q^4v^{10}}{(1 - 3v^2 \pm 2qv^3)^{3/2}}\;, \label{eq:deltaLzScirceqaligned}\\ \delta K^S &= s_\parallel\mu \frac{(2 - 13v^2 + 18v^4) \mp 2qv(2 - 17v^2 + 28v^4) - q^2v^4(17 - 45v^2) \mp 6q^3v^7 - 3q^4v^8}{v(1 - 3v^2 \pm 2qv^3)^2}\;, \label{eq:deltaKScirceqaligned}\\ \delta Q^S &= \mp 2 s_\parallel\mu a\;. \label{eq:deltaQScirceqaligned} \end{align} \end{widetext} These expressions for the conserved quantities $\delta E^S$ and $\delta L_z^S$ match exactly with Eqs.\ (\ref{eq:EscircSchw}) and (\ref{eq:LSscircSchw}) derived using the alternative approach outlined in Appendix \ref{sec:Saijocomparison}. It is interesting that there is a non-zero $\delta Q^S$ even though there is no change to the polar motion of the small body in this case. We note that Witzany has provided a modified definition of $\delta Q^S$ (see text near Eq.\ (48) of Ref.\ \cite{Witzany2019_2}) such that it is zero for cases in which there is no polar motion; we are likely to adopt this definition in future work. In any case, our result agrees with that reported in Ref.\ \cite{Tanaka1996}, after translating the somewhat different notation. \subsection{Misaligned spin} \label{subsec:circeqmisalign} Now consider the small body's spin misaligned from the orbit. The background 4-velocity and integrals of motion are identical to those used in Sec.\ \ref{subsec:circeqalign}, but the small body's spin becomes \begin{equation} S_\alpha = \mu^2\bigl(s_\perp\cos\phi_s\,e_{1\alpha} + s_\perp\sin\phi_s\,e_{2\alpha} + s_\parallel\,e_{3\alpha}\bigr)\;. \label{eq:Smisalign1} \end{equation} We have broken the spin into a component parallel to the orbit (out of the orbital plane) with magnitude $s_\parallel$, and into components normal to the orbit (in the orbital plane) with magnitude $s_\perp$.
The angle $\phi_s$ describes the orientation of the spin components normal to the orbit. Setting $s = \sqrt{s_\perp^2 + s_\parallel^2}$, we require $0 \le s \le 1$. Using (\ref{eq:tetradleg1}) and (\ref{eq:tetradleg2}), Eq.\ (\ref{eq:Smisalign1}) can be rewritten \begin{align} S_\alpha &= \mu^2\biggl[s_\perp\Bigl(\cos(\phi_s + \psi_p)\tilde{e}_{1\alpha} + \sin(\phi_s + \psi_p)\tilde{e}_{2\alpha}\Bigr) \nonumber\\ &\qquad + s_\parallel e_{3\alpha}\biggr]\;, \label{eq:Smisalign2} \end{align} where $\psi_p$ is the precession phase, which grows with time. The tetrad leg $e_{3\alpha}$ is the same as in Sec.\ \ref{subsec:circeqalign}. Continuing to use the parameterization $q \equiv a/M$, $v = \sqrt{M/r}$, the tetrad legs $\tilde{e}_{1\alpha}$ and $\tilde{e}_{2\alpha}$ are given by \begin{widetext} \begin{align} \tilde{e}_{1\alpha} &= \left(0, \frac{1}{\sqrt{1 - 2v^2 + q^2v^4}}, 0, 0\right)\;, \label{eq:tildee1_circeq} \\ \tilde{e}_{2\alpha} &= \left(v\sqrt{\frac{1 - 2v^2 + q^2v^4}{1 - 3v^2 \pm 2qv^3}}, 0, 0,\mp r(1 \pm qv^3)\sqrt{\frac{1 - 2v^2 + q^2v^4}{1 - 3v^2 \pm 2qv^3}}\right)\;. \label{eq:tildee2_circeq} \end{align} \end{widetext} For circular and equatorial orbits, the precession phase $\psi_p$ can be written as a function of Mino-time $\lambda$, proper time $\tau$, or Boyer-Lindquist time $t$: \begin{equation} \psi_p = \Upsilon_s\lambda = \omega_s\tau = \Omega_s t\;, \end{equation} with \begin{align} \Upsilon_s = \sqrt{rM} &= M/v \;,\qquad\omega_s = \sqrt{M/r^3} = v^3/M\;, \nonumber\\ \Omega_s &= \omega_s\frac{\sqrt{1 - 3v^2 \pm 2 qv^3}}{(1 \pm q v^3)}\;. \end{align} This limiting form for $\Upsilon_s$ was found in Ref.\ \cite{Ruangsri2016}, and is confirmed by the general expression derived in Ref.\ \cite{vandeMeent2019}. The factor $\Sigma$ which converts from Mino-time frequencies to proper-time frequencies takes the constant value $r^2$ for circular and equatorial orbits; likewise, the factor \begin{equation} \Gamma = r^2\frac{1 \pm q v^3}{\sqrt{1 - 3v^2 \pm 2 qv^3}} \end{equation} which converts between Mino-time frequencies and coordinate-time quantities is constant for circular and equatorial orbits. To proceed, we again examine Eq.\ (\ref{eq:mp1linear}) and use Eq.\ (\ref{eq:thetaparamfirst}), i.e., $\theta=\pi/2+\delta\vartheta_S$, with $\delta\vartheta_S=\mathcal{O}(S)$. Expanding the covariant derivative yields a slightly different result from what we found in the aligned case: \begin{align} \frac{Du^\alpha}{d\tau} &= (\hat u^\beta + u^\beta_S)\nabla_\beta\left(\hat u^\alpha + u^\alpha_S\right) \nonumber\\ &= \frac{d\hat u^\alpha}{d\tau} + \frac{du^\alpha_S}{d\tau} + {\Gamma^\alpha}_{\beta\gamma}\hat u^\beta\hat u^\gamma + \delta\vartheta_S\partial_\theta{\Gamma^\alpha}_{\beta\gamma}\hat u^\beta \hat u^\gamma \nonumber\\ &\qquad + 2{\Gamma^\alpha}_{\beta\gamma}\hat u^\beta u^\gamma_S + \mathcal{O}(S^2) \nonumber\\ &= \frac{du^\alpha_S}{d\tau} + \delta\vartheta_S\partial_\theta{\Gamma^\alpha}_{\beta\gamma}\hat u^\beta \hat u^\gamma + 2{\Gamma^\alpha}_{\beta\gamma}\hat u^\beta u^\gamma_S\;. \label{eq:expand4velderiv_thetaoscillate} \end{align} The misaligned spin causes the small body to oscillate about the equatorial plane by $\delta\vartheta_S$. This shifts the connection term at $\mathcal{O}(S)$, leading to the term in $\partial_\theta{\Gamma^\alpha}_{\beta\gamma}$.
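As a quick numerical check of these frequency relations, the following Python fragment is a sketch of ours (not published code); it assumes units $G = c = M = 1$:

\begin{verbatim}
# Sketch: spin-precession frequencies for a circular equatorial orbit,
# units G = c = M = 1, with q = a/M and r in units of M.
import math

def precession_frequencies(r, q, prograde=True):
    sgn = 1.0 if prograde else -1.0
    v = math.sqrt(1.0 / r)
    Upsilon_s = 1.0 / v                  # Mino time: sqrt(r M) = M/v
    omega_s = v**3                       # proper time: sqrt(M/r^3)
    Omega_s = omega_s * math.sqrt(1 - 3*v**2 + sgn*2*q*v**3) \
              / (1 + sgn*q*v**3)         # Boyer-Lindquist time
    return Upsilon_s, omega_s, Omega_s

Us, ws, Os = precession_frequencies(r=10.0, q=0.9)
# Consistency check: omega_s = Upsilon_s / Sigma with Sigma = r^2.
print(Us, ws, Os, abs(ws - Us / 10.0**2) < 1e-12)
\end{verbatim}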
Expanding the covariant derivatives and Riemann components of Eq.\ (\ref{eq:mp1linear}) for this case, making use of Eq.\ (\ref{eq:expand4velderiv_thetaoscillate}), we find \begin{widetext} \begin{align} \frac{du^r_S}{d\tau} &\mp \frac{2v(1 - 2v^2 + q^2v^4)\sqrt{1 - 3v^2 \pm 2qv^3}}{1 - 2v^2 \pm qv^3}u^\phi_S = \frac{3s_\parallel\mu}{M^2}\frac{v^7(1\mp qv)(1 - 2v^2 + q^2v^4)}{1 - 3v^2 \pm 2qv^3}\;, \label{eq:circeqprec_rcomp}\\ \frac{du^\phi_S}{d\tau} &+ \frac{2v^5(1 - 2v^2 \pm qv^3)}{M^2(1 - 2v^2 + q^2v^4)\sqrt{1 - 3v^2 \pm 2qv^3}}u^r_S = 0\;, \label{eq:circeqprec_phicomp} \end{align} \end{widetext} for the equations governing $u^r_S$ and $u^\phi_S$. Notice that these equations do not couple to the precessing orbit's polar motion. Notice also that since $\hat u^r = \hat u^\theta = 0$, Eq.\ (\ref{eq:utScirceq}) holds for the misaligned case, and we do not need a separate equation governing $u^t_S$. We require the orbit to remain circular, so we put $u^r_S = 0 = du^r_S/d\tau$. This allows us to immediately solve Eq.\ (\ref{eq:circeqprec_rcomp}): \begin{equation} u^\phi_S = \mp\frac{3s_\parallel\mu}{2M^2}\frac{v^6(1 \mp q v)(1 - 2v^2 \pm q v^3)}{(1 - 3v^2 \pm 2qv^3)^{3/2}}\;. \label{eq:uphiScirceqmisalign} \end{equation} Since this does not vary with time, Eq.\ (\ref{eq:circeqprec_phicomp}) is also satisfied. Equation (\ref{eq:uphiScirceqmisalign}) is identical to the result we found in the spin-aligned case, Eq.\ (\ref{eq:uphiScirceqalign}). Our solution for $u^t_S$ is likewise identical to its aligned counterpart. From this it follows that Eq.\ (\ref{eq:deltaOmPhicirceqalign}) describes the change to the orbital frequency in this case as well. The polar motion for this misaligned case requires more attention. As stated above, we put $\theta = \pi/2 + \delta\vartheta_S$, where $\delta\vartheta_S$ denotes the spin-induced polar motion about the equatorial plane. Because $\hat u^\theta = 0$, we put $u^\theta = u^\theta_S = d\delta\vartheta_S/d\tau$. The polar component of Eq.\ (\ref{eq:mp1linear}) thus becomes \begin{widetext} \begin{equation} \frac{d^2\delta\vartheta_S}{d\tau^2} + \frac{v^6}{M^2}\frac{(1 \mp 4qv^3 + 3 q^2v^4)}{(1 - 3v^2 \pm 2qv^3)}\delta\vartheta_S = -\frac{3s_\perp\mu}{M^3}\frac{v^9(1 \mp qv)\sqrt{1 - 2v^2 + q^2v^4}}{1 - 3v^2 \pm 2qv^3}\cos(\phi_s + \psi_p)\;. \label{eq:circeqprec_thetacomp} \end{equation} The coefficient of $\delta\vartheta_S$ on the left-hand side of Eq.\ (\ref{eq:circeqprec_thetacomp}) is the square of the polar proper-time frequency for circular equatorial geodesic orbits, which we denote $\omega_\theta$.
The solution to Eq.\ (\ref{eq:circeqprec_thetacomp}) has the form \begin{align} \delta\vartheta_S=A(\tau)\sin(\omega_\theta \tau)+B(\tau)\cos(\omega_\theta \tau)\;, \end{align} where \begin{align} \omega_\theta = \frac{v^3}{M}\sqrt{\frac{1 \mp 4qv^3 + 3q^2v^4}{1-3v^2\pm2qv^3}}\;, \label{eq:omegatheta} \end{align} and where $A(\tau)$ and $B(\tau)$ are given by \begin{align} A(\tau) = c_1 - \frac{3s_\perp\mu}{2M^2}\frac{v^{6}(1 \mp qv)}{\sqrt{1 - 3v^2 \pm 2qv^3}}\sqrt{\frac{1 - 2v^2 + q^2v^4}{1 \mp 4qv^3 + 3q^2v^4}}\left[\frac{\sin(\phi_s + (\omega_s - \omega_\theta)\tau)}{\omega_s - \omega_\theta} + \frac{\sin(\phi_s + (\omega_s + \omega_\theta)\tau)}{{\omega_s + \omega_\theta}}\right]\;, \label{eq:Atau_circ}\\ B(\tau) = c_2 + \frac{3s_\perp\mu}{2M^2}\frac{v^{6}(1 \mp qv)}{\sqrt{1 - 3v^2 \pm 2qv^3}}\sqrt{\frac{1 - 2v^2 + q^2v^4}{1 \mp 4qv^3 + 3q^2v^4}}\left[\frac{\cos(\phi_s + (\omega_s - \omega_\theta)\tau)}{\omega_s - \omega_\theta} - \frac{\cos(\phi_s + (\omega_s + \omega_\theta)\tau)}{{\omega_s + \omega_\theta}}\right]\;. \label{eq:Btau_circ} \end{align} \end{widetext} The constants $c_1$ and $c_2$ must be determined by matching to the initial conditions $\delta\vartheta_S|_{\tau=0}$ and $u^\theta_S|_{\tau=0}$. The precession of the small body's spin as it orbits the black hole causes the orbital plane to likewise precess. Note that the frequency combination $\omega_s - \omega_\theta$ never passes through zero anywhere over the domain of allowed orbits. As such, the functions $A(\tau)$ and $B(\tau)$ defined in Eqs.\ (\ref{eq:Atau_circ}) and (\ref{eq:Btau_circ}) are well behaved everywhere. The changes to the integrals of motion we find are identical to those in the aligned case, Eqs.\ (\ref{eq:deltaEScirceqaligned}) -- (\ref{eq:deltaQScirceqaligned}). The fact that the changes $\delta E^S$ and $\delta L^S_z$ are identical is consistent with other patterns that this analysis uncovered. However, the fact that $\delta Q^S$ is identical --- in particular, that $\delta Q^S$ is insensitive to $s_\perp$ --- is somewhat surprising, since the small body does in fact move in the polar direction when the spin and orbit are misaligned. The precession of the smaller body's spin nonetheless keeps the orbit equatorial on average, which appears to be sufficient for $Q$ to take its equatorial value. This again is consistent with results found in Ref.\ \cite{Tanaka1996}. \section{Spinning-body orbits II:\\ Slightly eccentric, nearly equatorial orbits} \label{sec:slightlyecc} Slightly eccentric equatorial orbits are simple enough that, by expanding in both eccentricity $e$ and spin $s$, we can develop and present mostly closed-form results for this case. In our discussion below, we show leading-order results, $\mathcal{O}(e,s)$, for orbits of bodies with general spin orientation in the Kerr spacetime. We go to higher order, $\mathcal{O}(e^2,s)$ for Schwarzschild only, confining ourselves to the case of small body spin aligned with the orbit. Though no issue of principle prevents us from developing a more generic analysis at higher order, the formulas describing Kerr orbits become cumbersome as we go to higher order in $e$. As we will see below, our leading-order analysis is sufficient for us to understand the impact of misaligned spin on spinning-body orbital dynamics. The results in the aligned spin section, Sec.\ \ref{sec:secondorderine}, can be obtained using an alternative method we describe in Appendix \ref{sec:Saijocomparison}. 
This method is discussed in Refs.\ \cite{1976Tod, Saijo1998, Hackmann2014}, and involves using conserved quantities $E^S$, $L_z^S$, $\mu^2$ and $S^2$ to develop an effective potential for the radial motion. \subsection{General principles} \label{sec:genprinciples} In this section and in what follows, we switch from using proper time $\tau$ to Mino time $\lambda$ for our parameterization of these orbits. This switch is not necessary for equatorial or nearly equatorial orbits, but will be necessary for the generic cases that we study in a companion paper. Using this parameterization now allows us to set up the calculation in this framework, and to examine the form of the solutions which emerge in this relatively simple limit. The governing equation for the orbits is Eq.\ (\ref{eq:mp1linear}), which we repeat here and use to define the spin-curvature force $f^\alpha_S$: \begin{equation} \frac{Du^\alpha}{d\tau} = -\frac{1}{2\mu}{R^\alpha}_{\nu\lambda\sigma}u^\nu S^{\lambda\sigma} \equiv f^\alpha_S/\mu\;. \end{equation} Expanding the covariant derivative, this becomes \begin{equation} \frac{du^\alpha}{d\tau} + {\Gamma^\alpha}_{\beta\gamma} u^\beta u^\gamma = f^\alpha_S/\mu\;, \label{eq:forcedgeodesic1} \end{equation} where ${\Gamma^\alpha}_{\beta\gamma}$ is the Christoffel connection for the Kerr spacetime, evaluated along the orbit. Let us define \begin{equation} U^\alpha \equiv \frac{dx^\alpha}{d\lambda} = \Sigma u^\alpha\;; \label{eq:Udef} \end{equation} this follows from $u^\alpha = dx^\alpha/d\tau$, as well as the definition of Mino-time: $d/d\lambda = \Sigma d/d\tau$. From (\ref{eq:Udef}), it follows that \begin{equation} \frac{du^\alpha}{d\lambda} = \frac{1}{\Sigma}\frac{dU^\alpha}{d\lambda} - \frac{U^\alpha}{\Sigma^2}\frac{d\Sigma}{d\lambda}\;. \label{eq:dudlambda} \end{equation} Next multiply (\ref{eq:forcedgeodesic1}) by $\Sigma^2$. Doing so and using Eq.\ (\ref{eq:dudlambda}), we put the equation which governs spinning-body orbits into the form \begin{equation} \frac{dU^\alpha}{d\lambda} - \frac{U^\alpha}{\Sigma}\frac{d\Sigma}{d\lambda} + {\Gamma^\alpha}_{\beta\gamma}U^\beta U^\gamma = \Sigma^2 f^\alpha_S/\mu\;. \label{eq:forcedgeodesic2} \end{equation} Note that in general, \begin{equation} \frac{d\Sigma}{d\lambda} = 2r\,U^r - 2a^2\cos\theta\sin\theta\,U^\theta\;. \label{eq:dSigmadlambda} \end{equation} For the equatorial and nearly equatorial orbits which are our focus in this section, the second term in (\ref{eq:dSigmadlambda}) is $\mathcal{O}(S^2)$, which we neglect. The factor $(1/\Sigma)d\Sigma/d\lambda$ in Eq.\ (\ref{eq:forcedgeodesic2}) becomes $2U^r/r$. For misaligned orbits, the orbiting body oscillates about the equatorial plane, just as we discussed for the circular misaligned case in Sec.\ \ref{subsec:circeqmisalign}. Setting the polar angle to $\theta = \pi/2 + \delta\vartheta_S$, with $\delta\vartheta_S = \mathcal{O}(S)$, the connection term in Eq.\ (\ref{eq:forcedgeodesic2}) becomes \begin{align} {\Gamma^\alpha}_{\beta\gamma}U^\beta U^\gamma &= \left({\Gamma^\alpha}_{\beta\gamma}\right)_{\theta = \pi/2}U^\beta U^\gamma \nonumber\\ &+ \delta\vartheta_S\left(\partial_\theta{\Gamma^\alpha}_{\beta\gamma}\right)_{\theta = \pi/2}\hat U^\beta \hat U^\gamma\;. \end{align} Notice that it is the geodesic 4-velocity $\hat U^\beta$ that appears in the term with the derivative of the connection. Because $\delta\vartheta_S$ is itself $\mathcal{O}(S)$, contributions from the non-geodesic parts of $U^\beta$ enter this term at $\mathcal{O}(S^2)$ or higher.
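For orientation, the structure of Eq.\ (\ref{eq:forcedgeodesic2}) can be rendered in code. The fragment below is a schematic Python sketch of ours, in which \texttt{Gamma\_func} and \texttt{force\_func} are hypothetical placeholders for the Kerr connection and the spin-curvature force, and the component ordering $(t,r,\theta,\phi)$ is our assumption:

\begin{verbatim}
# Sketch: Mino-time form of the forced geodesic equation,
# Eq. (forcedgeodesic2), for nearly equatorial orbits where
# (1/Sigma) dSigma/dlambda = 2 U^r / r up to O(S^2).
import numpy as np

def dU_dlambda(U, r, Sigma, Gamma_func, force_func):
    """U : 4-vector dx/dlambda, ordered (t, r, theta, phi).

    Gamma_func(r) returns Gamma[alpha, beta, gamma] along the orbit;
    force_func(r, U) returns the spin-curvature force f^alpha_S / mu.
    Both are placeholders, not part of the paper's formalism.
    """
    Gamma = Gamma_func(r)
    f = force_func(r, U)
    connection = np.einsum('abc,b,c->a', Gamma, U, U)
    drag = (2.0 * U[1] / r) * U    # the (U/Sigma) dSigma/dlambda term
    return drag - connection + Sigma**2 * f
\end{verbatim}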
Let us write the small body's spin in the form \begin{align} S_\alpha &= \mu^2\biggl[s_\perp\Bigl(\cos(\phi_s + \psi_p)\tilde{e}_{1\alpha} + \sin(\phi_s + \psi_p)\tilde{e}_{2\alpha}\Bigr) \nonumber\\ &\qquad + s_\parallel e_{3\alpha}\biggr]\;, \label{eq:Smisalign_form1}\\ &= \left(s_\perp \mu^2 \sigma_t, s_\perp \mu^2 \sigma_r, \mp s_\parallel\mu^2r, s_\perp\mu^2 \sigma_\phi\right)\;. \label{eq:Smisalign_form2} \end{align} Both the precession phase $\psi_p$ and the tetrad elements $\tilde{e}_{1\alpha}$ and $\tilde{e}_{2\alpha}$ are more complicated than they were in the circular limit; we defer discussion of their detailed forms until they are needed later in our analysis. The form (\ref{eq:Smisalign_form2}) is a useful rewriting of (\ref{eq:Smisalign_form1}); the components $\sigma_{t,r,\phi}$ can be read out of $\tilde{e}_{1\alpha}$ and $\tilde{e}_{2\alpha}$. With everything in place, it is now not difficult to evaluate all the terms appearing in Eq.\ (\ref{eq:forcedgeodesic2}) and write out the equations governing the small body's 4-velocity $U^\alpha$. First, we write out the equations for $U^t$, $U^r$, and $U^\phi$. \begin{widetext} \begin{align} \frac{dU^t}{d\lambda} &- \frac{2U^r\left[\left(r^3 - 3Mr^2 +a^2(r-M)\right)U^r + aM(3r^2 + a^2)U^\phi\right]}{r^2\Delta} = \frac{3s_\parallel\mu(\hat L_z - a\hat E)M(r^2 + a^2)\hat U^r}{r^2\Delta}\;, \label{eq:forcet_eqeccgen}\\ \frac{dU^r}{d\lambda} &+ \frac{\Delta\left[M(U^t - a U^\phi)^2 - r^3(U^\phi)^2\right]}{r^4} - \frac{(2r^2 - 3Mr - a^2)(U^r)^2}{r\Delta} = \frac{3s_\parallel\mu(\hat L_z - a\hat E)M\left[\hat E(r^2 + a^2) - a\hat L_z\right]}{r^2}\;, \label{eq:forcer_eqeccgen}\\ \frac{dU^\phi}{d\lambda} &+ \frac{2U^r\left[aMU^r + (r^3 - 2Mr^2 - a^2M)U^\phi\right]}{r^2\Delta} = \frac{3as_\parallel\mu(\hat L_z - a\hat E)M\hat U^r}{r^2\Delta}\;. \label{eq:forcephi_eqeccgen} \end{align} No term involving $\delta\vartheta_S$ enters these equations at $\mathcal{O}(S)$. Indeed, note that the equations for $U^t$, $U^r$, and $U^\phi$ are completely independent of $U^\theta$ at this order. We can therefore solve $U^{t,r,\phi}$ independently from our solution for $U^\theta$. It is worth remarking that Eqs.\ (\ref{eq:forcet_eqeccgen}) and (\ref{eq:forcephi_eqeccgen}) turn out to simplify further by converting them to equations for $u_t$ and $u_\phi$. Doing so by converting from $U^{t,\phi}$ to $u^{t,\phi}$, lowering an index, and then using $u_t = -\hat{E} + u_t^S$, $u_\phi = \hat{L}_z + u_\phi^S$, where $u^S_{t,\phi} = \mathcal{O}(S)$, we find \begin{align} \frac{du^S_t}{d\lambda} &= -\frac{3s_{\parallel}\mu (\hat L_z - a\hat{E})M\hat{U}^r}{r^4}\;, \label{eq:forcet_eqeccgen_v2} \\ \frac{du^S_\phi}{d\lambda} &= \frac{3as_{\parallel}\mu (\hat L_z - a\hat{E})M\hat{U}^r}{r^4}\;. \label{eq:forcephi_eqeccgen_v2} \end{align} Solving Eqs.\ (\ref{eq:forcet_eqeccgen_v2}) and (\ref{eq:forcephi_eqeccgen_v2}) is equivalent to solving (\ref{eq:forcet_eqeccgen}) and (\ref{eq:forcephi_eqeccgen}), respectively.
Finally, the equation we find for $U^\theta$ is \begin{align} &\frac{dU^\theta}{d\lambda}+\frac{2a^4 r \hat E^2 - 4 a^3r \hat E \hat L_z + (r - 2M) r^3\hat L_z^2 + a^2(2r^3 \hat E^2 + 2r \hat L_z^2 - (\hat U^r)^2)}{r^2\Delta}\delta\vartheta_S \nonumber\\&= -\frac{3s_\perp\mu(\hat L_z - a\hat E)M}{r^3\Delta}\left(\sigma_t(r^2+a^2)\hat U^r + \sigma_r\left[\hat E(r^2 + a^2) - a\hat L_z\right]\Delta + \sigma_\phi a \hat U^r\right)\;. \label{eq:forcetheta_eqeccgen} \end{align} \end{widetext} Notice that $dU^\theta/d\lambda$ only couples to $s_\perp$, and $dU^{t,r,\phi}/d\lambda$ only couple to $s_\parallel$. Notice further that we have not yet introduced an expansion in eccentricity. This means that for {\it all} nearly equatorial orbits, the small body's motion in the equatorial plane is totally decoupled from its out-of-plane dynamics. For equatorial and nearly equatorial orbits, we take the small body to move on a trajectory whose radial motion is given by \begin{equation} r = \frac{pM}{1 + e\cos\chi_r}\;. \label{eq:rparam} \end{equation} We introduce here the orbit's semi-latus rectum $p$ and eccentricity $e$, as well as the radial true anomaly $\chi_r$. This anomaly can be written \begin{equation} \chi_r = w_r + \delta\chi_r\;,\label{eq:chir} \end{equation} where $w_r$ is the radial {\it mean anomaly}. The difference between the radial mean and true anomalies, $\delta\chi_r$, is an oscillatory function whose mean value is zero. In the Mino-time parameterization, $w_r = \Upsilon_r\lambda$. As discussed in Sec.\ \ref{sec:kerrgeodesics}, the parameterization (\ref{eq:rparam}) is used extensively in studies of geodesic motion. As we will show, it works perfectly for nearly equatorial orbits of spinning bodies as well. This form does not work so well for generic orbits of spinning bodies; for general orbit inclination, we need to allow the radial libration region to oscillate as the orbit precesses. This case is discussed in the companion analysis, Ref.\ \cite{Paper2}. We now solve for the orbit by introducing simultaneous expansions in the small body's spin and the orbit's eccentricity $e$. By requiring that Eqs.\ (\ref{eq:forcet_eqeccgen}) -- (\ref{eq:forcephi_eqeccgen}) hold order by order, we construct a full solution for the small body's motion to that order in our expansion. \subsection{Leading order in eccentricity} \label{sec:leadingorderine} We begin by considering Kerr orbits at $\mathcal{O}(e,s)$. In this limit, it suffices to put $\chi_r = w_r = \Upsilon_r\lambda = (\hat\Upsilon_r + \Upsilon^S_r)\lambda$. [Although there is a linear-in-eccentricity correction to $\chi_r$, its impact on the small body's motion enters at $\mathcal{O}(e^2)$.] To first order in $e$, the radial motion of the small body is thus given by \begin{equation} r = pM\left(1 - e\cos w_r\right) = pM\left[1 - \frac{e}{2}\left(e^{iw_r} + e^{-iw_r}\right)\right]\;. \end{equation} The second form proves to be particularly useful for our purposes. Our goal is to compute how the spin-curvature interaction affects all of the important parameters of our system. Just as in our study of circular and equatorial orbits, we assume that the constants of the motion take the form $\mathcal{X}^S = \hat{\mathcal{X}} + \delta\mathcal{X}^S$ (with $\mathcal{X} \in [E, L_z, K, Q]$), and that \begin{align} \Upsilon_r &= \hat\Upsilon_r + \Upsilon^S_r\;, \\ \Upsilon_\phi &= \hat\Upsilon_\phi + \Upsilon^S_\phi\;, \\ \Gamma &= \hat\Gamma + \Gamma^S\;. \end{align} First consider just the leading-order geodesic motion.
The integrals of motion are \begin{align} \hat E &= \frac{1 - 2v^2 \pm qv^3}{\sqrt{1 - 3v^2 \pm 2qv^3}} + \mathcal{O}(e^2)\;, \label{eq:Ecirceq2}\\ \hat L_z &= \pm \frac{M}{v}\sqrt{\frac{1 \mp 2qv^3 + q^2v^4}{1 - 3v^2 \pm 2qv^3}}+ \mathcal{O}(e^2)\;, \label{eq:Lzcirceq2}\\ \hat Q &= 0\;. \end{align} As before, $q \equiv a/M$, but now we have $v = \sqrt{1/p}$. We also have \begin{align} \hat\Upsilon_r &= \frac{M}{v}\sqrt{\frac{1 - 6v^2 \pm 8qv^3 - 3q^2v^4}{1 - 3v^2 \pm 2 qv^3}} + \mathcal{O}(e^2)\;, \\ \hat\Upsilon_\phi &= \pm \frac{M}{v}\sqrt{\frac{1}{1 - 3v^2 \pm 2qv^3}} + \mathcal{O}(e^2)\;, \\ \hat\Gamma &= \frac{M^2(1 \pm qv^3)}{v^4\sqrt{1 - 3v^2 \pm 2qv^3}} + \mathcal{O}(e^2)\;. \end{align} Let us first consider the components which describe the in-plane orbital motion, $U^{t,r,\phi}$. We write these components \begin{align} U^t &= U^t_0 + s_{\parallel}e\left(U^t_{-1}e^{iw_r} + U^t_{+1}e^{-iw_r}\right)\;, \label{eq:kerrecceqtime1}\\ U^\phi &= U^\phi_0 + s_{\parallel}e\left(U^\phi_{-1}e^{iw_r} + U^\phi_{+1}e^{-iw_r}\right)\;, \label{eq:kerrecceqaxial1}\\ U^r &= \frac{dr}{d\lambda} = -\frac{iepM}{2}\left(\hat\Upsilon_r + \Upsilon_r^S\right)\left(e^{iw_r} - e^{-iw_r}\right)\;. \label{eq:kerreccradial1} \end{align} In our assumed form of $U^r$, we used the fact that for small eccentricity equatorial orbits, $dw_r/d\lambda = \hat\Upsilon_r + \Upsilon_r^S$. We next insert Eqs.\ (\ref{eq:kerrecceqtime1}), (\ref{eq:kerrecceqaxial1}), and (\ref{eq:kerreccradial1}) into Eqs.\ (\ref{eq:forcet_eqeccgen}), (\ref{eq:forcer_eqeccgen}), and (\ref{eq:forcephi_eqeccgen}), also enforcing the constraint (\ref{eq:udotu}) in order to solve at each order in $s$ and $e$. This exercise yields \begin{widetext} \begin{align} U^t_0 &= \frac{M^2(1 \pm qv^3)}{v^4\sqrt{1 - 3v^2 \pm 2qv^3}} \mp \left(\frac{3s_\parallel\mu}{2}\right)\frac{Mv(1 \mp qv)(1 \mp 2qv^3 + q^2v^4)}{(1 - 3v^2 \pm 2qv^3)^{3/2}}\;, \label{eq:kerrecc_timelikesol_0freq}\\ U^t_{-1} &= U^t_{+1} = \mp\left(\frac{3s_\parallel\mu}{2}\right)\frac{qMv^4(1 \mp qv)^2(1 \mp 2qv^3 + q^2 v^4)}{(1 - 2v^2 + q^2v^4)(1 - 3v^2 \pm 2qv^3)^{3/2}}\;, \label{eq:kerrecc_timelikesol_1freq}\\ U^\phi_0 &= \pm \frac{M}{v}\sqrt{\frac{1 }{1 - 3v^2 \pm 2qv^3}} - \left(\frac{3s_\parallel\mu}{2}\right)\frac{v^2(1 \mp qv)(1 - 2v^2 \pm qv^3)}{(1 - 3v^2 \pm 2qv^3)^{3/2}}\;, \label{eq:kerrecc_axialsol_zerofreq}\\ U^\phi_{-1} &= U^\phi_{+1} = -\left(\frac{3s_\parallel\mu}{2}\right)\frac{qv^5(1 - 2v^2 \pm qv^3)}{(1 - 2v^2 + q^2v^4)(1 - 3v^2 \pm 2qv^3)^{3/2}}\;, \label{eq:kerrecc_axialsol_1freq}\\ \Upsilon^S_r &= \left(\frac{3s_\parallel\mu}{2}\right)\frac{v^2(1 \mp qv)\left(1 - 2v^2 \mp qv^3(5 - 14v^2) + 5v^4q^2(1 - 4v^2) \pm 7q^3v^7\right)}{(1 - 3v^2 \pm 2qv^3)^{3/2}\sqrt{1 - 6v^2 \pm 8qv^3 - 3q^2v^4}}\;. \label{eq:kerrlinecc_UpsilonS_r} \end{align} Eq.\ (\ref{eq:kerrlinecc_UpsilonS_r}) matches the expression Eq.\ (\ref{eq:UpsilonrSKerrexactine}) derived using the exact-in-$e$ approach discussed in Appendix \ref{sec:Saijocomparison}. The integrals of the motion for these orbits are identical to those we found in the circular case, Eqs.\ (\ref{eq:deltaEScirceqaligned}) -- (\ref{eq:deltaQScirceqaligned}), but with $v = \sqrt{1/p}$. \begin{figure*} \centerline{\includegraphics[scale=0.58]{Fig1.png}} \caption{Example of the spin contribution $\Upsilon_r^S$ to the radial Mino-time frequency $\Upsilon_r$.
Left panel shows $\Upsilon_r^S$ to leading order in $e$ as a function of semi-latus rectum $p$ and spin parameter $a$ for prograde orbits ($I = 0^\circ$); see Eq.\ (\ref{eq:kerrlinecc_UpsilonS_r}). Right panel shows $\Upsilon_r^S$ to second order in $e$ for Schwarzschild black hole orbits ($a = 0$) as a function of $p$ and $e$. In both cases, the last stable orbit is indicated by the black dashed line. \label{fig:contourmap}} \end{figure*} Turn now to the out-of-plane motion. To make progress here, we first must more completely describe the tetrad elements. They take the form \begin{align} \tilde{e}_{1\alpha} &= \tilde{e}_{1\alpha}^0 + e\,\tilde{e}_{1\alpha}^1\;, \\ \tilde{e}_{2\alpha} &= \tilde{e}_{2\alpha}^0 + e\,\tilde{e}_{2\alpha}^1\;. \end{align} The terms $\tilde{e}_{1\alpha}^0$ and $\tilde{e}_{2\alpha}^0$ are exactly as defined in Eqs.\ (\ref{eq:tildee1_circeq}) and (\ref{eq:tildee2_circeq}), but with $v = \sqrt{1/p}$ rather than $v = \sqrt{M/r}$. The eccentricity corrections are given by \begin{align} \tilde{e}_{1\alpha}^1 &= \left(-\frac{v^2}{M}\sqrt{\frac{1 - 3v^2 \pm 2qv^3}{1 - 2v^2 + q^2v^4}}\hat\Upsilon_r\sin w_r, \frac{v^2(1 - q^2v^2)}{(1 - 2v^2 + q^2v^4)^{3/2}}\cos w_r, 0, qv^2\sqrt{\frac{1 - 3v^2 \pm 2qv^3}{1 - 2v^2 + q^2v^4}}\hat\Upsilon_r\sin w_r\right)\;, \label{eq:etilde1correction} \\ \tilde{e}_{2\alpha}^1 &= \left(v\sqrt{\frac{1 - 3v^2 \pm 2qv^3}{1 - 2v^2 + q^2v^4}}\cos w_r, -\frac{v^3(1 \mp qv)}{(1 - 2v^2 + q^2v^4)^{3/2}}\hat\Upsilon_r\sin w_r, 0, pM(1 \mp qv^3)\sqrt{\frac{1 - 3v^2 \pm 2qv^3}{1 - 2v^2 + q^2v^4}}\cos w_r\right)\;. \label{eq:etilde2correction} \end{align} We used $dw_r/d\lambda = \hat\Upsilon_r$ rather than $dw_r/d\lambda = \Upsilon_r = \hat \Upsilon_r + \Upsilon^S_r$ because these tetrad elements are used to build the spin vector $S_\alpha$; any contribution from $\Upsilon^S_r$ is at $\mathcal{O}(S^2)$. \end{widetext} To complete our description of the out-of-plane motion, we first note that because $\hat U^\theta = 0$, \begin{equation} U^\theta = \Sigma u^\theta = \Sigma \frac{d\delta\vartheta_S}{d\tau} = \frac{d\delta\vartheta_S}{d\lambda}\;, \end{equation} and so \begin{equation} \frac{dU^\theta}{d\lambda} = \frac{d^2\delta\vartheta_S}{d\lambda^2}\;. \end{equation} Using this in Eq.\ (\ref{eq:forcetheta_eqeccgen}), along with Eqs.\ (\ref{eq:Ecirceq2}), (\ref{eq:Lzcirceq2}), and (\ref{eq:kerreccradial1}) for $\hat E$, $\hat L_z$, and $\hat U^r$, and finally using Eqs.\ (\ref{eq:etilde1correction}) and (\ref{eq:etilde2correction}) to work out the components $\sigma_t$, $\sigma_r$, and $\sigma_\phi$ yields \begin{equation} \frac{d^2\delta\vartheta_S}{d\lambda^2} + \Upsilon_\theta^2 \delta\vartheta_S = F_S^\theta(\lambda)\;,\label{eq:deltathetade} \end{equation} where \begin{equation} \Upsilon_\theta =\frac{M}{v}\sqrt{\frac{1\mp4qv^3+3q^2v^4}{1-3v^2\pm2qv^3}}, \end{equation} is the Mino-time polar frequency for nearly equatorial circular orbits, and where the forcing term is given by \begin{widetext} \begin{align} F^\theta_S(\lambda) &= \mp 3s_\perp\mu M\frac{v(1\mp qv)\left[1 - 2v^2 + q^2v^4 + e\left(1 - v^2 \mp 2qv^3 + 2q^2v^4\right)\cos w_r\right]}{(1 - 3v^2 \pm 2qv^3)\sqrt{1 - 2v^2 + q^2v^4}}\cos(\phi_s + \psi_p) \nonumber\\ & \equiv 3s_\perp\mu M(\alpha_1 + e\alpha_2\cos w_r)\cos(\phi_s + \psi_p)\;.
\label{eq:dUthetadlambda1} \end{align} \end{widetext} For notational convenience, we have introduced \begin{align} \alpha_1 &= \mp\frac{v(1 \mp qv)\sqrt{1 - 2v^2 + q^2v^4}}{(1 - 3v^2 \pm 2qv^3)}\;, \\ \alpha_2 &= \mp\frac{v(1 \mp qv)(1 - v^2 \mp 2qv^3 + 2q^2v^4)}{(1 - 3v^2 \pm 2qv^3)\sqrt{1 - 2v^2 + q^2v^4}}\;. \end{align} For eccentric equatorial orbits, the precession phase takes the form \begin{equation} \psi_p = \Upsilon_s\lambda + \psi_r\;, \end{equation} where \begin{equation} \Upsilon_s = M\sqrt{p} + \mathcal{O}(e^2) = \frac{M}{v} + \mathcal{O}(e^2)\;, \end{equation} and where $\psi_r$ is a contribution to the precession phase that varies along the orbit's radial motion. Van de Meent \cite{vandeMeent2019} provides a general expression for $\psi_r$; for small eccentricity, this expression reduces to \begin{align} \psi_r &= -\frac{2ev^2(1 \mp qv)^2}{(1 - 2v^2 + q^2v^4)}\sqrt{\frac{1 - 3v^2 \pm 2qv^3}{1 - 6v^2 \pm 8qv^3 - 3q^2v^4}}\sin w_r \nonumber\\ &\equiv e\varpi(q, v)\sin w_r\;. \end{align} Note that $\psi_r \propto ev^2$, and so by definition $\psi_r$ is a small quantity in the small eccentricity limit. This allows us to usefully expand $\cos(\phi_s + \psi_p)$: \begin{widetext} \begin{align} \cos(\phi_s + \psi_p) &= \cos(\phi_s + \Upsilon_s\lambda + e\varpi\sin w_r) \nonumber\\ &= \cos(\phi_s + \Upsilon_s\lambda)\cos(e\varpi\sin w_r) - \sin(\phi_s + \Upsilon_s\lambda)\sin(e\varpi\sin w_r) \nonumber\\ & \simeq \cos(\phi_s+\Upsilon_s\lambda) - e\varpi\sin w_r\sin(\phi_s+\Upsilon_s\lambda)\;. \label{eq:cospsi_p_linecc} \end{align} Combining Eqs.\ (\ref{eq:dUthetadlambda1}) and (\ref{eq:cospsi_p_linecc}), and then linearizing in $e$ yields \begin{equation} F^\theta_S(\lambda) = 3s_\perp\mu M\Bigl\{\alpha_1\cos(\phi_s + \Upsilon_s\lambda) + e\Bigl[\alpha_2\cos w_r\cos(\phi_s + \Upsilon_s\lambda) - \alpha_1\varpi\sin w_r\sin(\phi_s + \Upsilon_s\lambda)\Bigr]\Bigr\}\;. 
\end{equation} As in Sec.\ \ref{subsec:circeqmisalign}, we use variation of constants to solve Eq.\ (\ref{eq:deltathetade}), yielding \begin{align} \delta\vartheta_S=A(\lambda) \cos(\Upsilon_\theta\lambda)+B(\lambda)\sin(\Upsilon_\theta\lambda)\;, \label{eq:Utheta_linecc} \end{align} where \begin{align} A(\lambda)&=c_1-\frac{3\mu M s_{\perp}}{8 \Upsilon_\theta } \biggl[\frac{4 \alpha_1 \cos (\lambda (\Upsilon_s-\Upsilon_\theta ))}{\Upsilon_s-\Upsilon_\theta }-\frac{4 \alpha_1 \cos (\lambda (\Upsilon_\theta +\Upsilon_s))}{\Upsilon_\theta +\Upsilon_s}+\frac{2 e (\alpha_2 - \alpha_1\varpi) \cos (\lambda (-\Upsilon_\theta +\Upsilon_r-\Upsilon_s))}{-\Upsilon_\theta +\hat\Upsilon_r-\Upsilon_s}\nonumber\\&+\frac{2e (\alpha_2 + \alpha_1\varpi) \cos (\lambda (-\Upsilon_\theta +\Upsilon_r+\Upsilon_s))}{-\Upsilon_\theta +\hat\Upsilon_r+\Upsilon_s}-\frac{2e(\alpha_2 -\alpha_1\varpi) \cos (\lambda (\Upsilon_\theta +\Upsilon_r-\Upsilon_s))}{\Upsilon_\theta +\hat\Upsilon_r-\Upsilon_s}\nonumber\\&-\frac{2e(\alpha_2 + \alpha_1 \varpi) \cos (\lambda (\Upsilon_\theta +\Upsilon_r+\Upsilon_s))}{\Upsilon_\theta +\hat\Upsilon_r+\Upsilon_s}\biggr]\;, \label{eq:Alambda}\\ B(\lambda)&=c_2+\frac{3\mu M s_\perp}{8 \Upsilon_\theta } \biggl[\frac{4 \alpha_1 \sin (\lambda (\Upsilon_s-\Upsilon_\theta ))}{\Upsilon_s-\Upsilon_\theta }+\frac{4 \alpha_1 \sin (\lambda (\Upsilon_\theta +\Upsilon_s))}{\Upsilon_\theta +\Upsilon_s}+\frac{2e(\alpha_2 -\alpha_1\varpi) \sin (\lambda (-\Upsilon_\theta +\Upsilon_r-\Upsilon_s))}{-\Upsilon_\theta +\hat\Upsilon_r-\Upsilon_s} \nonumber\\ &+\frac{2e(\alpha_2 + \alpha_1\varpi) \sin (\lambda (-\Upsilon_\theta +\Upsilon_r+\Upsilon_s))}{-\Upsilon_\theta +\hat\Upsilon_r+\Upsilon_s}+\frac{2e(\alpha_2 - \alpha_1\varpi) \sin (\lambda (\Upsilon_\theta +\Upsilon_r-\Upsilon_s))}{\Upsilon_\theta +\hat\Upsilon_r-\Upsilon_s} \nonumber\\ &+\frac{2e(\alpha_2 + \alpha_1\varpi) \sin (\lambda (\Upsilon_\theta +\Upsilon_r+\Upsilon_s))}{\Upsilon_\theta +\hat\Upsilon_r+\Upsilon_s}\biggr]\;. \label{eq:Blambda} \end{align} \end{widetext} We have put $\phi_s = 0$ here for simplicity. Notice that the total radial frequency $\Upsilon_r$ appears inside the sine and cosine functions, but the {\it geodesic} radial frequency $\hat\Upsilon_r$ appears outside these functions in these solutions. This is because $A(\lambda)$ and $B(\lambda)$ are used to build the $\mathcal{O}(S)$ out-of-plane precessional motion of the small body, and $\Upsilon_r = \hat\Upsilon_r + \mathcal{O}(S)$. Using $\Upsilon_r$ instead of $\hat\Upsilon_r$ outside of the sines and cosines would affect the solution at $\mathcal{O}(S^2)$, and we neglect terms at this order. It is important to note that the combination $\hat\Upsilon_r + \Upsilon_s - \Upsilon_\theta$ can pass through zero. For example, when $a = 0$, this occurs for orbits that have $v = \sqrt{(2\sqrt{3} - 3)/3}$, for which $p \simeq 6.464$; for $a = M$, this occurs for orbits that have $v = (1/2)\left(\pm 1/\sqrt{3} + \sqrt{1/3 + 2/\sqrt{3}}\right)$, for which $p \simeq 1.238$ (prograde) and $p \simeq 9.690$ (retrograde). The general case smoothly connects these limiting forms as a function of $a$. At least naively, Eq.\ (\ref{eq:Alambda}) appears to be poorly behaved at such ``resonant'' orbits, with certain terms diverging as this combination of frequencies passes through zero. It is not difficult to show, however, that the combination $\alpha_2 + \alpha_1\varpi$ passes through zero at exactly the same orbits for which $\hat\Upsilon_r + \Upsilon_s = \Upsilon_\theta$.
Such resonances thus have no dynamical impact on the system. This is consistent with recent work \cite{Witzany2019_2,Zelenka2020} which shows that spinning-body orbits are integrable at leading order in the smaller body's spin. Equations (\ref{eq:Utheta_linecc}), (\ref{eq:Alambda}) and (\ref{eq:Blambda}) show that the out-of-plane motion of the small body depends on $s_\perp$, is uncoupled from the in-plane motion, and is periodic, with structure at harmonics of the precession frequency $\Upsilon_s$, the radial frequency $\Upsilon_r$, and the polar frequency $\Upsilon_\theta$. As we consider more general configurations, we expect qualitatively similar behavior. We thus design our algorithm for describing the small body's orbital motion in the general case in order to capture such behavior. \subsection{Next order in eccentricity} \label{sec:secondorderine} As our final ``simple'' case, we examine equatorial and eccentric orbits to second order in eccentricity. To keep the expressions relatively simple, we do this only for orbits of Schwarzschild black holes, and only examine the spin-aligned case. As we saw for the equatorial and nearly equatorial orbits discussed in the previous section, non-aligned small body spin decouples from all components of the body's orbit except the out-of-plane motion component $U^\theta$, which is itself decoupled from the aligned spin and from all other components of the orbital motion. Focusing on the Schwarzschild limit of aligned spin orbits will be sufficient for us to develop a strategy for solving for this motion to high precision for more generic cases. The two most important changes versus our previous analyses are that, as it will turn out, we need to know many quantities describing geodesics to {\it fourth} order in $e$ in order to compute corrections to the orbits of spinning bodies; and that we need a more complete accounting of the difference between the true anomaly $\chi_r$ and the mean anomaly $w_r \equiv \Upsilon_r\lambda$. The need to go to fourth order in $e$ may be somewhat surprising. The reason is that the radial velocity introduces a factor $e$; certain terms in the analysis which scale with $\hat U^r \hat U_r$ or $\hat U^r U^S_r$ have their order in eccentricity ``boosted'' by a factor of $e^2$. To describe the true anomaly, we generalize a functional form that is well known from studies of Keplerian orbits, writing \begin{align} \chi_r &= w_r + \left[e\left(\beta_{11} + \beta^S_{11}\right) + e^3\left(\beta_{31} + \beta^S_{31}\right)\right]\sin w_r\nonumber\\ &+ e^2\left(\beta_{22} + \beta^S_{22}\right)\sin2w_r + e^3\left(\beta_{33} + \beta^S_{33}\right)\sin3w_r \nonumber\\ &\equiv w_r + \delta\hat{\chi}_r + \delta\chi^S_r\;. \label{eq:anomaly_2ndorderecc} \end{align} The quantity $\delta\hat{\chi}_r$ stands for all the oscillatory geodesic terms (i.e., the terms with $\beta_{ab}$) that take us from the mean anomaly to the true anomaly. The quantity $\delta\chi^S_r$ stands for the equivalent terms which arise from spin-curvature coupling (the terms with $\beta^S_{ab}$).
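As a complement to the expansion (\ref{eq:anomaly_2ndorderecc}), the following Python fragment is a small illustrative sketch of ours (not the authors' code); the $\beta_{ab}$ and $\beta^S_{ab}$ coefficients, listed just below, are passed in as arguments, and units are $G = c = 1$:

\begin{verbatim}
# Sketch: true anomaly of Eq. (anomaly_2ndorderecc) built from the
# mean anomaly w_r; each beta may include its spin piece beta^S.
import math

def true_anomaly(w_r, e, beta11, beta22, beta31, beta33):
    """chi_r = w_r + (e b11 + e^3 b31) sin w_r
               + e^2 b22 sin 2 w_r + e^3 b33 sin 3 w_r."""
    return (w_r
            + (e*beta11 + e**3*beta31) * math.sin(w_r)
            + e**2*beta22 * math.sin(2*w_r)
            + e**3*beta33 * math.sin(3*w_r))

def radius(p, e, chi_r, M=1.0):
    """Radial coordinate of Eq. (rparam)."""
    return p*M / (1.0 + e*math.cos(chi_r))
\end{verbatim}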
Other quantities we need are the integrals of the motion and the radial frequency: \begin{align} \hat E &= \frac{1 - 2v^2}{\sqrt{1 - 3v^2}} + \frac{e^2v^2}{2}\frac{(1 - 4v^2)^2}{(1 - 2v^2)(1 - 3v^2)^{3/2}} \nonumber\\ &+ \frac{e^4v^4}{8}\frac{(1-4v^2)^2(3-8v^2)}{(1-2v^2)^3(1-3v^2)^{5/2}}\;,\label{eq:Ehat4thorder} \\ \hat L_z &= M\sqrt{p}\biggl(\frac{1}{\sqrt{1 - 3v^2}} + \frac{e^2v^2}{2}\frac{1}{(1 - 3v^2)^{3/2}}\nonumber\\ &+ \frac{3e^4v^4}{8}\frac{1}{(1 - 3v^2)^{5/2}}\biggr)\;,\label{eq:Lzhat4thorder} \\ \hat\Upsilon_r &= M\sqrt{p}\biggl(\sqrt{\frac{1 - 6v^2}{1 - 3v^2}} + \frac{e^2v^2}{4}\frac{(1 - 9v^2)(2 - 9v^2)}{(1 - 3v^2)^{3/2}(1 - 6v^2)^{3/2}}\nonumber\\ &+ \frac{3e^4v^4}{64}\frac{[8 - 25v^2(1 - 3v^2)(8 - 49v^2 + 147v^4)]}{(1 - 3v^2)^{5/2}(1 - 6v^2)^{7/2}}\biggr)\;. \end{align} The true anomaly (\ref{eq:anomaly_2ndorderecc}), coupled with the form $r = p/(1 + e\cos\chi_r)$, suffices to fully describe the radial motion. Turn next to the small body's motion in $t$ and $\phi$. We parameterize this motion using the 4-velocity components \begin{align} u_t = -\hat E + u^S_t\;, \nonumber\\ u_\phi = \hat L_z + u^S_\phi\;\label{eq:uphiut}. \end{align} Raising the index and multiplying by $\Sigma = r^2$, these components can be easily converted to the forms $U^{t,\phi}$. We assume that the spin corrections to these 4-velocity components take the form \begin{equation} u^S_t = \sum_{n = -3}^3 u^s_{t,n} e^{-inw_r}\;,\quad u^S_\phi = \sum_{n = -3}^3 u^s_{\phi, n} e^{-inw_r}\;. \end{equation} We generically find that $u^s_{(t,\phi),n} \propto e^{|n|}$. We find that we don't have enough information to pin down these components for $|n| > 3$; presumably we need to describe the geodesic motion to higher order in order to do this. We solve for the various unknown quantities we have introduced by enforcing Eqs.\ (\ref{eq:forcet_eqeccgen}) -- (\ref{eq:forcephi_eqeccgen}) and the constraint (\ref{eq:udotu}), and then gathering terms in spin and eccentricity. Terms at order $(s_\parallel)^0$ are geodesic, and can be used to find the coefficients which make $\delta\hat{\chi}_r$, defined in Eq.\ (\ref{eq:anomaly_2ndorderecc}): \begin{align} \beta_{11} &= -\frac{v^2}{1 - 6v^2}\;, \\ \beta_{22} &= \frac{v^4}{8(1 - 6v^2)^2}\;, \\ \beta_{31} &= -\frac{19v^6}{16(1 - 6v^2)^3}\;, \\ \beta_{33} &= -\frac{v^6}{48(1 - 6v^2)^3}\;. \end{align} Turn now to various aspects of the solution at order $s_\parallel$. First, we find the following coefficients which define $\delta\chi^S_r$: \begin{align} \beta^S_{11} &= \frac{s_\parallel\mu}{M} v^3\frac{(1 - 2v^2)}{(1 - 6v^2)^2}\;, \\ \beta^S_{22} &= -\frac{s_\parallel\mu}{4M}\frac{v^5(1 - 2v^2)}{(1 - 6v^2)^3}\;, \\ \beta^S_{31} &= \frac{s_\parallel\mu}{16M}\frac{v^7(25 + 156v^2 - 924v^4)}{(1 - 2v^2)(1 - 6v^2)^4}\;, \\ \beta^S_{33} &= \frac{s_\parallel\mu}{16M}\frac{v^7(1 - 2v^2)}{(1 - 6v^2)^4}\;.
\end{align} We next find the terms which define $u^S_\phi$ and $u^S_t$: \begin{widetext} \begin{align} u^S_\phi &= -s_\parallel\mu\left(\frac{3v^2}{2}\frac{(1 - 2v^2)}{(1 - 3v^2)^{3/2}} + e^2\frac{v^2}{4}\frac{(2 - 5v^2 - 16v^4 + 48v^6)}{(1 - 2v^2)(1 - 3v^2)^{5/2}}\right)\;, \\ u^S_t &= \frac{s_\parallel\mu}{M}\biggl(\frac{3v^5}{2}\frac{(1 - 2v^2)}{(1 - 3v^2)^{3/2}} + \frac{3ev^5\cos w_r}{(1 - 3v^2)^{1/2}} + \frac{e^2v^5}{4}\frac{\left[2 - 25v^2 + 126v^4 + 234v^6 + 6(1 - 3v^2)^2(1 - 7v^2)\cos2w_r\right]}{(1 - 6v^2)(1 - 3v^2)^{5/2}} \nonumber\\ &+ \frac{e^3v^5}{8}\frac{\cos w_r\left[4 - 24v^2 - 81v^4 + 459v^6 + (4 - 84v^2 + 513v^4 - 891v^6)\cos2w_r\right]}{(1 - 6v^2)^2(1 - 3v^2)^{3/2}}\biggr)\;. \end{align} Finally, we compute the shift to the radial frequency due to spin-curvature coupling: \begin{equation} \Upsilon_r^S = \frac{3s_\parallel\mu}{2}\left(\frac{v^2(1 - 2v^2)}{(1 - 3v^2)^{3/2}\sqrt{1 - 6v^2}} - \frac{e^2v^2}{12}\frac{(4 - 106v^2 + 985v^4 - 4275v^6 + 8928v^8 - 7452v^{10})}{(1 - 2v^2)(1 - 3v^2)^{5/2}(1 - 6v^2)^{5/2}}\right)\;.\label{eq:UpsilonrS2ndine} \end{equation} Neglecting the terms in $e^2$, this is consistent with the result we found previously, Eq.\ (\ref{eq:kerrlinecc_UpsilonS_r}) in the limit $q \to 0$. In addition, Eq.\ (\ref{eq:UpsilonrS2ndine}) agrees exactly with the $\Upsilon_r^S$ in Eq.\ (\ref{eq:UpsilonrsexactineSchw}) obtained using the approach presented in Ref.\ \cite{Saijo1998}; see Appendix \ref{sec:Saijocomparison} for details of this comparison. Several other important quantities can be derived from what we computed here. Two that are particularly important are the axial frequency $\Upsilon_\phi$, and the quantity $\Gamma$ which converts from Mino-time frequencies and periods to coordinate-time frequencies and periods. As discussed in Sec.\ \ref{subsec:geodesicsfreqdom}, the axial frequency $\Upsilon_\phi$ is the orbit average of $U^\phi$: \begin{equation} \Upsilon_\phi = \frac{1}{2\pi}\int_0^{2\pi}U^\phi(w_r)dw_r\;. \end{equation} Using $U^\phi = \Sigma g^{\phi\phi}u_\phi$, we find \begin{equation} \Upsilon_\phi = \frac{M}{v\sqrt{1 - 3v^2}}\left(1 + \frac{e^2v^2}{2(1 - 3v^2)} - \frac{3s_\parallel v^2}{2}\frac{1 - 2v^2}{1 - 3v^2} - \frac{s_\parallel e^2v^3}{4}\frac{(2 - 5v^2 - 16v^4 + 48v^6)}{(1 - 2v^2)(1 - 3v^2)^2}\right)\;. \end{equation} Likewise, $\Gamma$ is found by orbit averaging $U^t = \Sigma g^{tt}u_t$: \begin{align} \Gamma &= \frac{M^2}{v^4\sqrt{1 - 3v^2}}\biggl(1 + \frac{e^2}{2}\frac{(3 - 38v^2 + 148v^4 - 186v^6)}{(1 - 11v^2 + 36v^4 - 36v^6)} \nonumber\\ & - \frac{3s_\parallel v^5}{2}\frac{1}{(1 - 3v^2)} - \frac{s_\parallel e^2 v^3}{4}\frac{(4 - 43v^2 + 160v^4 - 186 v^6 - 144v^8 + 216v^{10})}{(1 - 2v^2)(1 - 3v^2)^2(1 - 6v^2)^2}\biggr)\;. \end{align} With these quantities in hand, it is straightforward to compute $\Omega_{r,\phi} = \Upsilon_{r,\phi}/\Gamma$. Finally, the shifts to the conserved integrals due to the spin-curvature interaction become \begin{align} \delta E^S &= -\frac{s_\parallel \mu v^5}{2M(1 - 3v^2)^{3/2}}\left(1 - e^2\frac{(4 - 15v^2)}{2(1 - 3v^2)}\right)\;,\label{eq:deltaEs2ndine} \\ \delta L^S_z &= \frac{s_\parallel\mu(2 - 13v^2 + 18v^4)}{2(1 - 3v^2)^{3/2}}\left(1 - \frac{e^2v^4}{2}\frac{(17 - 96v^2 + 144v^4)}{(1 - 2v^2)^2(1 - 3v^2)(2 - 9v^2)}\right)\;.\label{eq:deltaLzs2ndine} \end{align} All of these quantities agree with Eqs.\ (\ref{eq:ESexactineSchw}) and (\ref{eq:LzSexactineSchw}) which were obtained using the exact-in-eccentricity approach outlined in Appendix \ref{sec:Saijocomparison}.
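To make these closed-form results easy to check numerically, the following Python fragment (our illustrative sketch, units $G = c = M = 1$; the function name is ours) evaluates Eqs.\ (\ref{eq:UpsilonrS2ndine}), (\ref{eq:deltaEs2ndine}), and (\ref{eq:deltaLzs2ndine}):

\begin{verbatim}
# Sketch: O(e^2) Schwarzschild, spin-aligned corrections; v = 1/sqrt(p).
import math

def schw_spin_corrections(p, e, s_par, mu):
    v = 1.0 / math.sqrt(p)
    v2 = v*v
    # Eq. (UpsilonrS2ndine): spin shift to the radial Mino-time frequency.
    UrS = 1.5*s_par*mu * (v2*(1 - 2*v2)
          / ((1 - 3*v2)**1.5 * math.sqrt(1 - 6*v2))
          - (e**2 * v2 / 12.0)
            * (4 - 106*v2 + 985*v2**2 - 4275*v2**3
               + 8928*v2**4 - 7452*v2**5)
            / ((1 - 2*v2) * (1 - 3*v2)**2.5 * (1 - 6*v2)**2.5))
    # Eqs. (deltaEs2ndine) and (deltaLzs2ndine).
    dES = -(s_par*mu*v**5) / (2*(1 - 3*v2)**1.5) \
          * (1 - e**2 * (4 - 15*v2) / (2*(1 - 3*v2)))
    dLzS = (s_par*mu*(2 - 13*v2 + 18*v2**2)) / (2*(1 - 3*v2)**1.5) \
           * (1 - (e**2 * v2**2 / 2) * (17 - 96*v2 + 144*v2**2)
              / ((1 - 2*v2)**2 * (1 - 3*v2) * (2 - 9*v2)))
    return UrS, dES, dLzS

print(schw_spin_corrections(p=10.0, e=0.3, s_par=1.0, mu=1e-5))
\end{verbatim}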
\section{Spinning-body orbits III: Frequency-domain treatment} \label{sec:spinbodyfreqdom} We now consider nearly equatorial orbits with \textit{arbitrary} eccentricity, using a frequency-domain treatment of the spinning body's motion. As described in Sec.\ \ref{sec:ParallelTransport}, the spin of the small body introduces the precession frequency $\Upsilon_{s}$ into the analysis. The small body's spin also shifts the orbital frequencies by amounts of $\mathcal{O}(S)$, which we denote $\Upsilon^S_r$ and $\Upsilon^S_{\theta}$. Functions evaluated on a spinning body's orbit can thus be written as a Mino-time Fourier expansion in terms of frequencies $\Upsilon_r = \hat{\Upsilon}_{r}+ \Upsilon^S_r$, $\Upsilon_\theta = \hat{\Upsilon}_{\theta}+\Upsilon^S_{\theta}$ and $\Upsilon_{s}$: \begin{equation} f(\lambda)=\sum_{j=-1}^{1}\sum_{n,k=-\infty}^{\infty}f_{jnk}e^{-ij\Upsilon_{s}\lambda}e^{-in(\hat\Upsilon_{r}+\Upsilon^S_{r})\lambda}e^{-ik(\hat\Upsilon_{\theta}+\Upsilon^S_{\theta})\lambda}\;. \label{eq:flambdaFourier} \end{equation} The Fourier coefficient $f_{jnk}$ is given by \begin{equation} f_{jnk} = \frac{1}{\Lambda_{r}\Lambda_{\theta}\Lambda_{s}}\int_{0}^{\Lambda_{r}}\int_{0}^{ \Lambda_{\theta}}\int_{0}^{\Lambda_{s}} f\left( \lambda_r,\lambda_\theta,\lambda_s\right)e^{ij\Upsilon_{s}\lambda_s}e^{in(\hat\Upsilon_{r}+\Upsilon^S_{r})\lambda_r}e^{ik(\hat\Upsilon_{\theta}+\Upsilon^S_{\theta})\lambda_\theta}d\lambda_{\theta}d\lambda_{r}d\lambda_{s}\;, \end{equation} where $\Lambda_{r,\theta,s} = 2\pi/\Upsilon_{r,\theta,s}$. By writing all relevant quantities as expansions of this form, we can compute the properties of spinning-body orbits to arbitrary precision, and develop a natural way of computing the frequency shifts $\Upsilon^S_r$ and $\Upsilon^S_{\theta}$. As written, Eq.\ (\ref{eq:flambdaFourier}) is appropriate for generic spinning-body orbits. In this analysis, we examine orbits of arbitrary eccentricity that are equatorial or nearly equatorial; the generic case is developed and presented in a companion analysis \cite{Paper2}. \end{widetext} \subsection{Aligned spin} \label{sec:eqplanealign} We first consider eccentric orbits with the spin of the small body aligned with the orbit. The orbit's geometry in this case is exactly as in Sec.\ \ref{sec:secondorderine}, but we now allow for arbitrary eccentricity. In this case, only radial oscillations are present in the motion, so all orbits can be described using expansions of the form \begin{align} f(\lambda) & =\sum_{n=-\infty}^{\infty}f_n e^{-in (\hat{\Upsilon}_r+\Upsilon_r^S)\lambda} \label{eq:radialexp}\;. \end{align} To evaluate these expressions, we truncate the Fourier expansion at a finite value $n_{\text{max}}$. In Fig.\ \ref{fig:residualplote}, we examine the convergence of important properties of the orbit as we increase $n_{\text{max}}$. These residuals are computed by comparing our frequency-domain expansion for these quantities with an alternate method which is exact in eccentricity, but only applies to the spin-aligned case. This method, which is based on that described by Saijo et al.\ (Ref.\ \cite{Saijo1998}), is described in detail in Appendix \ref{sec:Saijocomparison}. Our results indicate that we can accurately handle large eccentricities (up to at least $e \sim 0.8$) by increasing $n_{\text{max}}$, though larger $e$ requires larger values of $n_{\text{max}}$ in order to meet a prescribed level of truncation error.
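Before specializing further, it may help to indicate schematically how coefficients of the expansion (\ref{eq:radialexp}) can be obtained numerically. The Python fragment below is a sketch of ours, not the implementation used for this paper's figures; it assumes the function being expanded is smooth and periodic in Mino time with period $2\pi/\Upsilon_r$:

\begin{verbatim}
# Sketch: extract the coefficients f_n of Eq. (radialexp) by sampling
# one radial period and using the FFT; n_max sets the truncation.
import numpy as np

def radial_fourier_coeffs(f, Upsilon_r, n_max, n_samp=256):
    """f : callable accepting an array of Mino-time values lambda,
    periodic with period 2 pi / Upsilon_r.  Returns a dict {n: f_n}
    with f(lambda) ~ sum_n f_n exp(-i n Upsilon_r lambda)."""
    Lam = 2*np.pi / Upsilon_r
    lam = np.linspace(0.0, Lam, n_samp, endpoint=False)
    c = np.fft.fft(f(lam)) / n_samp
    # numpy's forward FFT pairs our exp(-i n Upsilon_r lambda) mode
    # with index -n, hence the index flip below.
    coeffs = {0: c[0]}
    for n in range(1, n_max + 1):
        coeffs[-n] = c[n]
        coeffs[n] = c[-n]
    return coeffs
\end{verbatim}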
\begin{figure} \centerline{\includegraphics[scale=0.53]{Fig2.png}} \caption{Plot of residuals versus $n_{\text{max}}$ with $s_{\parallel}=s$ for $u^S_{t,0}$ (orange), $u^S_{\phi,0}$ (blue), $\Upsilon_r^S$ (red). These residuals are computed by comparing our frequency-domain expansion to results found using an approach which, for the spin-aligned case, is exact in eccentricity; see Ref.\ \cite{Saijo1998} and Appendix \ref{sec:Saijocomparison} for detailed discussion. Top panel shows $e=0.3$; middle is $e=0.5$; and bottom is $e=0.7$. In all cases, the large black hole has spin parameter $a = 0.9M$, and the orbit has $p=10$ and $I=0^{\circ}$. \label{fig:residualplote}} \end{figure} As described in Sec.\ \ref{sec:genprinciples}, we parameterize the radial motion as \begin{equation} r = \frac{pM}{1 + e\cos\chi_r}\;.\label{eq:rparam2} \end{equation} This form guarantees that the motion is constrained to the interval $p/(1+e)\leq r \leq p/(1-e)$. As in Eq.\ (\ref{eq:chir}), we write the true anomaly $\chi_r$ in Eq.\ (\ref{eq:rparam2}) as \begin{equation} \chi_r=w_r+\delta\chi_r\;, \end{equation} where $w_r$ is the mean anomaly and $\delta\chi_{r}$ is an oscillating contribution to $\chi_r$. The oscillating contribution in turn has a piece associated with geodesic motion, $\delta \hat{\chi}_r$, and another piece that arises from spin-curvature coupling $\delta \chi^S_r=\mathcal{O}(S)$, \begin{align} \delta\chi_{r} & =\delta \hat{\chi}_r + \delta \chi^S_r\;.\label{eq:deltachir} \end{align} The mean anomaly also has geodesic and spin-curvature contributions: \begin{equation} w_{r} = \left(\hat{\Upsilon}_r + \Upsilon^S_r\right)\lambda\;,\label{eq:meananom} \end{equation} where $\Upsilon^S_r$ is the $\mathcal{O}(S)$-correction to the radial Mino-time frequency. It is useful to write the true anomaly angles $\delta \hat{\chi}_r$ and $\delta \chi^S_r$ as Fourier expansions\footnote{Note that if the function we are Fourier expanding already has a subscript, we use a comma to denote the specific Fourier mode. For example, $\delta \hat\chi_{r,1}$ is the $n=1$ harmonic of function $\delta \hat \chi_r$.}, \begin{align} \delta \hat{\chi}_{r} & =\sum_{n=-\infty}^{\infty}\delta \hat{\chi}_{r,n} e^{-in w_r}\;, \label{eq:deltahatchi} \\ \delta \chi^S_{r} & =\sum_{n=-\infty}^{\infty}\delta \chi^S_{r,n}e^{-in w_r}\;. \label{eq:deltachiS} \end{align} We set $\delta\chi^S_{r,0}=0$; this amounts to a choice of initial true anomaly. Note that the geodesic Fourier coefficients $\delta \hat{\chi}_{r,n} $ are known, as described in Sec.\ \ref{sec:kerrgeodesics}. Observe, however, that $w_r$ includes the frequency correction $\Upsilon_r^S$, meaning that $w_r + \delta \hat{\chi}_{r}$, with $\delta \hat{\chi}_{r}$ given by Eq.\ (\ref{eq:deltahatchi}), is \textit{not} the same as the true anomaly for the corresponding geodesic orbit with the same radial turning points. We treat the non-oscillating part of the spinning body's true anomaly as almost identical to the non-oscillating part of the true anomaly belonging to the geodesic with the same turning points, differing only by an appropriate shift to the orbit's frequency. This cures a pathology associated with the fact that the rate at which the mean anomaly accumulates for geodesic orbits differs at $\mathcal{O}(S)$ from the rate at which it accumulates for spinning-body orbits. This issue is described in more detail in Appendix \ref{sec:secularterms}.
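The following Python fragment (ours, purely illustrative) assembles the radial motion of Eqs.\ (\ref{eq:rparam2})--(\ref{eq:deltachiS}) from a truncated set of Fourier coefficients; the dictionary-based interface is an assumption of this sketch:

\begin{verbatim}
# Sketch: r(lambda) from the mean anomaly of Eq. (meananom) plus
# truncated Fourier series for the geodesic and spin pieces of the
# true anomaly, Eqs. (deltahatchi) and (deltachiS).
import numpy as np

def r_of_lambda(lam, p, e, Ups_r_hat, Ups_r_S, chi_hat_n, chi_S_n, M=1.0):
    """chi_hat_n, chi_S_n : dicts {n: coefficient} for the two pieces
    of delta chi_r; coefficients should satisfy c_{-n} = conj(c_n)."""
    w_r = (Ups_r_hat + Ups_r_S) * lam
    dchi = sum(c * np.exp(-1j*n*w_r) for n, c in chi_hat_n.items())
    dchi += sum(c * np.exp(-1j*n*w_r) for n, c in chi_S_n.items())
    chi_r = w_r + dchi.real   # discard numerical imaginary residue
    return p*M / (1.0 + e*np.cos(chi_r))
\end{verbatim}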
\begin{figure*} \centerline{\includegraphics[scale=0.58]{Fig3.png}} \caption{Example of radial motion for an aligned, spinning body in an equatorial orbit of a Kerr black hole ($a = 0.9M$). Left panel shows $r$ versus $\lambda$ for a geodesic (black dashed) and for a spinning-body orbit (blue solid). These orbits share radial turning points, corresponding to semi-latus rectum $p = 10$, eccentricity $e = 0.5$. Top right panel shows the spinning body's $-u_t^S$ (red), $\partial_{\beta}g_{t\alpha} S^{\alpha\beta}/(2\mu)$ (orange), and $\delta E^S$ (blue) versus $\lambda$. Bottom right panel shows the spinning body's $u_{\phi}^S$ (red), $-\partial_{\beta} g_{\phi\alpha} S^{\alpha\beta}/(2\mu)$ (orange), $\delta L_z^S$ (blue) versus $\lambda$. Notice that the shifts in the integrals of motion $E$ and $L_z$ are constants, even though the terms which contribute to them oscillate. (The oscillations in the terms which contribute to $\delta L_z^S$ are so small they can barely be seen on this plot.) In all cases, the Fourier expansions have been taken to $n_{\rm max} = 8$; for the left panel, we have used $\mu s/M=0.5$. \label{fig:exampleorbitseqparallel}} \end{figure*} As in Eq.\ (\ref{eq:uphiut}), we define the $\mathcal{O}(S)$-corrections to the temporal and axial components of the 4-velocity by \begin{align} u_t=-\hat{E}+ u_t^S\;, \ \ \ \ u_{\phi}=\hat{L}_z+ u_{\phi}^S\;,\label{eq:utuphi} \end{align} where $u_t^S$ and $u_{\phi}^S$ can also be written as Fourier expansions, \begin{align} u_t^S & =\sum_{n=-\infty}^{\infty}u_{t,n}^Se^{-in w_r}\;,\\ u_{\phi}^S &=\sum_{n=-\infty}^{\infty}u_{\phi,n}^Se^{-in w_r}\label{eq:uphis}\;. \end{align} We divide both $u_t^S$ and $u_{\phi}^S$ into a piece that is constant, and a piece that oscillates: \begin{align} u_t^S & =u_{t,0}^S+\delta u_{t}^S(\lambda)\;,\ \ \ u_{\phi}^S =u_{\phi,0}^S+\delta u_{\phi}^S(\lambda)\;. \label{eq:deltautphis} \end{align} We can solve for the oscillating pieces using the $t$- and $\phi$- components of Eq.\ (\ref{eq:mp1linear}). Combining the axial and temporal components yields two equations of the form \begin{align} \frac{d u^S_{\phi}}{d\lambda}= \mathcal{R}_{\phi}\;, \ \ \ \frac{d u^S_{t}}{d\lambda}= \mathcal{R}_{t}\; \label{eq:Reqs}, \end{align} where $\mathcal{R}_{\phi}$ and $\mathcal{R}_t$ are functions of known geodesic quantities. For the equatorial and nearly equatorial cases, Eqs.\ (\ref{eq:Reqs}) are equivalent to Eqs.\ (\ref{eq:forcet_eqeccgen_v2}) -- (\ref{eq:forcephi_eqeccgen_v2}), and we can read out the functions $\mathcal{R}_\phi$ and $\mathcal{R}_t$ from there. The equations in (\ref{eq:Reqs}) allow us to immediately solve for $\delta u^S_t$ and $\delta u^S_{\phi}$. The constants $u^S_{t,0}$ and $u^S_{\phi,0}$ are determined by the system's initial conditions; as described below, we solve for these quantities along with the other unknowns, $\delta \chi_r^S$ and $\Upsilon_r^S$. \begin{widetext} To make further progress, we insert Eqs.\ (\ref{eq:rparam2}) and (\ref{eq:utuphi}) into Eq.\ (\ref{eq:mp1linear}) and linearize in spin.
By gathering in terms of unknown quantities, the radial component of Eq.\ (\ref{eq:mp1linear}) has the form \begin{align} \mathcal{F}_r \frac{d^2 \delta\chi_r^S}{d \lambda^2}+\mathcal{G}_r\frac{d \delta\chi_r^S}{d \lambda}+\mathcal{H}_r \delta\chi_r^S+\mathcal{I}_{1r} \Upsilon_r^S +\mathcal{I}_2 u^S_{t,0} +\mathcal{I}_3 u^S_{\phi, 0}+\mathcal{J}=0\;.\label{eq:radiallinMP} \end{align} In this equation, we have gathered all the terms and functional behavior which are known (i.e., they depend on the behavior of the geodesic with $p$ and $e$) into the functions $\mathcal{F}_r$, $\mathcal{G}_r$, $\mathcal{H}_r$, $\mathcal{I}_{1r}$, $\mathcal{I}_2$, $\mathcal{I}_3$ and $\mathcal{J}$. The explicit expressions for these functions in the Schwarzschild spacetime can be found in Appendix \ref{sec:coefficientfunctions}. For Kerr, the expressions become rather unwieldy. We include a \textit{Mathematica} notebook in the supplementary material which computes the expressions for $a\neq0$. Note that we solved for $\delta u^S_t$ and $\delta u^S_\phi$ when we solved Eqs.\ (\ref{eq:Reqs}); these functions are incorporated into $\mathcal{J}$. We also use $u^{\alpha}u_{\alpha}=-1$ linearized in spin [i.e., Eq. (\ref{eq:udotu})], as an additional constraint. This yields an equation of the form \begin{align} \mathcal{K}_r \frac{d \delta\chi_r^S}{d \lambda}+\mathcal{M}_r \delta\chi_r^S+\mathcal{N}_{1r}\Upsilon_r^S +\mathcal{N}_2 u^S_{t,0} +\mathcal{N}_3u^S_{\phi, 0}+\mathcal{P}=0\;,\label{eq:udotucoeff} \end{align} where $\mathcal{K}_r$, $\mathcal{M}_r$, $\mathcal{N}_{1r}$, $\mathcal{N}_2$, $\mathcal{N}_3$ and $\mathcal{P}$ are again all functions\footnote{The functions $\mathcal{F}_r$, $\mathcal{G}_r$, etc.\ follow a mostly alphabetic sequence; however, we skip the letter $\mathcal{L}$ in our scheme to avoid confusion with the angular momentum 4-vector defined in Eq.\ (\ref{eq:orbangmomdef}).} of known quantities, and are listed in Appendix \ref{sec:coefficientfunctions} for Schwarzschild (with the Kerr versions included in supplemental material). The solutions for $\delta u_t^S$ and $\delta u_{\phi}^S$ are here incorporated into the function $\mathcal{P}$. \end{widetext} To solve for the unknown aspects of the spinning body's orbit, we write $\mathcal{F}_r$, $\mathcal{G}_r$, $\mathcal{H}_r$, $\mathcal{I}_{1r}$, $\mathcal{I}_{2}$, $\mathcal{I}_{3}$, $\mathcal{J}$, $\mathcal{K}_r$, $\mathcal{M}_r$, $\mathcal{N}_{1r}$, $\mathcal{N}_2$, $\mathcal{N}_{3}$ and $\mathcal{P}$ as Fourier expansions of the form shown in Eq.\ (\ref{eq:radialexp}). We insert these expansions, along with Eq.\ (\ref{eq:deltachiS}), into Eqs.\ (\ref{eq:radiallinMP}) and (\ref{eq:udotucoeff}). Evaluating Eqs.\ (\ref{eq:radiallinMP}) and (\ref{eq:udotucoeff}) in the frequency domain, we turn these equations into a system of linear equations which can be expressed in the form \begin{equation} \mathbf{M}\cdot\mathbf{v}+\mathbf{c}=0\;, \end{equation} where $\mathbf{M}$ is a matrix whose entries are related to the Fourier expansions of several of the functions appearing in Eqs.\ (\ref{eq:radiallinMP}) and (\ref{eq:udotucoeff}), and where $\mathbf{c}$ is a column vector whose entries are related to the Fourier expansion of the functions $\mathcal{J}$ and $\mathcal{P}$. The entries of the column vector $\mathbf{v}$ are the problem's various unknown quantities, such as the spin-induced shift in the radial frequency $\Upsilon^S_r$.
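Schematically, and assuming an ordering of the unknowns that is our own convention rather than anything fixed by the formalism, the final linear-algebra step can be sketched in Python as follows:

\begin{verbatim}
# Sketch: the frequency-domain solve.  M_mat and c_vec would be
# filled with Fourier coefficients of the calligraphic functions of
# Eqs. (radiallinMP) and (udotucoeff); only the linear solve and the
# (assumed) unpacking of the unknown vector v are shown here.
import numpy as np

def solve_spin_corrections(M_mat, c_vec, n_max):
    """Solve M v + c = 0 for v = (delta chi^S_{r,n}, Upsilon^S_r,
    u^S_{t,0}, u^S_{phi,0}); the ordering and block sizes below are
    assumptions of this sketch."""
    v = np.linalg.solve(M_mat, -c_vec)
    n_chi = 2*n_max + 1          # harmonics n = -n_max, ..., n_max
    delta_chi_S = v[:n_chi]
    Ups_r_S, u_t0_S, u_phi0_S = v[n_chi:n_chi + 3].real
    return delta_chi_S, Ups_r_S, u_t0_S, u_phi0_S
\end{verbatim}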
As an illustration of this equation's form, we have written out the explicit form of $\mathbf{M}$, $\mathbf{v}$, and $\mathbf{c}$ in Appendix \ref{sec:matrixsystem} for $n_{\text{max}}=1$. Note that this value of $n_{\rm max}$ is far too small to achieve numerical convergence, and is used only for illustrative purposes. The matrix equation is ungainly when written out for realistic values of $n_{\rm max}$, though it poses no difficulties for numerical analysis. We then solve this system of linear equations for the unknown variables $\delta\chi_r^S$, $\Upsilon_r^S$, $u_{\phi,0}^S$ and $u_{t,0}^S$. This yields a complete solution for the motion of the spinning body to first order in spin. When the small body's spin is aligned with the orbit, an alternative method based on Ref.\ \cite{Saijo1998} allows us to calculate $\Upsilon^S_r$ exactly as a function of eccentricity; this method is described in detail in Appendix \ref{sec:Saijocomparison}. Figure \ref{fig:residualplote} shows how $\Upsilon_r^S$, $u_{\phi, 0}^S$ and $u_{t,0}^S$ converge to the exact result as we increase the value of $n_{\text{max}}$. For higher eccentricities, we need to include more harmonics (use a larger value of $n_{\text{max}}$) in order for the solution to converge to the same level of accuracy as for a lower eccentricity orbit. For example, for an eccentricity of $e=0.7$ (bottom panel of Fig.\ \ref{fig:residualplote}) we need $n_{\text{max}}=20$ to obtain the same discrepancy between the exact and frequency-domain result as for $e=0.3$ (top panel of Fig.\ \ref{fig:residualplote}) with $n_{\text{max}}=9$. An example of an aligned spinning body's equatorial orbit is shown in the left panel of Fig.\ \ref{fig:exampleorbitseqparallel}. The geodesic orbit with the same radial turning points is overplotted for comparison. Notice the two ways in which the spinning body's radial motion differs from that of the geodesic. First, the radial frequency is shifted by $\Upsilon_r^S$. This effect can be very clearly seen in Fig.\ \ref{fig:exampleorbitseqparallel}. Second, the shape of the orbit is modified due to the impact of the oscillatory term in the true anomaly $\delta\chi_r^S$. This effect is quite a bit smaller, and is not obvious in the figure for this choice of parameters. In the right panel of Fig.\ \ref{fig:exampleorbitseqparallel}, we show $u_t^S$ and $u_{\phi}^S$, as well as corrections to the spinning body's energy $\delta E^S$ and axial angular momentum $\delta L_z^S$ [using Eqs.\ (\ref{eq:deltaEspin}) and (\ref{eq:deltaLspin})]. As expected, the oscillations in $\partial_{\beta}g_{t\alpha} S^{\alpha\beta}/(2\mu)$ and $\partial_{\beta}g_{\phi\alpha} S^{\alpha\beta}/(2\mu)$ precisely cancel the oscillations in $\delta u_t^S$ and $\delta u_{\phi}^S$; upon summing, $\delta E^S$ and $\delta L_z^S$ are indeed constant. The values for the spinning body's energy and axial angular momentum match those obtained using the alternative approach described in Appendix \ref{sec:Saijocomparison}; see App.\ \ref{sec:kerrSaijoecc} in particular. \subsection{Misaligned spin} \label{sec:ecceqprecess} \begin{figure} \centerline{\includegraphics[scale=0.53]{Fig4.png}} \caption{Plot of residuals versus $n_{\text{max}}$ for a nearly equatorial orbit of a misaligned spinning body. The body's spin in this case has $s_{\parallel}=0.5s$, $s_\perp = \sqrt{3}s/2$. We show residuals for $u^S_{t,0}$ (orange), $u^S_{\phi,0}$ (blue), and $\Upsilon_r^S$ (red).
To compute these residuals, we use the fact that the equations for this case are identical to the equations for the spin-aligned case, but substituting $s_\parallel$ for the small body's spin $s$. Because of this, the exact-in-eccentricity solution (described in Ref.\ \cite{Saijo1998} and Appendix \ref{sec:Saijocomparison}) that describes aligned orbits can be used to compute the quantities which describe the radial part of the misaligned spinning body's orbit, provided we use only the parallel component $s_\parallel$ in all of the relevant expressions. As in Fig.\ \ref{fig:residualplote}, the top panel shows $e=0.3$, the middle panel $e=0.5$, and the bottom panel $e=0.7$. In all cases, the large black hole has spin parameter $a = 0.9M$, and the orbit has $p=10M$ and $I=0^{\circ}$. \label{fig:residualplotemisaligned}} \end{figure} \begin{figure*} \centerline{\includegraphics[scale=0.62]{Fig5.png}} \caption{Example of the motion of a nearly equatorial prograde ($I = 0^\circ$) orbit for a misaligned spinning test body around a Kerr black hole with $a = 0.9M$. Top left panel shows $r$ versus $\lambda$ for a geodesic (black dashed) and a spinning test body (blue solid) orbit. These orbits share radial turning points, corresponding to $p = 3M$, $e = 0.3$. Note that, in the left two panels, we have used an unphysically high spin $\mu s /M=1$ in order to make the spin-curvature effects clearly visible. Also note that for making this plot, the spinning-body orbit has been shifted slightly: its radial frequency $\Upsilon_r= \hat\Upsilon_r + \Upsilon_r^S$ has been replaced with $\hat\Upsilon_r$. This is done so that in the plot the geodesic and the spinning-body orbit pass through their radial turning points at the same times, which helps to illustrate differences in their motion between each turning point. Bottom left panel shows $\cos\theta$ versus $\lambda$ for a geodesic (black dashed) and the spinning-body (blue solid) orbit. Top right shows $-u_t^S$ (red), $\partial_{\beta} g_{t\alpha} S^{\alpha\beta}/(2\mu)$ (orange), and $\delta E^S$ (blue), as well as $\delta\chi_r^S$ (black), all versus $\lambda$. Finally, the bottom right panel shows $u_{\phi}^S$ (red), $-\partial_{\beta} g_{\phi\alpha} S^{\alpha\beta}/(2\mu)$ (orange), and $\delta L_z^S$ (blue), as well as $\delta\vartheta_S$ (black), all versus $\lambda$. Notice that the spin-induced shifts to the integrals of motion $E$ and $L_z$ are constants, although each such term has contributions that oscillate. In making these plots, we have used $s_{\parallel}=-0.5s$, $\phi_s=0$ and $n_{\text{max}}=5$. \label{fig:exampleorbitseqprec}} \end{figure*} We now consider eccentric, nearly equatorial orbits, allowing the spin of the small body to have arbitrary orientation. As we saw in Secs.\ \ref{subsec:circeqalign}, \ref{sec:secondorderine} and \ref{sec:eqplanealign}, if the spin of the small body is aligned with the orbit, the motion remains in the equatorial plane. However, if the spin of the test body is misaligned, the spin vector precesses, as described in Secs.\ \ref{subsec:circeqmisalign} and \ref{sec:leadingorderine}. The spin precession introduces the frequency $\Upsilon_s$ into the motion. Orbital quantities can then be described using expansions of the form \begin{align} f(\lambda) & =\sum_{j=-1}^{1}\sum_{n=-\infty}^{\infty}f_{jn} e^{-ij \Upsilon_s\lambda} e^{-in (\hat{\Upsilon}_r+\Upsilon_r^S)\lambda}\;.
\label{eq:radialexpprecess} \end{align} The spin precession induces out-of-plane motion, which we describe by introducing the new variable $\delta{\vartheta}_{S}$, as in Secs.\ \ref{subsec:circeqmisalign} and \ref{sec:leadingorderine}. The orbit can therefore be parameterized by \begin{align} r & =\frac{pM}{1+e\cos\left(w_{r}+\delta\hat{\chi}_r+\delta\chi_r^S\right)}\;,\label{eq:parameqprecr} \\ \theta &=\frac{\pi}{2}+ \delta\vartheta_S\label{eq:parameqprectheta}\;. \end{align} The spin contribution to the radial anomaly angle, $\delta\chi_{r}^{S}$, consists of purely radial oscillations, \begin{align} \delta\chi_{r}^{S} & =\sum_{n=-\infty}^{\infty}\delta\chi_{r,n}^{S}e^{-in w_r}\;;\label{eq:deltachireqprec} \end{align} the Fourier expansion for $\delta\vartheta_S$ depends in addition on the frequency $\Upsilon_s$, \begin{align} \delta\vartheta_S &= \sum_{j=-1}^1\sum_{n=-\infty}^{\infty}\delta\vartheta_{S,jn} e^{-in w_r} e^{-ij w_s}\;.\label{eq:deltachitheta2} \end{align} We have introduced $w_{s} = \Upsilon_s\lambda$. \begin{widetext} As in Sec.\ \ref{sec:eqplanealign}, we write the axial and temporal components of the 4-velocity in the form of Eq.\ (\ref{eq:deltautphis}) and use Eqs.\ (\ref{eq:Reqs}) to find $\delta u_{\phi}^S$ and $\delta u_{t}^S$. We insert Eqs.\ (\ref{eq:parameqprecr}), (\ref{eq:parameqprectheta}) and (\ref{eq:utuphi}) into Eq.\ (\ref{eq:mp1linear}) and linearize in spin. Similarly to Sec.\ \ref{sec:eqplanealign}, the radial component of Eq.\ (\ref{eq:mp1linear}) has the form \begin{align} \mathcal{F}_r \frac{d^2 \delta\chi_r^S}{d \lambda^2}+\mathcal{G}_r \frac{d \delta\chi_r^S}{d \lambda}+\mathcal{G}_{\vartheta} \frac{d \delta\vartheta_S}{d \lambda}+\mathcal{H}_r \delta\chi_r^S+\mathcal{H}_{\vartheta}\delta\vartheta_S+\mathcal{I}_{1r} \Upsilon_r^S +\mathcal{I}_2 u^S_{t,0} +\mathcal{I}_3 u^S_{\phi, 0}+\mathcal{J} =0\;,\label{eq:radiallinMP2} \end{align} where $\mathcal{F}_r$, $\mathcal{G}_r$, $\mathcal{G}_{\vartheta}$, $\mathcal{H}_r$, $\mathcal{H}_{\vartheta}$, $\mathcal{I}_{1r}$, $\mathcal{I}_{2}$, $\mathcal{I}_{3}$ and $\mathcal{J}$ are all functions of known quantities. For nearly equatorial orbits, $\mathcal{G}_{\vartheta}=\mathcal{H}_{\vartheta}=0$. This is not the case for generic orbit geometry, which we discuss in a companion paper \cite{Paper2}; we include these functions in Eq.\ (\ref{eq:radiallinMP2}) in order to lay out the structure we need for the generic case. When the small body's spin is misaligned with the orbit, the body's motion takes it out of the equatorial plane. This requires us to include the $\theta$-component of Eq.\ (\ref{eq:mp1linear}) in our analysis. We linearize this equation in spin, yielding \begin{align} \mathcal{Q}_{\vartheta}\frac{d^2 \delta\vartheta_S}{d \lambda^2}+\mathcal{S}_r \frac{d \delta\chi_r^S}{d \lambda}+\mathcal{S}_{\vartheta}\frac{d \delta\vartheta_S}{d \lambda}+\mathcal{T}_r\delta\chi_r^S+\mathcal{T}_{\vartheta}\delta\vartheta_S+\mathcal{U}_{1r}\Upsilon_r^S +\mathcal{U}_{2}u^S_{t,0} +\mathcal{U}_{3}u^S_{\phi, 0}+\mathcal{V}=0\;. \label{eq:polarlinMP2} \end{align} In (\ref{eq:polarlinMP2}), the functions $\mathcal{Q}_{\vartheta}$, $\mathcal{S}_r$, $\mathcal{S}_{\vartheta}$, $\mathcal{T}_r$, $\mathcal{T}_{\vartheta}$, $\mathcal{U}_{1r}$, $\mathcal{U}_{2}$, $\mathcal{U}_{3}$ and $\mathcal{V}$ all depend on known quantities. For nearly equatorial orbits, $\mathcal{S}_r = \mathcal{S}_\vartheta = \mathcal{T}_r = \mathcal{U}_{1r} = \mathcal{U}_{2} = \mathcal{U}_{3} = 0$.
This is not the case for the more generic orbits which we discuss in a companion paper \cite{Paper2}. As in our discussion of the spin-aligned case, we use $u^{\alpha}u_{\alpha} = -1$ to obtain a linear-in-spin constraint which we write \begin{align} \mathcal{K}_r \frac{d \delta\chi_r^S}{d \lambda}+\mathcal{K}_{\vartheta} \frac{d \delta\vartheta_S}{d \lambda}+\mathcal{M}_r \delta\chi_r^S+\mathcal{M}_{\vartheta} \delta\vartheta_S+\mathcal{N}_{1r}\Upsilon_r^S +\mathcal{N}_2 u^S_{t,0} +\mathcal{N}_3 u^S_{\phi, 0}+\mathcal{P}=0\;. \label{eq:udotu2} \end{align} Here, $\mathcal{K}_r$, $\mathcal{K}_{\vartheta}$, $\mathcal{M}_r$, $\mathcal{M}_{\vartheta}$, $\mathcal{N}_{1r}$, $\mathcal{N}_2$, $\mathcal{N}_{3}$ and $\mathcal{P}$ are again all functions of known quantities. For nearly equatorial orbits, $\mathcal{K}_{\vartheta} = \mathcal{M}_{\vartheta} = 0$. We list the Schwarzschild limit of all these functions in App.\ \ref{sec:coefficientfunctions}, and include the Kerr versions in our supplementary material. \end{widetext} We can now write $\mathcal{F}_r$, $\mathcal{G}_r$, $\mathcal{G}_{\vartheta}$, $\mathcal{H}_r$, $\mathcal{H}_{\vartheta}$, $\mathcal{I}_{1r}$, $\mathcal{I}_{2}$, $\mathcal{I}_{3}$, $\mathcal{J}$, $\mathcal{Q}_{\vartheta}$, $\mathcal{S}_r$, $\mathcal{S}_{\vartheta}$, $\mathcal{T}_r$, $\mathcal{T}_{\vartheta}$, $\mathcal{U}_{1r}$, $\mathcal{U}_{2}$, $\mathcal{U}_{3}$, $\mathcal{V}$, $\mathcal{K}_r$, $\mathcal{K}_{\vartheta}$, $\mathcal{M}_r$, $\mathcal{M}_{\vartheta}$, $\mathcal{N}_{1r}$, $\mathcal{N}_2$, $\mathcal{N}_{3}$ and $\mathcal{P}$ as Fourier expansions of the form given in Eq.\ (\ref{eq:radialexpprecess}). We insert these expansions, along with Eqs.\ (\ref{eq:deltachireqprec}) and (\ref{eq:deltachitheta2}), into Eqs.\ (\ref{eq:radiallinMP2}), (\ref{eq:polarlinMP2}) and (\ref{eq:udotu2}). This turns these differential equations into linear algebraic ones; as in our discussion of aligned orbits in Sec.\ \ref{sec:eqplanealign}, we gather terms into matrix form, and then solve for the unknown variables $\delta\chi_r^S$, $\delta\vartheta_S$, $\Upsilon_r^S$, $u_{t,0}^S$ and $u_{\phi, 0}^S$. Further details about the matrix system corresponding to Eq.\ (\ref{eq:polarlinMP2}) are provided in Appendix \ref{sec:matrixsystem}, with the explicit solution given for $n_{\text{max}} = 1$. As discussed in Secs.\ \ref{subsec:circeqmisalign} and \ref{sec:leadingorderine}, when the small body's spin is misaligned from the orbit, qualitatively distinct behavior arises due to the spin's precession. For the nearly equatorial case, non-trivial polar motion $\delta\vartheta_S$ emerges, varying with the spin precession frequency $\Upsilon_s$. Note, though, that in the expansion (\ref{eq:deltachitheta2}) we do not include harmonics at the frequency $\Upsilon_\theta$. Such harmonics can in principle be present, as we saw in Eqs.\ (\ref{eq:Utheta_linecc}), (\ref{eq:Alambda}), and (\ref{eq:Blambda}). In the present analysis, we have only considered initial conditions such that the amplitudes of the $\Upsilon_\theta$ harmonics are suppressed. In our companion study \cite{Paper2}, we examine motion with $\delta\vartheta_S$ governed by the completely general form (\ref{eq:flambdaFourier}); in that case, harmonics of all three frequencies are present in the motion. In the left panel of Fig.\ \ref{fig:exampleorbitseqprec}, we show $r$ and $\theta$ for a small body with misaligned spin; an equatorial geodesic with the same radial turning points is overplotted for comparison.
The form of $\delta\chi_r^S$ and $\delta\vartheta_S$ for this orbit is shown in the right panels of Fig.\ \ref{fig:exampleorbitseqprec}. As in Sec.\ \ref{sec:eqplanealign}, there are two main ways in which the radial motion of the spinning body differs from that of the geodesic with the same turning points: the radial frequency is shifted, and the shape of the orbit is modified by $\delta\chi_r^S$. We have actually hidden the first effect by shifting the spinning-body orbit's radial frequency --- the solid curve in Fig.\ \ref{fig:exampleorbitseqprec} is a spinning-body orbit with the radial frequency $\Upsilon_r= \hat\Upsilon_r + \Upsilon^S_r$ replaced with $\hat\Upsilon_r$. This allows us to more clearly show the impact of the oscillation $\delta\chi_r^S$ in the radial anomaly --- notice that the geodesic sometimes moves faster, and sometimes slower, than the frequency-shifted spinning-body orbit with which it is plotted. The frequency shift $\Upsilon^S_r$ is exactly the same as for the equivalent aligned case, except with $s$ replaced by $s_{\parallel}$. The harmonic content of $\cos \theta$ is more complicated, exhibiting a beat between $\Upsilon_r$ and $\Upsilon_s$. We also plot $u_t^S$ and $u_{\phi}^S$ alongside the corrections to the spinning body's energy $\delta E^S$ and axial angular momentum $\delta L_z^S$ in the right panels of Fig.\ \ref{fig:exampleorbitseqprec}. Figure \ref{fig:residualplote} displays the convergence of an orbit with aligned spin, while Fig.\ \ref{fig:residualplotemisaligned} shows the convergence of an orbit with misaligned spin, where both orbits have the same radial turning points. We call the discrepancy between the exact result and our value for a certain $n_{\text{max}}$ the ``residuals''. These residuals are normalized by the exact value of the quantity we are computing, so the values for $\Upsilon_r^S$, $u_{t, 0}^S$ and $u_{\phi, 0}^S$ are directly comparable. As $n_{\text{max}}$ increases, the residuals decrease and the computed quantities approach the true values, as expected. The convergence trend is identical for the aligned and misaligned cases, except at the highest value of $n_{\text{max}}$ for each of the different eccentricities; at that point, the working precision of the calculation is insufficient and the computation breaks down due to rounding error. \section{Summary and future work} \label{sec:summary} In this work, we have studied equatorial and nearly equatorial orbits of spinning bodies around black holes in detail. Such orbits reduce to equatorial ones when the orbiting body is non-spinning. When the spin is aligned with the orbit, the motion is confined to the equatorial plane. When the spin vector is misaligned, it precesses with Mino-time frequency $\Upsilon_s$, and the motion acquires a polar oscillation $\delta\vartheta_S$ whose magnitude is $\mathcal{O}(S)$. The solution in this case appears to diverge on ``resonances,'' orbits for which the radial and spin frequencies combine to be commensurate with the polar oscillation frequency: $\hat\Upsilon_r + \Upsilon_s = \Upsilon_\theta$. In fact, the amplitude of the driving force vanishes at such frequencies, and the system is well behaved, in keeping with past work which demonstrated that nothing ``interesting'' happens during spin-orbit resonances, at least when considering the motion to leading order in spin \cite{Witzany2019_2,Zelenka2020}.
Sections \ref{sec:simpleorbits} and \ref{sec:slightlyecc} presented analytic descriptions of nearly equatorial orbits that are circular and slightly eccentric, respectively. In Sec.\ \ref{sec:spinbodyfreqdom}, we introduced a frequency-domain description of nearly equatorial orbits with arbitrary eccentricity. In a companion paper, we use this frequency-domain approach to describe fully generic orbits --- orbits that are both inclined and eccentric, with the small body's spin arbitrarily oriented \cite{Paper2}. It is worth remarking that, for the nearly equatorial orbits we consider here, spinning-body orbits share the same radial turning points as some equatorial geodesic orbit. For the nearly equatorial case, this ``reference geodesic'' which shares the orbit's turning points serves as a particularly convenient point of comparison in analyzing the spinning body's orbit. This analysis becomes more complicated in the generic case, for which neither the polar nor the radial libration ranges coincide in general with those of a geodesic. We can nonetheless define a ``reference geodesic'' whose turning points coincide with the spinning body's orbit in an orbit-averaged sense; details are given in Ref.\ \cite{Paper2}. We use this framework to compute corrections arising from the small body's spin to the orbital frequencies $\Upsilon_r$ and $\Upsilon_\theta$ for generic orbits in Ref.\ \cite{Paper2}. In addition, we present a detailed comparison between our approach and the methods presented in Ref.\ \cite{Witzany2019_2} for the case of equatorial, spin-aligned orbits in Appendix B of the companion paper \cite{Paper2}. Results in Ref.\ \cite{Ruangsri2016} suggest that the behavior near resonance of terms which are quadratic in spin plays a critical role in the emergence of chaotic motion via the KAM theorem. This is supported by Ref.\ \cite{Zelenka2020}, which contains a detailed numerical study of the growth of resonances and chaos for spinning-body motion in a Schwarzschild spacetime. By using the techniques discussed here to provide a very accurate formulation of the linear-in-spin aspect of spinning-body orbits, we plan to extend the work of Ref.\ \cite{Ruangsri2016} by investigating the behavior of the quadratic-in-spin terms in the frequency domain. We hope this may clarify the precise manner in which nonlinear terms in the spinning-body equations of motion push such orbits from integrable to chaotic behavior in a Kerr background. Another avenue for future work is to incorporate secondary spin into gravitational waveform models. An osculating geodesic integrator \cite{Pound2008,Gair2011} can be used to generate spinning-body worldlines. Any perturbed system of the form $Dp^{\alpha}/d\tau =\delta f^{\alpha}$ can be described using an osculating geodesic framework, so long as $\delta f^{\alpha}$ is sufficiently small. In the EMRI limit we are interested in, both the spin-curvature force $f_S^{\alpha}$ and the self-force effects are small, so it should be possible to fold both into a forcing term and build a spinning-body inspiral. Such a framework has been developed for Schwarzschild orbits, and is presented in Ref.\ \cite{Warburton2017}; we hope to use a similar approach to model fully generic spinning-body Kerr inspirals. Ultimately, one hopes to build a fully self-consistent self-force-driven inspiral, and it is encouraging that the first steps have been taken in this direction \cite{mathews2021selfforce}.
\section*{Acknowledgements} This work has been supported by NASA ATP Grant 80NSSC18K1091 and NSF Grants PHY-1707549 and PHY-2110384. We are very grateful to Leo Stein and Sashwat Tanay for reading a draft of this paper and providing helpful comments; we are particularly grateful for comments regarding the possible impact of resonances in the low eccentricity limit, which helped us to uncover well-hidden typos in several equations. We are also very grateful to Vojt\v{e}ch Witzany for reading this manuscript and providing very helpful comments, and to Viktor Skoup\'{y}, whose feedback and checks of our analysis uncovered a typographical error in one of our equations.
\section{Introduction} This study reports the observation of an anomalous muon production in $p\bar{p}$ interactions at $\sqrt{s}=1.96$ TeV. The analysis was motivated by the presence of several inconsistencies that affect or affected measurements of $b\bar{b}$ production at the Tevatron: (a) the ratio of the observed $b\bar{b}$ correlated production cross section to the exact next-to-leading-order (NLO) QCD prediction~\cite{mnr} is $1.15 \pm 0.21$ when $b$ quarks are selected via secondary vertex identification, whereas this ratio is found to be significantly larger than two when $b$ quarks are identified through their semileptonic decays~\cite{bstatus}; (b) sequential semileptonic decays of single $b$ quarks are considered to be the main source of dilepton events with invariant mass smaller than that of a $b$ quark; however, the observed invariant mass spectrum is not well modeled by the standard model (SM) simulation of this process~\cite{dilb}; and (c) the value of $\bar{\chi}$, the average time integrated mixing probability of $b$ flavored hadrons derived from the ratio of muon pairs from $b$ and $\bar{b}$ quark semileptonic decays with opposite and same sign charge, is measured at hadron colliders to be larger than that measured by the LEP experiments~\cite{bmix,pdg}. This analysis extends a recent study~\cite{bbxs} by the CDF collaboration which has used a dimuon data sample to measure the correlated $\sigma_{b\rightarrow\mu,\bar{b}\rightarrow \mu}$ cross section. After briefly describing that study, it is shown that varying the dimuon selection criteria isolates a sizable but unexpected background that contains muons with an anomalous impact parameter~\cite{d0} distribution. Further investigation shows that a smaller fraction of these events also has anomalously large track and muon multiplicities. We are unable to account for the size and properties of these events in terms of known SM processes, even in conjunction with possible detector mismeasurement effects. The CDF~II detector~\cite{cdfdet} consists of a magnetic spectrometer, based on a 96-layer drift chamber, surrounded by electromagnetic and hadron calorimeters and muon detectors. Precision impact parameter and vertex determinations are provided by three silicon tracking devices collectively referred to in this report as the ``SVX''. The SVX is composed of eight layers of silicon microstrip detectors ranging in radius from $1.5$ to $28$~cm in the pseudorapidity region $|\eta|<1$. \section{Study of the data sample composition} The study presented here, which is further detailed in Ref.~\cite{a0disc}, uses the same data and Monte Carlo simulated samples, and the same analysis methods, described in Ref.~\cite{bbxs}. We use events containing two central ($|\eta|<0.7$) muons, each with transverse momentum $p_T \geq 3 \; \gevc$, and with invariant mass larger than 5 $ {\rm GeV/}c^2$. In Ref.~\cite{bbxs}, the value of $\sigma_{b\rightarrow\mu,\bar{b}\rightarrow \mu}$ is determined by fitting the impact parameter distribution of these primary muons with the expected shapes from all known sources. To ensure an accurate impact parameter determination, Ref.~\cite{bbxs} uses a subset of dimuon events in which each muon track is reconstructed in the SVX with hits in the two inner layers and in at least four of the inner six layers.
The data are nicely \begin{wrapfigure}[15]{l}{8.2cm} \vspace{-0.4cm} \includegraphics[width=8.0cm]{fig01.eps} \vspace{-0.2cm} \caption[] {Impact parameter distribution of muons contributed by different physics processes.} \label{fig:fig01} \end{wrapfigure} described by a fit with contributions from the following QCD processes: semileptonic heavy flavor decays, prompt quarkonia decays, Drell-Yan production, and instrumental backgrounds from hadrons mimicking the muon signal. Using the fit result, shown in Fig.~\ref{fig:fig01}, Ref.~\cite{bbxs} reports $\sigma_{b\rightarrow\mu,\bar{b}\rightarrow \mu}= 1549 \pm 133$ pb for muons with $p_T \geq 3 \; \gevc$ and $|\eta| \leq 0.7$. This result is in good agreement with theoretical expectations as well as with analogous measurements that identify $b$ quarks via secondary vertex identification.\cite{ajets,shears} However, it is also substantially smaller than previous measurements of this cross section\cite{2mucdf,d0b2}, and raises some concern about the composition of the initial dimuon sample prior to the SVX requirements. The tight SVX requirements used in Ref.~\cite{bbxs} select events in which both muons arise from parent particles that have decayed within a distance of $\simeq 1.5$ cm from the $p\bar{p}$ interaction primary vertex in the plane transverse to the beam line. Using Monte Carlo simulations, we estimate that approximately 96\% of the dimuon events contributed by known QCD processes satisfy this latter condition. Since the events selected in~\cite{bbxs} are well described by known QCD processes, we can independently estimate the efficiency of the tight SVX requirements. Using control samples of data from various sources and the sample composition determined by the fit to the muon impact parameter distribution, we estimate that ($24.4\pm 0.2$)\% of the initial sample should survive the tight SVX requirements, whereas only ($19.30\pm0.04$)\% actually do. This suggests the presence of an additional background that has been suppressed by the tight SVX requirements. The size of this unexpected dimuon source is estimated as the difference between the total number of dimuon events, prior to any SVX requirements, and the expected contribution from the known QCD sources. This latter contribution is estimated as the number of events surviving the tight SVX requirements divided by the efficiency of that selection. In a data set corresponding to an integrated luminosity of 742 pb$^{-1}$, 143743 dimuon events survive the tight SVX cuts. Dividing this number by the $24.4\%$ efficiency of the tight SVX selection criteria, we expect $589111\pm 4829$ QCD events to contribute to the initial sample, whereas 743006 are observed. The difference, $153895\pm 4829$ events, is comparable in magnitude to the expected dimuon contribution from $b \bar{b}$ production, $221564\pm 11615$. This estimate assumes that the unexpected source of dimuon events is completely rejected by the tight SVX requirements. Most CDF analyses use a set of SVX criteria, referred to in the following as standard SVX, in which tracks are required to have hits in at least three of the eight SVX layers. This standard SVX selection accepts muons from parent particles with decay lengths as long as $10.6$~cm. Applying the standard SVX selection reduces the estimated size of the unknown dimuon source by a factor of two, whereas $88$\% of the known QCD contribution is expected to survive.
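The arithmetic behind this estimate is simple enough to check directly. The following minimal Python sketch (ours, not part of the CDF analysis, using only the numbers quoted above) reproduces the expected QCD yield, the ghost yield, and the uncertainty propagated from the $(24.4\pm 0.2)\%$ tight-SVX efficiency:
\begin{verbatim}
# Cross-check of the ghost-event estimate, using only numbers quoted in the text.
n_total = 743006            # dimuon events before any SVX requirement
n_tight = 143743            # events passing the tight SVX selection
eff, d_eff = 0.244, 0.002   # tight-SVX efficiency for known QCD sources

n_qcd = n_tight / eff       # expected QCD events in the initial sample
d_qcd = n_qcd * d_eff / eff # uncertainty propagated from the efficiency
n_ghost = n_total - n_qcd

print(f"QCD   : {n_qcd:.0f} +- {d_qcd:.0f}")    # 589111 +- 4829
print(f"ghost : {n_ghost:.0f} +- {d_qcd:.0f}")  # 153895 +- 4829
\end{verbatim}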
A summary of the estimates of the size of this unexpected source of dimuon events, whimsically called ghost events, for various sets of SVX criteria is shown in Table~\ref{tab:tab01}. In this table and throughout this report, the expected contribution from known QCD sources, referred to as the QCD contribution, is estimated from the sample of dimuons surviving the tight SVX requirements, properly accounting for the relevant SVX efficiencies using the sample composition from the \begin{wraptable}[15]{l}{0.65\textwidth} \vspace{-0.4cm} \caption[]{Number of events that pass different SVX requirements. Dimuons are also split into pairs with opposite ($OS$) and same ($SS$) sign charge.} \vspace{0.3cm} \begin{tabular}{|lccc|} \hline Type & No SVX & Tight SVX & Standard SVX \\ \hline Total & 743006 & 143743 & 590970 \\ Total $OS$ & & 98218 & 392020 \\ Total $SS$ & & 45525 & 198950 \\ QCD & 589111 $\pm$ 4829 & 143743 & 518417 $\pm$ 7264 \\ QCD $OS$ & & 98218 & 354228 $\pm$ 4963 \\ QCD $SS$ & & 45525 & 164188 $\pm$ 2301 \\ Ghost & 153895 $\pm$ 4829 & 0 & 72553 $\pm$ 7264 \\ Ghost $OS$ & & 0 & 37792 $\pm$ 4963 \\ Ghost $SS$ & & 0 & 34762 $\pm$ 2301 \\ \hline \end{tabular} \label{tab:tab01} \end{wraptable} fits of Ref.~\cite{bbxs}. We elect to follow this approach since the tight SVX sample is well understood.\cite{bbxs} The ghost contribution is always estimated from the total number of events observed in the data after subtracting the expected QCD contribution. Table~\ref{tab:tab01} also shows the event yields separately for the subsets of events in which the dimuons have opposite-sign ($OS$) and same-sign ($SS$) charge. The ratio of $OS$ to $SS$ dimuons is approximately 2:1 for QCD processes but is approximately 1:1 for the ghost contribution. At this stage it is worth commenting further on the set of inconsistencies related to $b\bar{b}$ production and decay mentioned above. The general observation is that the measured $\sigma_{b\rightarrow\mu,\bar{b}\rightarrow \mu}$ increases as the SVX requirements are made looser, and is almost a factor of two larger than that measured in Ref.~\cite{bbxs} when no SVX requirements are made.\cite{d0b2} As mentioned above, the magnitude of the ghost contribution is comparable to the $b\bar{b}$ contribution when no SVX selection is made, and in combination they would account for the measurement reported in Ref.~\cite{d0b2} Similarly, for the standard SVX criteria, the magnitude of the ghost contribution, when added to the expected $b\bar{b}$ contribution of $194976 \pm 10221$ events, coincides with the cross section measurement reported in Ref.~\cite{2mucdf} and the $\bar{\chi}$ value reported in Ref.~\cite{bmix}, since these measurements use similar sets of silicon criteria. Moreover, as demonstrated in Ref.~\cite{a0disc}, when applying the tight SVX criteria to initial muons, the invariant mass spectrum of combinations of an initial muon with an additional accompanying muon is well described by known QCD sources and is dominated by sequential semileptonic heavy flavor decays. In contrast, without any SVX requirement the invariant mass spectrum cannot be modeled with the SM simulation, and the inconsistencies at low invariant mass reported in Ref.~\cite{dilb} are reproduced. Thus, this unknown source of dimuon events seems to offer a plausible resolution to these long-standing inconsistencies related to $b\bar{b}$ production and decay. The remainder of this paper is dedicated to a further exploration of these events.
The nature of the anomalous events can be characterized by four main features. First, the impact parameter distribution of the initial muon pair cannot be readily understood in terms of known SM processes. Second, in small angular cones around the initial muons, the rate of additional muons is significantly higher than that expected from SM processes. Third, the invariant mass distribution of the initial and additional muons differs from that expected from sequential semileptonic decays of heavy flavor hadrons. Fourth, the impact parameter distribution of the additional muons shows the same anomalous behavior as that of the initial muons. We discuss these features in turn. As shown in Fig.~\ref{fig:fig02}, muons due to ghost events have an impact parameter distribution \begin{wrapfigure}[23]{l}{7.0cm} \vspace{-0.5cm} \includegraphics[width=6.8cm]{fig02.eps} \vspace{-0.3cm} \caption[] {Impact parameter distribution of muons contributed by ghost ($\bullet$) and QCD (histogram) events. Muon tracks are selected with the standard SVX requirements. The detector resolution is $\simeq 30 \; \mu$m. The insert shows the distribution of simulated muons (histogram) that pass the same analysis selection as the data and arise from in-flight decays of pions and kaons produced in a QCD heavy flavor simulation. The dashed histogram shows the impact parameter of the parent hadrons.} \label{fig:fig02} \end{wrapfigure} that is completely different from that of muons due to QCD events. A number of potential background sources have been evaluated. The one expected to contribute most significantly arises from in-flight decays of pions and kaons. Based upon a generic QCD simulation, we predict a contribution of 57000 events,\cite{a0disc} 44\% and 8\% of which pass the standard and tight SVX selection, respectively. The uncertainty of this prediction is difficult to assess but, as shown by the insert in Fig.~\ref{fig:fig02}, in-flight decays alone cannot account for the shape of the muon impact parameter distribution in ghost events. A minor contribution of $K^0_S$ and hyperon decays, in which the punchthrough of a hadronic prong mimics a muon signal, has also been investigated.\cite{a0disc} Secondary interactions in the tracking volume are also possible candidates, and more difficult to quantify. The possibility of instrumental effects and of trigger and reconstruction biases has been investigated in detail in Ref.~\cite{a0disc} For example, we have verified the soundness of large impact parameter tracks by measuring the lifetime of $K^0_S$ mesons reconstructed in the same data set used for this analysis. \section{Events with additional muons} We search QCD and ghost events that contain a pair of initial muons passing our analysis selection (without any SVX requirement) for additional muons with $p_T \geq 2 \; \gevc$ and $|\eta|\leq 1.1$. We have the following motivations: (a) events acquired because of in-flight decays or secondary interactions are not expected to contain an appreciable number of additional muons; (b) QCD events that might appear in the ghost sample because of not-yet-understood detector malfunctions should not contain more additional leptons than QCD events with well reconstructed initial dimuons; and (c) we want to investigate whether the anomaly reported in Ref.~\cite{dilb} is also related to the presence of the unexpected background. According to the simulation,\cite{a0disc} additional muons arise from sequential decays of single $b$ hadrons. In addition, one expects a contribution due to hadrons mimicking the muon signal.
In the data, 9.7\% of the dimuon events contain an additional muon (71835 out of 743006 events). The contribution of events without heavy flavor, such as all conventional sources of ghost events mentioned above, is depressed by the request of an additional muon. For example, in events that contain a $\Upsilon(1S)$ or $K^0_S$ candidate and are included in the dimuon sample, the probability of finding an additional muon is ($0.90 \pm 0.01$)\% and ($1.7 \pm 0.8$)\%, respectively. However, the efficiency of the tight SVX selection in dimuon events that contain additional muons drops from $0.1930 \pm 0.0004$ to $0.166 \pm 0.001$. This observation already indicates that a fraction of ghost events contains more additional muons than QCD events. This paragraph summarizes a detailed study of the rate and kinematic properties of events that contain at least three muons, reported in Ref.~\cite{a0disc} This study uses a data set of larger integrated luminosity that corresponds to $1131090\pm 9271$ QCD and $295481 \pm 9271$ ghost events. Reference~\cite{a0disc} shows that the rate and kinematics of three-muon combinations are correctly modeled by the QCD simulation only if the two initial muons are selected with the tight SVX requirement. Muon pairs due to $b$ sequential decays peak at small invariant masses and small \begin{wrapfigure}[16]{l}{6.0cm} \vspace{-0.4cm} \includegraphics[width=5.8cm]{fig03.eps} \vspace{-0.3cm} \caption[]{Two-dimensional distribution of the impact parameter of an initial muon, $d_{0p}$, versus that, $d_{0s}$, of additional muons in ghost events. Muons are selected with standard SVX requirements.} \label{fig:fig03} \end{wrapfigure} opening angles. The distributions of analogous pairs in the unexpected background behave quite similarly. However, combinations of initial and additional muons in ghost events have a smaller opening angle and a smaller invariant mass than those from sequential $b$ decays.\cite{a0disc} Therefore, the study of ghost events is further restricted to muons and tracks contained in a cone of angle $\theta \leq 36.8^\circ$ ($\cos \theta\geq 0.8$) around the direction of each initial muon. As reported in Ref.~\cite{a0disc}, less than half of the $OS$ and $SS$ muon combinations in ghost events can be accounted for by fake muons, and ghost events are shown to contain a fraction of additional real muons (9.4\%) that is four times larger than that of QCD events (2.1\%). Reference~\cite{a0disc} investigates at length the possibility that the predicted rate of fake muons is underestimated. The fraction of additional real muons in QCD and ghost \begin{wrapfigure}[17]{l}{6.0cm} \vspace{-0.4cm} \includegraphics[width=5.8cm]{fig04.eps} \vspace{-0.3cm} \caption[]{Exploded impact parameter distribution of additional muons in QCD events. The entire distribution is shown in the insert. Muons are selected without any SVX requirements.} \label{fig:fig04} \end{wrapfigure} events is verified by selecting additional muons with $p_T \geq 3\;\gevc$ and $|\eta| \leq 0.7$. In this case, because of the larger number of interaction lengths traversed by hadronic tracks, the fake rate is negligible.\cite{bbxs} With this selection the muon detector acceptance is reduced by a factor of five, but the rate of such additional muons is ($0.40 \pm 0.01$)\% in QCD and $(1.64 \pm 0.08)$\% in ghost events. Figure~\ref{fig:fig03} shows the two-dimensional distribution of the impact parameter of an initial muon versus that of all additional muons in a $\cos \theta \geq 0.8$ cone around its direction.
The impact parameter distribution of the additional muons is found to be as anomalous as that of the initial muons. However, the impact parameters of the additional and initial muons are weakly correlated (the correlation factor is $\rho_{d_{0p}d_{0s}}=0.03$). For comparison, Fig.~\ref{fig:fig04} shows that the impact parameter distribution of additional muons in QCD events is not anomalous at all. It is difficult to reconcile the rate and characteristics of these anomalous events with expectations from known SM sources. Although one can never rule out the possibility that these data could be at least partially explained by detector effects not presently understood, we present some additional properties of the ghost sample. Figure~\ref{fig:fig05}~(a) shows the distribution of the number of muons found in a $\cos\theta \geq 0.8$ cone around a primary muon in ghost events. In the plot, an additional muon increases the multiplicity by 1 when it has opposite and by 10 when it has the same sign charge as the initial muon. Leaving aside the case in which no additional muons are found, it is interesting to note that an increase of one unit in the muon multiplicity corresponds on average to a population decrease of approximately a factor of seven. This factor is very close to the inverse of the $\tau \rightarrow \mu$ branching fraction (0.174) multiplied by the 83\% efficiency of the muon detector [$1/(0.174 \times 0.83) \simeq 6.9$], and makes it hard to resist the interpretation that these muons arise from $\tau$ decays with a kinematic acceptance close to unity. The multiplicity distribution corrected for the fake muon contribution\cite{a0disc} is shown in Fig.~\ref{fig:fig05}~(b). The fake contribution is evaluated on a track-by-track basis using the probability that pions from $D^0$ mesons from $B$ hadron decays mimic a muon signal. Unfortunately, the multiplicity distribution of muons and tracks contained in a $36.8^{\circ}$ cone around \begin{wrapfigure}[14]{l}{10.0cm} \vspace{-0.4cm} \includegraphics[width=10cm]{fig05.eps} \vspace{-0.3cm} \caption[]{Multiplicity distribution of additional muons found in a $\cos \theta \geq 0.8$ cone around the direction of a primary muon before (a) and after (b) correcting for the fake muon contribution. An additional muon increases the multiplicity by 1 when it has opposite and by 10 when it has the same sign charge as the initial muon.} \label{fig:fig05} \end{wrapfigure} the direction of such $D^0$ mesons does not have the high multiplicity tail of ghost events. In the $D^0$ control sample, we do not observe any dependence of the fake rate on the track and muon multiplicity, but we also cannot rule out a drastic increase of the fake probability per track in events with multiplicities much larger than those of standard QCD processes. A study based on higher quality muons\cite{a0disc} does not show any evidence of that being the case.\\ \section{Conclusions} We report the observation of anomalous muon production in $p\bar{p}$ collisions at $\sqrt{s}=1.96\, \rm TeV$. This unknown source of dimuon events seems to offer a plausible resolution to long-standing inconsistencies related to $b\bar{b}$ production and decay. A significant fraction of these events has features that cannot be explained with our current understanding of the CDF~II detector, trigger and event reconstruction. \section*{References}
\section{Introduction} The calculus on time scales initiated by Stefan Hilger in \cite{hil} gives a convenient way to deal with discrete, continuous or mixed processes within a single formalism. In 2004, this theory was used by M. Bohner \cite{bohn} and R. Hilscher and V. Zeidan \cite{HZ} to develop a {\it calculus of variations on time scales}. In this context, many natural problems arise. One of them is to generalize classical results of the continuous calculus of variations to the time-scale setting. A particular instance is to obtain a time-scale analogue of Noether's theorem relating groups of symmetries and conservation laws.\\ The aim of this article is precisely to derive a time-scale version of Noether's theorem. We refer to the books of Olver \cite{olver} and Jost \cite{jost} for the classical case. This problem was initially considered by Z. Bartosiewicz and D.F.M. Torres in \cite{BT}, but both the result and the proof are incomplete. In the following, we follow the strategy of proof proposed in \cite{BT}, consisting in deriving Noether's theorem for transformations depending on time from the easier result obtained for transformations without changing time. In \cite{ca}, we call this way of proving Noether's theorem the {\it Jost method}, as a classical reference for it is contained in the book \cite{jost}. \subsection{Main result} Our main result can be formulated as follows. \\ Let $\mathbb{T}$ be a bounded time scale with $a = \min (\mathbb{T})$, $b = \max (\mathbb{T})$ and $\mathrm{card} (\mathbb{T}) \geq 3$. We denote by $\rho$ and $\sigma$ the backward and forward jump operators (see Definition \ref{jump}). We set $\mathbb{T}^\kappa = \mathbb{T} \backslash ]\rho(b),b]$, $\mathbb{T}_\kappa = \mathbb{T} \backslash [a,\sigma(a)[$ and $\mathbb{T}^\kappa_\kappa = \mathbb{T}^\kappa \cap \mathbb{T}_\kappa$. We denote by $C^{1,\Delta}_{\mathrm{rd}}(\mathbb{T})$ the set of $\Delta$-differentiable functions on $\mathbb{T}^\kappa$ with rd-continuous $\Delta$-derivative (see Definition \ref{functio}).\\ Let us consider a functional $\mathcal{L} :C^{1,\Delta}_{\mathrm{rd}}(\mathbb{T}) \rightarrow \mathbb{R}$ defined by $$\mathcal{L} (q) = \displaystyle \int_a^b L(t,q(t),\Delta q(t))\Delta t ,$$ where \fonctionsansdef{L}{[a,b]\times \mathbb{R}^d \times\mathbb{R}^d}{\mathbb{R}} is a Lagrangian.
The critical points of $\mathcal{L}$ are the solutions of the time-scale Euler-Lagrange equation (see \cite{bourdin1}): \begin{equation} \label{tsel} \nabla \left[ \dfrac{\partial L}{\partial v} (t,q(t),\Delta q(t)) \right] = \nabla \sigma (t) \dfrac{\partial L}{\partial x} (t,q(t),\Delta q (t)) , \end{equation} for every $t \in \mathbb{T}^\kappa_\kappa$.\\ Following \cite{BT}, a time-scale Lagrangian functional $\mathcal{L}$ is said to be {\it invariant} under the one-parameter group $G=\left \{ g_s \right \}_{s\in \mathbb{R}}$ of transformations $g_s (t,x)= (g_s^0 (t) ,g_s^1 (x))$ if and only if for any subinterval $[t_a ,t_b ] \subset [a,b]$ with $t_a, t_b \in \mathbb{T}$, for any $s \in \mathbb{R}$ and any $x\in C^{1,\Delta}_{\mathrm{rd}} (\mathbb{T} )$, one has \begin{equation} \label{invariance} \int_{t_a}^{t_b}L\left(t,x(t),\Delta x(t)\right)\Delta t = \int_{\tau_a}^{\tau_b}L\left(\tau,g^1_s\circ x \circ(g_s^0)^{-1}(\tau),\Delta_{\bar{\mathbb{T}}} \left(g^1_s\circ x \circ(g_s^0)^{-1}(\tau)\right)\right)\Delta_{\bar{\mathbb{T}}}\tau, \end{equation} where $\tau_a=g^0_s(t_a)$ and $\tau_b=g^0_s(t_b)$.\\ In the following, we need the notion of an {\it admissible} group of symmetries, which corresponds to a one-parameter group of diffeomorphisms satisfying: \begin{itemize} \item[$\bullet$] the set defined by $\displaystyle \bar{\mathbb{T}}_s=g^0_s(\mathbb{T})$ is a time scale for all $s\in\mathbb{R}$, \item[$\bullet$] the function $g_s^0$ is strictly increasing, \item[$\bullet$] $\displaystyle \Delta_{\bar{\mathbb{T}}_s}\left(g_s^0\right)^{-1}$ exists, \item[$\bullet$] $\displaystyle \Delta g_s^0 \neq 0$ and $\Delta g_s^0$ is rd-continuous. \end{itemize} Our main result is the following version of the time-scale Noether's theorem: \begin{theorem}[Time-scale Noether's theorem] \label{main} Suppose that $G=\{ g_s (t,x)=(g_s^0 (t) ,g_s^1 (x) )\}_{s\in \mathbb{R}}$ is an admissible one-parameter group of symmetries of the variational problem $$\displaystyle\mathcal{L} (x)=\displaystyle\int_a^b L\left(t,x(t),\Delta x(t)\right)\, \Delta t$$ and let \begin{equation} X= \zeta (t) \displaystyle\frac{\partial}{\partial t} +\xi (x) \displaystyle\frac{\partial}{\partial x} \end{equation} be the infinitesimal generator of $G$. Then, the function \begin{equation} \label{conslaw} I(t,x)= \zeta^{\sigma} \cdot \left [ L(\star ) -\partial_v L (\star ) \cdot \Delta x \right ] + \xi^{\sigma} \cdot \partial_v L (\star ) + \displaystyle \int_a^t \zeta \left [ \nabla \sigma \partial_t L (\star) -\nabla \left ( L-\partial_v L \cdot \Delta x \right ) \right ] \, \nabla t \end{equation} is a constant of motion along the solutions of the time-scale Euler-Lagrange equation (\ref{tsel}), i.e., \begin{equation} \nabla \left [ I(t,x(t)) \right ] =0 \end{equation} for all solutions $x$ of the time-scale Euler-Lagrange equation and any $t\in\mathbb{T}_\kappa$. \end{theorem} The proof is given in Section \ref{proof}.\\ In the continuous case $\mathbb{T} =\mathbb{R}$, one obtains the classical form of the integral of motion \begin{equation} I(t,x) = \zeta \left ( L(\star )-\partial_v L (\star) \dot{x} \right ) +\xi \partial_v L (\star ) , \end{equation} because the last integral term reduces to zero. Indeed, on the solutions of the Euler-Lagrange equation one has the identity $\displaystyle\partial_t L (\star ) = \displaystyle\frac{d}{dt} \left ( L(\star) -\partial_v L (\star ) \dot{x} \right )$.
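On a genuine time scale this identity generally fails, and the integral term in (\ref{conslaw}) is precisely what restores conservation. The following minimal numerical sketch illustrates this point; it is our own construction (not the example of Section \ref{examples}), with $\mathbb{T}=h\mathbb{Z}$, the harmonic-oscillator Lagrangian $L(t,x,v)=v^2/2-x^2/2$ and the time-translation symmetry $g_s(t,x)=(t+s,x)$, so that $\zeta=1$, $\xi=0$ and $\nabla\sigma=1$. The naive continuous-form quantity $L-\partial_v L\cdot\Delta x$ oscillates along the discrete Euler-Lagrange flow, while the $\nabla$-antiderivative term of (\ref{conslaw}) compensates this drift exactly:
\begin{verbatim}
import numpy as np

# Time scale T = h*Z, Lagrangian L = v^2/2 - x^2/2, time-translation symmetry.
h, N = 0.01, 1000

# Non-shifted Euler-Lagrange equation on h*Z (here nabla(sigma) = 1):
#   (x[k+1] - 2*x[k] + x[k-1]) / h^2 = -x[k]
x = np.empty(N + 1)
x[0], x[1] = 1.0, 1.0                 # initial data selecting one solution
for k in range(1, N):
    x[k + 1] = 2.0 * x[k] - x[k - 1] - h**2 * x[k]

dx = (x[1:] - x[:-1]) / h             # Delta-derivative of x
L = dx**2 / 2 - x[:-1]**2 / 2         # L(t, x, Delta x) along the solution
Hb = L - dx * dx                      # the bracket L - dL/dv . Delta x

I_naive = Hb                                        # continuous-form quantity
I_thm = Hb - np.cumsum(np.diff(Hb, prepend=Hb[0]))  # plus the nabla-integral term

print("drift of naive quantity:", np.max(np.abs(I_naive - I_naive[0])))  # O(h)
print("drift of corrected I   :", np.max(np.abs(I_thm - I_thm[0])))      # ~ 0
\end{verbatim}
In this example the corrected quantity is constant because the $\nabla$-integral telescopes; the content of Theorem \ref{main} is that the same expression is conserved for general admissible symmetries on an arbitrary bounded time scale.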
\\ In the discrete case $\mathbb{T} =\mathbb{Z}$, for transformations without changing time, one recovers the classical integral (see \cite{BCG}, Theorem 12 p.885 and also \cite{lub}): \begin{equation} I(x)=\xi^{\sigma} \cdot \partial_v L (\star ) . \end{equation} \subsection{Comments on previous results} \subsubsection{The Bartosiewicz and Torres result} In \cite{BT} the authors obtain a time-scale version of the Noether theorem in the shifted calculus of variations on time scales. However, their result can easily be extended to the non-shifted case following the same path. It coincides with our result for transformations without changing time but differs from it in the other cases.\\ As illustrated in Section \ref{examples} with an example and numerical simulations, the result in \cite{BT} is not correct. The reason is that, in order to follow their scheme of proof (see Section \ref{commentproof}), the solutions of the Euler-Lagrange equations have to satisfy an auxiliary equation given for all $t\in\mathbb{T}^\kappa_\kappa$ by (see Lemma \ref{key_ts} in Section \ref{proof}): \begin{equation} \label{condi} \quad \nabla \sigma(t)\partial_t L (\star )+\nabla\left(\Delta x(t) \partial_v L (\star ) -L(\star ) \right)=0, \end{equation} which is precisely the quantity under the $\nabla$-antiderivative in (\ref{conslaw}). This quantity is discussed in the next Section. \subsubsection{The second Euler-Lagrange equation approach} As already noted, in the continuous case $\mathbb{T} =\mathbb{R}$, condition (\ref{condi}) is well known and corresponds to the {\it second Euler-Lagrange equation} or the {\it DuBois-Reymond necessary optimality condition}. A time-scale analogue of the second Euler-Lagrange equation was derived by Bartosiewicz, Martins and Torres in (\cite{BMT}, Theorem 5 p.12), leading to another proof of the time-scale Noether's theorem (see \cite{BMT}, Section 4, Theorem 6).\\ As already said, the result in \cite{BT} is wrong without additional assumptions. As a consequence, we believe that the time-scale second Euler-Lagrange equation in \cite{BMT} must be taken with care. \subsection{A time-scale Jost method of proof} \label{commentproof} The approach used by Bartosiewicz and Torres to prove their time-scale Noether theorem is an adaptation of a method which can be found in the classical textbook by J. Jost and X. Li-Jost \cite{jost} on the calculus of variations. Formally, the idea is very simple. One introduces an extra variable corresponding to the time variable in order to transform the case of invariance under transformations changing time into a case of invariance without changing the "time" variable, for an extended Lagrangian which is explicitly constructed from the initial Lagrangian. The corresponding Noether theorem then follows from the one for transformations without changing time, which is easier. We refer to \cite{jost} for more details.\\ However, as in the fractional case\footnote{This work was in fact suggested by a recent article \cite{FM} showing that the fractional Noether theorem proved by Frederico and Torres in \cite{FT} is wrong. However, the article \cite{FM} does not provide a clear understanding of where and why the result is not correct. The second author and A. Szafranska have analysed in \cite{ca} the proof given in \cite{FT}, which is an adaptation of the Jost method to the fractional calculus of variations.
Several problems which can occur when generalizing the Jost method to another framework were then pointed out.}, where the same method of proof was used, several problems arise when adapting the method of Jost to the time-scale case. In particular, one must be very careful with the validity of the change of variables and with the use of the time-scale Noether theorem for transformations without changing time. Indeed, the proof proposed in \cite{BT} does not work precisely because one cannot use the autonomous version of the Noether theorem but only the infinitesimal invariance characterization (see Section \ref{proof}, Lemma \ref{key_ts} and after).\\ It must be pointed out that there exist several ways to prove Noether's theorem. However, we decided to follow the same strategy as Bartosiewicz and Torres in \cite{BT} because this method is very elegant and many other generalizations are based on it. As a consequence, the problems that we are discussing will be of importance for other works. \subsection{Plan of the paper} The plan of the paper is as follows. In Section \ref{remind}, we recall some definitions and notations about time scales and give particular statements of the chain rule formula and the substitution formula for $\Delta$-derivatives in the time-scale setting. Section \ref{proof} gives the proof of our main result. The proofs of several technical lemmas are given in Section \ref{technical}. In Section \ref{examples}, we discuss an example first studied by Bartosiewicz and Torres in \cite{BT}. We compare the quantity that we have obtained with the one derived in \cite{BT} using a numerical integration. In particular, it shows that the conservation law obtained in \cite{BT} does not give an integral of motion, contrary to the quantity obtained using our Theorem \ref{main}. \section{Preliminaries on time scales} \label{remind} In this section, we recall some results about the chain rule formula and the change-of-variables formula for the $\Delta$-antiderivative, which will be used in the proof of the main result. We refer to \cite{agar2,bohn,bohn3,bourdin2} and references therein for more details on time-scale calculus. \\ \begin{definition} \label{jump} The backward and forward jump operators $\rho, \sigma : \mathbb{T} \longrightarrow \mathbb{T}$ are respectively defined by: \begin{equation*} \forall t \in \mathbb{T}, \; \rho (t) = \sup \{ s \in \mathbb{T}, \; s < t \} \; \text{and} \; \sigma (t) = \inf \{ s \in \mathbb{T}, \; s > t \}, \end{equation*} where we put $\sup \emptyset = a$ and $\inf \emptyset = b$. \end{definition} \begin{definition} A point $t \in \mathbb{T}$ is said to be left-dense (resp. left-scattered, right-dense and right-scattered) if $\rho (t) = t$ (resp. $\rho (t) < t$, $\sigma (t) = t$ and $\sigma (t) > t$). \end{definition} Let $\mathrm{LD}$ (resp. $\mathrm{LS}$, $\mathrm{RD}$ and $\mathrm{RS}$) denote the set of all left-dense (resp. left-scattered, right-dense and right-scattered) points of $\mathbb{T}$. \begin{definition} The graininess (resp. backward graininess) function $\fonctionsansdef{\mu}{\mathbb{T}}{\mathbb{R}^+}$ (resp. $\fonctionsansdef{\nu}{\mathbb{T}}{\mathbb{R}^+}$) is defined by $\mu(t) = \sigma (t) -t$ (resp. $\nu(t) = t- \rho (t)$) for any $t \in \mathbb{T}$. \end{definition} Let us recall the usual definitions of $\Delta$- and $\nabla$-differentiability.
\begin{definition} A function \fonctionsansdef{u}{\mathbb{T}}{\mathbb{R}^n}, where $n \in \mathbb{N}$, is said to be $\Delta$-differentiable at $t \in \mathbb{T}^\kappa$ (resp. $\nabla$-differentiable at $t \in \mathbb{T}_\kappa$) if the following limit exists in $\mathbb{R}^n$: \begin{equation} \lim\limits_{\substack{s \to t \\ s \neq \sigma (t) }} \dfrac{u(\sigma(t))-u(s)}{\sigma(t) -s} \; \left( \text{resp.} \; \lim\limits_{\substack{s \to t \\ s \neq \rho (t) }} \dfrac{u(s)-u(\rho (t))}{s-\rho(t)} \right). \end{equation} In such a case, this limit is denoted by $\Delta u (t)$ (resp. $\nabla u (t)$). \end{definition} \begin{proposition} \label{rappeldelta2} Let \fonctionsansdef{u}{\mathbb{T}}{\mathbb{R}^n}. Then, $u$ is $\Delta$-differentiable on $\mathbb{T}^\kappa$ with $\Delta u = 0$ if and only if there exists $c \in \mathbb{R}^n$ such that $u(t) =c$ for every $t \in \mathbb{T}$. \end{proposition} The analogous result for $\nabla$-differentiability is also valid. \begin{definition} \label{functio} A function $u$ is said to be rd-continuous (resp. ld-continuous) on $\mathbb{T}$ if it is continuous at every $t \in \mathrm{RD}$ (resp. $t \in \mathrm{LD}$) and if it admits a left-sided (resp. right-sided) limit at every $t \in \mathrm{LD}$ (resp. $t \in \mathrm{RD}$). \end{definition} We respectively denote by $C^0_{\mathrm{rd}}(\mathbb{T})$ and $C^{1,\Delta}_{\mathrm{rd}}(\mathbb{T})$ the functional spaces of rd-continuous functions on $\mathbb{T}$ and of $\Delta$-differentiable functions on $\mathbb{T}^\kappa$ with rd-continuous $\Delta$-derivative. \\ Let us denote by $\int \Delta \tau$ the Cauchy $\Delta$-integral defined in \cite[p.26]{bohn}, with the following result (see {\cite[Theorem 1.74 p.27]{bohn}}): \begin{theorem} For every $u \in C^0_{\mathrm{rd}}(\mathbb{T}^\kappa)$, there exists a unique $\Delta$-antiderivative $U$ of $u$, in the sense that $\Delta U = u$ on $\mathbb{T}^\kappa$, vanishing at $t=a$. In this case the $\Delta$-integral is defined by \begin{equation*} U(t) = \int_a^t u(\tau) \Delta \tau \end{equation*} for every $t \in \mathbb{T}$. \label{thm_antiderivative} \end{theorem} We have a time-scale chain rule formula (see \cite[Theorem 1.93]{bohn}). \begin{theorem}[Time-scale Chain Rule] \label{tscr} Assume that \fonctionsansdef{v}{\mathbb{T}}{\mathbb{R}} is strictly increasing and $\tilde{\mathbb{T}}:=v(\mathbb{T})$ is a time-scale. Let \fonctionsansdef{w}{\tilde{\mathbb{T}}}{\mathbb{R}}. If $\Delta v(t)$ and $\Delta_{\tilde{\mathbb{T}}} w(v(t))$ exist for $t\in\mathbb{T}^\kappa$, then \begin{equation} \Delta\left(w\circ v\right) = \left(\Delta_{\tilde{\mathbb{T}}} w\circ v\right) \Delta v. \end{equation} \end{theorem} With the time-scale chain rule, we obtain a formula for the derivative of the inverse function (see \cite[Theorem 1.97]{bohn}). \begin{theorem}[Derivative of the inverse] Assume that \fonctionsansdef{v}{\mathbb{T}}{\mathbb{R}} is strictly increasing and $\tilde{\mathbb{T}}:=v(\mathbb{T})$ is a time-scale. Then \begin{equation} \frac{1}{\Delta v}=\Delta_{\tilde{\mathbb{T}}}\left(v^{-1}\right)\circ v \end{equation} at points where $\Delta v$ is different from zero. \end{theorem} Another consequence of the chain rule is the substitution rule for integrals (see \cite[Theorem 1.98]{bohn}). \begin{theorem}[Substitution] Assume that \fonctionsansdef{v}{\mathbb{T}}{\mathbb{R}} is strictly increasing and $\tilde{\mathbb{T}}:=v(\mathbb{T})$ is a time-scale.
If \fonctionsansdef{f}{\mathbb{T}}{\mathbb{R}} is an rd-continuous function and $v$ is $\Delta$-differentiable with rd-continuous $\Delta$-derivative, then for $a,b\in\mathbb{T}$, \begin{equation} \int_{a}^{b} f(t)\Delta v(t)\Delta t = \int_{v(a)}^{v(b)}\left(f\circ v^{-1}\right)(s)\Delta_{\tilde{\mathbb{T}}} s. \end{equation} \end{theorem} \section{Proof of the main result} \label{proof} We first rewrite the invariance relation (\ref{invariance}) in order to have the same domain of integration on both sides. \begin{lemma} \label{changement_bornes_ts} Let $\mathcal{L}$ be a time-scale Lagrangian functional invariant under the action of the group of diffeomorphisms $g$. Then, we have \begin{equation} \label{invar} \int_{a}^{b}L\left(t,x(t),\Delta x(t)\right)\Delta t = \int_{a}^{b}L\left (g^0_s(t),(g^1_s\circ x)(t),\Delta \left(g_s^1\circ x \right)(t) \frac{1}{\Delta g_s^0(t)}\right )\Delta g_s^0(t)\Delta t. \end{equation} \end{lemma} The proof is given in Section \ref{proof_changement_bornes_ts}. \\ As in the classical case, we construct an extended Lagrangian functional $\bar{\mathcal{L}}$ associated with the autonomous Lagrangian $\bar{L}$ as follows: \\ Let $\bar{\mathcal{L}}: C^{2}_{\Delta, \nabla}([a,b],\mathbb{R}) \times C^{2}_{\Delta, \nabla}([a,b],\mathbb{R}) \rightarrow \mathbb{R}$ be defined by \begin{equation} \bar{\mathcal{L}}(t,x) =\int_{a}^{b} \bar{L}\left(t(\tau),x(t(\tau)),\Delta_{\bar{\mathbb{T}}}t(\tau),\Delta_{\bar{\mathbb{T}}}x(t(\tau))\right)\Delta \tau, \end{equation} where $\bar{L}: \mathbb{R}\times \mathbb{R}^{d}\times\mathbb{R}\times \mathbb{R}^{d} \rightarrow \mathbb{R}$ is defined by \begin{equation} \bar{L}(t,x,w,v)=L\left(t,x,\frac{v}{w}\right)w, \end{equation} which is the same as in the classical case. We define the \emph{time-scale bundle path class}, denoted by $\bar{\mathsf{F}}$, by \begin{equation} \bar{\mathsf{F}} = \{(t,x)\in C^{2}_{\Delta, \nabla}([a,b],\mathbb{R}) \times C^{2}_{\Delta, \nabla}([a,b],\mathbb{R}) \ ; \ \displaystyle \tau \longmapsto(t(\tau),x(\tau))=(\tau,x(\tau))\}. \end{equation} We have the following proposition: \begin{proposition} The restriction of the Lagrangian functional $\bar{\mathcal{L}}$ to a path $\gamma=(t,x) \in \bar{\mathsf{F}}$ satisfies \begin{equation} \label{restri_equal_ts} \bar{\mathcal{L}}(t,x)=\mathcal{L}(x). \end{equation} \end{proposition} \begin{proof} Let $\gamma=(t,x) \in \bar{\mathsf{F}}$. By definition, we have \begin{equation*} \bar{L}\left(t(\tau),x(t(\tau)),\Delta_{\bar{\mathbb{T}}}t(\tau),\Delta_{\bar{\mathbb{T}}}x(t(\tau))\right)=L\left(t(\tau),x(t(\tau)),\Delta_{\bar{\mathbb{T}}}x(t(\tau))\frac{1}{\Delta_{\bar{\mathbb{T}}}t(\tau)}\right)\Delta_{\bar{\mathbb{T}}}t(\tau). \end{equation*} As $\gamma$ is a bundle path, we have $t(\tau)=\tau$ and $\Delta_{\bar{\mathbb{T}}}t(\tau)=1$. In consequence, $\bar{\mathbb{T}}=\mathbb{T}$ and we obtain \begin{equation*} \bar{\mathcal{L}}(t,x)=\int_{a}^{b} \bar{L}\left(t(\tau),x(t(\tau)),\Delta_{\bar{\mathbb{T}}}t(\tau),\Delta_{\bar{\mathbb{T}}}x(t(\tau))\right)\Delta_{\bar{\mathbb{T}}} \tau=\int_{a}^{b} L\left(\tau,x(\tau),\Delta x(\tau)\right)\Delta\tau = \mathcal{L}(x). \end{equation*} \end{proof} In order to formulate the time-scale Euler-Lagrange equation for the extended autonomous Lagrangian, we need the $\nabla_{\bar{\mathbb{T}}_s}$-differentiability of $\bar{\sigma}$. We have: \begin{lemma} \label{nabla_diff_TTS} Let $s\in\mathbb{R}$. Let $\bar{\sigma}_s$ be the forward jump operator over $\bar{\mathbb{T}}_s$.
Assume that $\sigma$ is $\nabla$-differentiable on $\mathbb{T}_\kappa$; then $\bar{\sigma}_s$ is $\nabla_{\bar{\mathbb{T}}_s}$-differentiable on $(\bar{\mathbb{T}}_s)^\kappa_\kappa$. \end{lemma} \begin{proof} Let $s\in \mathbb{R}$. By definition, $\displaystyle \bar{\sigma}_s = \sigma\circ g^0_s$ and we have $\sigma\circ g^0_s= g^0_s \circ \sigma$. As $g_s^0$ is $\Delta$-differentiable on $\mathbb{T}^\kappa$ and $\sigma$ is $\nabla$-differentiable on $\mathbb{T}_\kappa$, we obtain from Theorem \ref{tscr} that $g^0_s \circ \sigma$ is $\nabla$-differentiable on $\mathbb{T}^\kappa_\kappa$. As $\bar{\mathbb{T}}_s=g^0_s(\mathbb{T})$, we obtain the result. \end{proof} In what follows, we assume that $\sigma$ is $\nabla$-differentiable on $\mathbb{T}_\kappa$. \begin{lemma} \label{key_ts} A path $\gamma=(t,x)\in\bar{\mathsf{F}}$ is a critical point of $\bar{\mathcal{L}}$ if, and only if, $x$ is a critical point of $\mathcal{L}$ and for all $t\in\mathbb{T}^\kappa_\kappa$ we have \begin{equation} (\boldsymbol{\hexstar}) \quad \nabla \sigma(t)\frac{\partial L}{\partial t}(t,x(t),\Delta x(t))+\nabla\left(\Delta x(t)\frac{\partial L}{\partial v}(t,x(t),\Delta x(t)) -L(t,x(t),\Delta x(t))\right)=0. \end{equation} \end{lemma} The proof is given in Section \ref{proof_key_ts}. \\ Contrary to the continuous case, Lemma \ref{key_ts} implies that extended solutions of the initial Lagrangian are not automatically solutions of the extended Euler-Lagrange equation. This implies that one cannot use Noether's theorem but only the infinitesimal invariance criterion. \begin{lemma} \label{invariance_Ltilde_ts} Let $\mathcal{L}$ be a time-scale Lagrangian functional invariant under the one-parameter group of diffeomorphisms $g$. Then, the time-scale Lagrangian functional $\bar{\mathcal{L}}$ is invariant under the one-parameter group of diffeomorphisms $g$ over $\bar{\mathsf{F}}$. \end{lemma} The proof is given in Section \ref{proof_invariance_Ltilde_ts}.\\ We deduce from Lemma \ref{invariance_Ltilde_ts} and the {\it necessary condition of invariance} given in \cite[Theorem 2, p.~1223]{BT} that \begin{equation} \label{main1} \partial_t L (\star ) . \zeta +\partial_x L (\star ) . \xi +\partial_v L (\star ) .\Delta \xi +\left ( L (\star ) -\partial_v L (\star ) .\Delta x \right ) .\Delta \zeta =0. \end{equation} Multiplying equation \eqref{main1} by $\nabla \sigma$ and using the time-scale Euler-Lagrange equation \eqref{tsel}, we obtain \begin{equation} \partial_t L (\star ) . \nabla \sigma . \zeta +\nabla \sigma . \partial_v L (\star ) . \Delta [\xi] +\nabla \left [ \partial_v L (\star ) \right ] . \xi +\left ( L (\star ) -\partial_v L (\star ) .\Delta x \right ) .\nabla \sigma . \Delta \zeta =0 . \end{equation} Using the relation \begin{equation} \label{leib2} \nabla \left [ f^{\sigma} g \right ] = \nabla \sigma \Delta [f] . g +f .\nabla [g] , \end{equation} we have \begin{equation} \partial_t L (\star ) . \nabla \sigma . \zeta +\nabla \left [ \partial_v L (\star ) . \xi^{\sigma} \right ] +\left ( L (\star ) -\partial_v L (\star ) .\Delta x \right ) .\nabla \sigma . \Delta \zeta =0 . \end{equation} Trying to stay as close as possible to the continuous case, we can use relation (\ref{leib2}) again on the last term. We obtain \begin{equation} \partial_t L (\star ) . \nabla \sigma . \zeta +\nabla \left [ \partial_v L (\star ) . \xi^{\sigma} \right ] +\nabla \left [ \left ( L (\star ) -\partial_v L (\star ) .\Delta x \right ) .
\zeta^{\sigma} \right ] -\zeta .\nabla \left [ L (\star ) -\partial_v L (\star ) .\Delta x \right ] =0 . \end{equation} Taking the $\nabla$-antiderivative of this expression, we deduce the conservation law (\ref{conslaw}). This concludes the proof. \section{The Bartosiewicz and Torres example} \label{examples} We consider the example of a Lagrangian given in \cite{BT} and we compare our result with the one given there. Let $N\in\mathbb{N}^{*}$, $a,b\in \mathbb{R}$ with $a<b$ and let $h=(b-a)/N$. We consider the time-scale $\mathbb{T}=\{t_k, \ k=0,\cdots, N\}$ where $t_k=a+kh$. \subsection{Invariance and a conservation law} We consider the Lagrangian introduced in \cite{BT}, \begin{equation} \label{exemple-delfim} L(t,x, v)=\frac{x^2}{t} + tv^2 \end{equation} for $x,v \in\mathbb{R}$. \begin{lemma} The Lagrangian functional associated to (\ref{exemple-delfim}) is invariant under the family of transformations $G=\{ \phi_s (t,x)=(t e^s , x )\}_{s\in \mathbb{R}}$, whose infinitesimal generator $X$ is given by \begin{equation} \zeta(t)=t \quad \mbox{\rm and} \quad \xi(x)=0. \end{equation} \end{lemma} \begin{proof} Indeed, we have $L\left ( t e^s , x , \displaystyle\frac{\Delta x}{e^s} \right ) e^s =\left ( \displaystyle\frac{x^2}{t e^s} +te^s \displaystyle\frac{(\Delta x)^2}{e^{2s}} \right )e^s =L(t,x,\Delta x)$, so that condition \eqref{invar} is satisfied. \end{proof} In our case, the (non-shifted) Euler--Lagrange equation associated with $L$ is given by \begin{equation} \nabla \left(t \Delta x(t) \right) = \frac{x}{t}, \end{equation} and our time-scale Noether's theorem generates the following conservation law: \begin{equation} I(t,x,v)=\sigma(t)\left(\frac{x^2}{t}-t v^2\right) + \int_a^t \left[ -\frac{x^2}{t}+tv^2-t\nabla \left(\frac{x^2}{t}-tv^2\right)\right]\nabla t . \end{equation} The (shifted) Euler--Lagrange equation associated with $L$ is given by \begin{equation} \Delta \left(t \Delta x \right) = \frac{x^\sigma}{t}, \end{equation} and the time-scale Noether's theorem given in \cite{BT} generates the following conservation law: \begin{equation} C(t,x^\sigma,v)=\sigma(t)\left(\frac{(x^\sigma)^2}{t} - t v^2\right). \end{equation} \begin{remark} In \cite{BT}, the authors consider $\mathbb{T}=\{ 2^n : n\in\mathbb{N}\cup\{0\} \}$. In that case, $\sigma(t)=2 t$ for all $t\in \mathbb{T}$, which gives the expression of $C(t,x^\sigma,v)$ in \cite[Example 3]{BT}. \end{remark} \subsection{Simulations} The initial conditions are chosen such that $x(1)=1$ and $\Delta x(1)=0.1$. We display in Figure \ref{result1} the two quantities computed numerically with $a = 1$, $b=10$ and $h=10^{-3}$. As we can see, the quantity $I(t,x,\Delta x)$ is a constant of motion over the solution of the time-scale Euler-Lagrange equation. This is clearly not the case for the quantity $C(t,x^\sigma,\Delta x)$ provided by the Noether's theorem of \cite{BT}. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{all-int.pdf} \caption{The quantities $I(t,x,\Delta x)$ and $C(t,x^\sigma,\Delta x)$ computed numerically along the solution of the time-scale Euler--Lagrange equation with $a=1$, $b=10$ and $h=10^{-3}$.} \label{result1} \end{figure}
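For the reader's convenience, the following short Python script is a minimal sketch of this computation (it is not the code used to produce Figure \ref{result1}; the first-order $\Delta$/$\nabla$ difference quotients and the handling of the boundary term at $t=a$ are our own conventions). It propagates the non-shifted Euler--Lagrange equation on the uniform time scale above, where $\Delta$ and $\nabla$ reduce to forward and backward difference quotients, and evaluates $I$ and $C$ along the discrete solution.
\begin{verbatim}
import numpy as np

a, b, h = 1.0, 10.0, 1e-3
t = a + h*np.arange(int(round((b - a)/h)) + 1)  # uniform time scale t_k = a + k h
N = len(t)

x = np.empty(N)
x[0] = 1.0            # x(1) = 1
x[1] = 1.0 + h*0.1    # Delta x(1) = 0.1
# non-shifted Euler-Lagrange equation: nabla(t * Delta x)(t_k) = x_k / t_k
for k in range(1, N - 1):
    x[k+1] = x[k] + (t[k-1]*(x[k] - x[k-1]) + h**2*x[k]/t[k])/t[k]

v  = (x[1:] - x[:-1])/h            # Delta x at t_0, ..., t_{N-2}
tk = t[:-1]
G  = x[:-1]**2/tk - tk*v**2        # x^2/t - t v^2
dG = np.zeros_like(G)
dG[1:] = (G[1:] - G[:-1])/h        # backward (nabla) difference of G

s = -G - tk*dG                     # integrand of the nabla-integral in I
I = (tk + h)*G + h*(np.cumsum(s) - s[0])   # sigma(t_k) = t_k + h
C = (tk + h)*(x[1:]**2/tk - tk*v**2)       # the quantity C of [BT]

print(I[0], I[-1])   # I stays (numerically) constant along the solution ...
print(C[0], C[-1])   # ... whereas C drifts, cf. the figure above
\end{verbatim}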
\section{Proofs of the technical Lemmas} \label{technical} \subsection{Proof of Lemma \ref{changement_bornes_ts}} \label{proof_changement_bornes_ts} Using the time-scale chain rule, we obtain \begin{equation*} \Delta_{\bar{\mathbb{T}}} \left(g^1_s\circ x \circ(g_s^0)^{-1}(\tau)\right)=\Delta \left(g_s^1\circ x \right)(t) \Delta_{\bar{\mathbb{T}}_s}\left(g_s^0\right)^{-1}(\tau). \end{equation*} Then, using the time-scale derivative formula for the inverse function, we obtain \begin{equation*} \Delta_{\bar{\mathbb{T}}} \left(g^1_s\circ x \circ(g_s^0)^{-1}(\tau)\right)=\Delta \left(g_s^1\circ x \right)(t)\frac{1}{\Delta g_s^0(t)}. \end{equation*} Using the change of variables formula for time-scale integrals, we obtain \begin{align*} & \int_{\tau_a}^{\tau_b}L\left(\tau,g^1_s\circ x \circ(g_s^0)^{-1}(\tau),\Delta_{\bar{\mathbb{T}}} \left(g^1_s\circ x \circ(g_s^0)^{-1}(\tau)\right)\right)\Delta_{\bar{\mathbb{T}}}\tau \\ &=\int_{a}^{b}L\left (g^0_s(t),(g^1_s\circ x)(t),\Delta \left(g_s^1\circ x \right)(t) \frac{1}{\Delta g_s^0(t)}\right )\Delta g_s^0(t)\Delta t, \end{align*} where $\tau_a=g^0_s(a)$ and $\tau_b=g^0_s(b)$. Finally, using the invariance condition in Equation \eqref{invariance}, we obtain the result. \subsection{Proof of Lemma \ref{key_ts}} \label{proof_key_ts} For the necessary condition, let $\gamma=(t,x)\in\bar{\mathsf{F}}$ be a critical point of $\bar{\mathcal{L}}$. Then, from Lemma \ref{nabla_diff_TTS} and Equation \eqref{tsel}, it satisfies the following Euler-Lagrange equations \begin{equation} (\textrm{EL}^{\nabla \circ \Delta })_{\bar{L}} \left\{\begin{aligned} &\nabla_{\bar{\mathbb{T}}}\left[\frac{\partial \bar{L}}{\partial v}(\bar{\star}_\tau)\right]=\nabla \bar{\sigma}(\tau)\frac{\partial \bar{L}}{\partial x}(\bar{\star}_\tau),\\ &\nabla_{\bar{\mathbb{T}}}\left[\frac{\partial \bar{L}}{\partial w}(\bar{\star}_\tau)\right]=\nabla \bar{\sigma}(\tau)\frac{\partial \bar{L}}{\partial t}(\bar{\star}_\tau), \end{aligned}\right. \end{equation} for all $\tau \in (\bar{\mathbb{T}}_s)^\kappa_\kappa$, where $(\bar{\star}_\tau)=\left(t(\tau),x(t(\tau)),\Delta_{\bar{\mathbb{T}}}t(\tau), \Delta_{\bar{\mathbb{T}}}x(t(\tau))\right)$.\\ Let $(\star_\tau)=\left(t(\tau),x(t(\tau)),\Delta_{\bar{\mathbb{T}}}x(t(\tau))\frac{1}{ \Delta_{\bar{\mathbb{T}}} t(\tau)}\right)$. By definition, we have \begin{align} &\frac{\partial \bar{L}}{\partial t}(\bar{\star}_\tau)= \frac{\partial L}{\partial t}(\star_\tau) \Delta_{\bar{\mathbb{T}}}t(\tau), &\frac{\partial \bar{L}}{\partial w}(\bar{\star}_\tau) &= L\left(\star_\tau\right) - \Delta_{\bar{\mathbb{T}}}x(t(\tau))\frac{1}{\Delta_{\bar{\mathbb{T}}}t(\tau)} \frac{\partial L}{\partial v}(\star_\tau), \label{eq_partialtildets1}\\ &\frac{\partial \bar{L}}{\partial x}(\bar{\star}_\tau) = \frac{\partial L}{\partial x} (\star_\tau)\Delta_{\bar{\mathbb{T}}}t(\tau), &\frac{\partial \bar{L}}{\partial v} (\bar{\star}_\tau)&= \frac{\partial L}{\partial v}(\star_\tau) \label{eq_partialtildets2}. \end{align} As $\gamma \in \bar{\mathsf{F}}$, we have $(\star_\tau)=\left(\tau,x(\tau),\Delta x(\tau)\right)$ and $\nabla_{\bar{\mathbb{T}}} \bar{\sigma}(\tau)=\nabla \sigma(\tau)$. In consequence, the first Euler-Lagrange equation is equivalent to \begin{equation} \label{EL1_final_ts} \nabla\left[\frac{\partial L}{\partial v}\left(\star_\tau\right)\right]=\nabla \sigma(\tau)\frac{\partial L}{\partial x}\left(\star_\tau\right) \end{equation} for all $\tau \in \mathbb{T}^\kappa_\kappa$, and the second Euler-Lagrange equation is equivalent to \begin{equation} \nabla \sigma(\tau)\frac{\partial L}{\partial t}(\star_\tau)+\nabla\left(\Delta x(\tau) \frac{\partial L}{\partial v} (\star_\tau)-L(\star_\tau)\right)=0 \end{equation} for all $\tau \in \mathbb{T}^\kappa_\kappa$, which corresponds to the condition $(\boldsymbol{\hexstar})$.
As Equation \eqref{EL1_final_ts} is the Euler-Lagrange equation associated with the Lagrangian functional $\mathcal{L}$, we obtain that $x$ is a critical point of $\mathcal{L}$ and that $(\boldsymbol{\hexstar})$ is satisfied. \\ For the sufficient condition, assume that $(\boldsymbol{\hexstar})$ is satisfied, let $x$ be a critical point of $\mathcal{L}$ and let $\gamma=(t,x)$ be the corresponding path in $\bar{\mathsf{F}}$. Using the same computations as above, we obtain that $\gamma$ is a critical point of $\bar{\mathcal{L}}$. This concludes the proof. \subsection{Proof of Lemma \ref{invariance_Ltilde_ts}} \label{proof_invariance_Ltilde_ts} Let $\gamma=(t,x)\in\bar{\mathsf{F}}$. By definition, we have \begin{equation} \bar{\mathcal{L}}(g_s(\gamma))=\int_{a}^{b} \bar{L}\left(g^0_s(t(\tau)),g^1_s\circ x (t(\tau)),\Delta_{\bar{\mathbb{T}}_s}g_s^0(t(\tau)),\Delta_{\bar{\mathbb{T}}_s}\left(g^1_s\circ x (t(\tau))\right)\right)\Delta_{\bar{\mathbb{T}}_s}\tau. \end{equation} Using the definition of $\bar{L}$, the fact that $t(\tau)=\tau$ and that $\Delta g_s^0(\tau)\neq0$ for all $\tau \in \mathbb{T}^\kappa$, we obtain \begin{equation} \bar{\mathcal{L}}(g_s(\gamma))=\int_{a}^{b}L\left (g^0_s(\tau),(g^1_s\circ x)(g^0_s(\tau)),\Delta \left(g_s^1\circ x \right)(\tau) \frac{1}{\Delta g_s^0(\tau)}\right )\Delta g_s^0(\tau)\Delta \tau. \end{equation} Using the invariance of $\mathcal{L}$ together with Lemma \ref{changement_bornes_ts}, we obtain \begin{equation} \bar{\mathcal{L}}(g_s(\gamma))=\int_{a}^{b}L\left (\tau,x(\tau),\Delta x(\tau)\right )\Delta \tau. \end{equation} In consequence, as $\Delta t(\tau)=1$, we obtain \begin{align} \bar{\mathcal{L}}(g_s(\gamma))=\int_{a}^{b}\bar{L}\left (\tau,x(\tau),1,\Delta x(\tau)\right )\Delta\tau=\bar{\mathcal{L}}(\gamma). \end{align} This concludes the proof.
\section{Introduction} Let $D$ be a division ring with center $F=Z(D)$. For an element $x\in D$, if there exists a positive integer $n_x$ such that $x^{n_x}\in F$ and $x^{m}\notin F$ for any positive integer $m<n_x$, then $x$ is called {\it $n_x$-central}. If $n_x=1$, $x$ is said to be {\it central}. A subgroup $N$ of the unit group $D^*$ of $D$ is called {\it radical} over $F$ if for any element $x\in N$, there exists $n_x>0$ such that $x$ is $n_x$-central. Such a subgroup $N$ is called {\it central} if $n_x=1$ for any $x\in N$. In other words, $N$ is central if and only if $N$ is contained in $F$. In 1978, Herstein \cite{Her1} conjectured that if a subnormal subgroup $N$ of $D^*$ is radical over $F$ then it is central. Two years later, he considered the conjecture again and proved that the assumption ``subnormal'' in this conjecture is equivalent to ``normal'' (see \cite[Lemma 1]{Her2}). That is, he asked whether a normal subgroup of $D^*$ is central if it is radical over $F$. In \cite{Her1}, Herstein proved that the conjecture holds if $N$ is torsion. As a consequence, one can see that the conjecture is also true if $D$ is centrally finite. We notice that in \cite{HH}, there is a different proof of this fact. Recall that a division ring $D$ with center $F$ is called {\it centrally finite} if $D$ is a finite-dimensional vector space over $F$ \cite[Definition 14.1]{Lam}. In \cite{Her2}, by using the Pigeon-Hole Principle, Herstein also showed that the conjecture holds if $F$ is uncountable. Recently, there have been some efforts to answer this conjecture. In \cite{HDB1} and \cite{HDB2}, we proved that the conjecture holds if $D$ is either of type $2$ or weakly locally finite. Actually, we get a more general result: if a normal subgroup of $D^*$ is radical over a proper division subring $K$ of $D$, then it is central, provided $D$ is either of type $2$ or weakly locally finite. Recall that a division ring $D$ is of {\it type $2$} if $\dim_FF(x,y)<\infty$ for any $x, y\in D^*$. If $F(S)$ is a centrally finite division ring for any finite subset $S$ of $D$, then $D$ is called {\it weakly locally finite}. Here, $F(S)$ denotes the division subring of $D$ generated by $F\cup S$. In general, the conjecture remains open. In this paper, we give a positive answer to this conjecture in a particular case. In fact, we prove the following Theorem. \begin{Th} Let $D$ be a division ring and $N$ be a normal subgroup of $D^*$. If there exists a positive integer $d$ such that every element $x\in N$ is $n_x$-central for some positive integer $n_x\le d$, then $N$ is central. \end{Th} \section{The proof of the Theorem} The main technique we use in this paper is that of generalized rational expressions. For our further needs, we recall some definitions and prove some Lemmas. First, based on the structure of twisted Laurent series rings, we construct a division ring which will be used in the next Lemmas. Let $R$ be a ring and $\phi$ be a ring automorphism of $R$. We write $\Cal R=R((t,\phi))$ for the ring of formal Laurent series $\sum\limits_{i = n}^\infty {{a_i}{t^i}}$, where $n\in \mathbb{Z}, a_i\in R$, with the multiplication defined by the twist equation $ta=\phi(a)t$ for every $a\in R$. In case $\phi(a)=a$ for any $a\in R$, we write $R((t))=R((t,\phi))$. If $R=D$ is a division ring then $\Cal D=D((t,\phi))$ is also a division ring (see \cite[Example 1.8]{Lam}).
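For instance (an elementary consequence of the twist equation, which we record here for later use), monomials multiply as $(a t^i)(b t^j)=a\,\phi^{i}(b)\,t^{i+j}$ for all $a,b\in R$ and $i,j\in \mathbb{Z}$, so that $$\Big(\sum\limits_{i = n}^\infty {{a_i}{t^i}}\Big)\Big(\sum\limits_{j = m}^\infty {{b_j}{t^j}}\Big)=\sum\limits_{k = n+m}^\infty\Big(\sum\limits_{i+j=k} a_i\,\phi^{i}(b_j)\Big){t^k}.$$ Moreover, we have.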
\begin{Lemma}\label{2.1} Let $R=D$ be a division ring, $\Cal D=D((t,\phi))$ be as above, $F=Z(D)$ be the center of $D$, and $L=\{\, a\in D\mid \phi(a)=a\}$ be the fixed division ring of $\phi$ in $D$. If the center $k=Z(L)$ of $L$ is contained in $F$, then the center of $\Cal D$ is $$Z(\Cal D)=\begin{cases} k&\text{if } \phi \text{ has infinite order,}\\ k((t^s))&\text{if } \phi \text{ has finite order } s. \end{cases}$$ \end{Lemma} \begin{Proof} The proof is similar to \cite[Proposition 14.2]{Lam}. It suffices to prove that $Z(\Cal D)\subseteq k$ if $\phi$ has infinite order, and $Z(\Cal D)\subseteq k((t^s))$ in case $\phi$ has finite order $s$, since it is easy to check that $k((t^s))\subseteq Z(\Cal D)$ if $\phi$ has finite order $s$. Let $\alpha=\sum\limits_{i = n}^\infty {{a_i}{t^i}}$ be in $Z(\Cal D)$. We first prove that $a_i\in k$ for every $i\ge n$. One has $\sum\limits_{i = n}^\infty {{a_i}{t^{i+1}}}=(\sum\limits_{i = n}^\infty {{a_i}{t^i}})t=t\sum\limits_{i = n}^\infty {{a_i}{t^i}}=\sum\limits_{i = n}^\infty {{\phi(a_i)}{t^{i+1}}}$. Hence, $\phi(a_i)=a_i$ for every $i\ge n$. This means that $a_i\in L$ for every $i\ge n$. Moreover, for any $a\in L$, $\sum\limits_{i = n}^\infty {{aa_i}{t^{i}}}=(\sum\limits_{i = n}^\infty {{a_i}{t^i}})a=\sum\limits_{i = n}^\infty {{a_i\phi(a)}{t^i}}=\sum\limits_{i = n}^\infty {{a_ia}{t^{i}}}$. Therefore, $aa_i=a_ia$ for every $i\ge n$. This implies $a_i\in k$ for every $i\ge n$. Now for any $b\in D$, $\sum\limits_{i = n}^\infty {{ba_i}{t^{i}}}=(\sum\limits_{i = n}^\infty {{a_i}{t^i}})b=\sum\limits_{i = n}^\infty {{a_i\phi^i(b)}{t^i}}=\sum\limits_{i = n}^\infty {{\phi^i(b)a_i}{t^i}}$, so that $ba_i=\phi^i(b)a_i$ for every $i\ge n$. {\bf Case 1.} The automorphism $\phi$ has infinite order. For every $i\ne 0$, we have $\phi^i\ne \mathrm{id}$, so there exists $b\in D$ with $\phi^i(b)\ne b$; from $(b-\phi^i(b))a_i=0$ one gets $a_i=0$, which implies $\alpha=a_0\in k$. {\bf Case 2.} The automorphism $\phi$ has finite order $s$. For any $i$ which is not divisible by $s$, we have $\phi^i\ne \mathrm{id}$, so there exists $b\in D$ with $\phi^i(b)\ne b$, and $(b-\phi^i(b))a_i=0$ forces $a_i=0$. Therefore, $\alpha=\sum\limits_{i = m}^\infty {{a_{si}}{t^{si}}}\in k((t^s))$. \end{Proof} Let $\{\,t_i\mid i\in \mathbb{Z}\,\}$ be a countable set of indeterminates and $D$ be a division ring. We construct a family of division rings in the following way. Set $$D_0=D((t_0)), D_1 =D_0((t_1)),$$ $$D_{-1}=D_1((t_{-1})), D_2=D_{-1}((t_{2})),$$ and, for any $n>1,$ $$ D_{-n}=D_n((t_{-n})),D_{n+1}=D_{-n}((t_{n+1})).$$ Now put $D_{\infty}=\bigcup\limits_{n=-\infty}^{+\infty} {{D_n}}$. Then $D_\infty$ is a division ring. Assume that $F$ is the center of $D$. By Lemma~\ref{2.1}, it is elementary to prove by induction on $n\ge 0$ that the center of $D_0$ is $F_0=F((t_0))$, the center of $D_{n+1}$ is $F_{n+1}=F_{-n}((t_{n+1}))$ and the center of $D_{-n}$ is $F_{-(n+1)}=F_{n+1}((t_{-(n+1)}))$. In particular, $F$ is contained in $Z(D_\infty)$. Consider the automorphism $f$ of $D_\infty$ defined by $f(a)=a$ for any $a$ in $D$ and $f(t_i)=t_{i+1}$ for every $i\in \mathbb{Z}$. \begin{Prop}\label{2.2} Let $D, D_\infty$ and $f$ be as above. Then $\Cal D=D_\infty((t,f))$ is a division ring whose center coincides with the center $F$ of $D$. \end{Prop} \begin{Proof} Note that $D$ is the fixed division ring of $f$ in $D_\infty$.
Since the center $F$ of $D$ is contained in the center of $D_\infty$ and $f$ has infinite order, Lemma~\ref{2.1} yields $Z(\Cal D)=F.$\end{Proof} \bigskip Recall that a {\it generalized rational expression} of a division ring $D$ is an expression constructed from $D$ and a set of noncommutative indeterminates using addition, subtraction, multiplication and division. A generalized rational expression over $D$ is called a {\it generalized rational identity} if it vanishes on all permissible substitutions from $D$. A generalized rational expression $f$ of $D$ is called nontrivial if there exists an extension division ring $D_1$ of $D$ such that $f$ is not a generalized rational identity of $D_1$. The details of generalized rational identities can be found in \cite{Rowen}. Given a positive integer $n$ and $n+1$ noncommutative indeterminates $x,y_1,\cdots, y_n$, put $$g_n(x,y_1,y_2,\cdots, y_n)=\sum\limits_{\delta \in {S_{n + 1}}} {\mbox{\rm sign}(\delta ).{x^{\delta (0)}}{y_1}{x^{\delta (1)}}{y_2}{x^{\delta (2)}} \ldots {y_n}{x^{\delta (n)}}}, $$ where $S_{n+1}$ is the symmetric group on $\{\,0,1,\cdots, n\,\}$ and $\mbox{\rm sign}(\delta)$ is the sign of the permutation $\delta$. This is the generalized rational expression defined in \cite{BMM} to connect an algebraic element of degree $n$ with a polynomial. We have the following first property of this generalized rational expression. \begin{Lemma}\label{3.1} Let $D$ be a division ring with center $F$. For any element $a\in D$, the following are equivalent: \begin{enumerate} \item The element $a$ is algebraic over $F$ of degree at most $n$. \item $g_n(a,r_1,r_2,\cdots, r_n)=0$ for any $r_1, r_2,\cdots, r_n\in D$. \end{enumerate} \end{Lemma} \begin{Proof} See \cite[Corollary 2.3.8]{BMM}. \end{Proof} For instance, for $n=1$ one has $g_1(x,y_1)=y_1x-xy_1$, so that $g_1(a,r)=0$ for all $r\in D$ if and only if $a$ is central, i.e., algebraic of degree $1$ over $F$. Let $D$ be a division ring with center $F$ and $a$ be an element of $D$. Then, by definition, $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is also a generalized rational expression of $D$. Notice that, in general, the expression $g_n(x,y_1,\cdots, y_n)$ is a polynomial but $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is not necessarily a polynomial. If $a$ is algebraic of degree at most $n$ over $F$, then $g_n(a,y_1,y_2,\cdots, y_n)$ is a trivial generalized rational expression according to Lemma~\ref{3.1}. However, the following Lemma shows that $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is always nontrivial if $a$ is not in $F$. \begin{Lemma}\label{1.1} Let $D$ be a division ring with center $F$. If $a\in D\backslash F$ then the generalized rational expression $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is nontrivial. \end{Lemma} \begin{Proof} Let $D_\infty$, $\Cal D=D_\infty ((t,f))$ and $F$ be as in Proposition~\ref{2.2}. Since $a\notin F$, there exists $b\in D$ such that $c=aba^{-1}b^{-1}\ne 1$. Because $a,b,c$ commute with $t$, $$(c-1)(1+b^{-1}t)^{-1}+1=a(b+t)a^{-1}(b+t)^{-1}.$$ If $a(b+t)a^{-1}(b+t)^{-1}$ is algebraic over $F$ then so is $(c-1)(1+b^{-1}t)^{-1}$. Hence, $(c-1)^{-1}+b^{-1}(c-1)^{-1}t=((c-1)(1+b^{-1}t)^{-1})^{-1}$ is algebraic over $F$. Let $p(x)=x^m+a_{m-1}x^{m-1}+\cdots +a_1x+a_0$, with $m>0$, be the minimal polynomial of $(c-1)^{-1}+b^{-1}(c-1)^{-1}t$ over $F$. This means $$ 0=((c-1)^{-1}+b^{-1}(c-1)^{-1}t)^m+\cdots +a_1((c-1)^{-1}+b^{-1}(c-1)^{-1}t)+a_0.$$ In particular, comparing the coefficients of $t^m$, $(b^{-1}(c-1)^{-1})^m=0$, a contradiction! Therefore, $a(b+t)a^{-1}(b+t)^{-1}$ is not algebraic over $F$. Using Lemma~\ref{3.1}, we have $$g_n(a(b+t)a^{-1}(b+t)^{-1},r_1,r_2,\cdots, r_n)\ne 0,$$ for some $r_1,r_2,\cdots, r_n\in \Cal D$.
This means $g_n(axa^{-1}x^{-1},y_1,y_2,\cdots, y_n)$ is nontrivial. \end{Proof} A polynomial identity ring is a ring $R$ with a non-zero polynomial $P$ vanishing on all permissible substitutions from $R$. In this case, $P$ is called a {\it polynomial identity} of $R$, or we say that $R$ {\it satisfies} $P$. There is a well-known result: a division ring is a polynomial identity division ring if and only if it is centrally finite (see \cite[Theorem 6.3.1]{Her3}). We have a similar property for generalized rational identity division rings. \begin{Lemma}\label{3.3} Let $D$ be a division ring with center $F$. If there exists a nontrivial generalized rational identity of $D$ then either $D$ is centrally finite or $F$ is finite. \end{Lemma} \begin{Proof} See \cite[Theorem 8.2.15]{Rowen}. \end{Proof} Now we are ready to prove our Theorem.\\ \noindent {\bf Proof of Theorem 1.1}\\ Suppose that $N$ is not contained in $F$. Then, there exists $a\in N\backslash F$. For any $d+1$ elements $r, r_1,r_2,\cdots, r_d$ of $D$ with $r\ne 0$, since $ara^{-1}r^{-1}\in N$ is an $n_{a,r}$-central element for some $0<n_{a,r}\le d$, by Lemma~\ref{3.1}, $$g_d(ara^{-1}r^{-1},r_1,r_2,\cdots, r_d)=0.$$ By Lemma~\ref{1.1}, $g_d(axa^{-1}x^{-1},y_1,y_2,\cdots, y_d)$ is thus a nontrivial generalized rational identity of $D$. Now, in view of Lemma~\ref{3.3}, either $D$ is centrally finite or $F$ is finite. If $D$ is centrally finite then $N\subseteq F$ by \cite[Theorem 3.1]{HDB1}. If $F$ is finite then $N$ is torsion (since $x^{n_x}\in F^*$ and $F^*$ is finite for every $x\in N$), so by \cite[Theorem 8]{Her1}, $N\subseteq F$. Thus, in both cases we have $N\subseteq F$, a contradiction. \subsection*{Acknowledgment} The author is very thankful to the referee for carefully reading the paper and making useful comments.
\section{Introduction} The electronic structure, i.e., the electronic states and their broadening or scattering rate, is arguably the most fundamental property of a solid. Scattering processes not only affect equilibrium properties but are also essential if a material is driven away from equilibrium. Experimentally, the one-particle scattering rate for the (occupied) electronic states can be measured by angular-resolved photoemission spectroscopy (ARPES)\cite{Grioni2001,Damascelli2003}. If vertex corrections can be neglected, there is a one-to-one correspondence between this one-particle scattering rate and the two-particle scattering rates for response functions such as the optical conductivity. Here, the width of the Drude peak corresponds to the two-particle scattering rate that, without vertex corrections, is directly related to the one-particle scattering rates we calculate here\cite{Drude1900,Drude1900a} \footnote{In both dynamical mean-field theory and the Boltzmann approach there are no vertex corrections to the optical conductivity.}. We study them by using two methods that are widely employed in solid state theory, albeit by different communities. Through our comparison, we hope to contribute to a better mutual understanding of the strengths and weaknesses of these methods, as well as of the very different electron-electron scattering in a metal, band insulator, and Mott insulator. Dynamical mean-field theory (DMFT) \cite{Metzner1989,Georges1992a,Jarrell1992,Georges1996} is one of the most widely used approaches for strongly correlated materials. It is non-perturbative and maps a correlated lattice model onto the solution of an Anderson impurity model in a self-consistent way \cite{Georges1992a}. DMFT becomes exact in the limit of high dimensions or high connectivity of the lattice \cite{Metzner1989}, which implies that the self-energy and hence the scattering rate is momentum independent. The Boltzmann scattering equation (BSE) \cite{Boltzmann1872,Snoke2007,Ziman1960,Chambers1990} was originally developed for gases \cite{Boltzmann1872} but is nowadays used to address a multitude of different problems, all the way from nuclear physics to cosmology. Often the transport part of this equation is combined with a crude approximation for the scattering, the relaxation time approximation, to study transport properties. However, the full Boltzmann scattering term can also be included, allowing e.g.\ for a highly detailed reconstruction of the thermalization process. In particular, the full Boltzmann scattering term allows one to calculate scattering rates, making a direct comparison of this approach with DMFT possible. To the best of our knowledge such a comparison has not been done in a systematic way, and we attempt to fill this gap with the present work. Specifically, we study the equilibrium scattering rates for the single-orbital Hubbard model in two dimensions as well as those for a two-orbital band insulator. The BSE is expected to fail at strong interaction $U$, since it describes the dynamics of the distribution function by a (momentum-resolved) rate equation with the transition rates usually calculated in lowest-order perturbation theory in $U$ (Fermi's golden rule). DMFT, on the other hand, neglects (as an impurity model is solved) the momentum dependence of scattering (an approximation known to become correct in the limit of high dimensions).
A final point is that, while DMFT even allows for the construction of effective (local) scattering matrix elements in the form of the two-particle vertex, the Boltzmann scattering term needs them as input and only performs the joint density of states (DOS) integration and, eventually, the time propagation. In this paper, we show that indeed at strong interaction $U$, i.e., in the Mott insulating phase \footnote{For an overview of the Mott-Hubbard transition and the physics of the Mott insulator, see \cite{Gebhard1997}.}, a BSE description of the scattering rate is not possible. This is surprising given the good description of the spectral redistribution caused by impact ionization \cite{wais2018}. The DMFT scattering rate is much higher than what can be expected or understood in a rigid-band picture; it is intimately connected with the formation of the Hubbard bands and shoulders therein. Conversely, at weak $U$ we obtain a discrepancy as well. These discrepancies, noticeably larger scattering rates and a momentum differentiation on the Fermi surface, can be traced back to momentum conservation or the lack thereof: DMFT and BSE without momentum conservation are in good agreement. This paper is structured as follows: In Sec.\ \ref{sec:model-methods} we introduce the Hubbard-type models considered, and describe how scattering rates are calculated in DMFT and with the Boltzmann scattering equation. In Sec.\ \ref{chap:weak1band} we present results for the weak-coupling single-orbital Hubbard model. Next, we compare scattering rates for the two-orbital band insulator in Sec.\ \ref{weakCoupling2Band} and the Mott insulating single-orbital Hubbard model in Sec.\ \ref{sec:mott}. In Sec.\ \ref{sec:conclusion} we summarize the results. Furthermore, we provide additional derivations and results in the Appendix. \section{Model and methods}\label{sec:model-methods} \subsection{Hubbard-type models} In this paper we study the single-orbital Hubbard model on a two-dimensional square lattice, as well as a related two-orbital model which is a band insulator. It is useful to employ second quantization, where operators $c^{(\dag)}_{\mathbf{k}m\sigma}$ annihilate (create) electrons at momentum $\mathbf{k}$ and spin $\sigma$ in orbital $m$. Their Fourier-transformed operators $c^{(\dag)}_{i m\sigma}$ do the same for a lattice site $i$ instead of momentum $\mathbf{k}$; the products $n^{\vphantom{\dag}}_{\mathbf{k}m\sigma}=c_{\mathbf{k}m\sigma}^\dag c^{\vphantom{\dag}}_{\mathbf{k}m\sigma}$ and $n^{\vphantom{\dag}}_{im\sigma}=c_{im\sigma}^\dag c^{\vphantom{\dag}}_{im\sigma}$ are the particle number operators for momentum and site occupations, respectively. Both Hubbard-type models can be described by the following Hamiltonian \begin{equation} \label{eq:hubbard-hamiltonian} H = \sum_{\bold k m\sigma} \epsilon^{\vphantom{\dag}}_{m}(\mathbf{k}) n^{\vphantom{\dag}}_{\mathbf{k}m\sigma} + \frac{U}{2} \sum_{i} \! \sum_{(l \sigma) \neq(m \sigma')}\! n_{il\sigma} n_{im\sigma'}. \end{equation} The first term constitutes a tight-binding description of the system. It describes the kinetic energy (``hopping'') of non-interacting electrons with crystal momentum $\mathbf{k}$ and a dispersion relation $\epsilon^{\vphantom{\dag}}_{m}(\mathbf{k})$ that is assumed to be diagonal in the orbital index $m$. This term is diagonal in momentum space. The second term models the Coulomb repulsion $U$ between electrons.
It is strictly local at each lattice site $i$ and, for the sake of simplicity, we take the interaction to be the same within one orbital and between different orbitals. Consistently, there is no Hund's rule coupling, i.e.\ $J=0$. A self-interaction is excluded in the sum. We consider both the prevalent single-orbital Hubbard model, where the orbital indices $m$ and $l$ are restricted to this single orbital, and a two-orbital band insulator with interaction $U$. In the latter case, the bandgap is encoded in the dependence of $\epsilon^{\vphantom{\dag}}_m(\mathbf{k})$ on $m\in\{1,2\}$ as detailed below. Due to the exponential scaling of the Fock space needed to represent an $N$-particle wave function, it is completely impossible to compute the dynamics of every single electron in the system. Instead, one is bound to make approximations such as DMFT and the BSE, extracting relevant information from statistically averaged quantities such as distributions or correlation functions. \subsection{Dynamical mean field theory} Many-body quantum field theory, which is also the pillar upon which DMFT is built, has the Green's function as its basic one-particle quantity. The retarded Green's function is defined as follows (with operators in the Heisenberg representation) \cite{Abrikosov1975a}: \begin{align} &G_R(\mathbf{k}, m, t) = -i\Theta(t)\Big\langle c_{\mathbf{k}m \sigma}^{\vphantom{\dag}}(t) c_{\mathbf{k}m \sigma}^\dag(0) \!+\! c^{\dag}_{\mathbf{k}m \sigma}(0) c_{\mathbf{k}m \sigma}^{\vphantom{\dag}}(t) \Big\rangle\label{eq:G-ret-time}\\ &G_R(\mathbf{k}, m, \omega) = \int_{-\infty}^\infty \!dt \;e^{i\omega t}\; G_R(\mathbf{k},m, t).\label{eq:G-ret-freq} \end{align} Here, $\Theta(t)=0$ for $t<0$ and $1$ for $t>0$ is the step function, and $\langle ... \rangle$ denotes the grand canonical expectation value. One can further define a self-energy \begin{equation} \label{eq:dyson} \Sigma_R(\mathbf{k}, m,\omega) = \big[G_R^{(0)}(\mathbf{k}, m,\omega)\big]^{-1} - \big[G_R(\mathbf{k}, m,\omega)\big]^{-1} \end{equation} as the difference between the (inverse) non-interacting ($U=0$) Green's function $G_R^{(0)}(\mathbf{k}, m,\omega)$ and the interacting ($U\neq 0$) Green's function $G_R(\mathbf{k}, m,\omega)$, which contains all effects of the interaction \cite{Abrikosov1975a}. Here, and similarly in \begin{equation} \label{eq:g-nonint} G_R^{(0)}(\mathbf{k}, m, \omega) = \lim_{\alpha\rightarrow 0^+}\big[ \omega + \mu + i\alpha - \epsilon_m(\mathbf{k}) \big]^{-1}, \end{equation} the orbital-diagonal dispersion relation allows us to avoid matrix inversions in the orbital indices; $\mu$ is the chemical potential. In DMFT, which becomes exact in the limit of infinite dimensions \cite{Metzner1989}, the momentum dependence of the self-energy is neglected: $\Sigma_R(\mathbf{k}, m, \omega) \to \Sigma_R(m, \omega)$. Thus the one-particle Green's function of the Hubbard model in the DMFT approximation is \begin{equation} \label{eq:g-ret-w-explicit} G_R(\mathbf{k}, m, \omega) = \big[ \omega +\mu - {\epsilon}_m(\mathbf{k}) - \Sigma_R(m,\omega)\big]^{-1}, \end{equation} where the $i\alpha$ of Eq.\ \eqref{eq:g-nonint} becomes obsolete since ${\rm Im}\Sigma_R(\omega)$ is negative. For the actual calculation of this self-energy in DMFT, done through a self-consistent solution of an Anderson impurity model, we refer the reader to Refs.~\onlinecite{Georges1992a,Georges1996,Held2007}. Let us instead turn to our actual task, i.e.\ calculating scattering rates or quasiparticle life times.
For the following considerations we drop the orbital ($m$) dependence, as the Green's function and self-energy are anyhow diagonal in $m$ due to the assumed dispersion relation. If we linearize the real part of the self-energy and parameterize it through the quasiparticle weight $Z$, i.e., $\mathrm{Re}\Sigma_R(\omega)\approx\mathrm{Re}\Sigma_R(0)+[1-Z^{-1}] \omega$, we can approximate Eq.~(\ref{eq:g-ret-w-explicit}) as \begin{equation} \label{eq:G_QP} G_R(\mathbf{k}, \omega) \approx Z \big[ \omega - \tilde{\epsilon}(\mathbf{k}) -Z \mathrm{Im}\Sigma_R(\omega)\big]^{-1}, \end{equation} where the Green's function has a quasiparticle pole at $\omega=\tilde{\epsilon}(\mathbf{k})=Z[{\epsilon}(\mathbf{k})+\mathrm{Re}\Sigma_R(0)-\mu]$, with a Lorentzian broadening of full-width--half-maximum of $-2Z \mathrm{Im}\Sigma_R(\tilde{\epsilon}(\mathbf{k}))$. That is, $\tilde{\epsilon}(\mathbf{k})$ is the quasiparticle energy, and the broadening indicates that \begin{equation} \label{eq:scatrat-dmft-w} \frac{1}{\tau[\omega=\tilde{\epsilon}(\mathbf{k})]} = -2 Z \mathrm{Im} \Sigma_R(\omega=\tilde{\epsilon}(\mathbf{k})) \end{equation} is the inverse life time, also known as the scattering rate. Even more transparent is the role of the life time $\tau$ when we recapitulate the physical meaning of the time-dependent retarded Green's function Eq.\ \eqref{eq:G-ret-time}. For the special case of zero temperature, the system is in the ground state $|\mathrm{GS}\rangle$, and if the momentum $\mathbf{k}$ is not occupied in the ground state, Eq.\ \eqref{eq:G-ret-time} reduces to \begin{equation} G_R(\mathbf{k}, t) = -i\langle \mathrm{GS} | c^{\vphantom{\dag}}_\mathbf{k}(t) c^\dag_\mathbf{k}(0) | \mathrm{GS} \rangle \; . \end{equation} That is, at time $t=0$ a particle is added to the system, which is thus in the state $|\phi\rangle = c^\dag_\mathbf{k}(0) |\mathrm{GS}\rangle$. Projecting this state onto its propagated version at time $t>0$, $\langle\phi(t)| = e^{i E_\text{GS} t} \langle \mathrm{GS} | c^{\vphantom{\dag}}_\mathbf{k}(t)$ (with $E_\text{GS}$ the ground state energy), yields the probability amplitude that this state still exists after a time $t$ has elapsed \cite{Nolting2015}. This motivates the interpretation of $|G_R(\mathbf{k}, t)|^2$ as the probability that a state created by the addition of a particle at $t=0$ still exists at a later time $t>0$. In Appendix \ref{app:gft}, we show that this probability is approximately \begin{equation} \label{eq:g-t} \big|G_R(\bold k, t)\big|^2 \propto e^{2Z\mathrm{Im}\Sigma_R(\tilde{\epsilon}(\mathbf{k}))\, t}\equiv e^{-t/\tau(\tilde{\epsilon}(\mathbf{k}))}, \end{equation} which again leads to Eq.~(\ref{eq:scatrat-dmft-w}) for the life time $\tau$. Technically, we calculate the DMFT self-energy on Matsubara frequencies \cite{Matsubara1955} by continuous-time quantum Monte Carlo \cite{Gull2011a} with symmetric improved estimators \cite{Kaufmann2019} using the w2dynamics program package \cite{Parragh2012,w2dynamics}. The retarded self-energy at real (physical) frequencies is then obtained by maximum entropy analytic continuation \cite{Jarrell1996,Geffroy2019,kaufmannGithub}. \subsection{Boltzmann scattering equation} The key quantity of the BSE \cite{Boltzmann1872,Snoke2007,Ziman1960,Chambers1990} is the distribution function, whose dynamics is described through the leading-order contributions of the particle-particle interaction (for the models considered).
In cases where the elementary particles interact strongly, it is advisable to rewrite the Hamiltonian in terms of weakly interacting quasiparticles, so that leading-order perturbation theory can be applied to the weaker effective quasiparticle interaction. Here, we assume that a quasielectron description is possible and that these quasiparticles are characterized by a certain set of quantum numbers, namely the momentum $\bold k$, spin $\sigma$ and orbital index $n$, and a corresponding quasiparticle dispersion relation $\tilde\epsilon_{n \sigma}(\bold k)$. Then the distribution function $f_{n \sigma}(t,\bold k)$ corresponds to the expectation value of the occupation number operator of a single-particle state $n_{\mathbf{k} n \sigma}$ at time $t$. In the following, the spin will be absorbed into the band index for brevity. The BSE in the case of a spatially homogeneous system without external fields but with fermionic particle-particle scattering reads \cite{Snoke2007,Ziman1960,Chambers1990} \begin{equation} \label{eq:boltz1} \begin{split} &\frac{\mathrm d f_{n_0} (\bold k_0) }{\mathrm d t} = \frac{1}{2} \sum_{n_1 n_2 n_3} \int \mathrm d^d k_1 \mathrm d^d k_2 \mathrm d^d k_3 \Big [ W_{n_0 \dots n_3} (\bold k_0\dots \bold k_3) \\ &\quad \times \Big ( (1-f_{n_0}(\bold k_0)) (1-f_{n_1}(\bold k_1))f_{n_2}(\bold k_2) f_{n_3}(\bold k_3)\\ &\quad\quad \quad -f_{n_0}(\bold k_0) f_{n_1}(\bold k_1) (1- f_{n_2}(\bold k_2))(1- f_{n_3}(\bold k_3)) \Big ) \Big ] \end{split} \end{equation} for a $d$-dimensional system. Here, $W_{n_0 \dots n_3} (\bold k_0\dots \bold k_3)$ is defined as \begin{equation} \begin{split} &W_{n_0 \dots n_3} (\bold k_0\dots \bold k_3) = w_{n_0\dots n_3} (\bold k_0 \dots \bold k_3) \\ & \quad \quad \times \delta(\tilde\epsilon_{n_0}(\bold k_0)+\tilde\epsilon_{n_1}(\bold k_1)-\tilde\epsilon_{n_2}(\bold k_2)-\tilde\epsilon_{n_3}(\bold k_3) ) \\ & \quad \quad \times \sum_{\bold G} \delta(\bold k_0 + \bold k_1 - \bold k_2 - \bold k_3 + \bold G) \; ; \label{eq:BSEscat} \end{split} \end{equation} and the scattering amplitude $w_{n_0\dots n_3} (\bold k_0 \dots \bold k_3)$ can be calculated by perturbation theory (Fermi's golden rule) and is $\sim U^2$ (explicit formulas follow in the context of the specific models below). The two delta-distributions $\delta(\cdot)$ ensure energy and momentum conservation at the scattering event, and the sum $\sum_{\bold G}$ runs over all reciprocal lattice vectors $\bold G$. In thermal equilibrium, the distribution of electrons is given by the Fermi-Dirac distribution, $f_\textrm{FD}(\tilde\epsilon) = 1 / \big (1+\exp[\beta \tilde\epsilon] \big )$, with the inverse temperature $\beta=1/T$ and the chemical potential $\mu$ already absorbed into $\tilde\epsilon$. The Fermi-Dirac distribution is a fixed point of the Boltzmann equation Eq.~\eqref{eq:boltz1} and therefore properly represents an equilibrium system.
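This can be verified in one line (we spell out the standard detailed-balance argument for completeness): abbreviating $f_i\equiv f_\textrm{FD}(\tilde\epsilon_i)$ with $\tilde\epsilon_i\equiv\tilde\epsilon_{n_i}(\bold k_i)$ and using $f_\textrm{FD}(\tilde\epsilon)/[1-f_\textrm{FD}(\tilde\epsilon)]=e^{-\beta\tilde\epsilon}$, the energy conservation enforced by $W_{n_0 \dots n_3}$ implies \begin{equation*} f_0 f_1 (1-f_2)(1-f_3) = (1-f_0)(1-f_1)f_2 f_3\, e^{-\beta(\tilde\epsilon_0+\tilde\epsilon_1-\tilde\epsilon_2-\tilde\epsilon_3)} = (1-f_0)(1-f_1)f_2 f_3 , \end{equation*} so that the gain and loss terms in Eq.~\eqref{eq:boltz1} cancel for every individual scattering event.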
The scattering rate $1/\tau_n(\bold k)$ of a test particle that is added in the state $(n,\bold k)$ in thermal equilibrium can be calculated within the Boltzmann framework as (for a derivation, see \cite{wais2020Preprint}): \begin{equation} \begin{split} &\frac{1 }{\tau_{n_0}(\bold k_0)} = \frac{1}{2} \sum_{n_1 n_2 n_3} \int \mathrm d^d k_1 \mathrm d^d k_2 \mathrm d^d k_3 \Big [ W_{n_0 \dots n_3} (\bold k_0\dots \bold k_3) \\ & \times \Big ( (1-f_\textrm{FD}(\tilde\epsilon_{n_1}(\bold k_1)) )f_\textrm{FD}(\tilde\epsilon_{n_2}(\bold k_2)) f_\textrm{FD}(\tilde\epsilon_{n_3}(\bold k_3))\\ & + f_\textrm{FD}(\tilde\epsilon_{n_1}(\bold k_1)) (1- f_\textrm{FD}(\tilde\epsilon_{n_2}(\bold k_2)))(1- f_\textrm{FD}(\tilde\epsilon_{n_3}(\bold k_3))) \Big )\Big ]. \end{split} \label{eq:boltzwithk} \end{equation} The calculation of the scattering rate above is done numerically with the method presented in Ref.~\onlinecite{wais2020Preprint}. Notice that DMFT scattering rates are only energy (and orbital) dependent. In the BSE we can, on the other hand, add a quasiparticle at every momentum $\bold k$, which then necessarily has the quasiparticle energy $\tilde\epsilon_{n}(\bold k)$. When we later plot the BSE scattering rates as a function of energy, there will hence be different $1/\tau_n(\bold k)$'s at the same energy $\tilde\epsilon$. Note that the many-body life time broadening discussed above also allows us to add particles away from $\tilde\epsilon_{n}(\bold k)$ in DMFT, albeit the spectral density of such states is strongly suppressed if the broadening is weak. \subsection{BSE without momentum conservation} Prospective differences between the BSE and DMFT may emerge because of (i) strong-coupling effects beyond the perturbative treatment of the scattering in the BSE rate equation and (ii) neglecting the momentum dependence in DMFT. The latter is reflected not only in the momentum-independent DMFT self-energy but also in the disregard of momentum conservation at scattering events. That is, the DMFT self-energy is calculated from Feynman diagrams to all orders in $U$, but with the interaction only on an impurity, which per construction breaks momentum conservation. We can apply the same approximation also to Boltzmann scattering. That is, we remove in Eq.~(\ref{eq:BSEscat}) the momentum-conserving delta-distributions, $\sum_{\bold G} \delta(\bold k_0 + \bold k_1 - \bold k_2 - \bold k_3 + \bold G) \to \frac{1}{V_{BZ}}$, where $V_{BZ}$ is the volume of the first Brillouin zone, as was proposed in Ref.\ \onlinecite{wais2018}.
Eq.~(\ref{eq:boltzwithk}) can then be simplified to a purely energy-dependent scattering rate $1/\tau_n(\epsilon)$ that is calculated as \cite{wais2018,wais2020Preprint} \begin{equation} \begin{split} &\frac{1}{\tau_{n_0}(\epsilon_0)} = \frac{1}{2} \sum_{n_1 n_2 n_3} \int \mathrm d \epsilon_1 \mathrm d \epsilon_2 \mathrm d \epsilon_3 \Big [ \tilde w_{n_0 \dots n_3} (\epsilon_0\dots \epsilon_3)\\ & \quad \times \delta(\epsilon_0 + \epsilon_1 - \epsilon_2 - \epsilon_3) A_0^{n_1}(\epsilon_1) A_0^{n_2}(\epsilon_2) A_0^{n_3}(\epsilon_3)\\ &\quad \quad \quad \times \Big ( (1-f_\textrm{FD}(\epsilon_{1}) )f_\textrm{FD}(\epsilon_{2}) f_\textrm{FD}(\epsilon_{3})\\ & \quad\quad \quad + f_\textrm{FD}(\epsilon_{1}) (1- f_\textrm{FD}(\epsilon_{2}))(1- f_\textrm{FD}(\epsilon_{3})) \Big )\Big ] , \end{split} \label{eq:boltznok} \end{equation} where $A_0^n(\epsilon)$ is the normalized DOS of band $n$ and $\tilde w_{n_0 \dots n_3} (\epsilon_0\dots \epsilon_3)$ is the thus modified scattering amplitude that depends on the energies only. Notice that in Eq.~(\ref{eq:boltznok}) we have explicitly used the fact that the interaction is itself momentum-independent (which is the case for the purely local interaction in the Hubbard model). In the general case Eq.~(\ref{eq:boltznok}) cannot be derived, but it can be constructed as an approximation~\cite{Ono_2018,Ono_2020}. In the following we will refer to Eq.~\eqref{eq:boltznok} as Boltzmann without momentum conservation (BSE without $\bold k$). Note that the structure of Eq.~\eqref{eq:boltznok} is much simpler than that of Eq.~(\ref{eq:boltzwithk}): it can be evaluated by analytically resolving the energy-conserving delta distribution in Eq.~\eqref{eq:boltznok} and then using standard numerical integration techniques. \section{One-band Hubbard model at weak coupling} \label{chap:weak1band} \begin{figure*} \includegraphics[width=15cm]{1band_u1_u2_extend_noninteract.pdf} \caption{Scattering rates $1/\tau$ normalized by the interaction squared ($U^2$) for the two-dimensional Hubbard model at half-filling, calculated by DMFT and the BSE with and without $\mathbf k$ conservation. The case $\beta=20$ could not be calculated with the full BSE due to computational limitations. The scattering rates shown are the same for both spins in the paramagnetic phase.}\label{fig:1band} \end{figure*} As a first comparison, we discuss the case of the prototypical one-band Hubbard model in two dimensions at half-filling. Depending on the strength of the local interaction $U$ and the temperature $T=1/\beta$, such a system is predicted by DMFT to be either metallic or Mott-insulating. For the weak-coupling case we may employ Boltzmann theory with the dispersion relation of the non-interacting Hamiltonian, which is \begin{equation} \begin{split} &\epsilon(\mathbf{k}) = -2 t [\cos(k_x) + \cos(k_y)] \label{eq:disprel} \end{split} \end{equation} for ${\bold k}=(k_x,k_y) \in [-\pi,\pi) \otimes [-\pi,\pi)$ (lattice constant $a\equiv 1$; unit-cell volume $V_\text{UC}=a^2=1$) and the corresponding DOS \cite{Katanin2012} \begin{equation} A_0(\omega) = \int_\text{BZ} \!\! \frac{ d^2{k}}{V_\text{BZ}} \ \,\delta(\omega \!-\! \epsilon(\bold k)) = \frac{1}{2\pi^2t} \mathrm{K}\Bigg(\!\!\sqrt{1 - \left(\frac{\omega}{4t}\right)^2}\Bigg) \label{eq:dosNonInt1B} \end{equation} where $\mathrm{K}(\ldots)$ is the complete elliptic integral of the first kind. As hopping parameter and unit of energy we choose $t\equiv 1$ in the following.
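To illustrate how little is needed to evaluate the BSE without $\bold k$ in practice, the following short Python sketch (provided here for orientation only; it is not the production code used for the figures) tabulates the DOS of Eq.~\eqref{eq:dosNonInt1B} and performs the remaining two-dimensional energy integration of Eq.~\eqref{eq:boltznok}. It assumes that the spin sums in Eq.~\eqref{eq:boltznok} have been carried out, which for the scattering amplitude $\tilde w$ given below yields an overall prefactor $2\pi U^2$; the grid offset and the interpolation of $A_0(\epsilon_3)$ are our own (simplest-possible) numerical choices.
\begin{verbatim}
import numpy as np
from scipy.special import ellipk   # complete elliptic integral K(m), m = modulus^2

t = 1.0
e = np.linspace(-4.0, 4.0, 801)[1:-1] + 5e-4   # offset avoids the points
                                               # w = 0 and |w| = 4t where K diverges
A = ellipk(1.0 - (e/(4.0*t))**2)/(2.0*np.pi**2*t)   # DOS of the 2D square lattice
de = e[1] - e[0]
print(np.sum(A)*de)                # ~ 1: the DOS is normalized

def rate_over_U2(e0, beta):
    # (1/tau(e0))/U^2 from the BSE without k; prefactor 2*pi after spin sums
    f = 1.0/(1.0 + np.exp(beta*e))
    A1, f1 = A[:, None], f[:, None]
    A2, f2 = A[None, :], f[None, :]
    e3 = e0 + e[:, None] - e[None, :]          # energy conservation resolved
    A3 = np.interp(e3, e, A, left=0.0, right=0.0)
    f3 = 1.0/(1.0 + np.exp(np.clip(beta*e3, -500.0, 500.0)))
    F = (1.0 - f1)*f2*f3 + f1*(1.0 - f2)*(1.0 - f3)
    return 2.0*np.pi*np.sum(A1*A2*A3*F)*de*de

print(rate_over_U2(0.0, 2.5))      # normalized scattering rate at the Fermi level
\end{verbatim}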
The scattering amplitude for this system can be calculated in perturbation theory as \begin{equation} w (\bold k_0 \dots \bold k_3) = \frac{2 \pi}{{V_{BZ}}^2} U^2 \delta_{\sigma_0 \bar \sigma_1} \delta_{\sigma_2 \bar \sigma_3} \end{equation} with the short-hand notation $\bar \sigma_i \equiv - \sigma _i$ for the BSE scattering rate Eq.~\eqref{eq:boltzwithk}, and \begin{equation} \tilde w (\epsilon_0 \dots \epsilon_3) = 2 \pi U^2 \delta_{\sigma_0 \bar \sigma_1} \delta_{\sigma_2 \bar \sigma_3 } \end{equation} for the case of the BSE without $\mathbf{k}$ in Eq.~\eqref{eq:boltznok}, cf.~Ref.\ \onlinecite{wais2018}. In Fig.~\ref{fig:1band} we show the calculated scattering rates for different temperatures, comparing DMFT and the BSE with and without momentum conservation. The quasi-particle renormalization is $Z\approx 1$ for these values of the interaction. In order to compare the structure of the scattering rates for different interaction strengths, we divide the scattering rate by $U^2$. The Boltzmann scattering rates then become completely independent of $U$. In contrast, the DMFT scattering rates depend on $U$ in a non-trivial fashion (Fig.~\ref{fig:1band} shows $U=1$ and $U=2$) since DMFT is a non-perturbative approach. Nonetheless, in the limit $U\rightarrow 0$ the normalized DMFT scattering rates must become $U$-independent. Comparing the DMFT scattering rates for both interaction strengths, one notices that the thus normalized scattering rates lie almost on top of each other for the inverse temperatures $\beta = 1.0$, $\beta = 2.5$ and $\beta = 20$, while they slightly deviate for $\beta = 0.5$, $\beta = 1.5$, $\beta = 2.0$. Since there is a rather large uncertainty from the maximum entropy analytical continuation and the deviation is not systematic, we can conclude that the differences in the normalized DMFT scattering rates at $U=1$ and $U=2$ are within the error bars. \begin{figure} \includegraphics[width=6.5cm]{1band_U1_U2_specdens.pdf} \caption{\label{fig:1band_U1_U2_specdens} DMFT spectral densities for (a) $U=1$ and (b) $U=2$ for different temperatures.} \end{figure} The scattering rates calculated by the BSE without $\bold k$ are in very good agreement with the DMFT data for all inverse temperatures except for $\beta = 1.0$. Again, this deviation may well originate from the uncertainties of the analytic continuation. In any case, the good agreement of the scattering rates from the BSE without $\bold k$ and DMFT, along with the $\sim U^2$ scaling of the DMFT results, clearly shows that even at $U=2t$ we are still in the perturbative regime. As we show in Appendix~\ref{sec:IPT}, to second order in $U$ the scattering rates as calculated in DMFT and the BSE without $\bold k$ are indeed identical. Note however that the spectral density\footnote{ The DMFT spectral densities were calculated as $-\mathrm{Im}G_R(\omega)/\pi$ with \begin{equation} \label{eq:specdens-nice} G_R(\omega) = \int_{-\infty}^{\infty} dx \frac{A_0(x)}{\omega-x-\Sigma_R(\omega)} \end{equation} after analytical continuation of the self-energy. This allows for resolving features like sharp peaks in the spectral density that would be smeared out by a direct analytic continuation of the local Green's function in Matsubara frequencies.} in Fig.~\ref{fig:1band_U1_U2_specdens} is already significantly smeared, especially at the band edges and the van Hove singularity, because of the stronger interaction.
This smearing is a direct consequence of the scattering rate in Fig.~\ref{fig:1band}, and through the DMFT self-consistency it will in turn affect the scattering rates, but only at higher order in $U$ (when self-consistently calculating the spectral function as indicated in Appendix~\ref{sec:IPT}). Possibly this explains why the BSE without $\bold k$ in Fig.~\ref{fig:1band} has a lower scattering rate at the band edges $\omega=\pm4$ and a larger one for larger $|\omega|$, albeit we cannot exclude that this is an artifact of the analytical continuation. Both DMFT and the BSE without $\bold k$ show a two-peak structure in the scattering rates, with the peak positions roughly at the band edges. The width of these peaks increases with temperature. At the highest temperature ($\beta = 0.5$) only one peak is visible, which actually consists of two strongly overlapping peaks. In Appendix~\ref{chap:appConvolution}, we show that the position of the two peaks can be approximately calculated from the first moment of the particle and hole density. The width and height of the peaks can be calculated when the zeroth and second moments of the particle density are taken into account in addition to the first moment. After establishing a good agreement between DMFT and the BSE without $\mathbf k$ at weak coupling, we next turn to the full BSE with momentum conservation. The thus calculated BSE scattering rates (dots in Fig.~\ref{fig:1band}) deviate from the rates obtained with the other methods. First of all, as already mentioned, we highlight that several values, corresponding to different momenta, are present for each energy. Fig.~\ref{fig:1band} shows a particularly strong spread at the Fermi level ($\omega=0$). Furthermore, in contrast to the BSE without $\bold k$ and DMFT, there are no longer scattering rates outside the non-interacting bandwidth ($|\omega|>4$), as there is no momentum that has such an energy. In DMFT, due to the aforementioned smearing of the band edges, there are such states, and in the BSE without $\bold k$ we can at least calculate the scattering rate that a state at such an energy would have. Another difference is that the BSE scattering rates are generally higher than those of DMFT or the BSE without $\mathbf k$, especially at the band edges ($|\omega|\lesssim 4$) and, at higher temperatures, also around the Fermi level ($\omega=0$). As DMFT and the BSE without $\bold k$ agree with each other, we can safely conclude that this difference originates from neglecting the momentum conservation of the scattering vertex. One can also smoothly interpolate between the results for the BSE with and without $\mathbf k$ by replacing the momentum-conserving $\delta$-function by a Gaussian and increasing its width (not shown here). The reason for these discrepancies is that the momentum-averaged scattering amplitude does not take into account that there is, e.g., a particularly strong scattering among momenta at the van Hove singularities $(\pm \pi,0)$ and $(0,\pm \pi)$. At low temperatures this scattering even leads to the formation of a pseudogap \cite{Vilk1997,Norman1998,Timusk_1999,Keimer2015,RevModPhys.78.17,Sordi2012,PhysRevLett.114.236402,PhysRevX.8.021048}, which requires a beyond-DMFT description \cite{Sadovskii2005,Zhang2007,Katanin2009,Gull2013,Schaefer2015-2,RMPVertex}. A precursor thereof is visible here as the strong momentum dependence of the scattering rate on the Fermi surface.
\section{Two-orbital band insulator} \label{weakCoupling2Band} \begin{figure} \includegraphics[width=6.5cm]{2band_U1_U2_specdens_2.pdf} \caption{Spectral densities for different temperatures for the case (a) $U=4$, $\Delta = 0$ and (b) $U=2$, $\Delta = 2$. For both cases, the effective band gap is $\Delta_\textrm{eff.} \approx 4$.}\label{fig:2bandSpecDens} \end{figure} In this section, we address the case of a band insulator in the weak- to intermediate-coupling regime. We consider a two-dimensional Hubbard-type model with two orbitals ($A$ and $B$) at half-filling, i.e., $n=2$ electrons per site in the two orbitals. This corresponds to $\mu = 0$ for our dispersion relation below. For simplicity, we assume that electrons may only hop to neighboring orbitals of the same type and that the hopping amplitude has the same absolute size but opposite sign for both orbitals ($t_A=-1$, $t_B=1=t$). Further, we add a local one-particle energy $\mp( \Delta /2 + 4t)$ for orbitals $A$ and $B$, respectively. This results in a band gap of size $\Delta$ in the non-interacting DOS, with the top of the valence($A$)-band and the bottom of the conduction($B$)-band both at the $\Gamma$ point. The interaction $U$ is local and the same within and between both orbitals, such that the interaction term of the Hubbard model acquires the simple form of Eq.\ \eqref{eq:hubbard-hamiltonian}. We now discuss two different systems, one with $U=4$ and one-particle gap $\Delta = 0$ and one with $U=2$ and $\Delta = 2$. Due to the constant Hartree term in the self-energy, the effective gap in the interacting system is essentially the same, $\Delta_\textrm{eff.} \approx U + \Delta = 4$, for both setups. This is because at sufficiently low temperatures, orbital $A$ is almost completely filled with two electrons per site and orbital $B$ is empty. Hence an electron in orbital $B$ perceives a Hartree energy of $2U$ (it interacts with both $A$ electrons); an electron in orbital $A$ instead has a Hartree energy of only $1U$ (as it only interacts with the electron of opposite spin in orbital $A$). The difference enlarges the bandgap to $\Delta_\textrm{eff.} = U + \Delta$. The spectral densities for both cases are displayed in Fig.~\ref{fig:2bandSpecDens} and follow the above reasoning. At higher temperatures, however, holes are induced in the valence band and electrons in the conduction band. The difference in occupation is reduced, and the bandgap hence becomes smaller. For the highest temperature ($\beta = 0.25$), the gap disappears completely for the case $U=4$, $\Delta = 0$. The non-interacting DOS in Fig.~\ref{fig:2bandSpecDens} is constructed with the above enhanced effective band gap $\Delta_\textrm{eff.}$ instead of $\Delta$. As this describes the DMFT spectrum at low temperatures reasonably well, we employ for the BSE the corresponding effective bandstructure \begin{align} \epsilon_A(\mathbf k) =& -\epsilon(\mathbf k) - \left ( \frac{\Delta_\textrm{eff.}}{2} + 4t \right ) ,\\ \epsilon_B(\mathbf k) =& \epsilon(\mathbf k) +\left ( \frac{\Delta_\textrm{eff.}}{2} + 4t \right ) , \end{align} where $\epsilon(\mathbf k)$ is defined by Eq.~\eqref{eq:disprel}. The corresponding DOS of the non-interacting system corrected by the Hartree shift is used for the BSE without $\mathbf k$ and given by \begin{align} A_0^A(\omega) = & A_0 \left ( \omega + \left ( \frac{\Delta_\textrm{eff.}}{2} + 4t \right ) \right ) ,\label{eq:A0A}\\ A_0^B(\omega) = & A_0^A \left (-\omega \right ) \label{eq:A0B} \end{align} with $A_0(\omega)$ defined in Eq.~\eqref{eq:dosNonInt1B}.
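The Hartree argument above fixes the effective bands completely. As a quick consistency check, the following sketch constructs $\epsilon_A(\mathbf k)$ and $\epsilon_B(\mathbf k)$ and verifies that the direct gap at the $\Gamma$ point equals $\Delta_\textrm{eff.}$; the square-lattice dispersion $\epsilon(\mathbf k)=-2t(\cos k_x + \cos k_y)$ is an assumption standing in for Eq.~\eqref{eq:disprel}.
\begin{verbatim}
import numpy as np

# Effective two-orbital bandstructure with the Hartree-enlarged gap.
# eps(k) = -2t(cos kx + cos ky) is an assumed stand-in for Eq. (disprel).
t, U, Delta = 1.0, 2.0, 2.0
Delta_eff = U + Delta                      # = 4 for both parameter sets

eps = lambda kx, ky: -2.0*t*(np.cos(kx) + np.cos(ky))
eps_A = lambda kx, ky: -eps(kx, ky) - (Delta_eff/2 + 4*t)  # valence
eps_B = lambda kx, ky:  eps(kx, ky) + (Delta_eff/2 + 4*t)  # conduction

k = np.linspace(-np.pi, np.pi, 64, endpoint=False)
KX, KY = np.meshgrid(k, k)
gap = eps_B(KX, KY).min() - eps_A(KX, KY).max()
print(f"effective gap: {gap:.2f}")         # both extrema at the Gamma point
\end{verbatim}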
Due to particle-hole symmetry and the simple form of the interaction, the BSE calculation can be simplified as outlined in Appendix~\ref{chap:simp2band}. \begin{figure*} \includegraphics[width=12cm]{2band_u2_u4_extend_compare.pdf} \caption{Scattering rates normalized by the squared interaction for an electron in the upper band of a two-orbital band insulator, as calculated with DMFT and the BSE with and without $\mathbf k$. Two different sets of parameters are used: $U=4$, $\Delta = 0$ and $U=2$, $\Delta = 2$. The gray vertical lines indicate the band edges of the non-interacting system.}\label{fig:2band} \end{figure*} Fig.~\ref{fig:2band} shows the scattering rate of the two-band system. The BSE with momentum conservation shows a seemingly parabolic increase, starting with a sizable value at the lower band edge ($\omega=2$). Superimposed on this trend is an enhanced scattering rate in the middle of the band at $\omega=6$, with a strong momentum spread of the scattering rate. This is akin to the behavior at the Fermi level for the weakly correlated one-band Hubbard model in Fig.~\ref{fig:1band} and can again be attributed to the van Hove singularity. Similarly to the one-band case, the scattering rates in the BSE without $\mathbf k$ are slightly smaller than in the BSE with $\mathbf k$ conservation and already decay toward the upper band edge ($\omega=10$). They closely resemble the DMFT values for the $U=2$ case; only the peak of the scattering rate is shifted to slightly higher energies than in DMFT. There are larger differences to the DMFT data at the intermediate coupling $U=4$, which show systematically higher scattering rates at low energies. This is because the stronger smearing of the spectral density at $U=4$ leads to a smaller effective gap and some in-gap spectral weight, see Fig.~\ref{fig:2bandSpecDens}. This, in turn, leads to more thermal excitations and therefore more scatterings. These effects can be included in the BSE without $\mathbf k$ if we use the interacting spectral density $A^n(\omega)$ instead of the non-interacting one $A_0^n(\omega)$, which leads to a good agreement with the DMFT results, see Appendix~\ref{chap:twoBandInteracting}. Eye-catching is the strong suppression of the scattering rate upon decreasing temperature. The reason for this is the dramatic reduction of the number of thermally excited carriers, which are needed to act as scattering partners. Note that with a density-density Coulomb interaction, the electron in the conduction($B$)-band either needs (i) another $B$-electron to scatter with [the final state being again two $B$-electrons], or (ii) a hole in the valence($A$)-band into which an $A$-electron can scatter [final and initial state being one $A$- and one $B$-electron]. Both $B$-electron and $A$-hole scattering partners, however, require thermally excited carriers that are absent at low temperatures. For the Mott insulator discussed in the next section, the scattering rates are much higher because of impact ionization processes. There, an electron in the upper Hubbard band excites an additional electron-hole pair across the gap. In the band insulator, impact ionization corresponds to a process ${c^{\dag}_{i B \sigma}c^{\vphantom{\dag}}_{i A \sigma}c^{\dag}_{i B \bar\sigma }c^{\vphantom{\dag}}_{i B \bar\sigma}}$, which is not possible at lowest-order perturbation theory in the density-density interaction; nor are Auger processes $c^{\dag}_{i B \sigma}c^{\vphantom{\dag}}_{i A \sigma}c^{\dag}_{i A \bar\sigma }c^{\vphantom{\dag}}_{i A \bar\sigma }$.
In the one-band Mott insulator, the two Hubbard bands have the same orbital index, and such processes hence dominate the scattering rate if $\omega$ is sufficiently large to allow impact ionization~\cite{Werner2014,Sorantin2018,wais2018,Maislinger2020,Kauch2020a}. Even if we generalize the Coulomb interaction to the widely employed Kanamori form \cite{Kanamori63} with spin-flip and pair-hopping terms, we still need a thermally excited second electron or hole for scattering. Only the full Slater interaction \cite{Slater,Griffith} also contains interaction terms that directly mediate impact ionization. These interaction terms are however small or even vanish, which is the reason why they are often disregarded in the first place. Consider, e.g., a material with cubic symmetry and the orbitals $A=d_{xy}$ and $B=d_{xz}$. Then interaction terms such as $U_{BAAA}c^{\dag}_{i B \sigma}c^{\vphantom{\dag}}_{i A \sigma}c^{\dag}_{i A \bar\sigma }c^{\vphantom{\dag}}_{i A \bar\sigma }$ vanish because the integral to calculate the matrix element $U_{BAAA}$ is odd under the transformation $z\rightarrow -z$; for a more detailed discussion, see, e.g.,~\cite{Ribic2014,Buenemann2017}. A more viable route to enhance the scattering rate through impact or Auger processes in a band insulator is a strong hybridization of the bands, so that the conduction and valence bands are admixtures of the $A$ and $B$ orbitals. It is interesting to note that the scattering rate preserves its two-band-like structure even in the case $U=4$, $\beta = 0.25$, when the spectral density no longer shows a gap. The reason for this is again that the density-density interaction does not allow for impact excitation and Auger emission, which are very sensitive to the gap size. The scattering processes induced by the density-density interaction are instead agnostic about the gap size per se. The additional $B$ electron still needs another $B$ (or $A$) electron to scatter with, and two empty final $B$ states (or an empty $A$ and an empty $B$ state). The scattering process does not need to overcome the size of the gap, in contrast to impact ionization. \section{Strong coupling: Mott insulator}\label{sec:mott} \begin{figure} \includegraphics[width=7.4cm]{mott_u12_b5_3.pdf} \caption{(a) Spectral density as obtained by DMFT and the Fermi-Dirac distribution for the case $U=12$ and $\beta = 5$. (b) Scattering rates as obtained from DMFT, and from the BSE without $\mathbf k$ using either the non-interacting density of states (BSE without k $A_0(\omega)$) or the interacting DMFT spectral density shown in (a) (BSE without k $A(\omega)$).}\label{fig:mott_u12_b5} \end{figure} Finally, we compare the approaches introduced above in the strong-coupling regime of the single-orbital Hubbard model. Since the BSE is a perturbative treatment in the interaction, this is certainly the most problematic case for the BSE. For sufficiently large interaction, the DOS splits into two, the upper and lower Hubbard band, see Fig.~\ref{fig:mott_u12_b5} (top). We have a Mott insulator, one of the cornerstones of strongly correlated electron systems \cite{Gebhard1997}. If we use the BSE with the non-interacting DOS, this dramatic reshuffling of the DOS is not incorporated. The scattering rate is then still the very same two-peak structure as for weak coupling---just with the prefactor rescaled by $U^2$, see the black-dotted line in Fig.~\ref{fig:mott_u12_b5}. This kind of description assumes that we have a metal with states at low energies.
It is not an appropriate description of a Mott insulator. This problem can be mitigated if we consider better suited quasiparticles instead of the non-interacting ones. This is in general not trivial, and proper quasiparticles with a long lifetime and weak interaction cannot always be identified. They might not even exist. Taking the electronic DMFT excitations of the Hubbard bands as our quasiparticles in the BSE without $\mathbf k$, we have to replace the non-interacting DOS $A_0(\omega)$ by the interacting spectral density $A(\omega)$ of Fig.~\ref{fig:mott_u12_b5} (top) in Eq.~\eqref{eq:boltznok}. Even if we have no well-defined quasiparticles, such a quantum Boltzmann description is possible \cite{wais2018} if we have a separation of time scales, such that the average-time (distribution function) dynamics is slower than the relative-time dynamics. As was shown in \cite{wais2018}, the thus modified BSE without $\mathbf k$ provides a good description of the DMFT impact ionization processes and the redistribution of spectral weight in non-equilibrium.\footnote{The calculation of scattering in this paper is still possible within equilibrium DMFT, whereas the non-equilibrium processes of \cite{wais2018} required non-equilibrium DMFT~\cite{zlatic2006,Aoki2014}.} Here, we instead study in Fig.~\ref{fig:mott_u12_b5} (bottom, blue line) the one-particle scattering rates in the BSE without $\mathbf k$ with interacting $A(\omega)$: the Mott insulator is described as two split quasiparticle bands, with the gap $\sim 4$ being much larger than the temperature $T=1/5$. Hence, if we add an extra electron to the upper Hubbard quasiparticle band, it has no partners to scatter with in the BSE; the scattering rate is zero, similar to the suppression of the scattering rate in the band insulator. However, if the added electron has an excess energy [$\omega - \omega_{LBE}$ relative to the lower band edge of the upper Hubbard band $\omega_{LBE}\gtrsim 2$ in Fig.~\ref{fig:mott_u12_b5}] which is larger than the Mott gap [$\Delta_{\rm Mott}\gtrsim 4$], i.e., $\omega\gtrsim 6$, impact ionization processes with an electron-hole excitation across the gap become possible. The phase space of such scattering processes increases quadratically with $\omega- \omega_{LBE}-\Delta_{\rm Mott}$ for a box-shaped DOS. This explains the BSE without $\mathbf k$ scattering rate in Fig.~\ref{fig:mott_u12_b5}, which, as already mentioned, describes impact ionization processes well, including the change of the double occupation and the redistribution of spectral weight with time in non-equilibrium~\cite{wais2018}. Let us now turn to the DMFT scattering rate as extracted from the self-energy and shown in Fig.~\ref{fig:mott_u12_b5} (bottom, red-dashed line).\footnote{As we do not have a linear quasiparticle renormalization in the self-energy, we plot $1/\tau(\omega)=2{\rm Im} \Sigma (\omega)$; $Z=1$ in Eq.~(\ref{eq:g-t}).} The by far dominating feature (cut off by the finite $y$-axis scale) is at $\omega=0$, where $\Sigma(\omega)= (U^2/4) \; 1/(\omega+i\alpha)$ in the large-$U$ limit of the Mott insulator, with a Lorentzian broadening $\alpha \sim \pi T$. This pole is responsible for the splitting of the DOS into two Mott bands and yields the $\delta$-like peak in ${\rm Im} \Sigma$ at $\omega=0$. As a matter of course, we cannot expect this feature to be described by the BSE without $\mathbf k$.
It is also not necessary, as $\omega=0$ is in the middle of the Mott gap where there are essentially no states---essentially, since at low temperature the aforementioned finite broadening leads to a very small spectral weight. This filling of the Mott gap with temperature \cite{Mo2004} is a feature distinct from a band insulator. These in-gap states have an extremely short lifetime. \begin{figure} \centering \includegraphics[width=8.6cm]{spec_mott_dmft.pdf} \caption{(a) Spectral density as obtained by DMFT for $\beta=10$ and different interaction strengths $U=10$, 12, 16. The solid-line $A(\omega)$'s are calculated from the analytically continued $\Sigma(\omega)$; the dashed lines are directly analytically continued from the Matsubara Green's function. (b) DMFT scattering rates for the same parameters as in (a).}\label{fig:mott_u} \end{figure} Let us now turn to the more relevant DMFT scattering rate within the Hubbard bands. These rates are orders of magnitude larger in DMFT than those from the BSE without $\mathbf k$ and with interacting $A(\omega)$. Also their shape is completely different: there is no suppression at the lower edge of the upper Hubbard band, which, as argued above, would be the case if the scattering were due to impact ionization requiring a threshold energy; neither are the DMFT scattering rates flat, nor do they follow the shape of the upper Hubbard band. Instead, the scattering rates are strongest around $\omega\sim 4$, close to the lower band edge, and are dramatically reduced for larger $\omega$. Similarly to the pole at $\omega=0$, the maximum at $\omega\sim 4$ leads to a suppression of the spectral weight. Fig.~\ref{fig:mott_u12_b5} (top), where we have calculated $A(\omega)$ from the analytically continued $\Sigma(\omega)$, even shows a two-peak structure in the upper Hubbard band. Such a two-peak structure was previously observed on the metallic side of the Mott transition, immediately before the quasiparticle peak vanishes \cite{Karsaki2008,Ganahl2015a,Lee2017}. On the Mott-insulating side, Refs.\ \onlinecite{Granath2014} and \onlinecite{Nishimoto2004} show an extra peak or a shoulder feature on the inner side of the Hubbard bands, similar to our findings. In Fig.~\ref{fig:mott_u} we also compare the $A(\omega)$ that is directly continued from the Green's function on the imaginary axis, which shows a shoulder rather than a double peak. While we hence cannot resolve, within the maximum entropy uncertainty, whether we actually have a shoulder or a double-peak structure, it is clear that there is a feature in the upper Hubbard band. Mathematically, this is necessitated by the strong scattering rate in this region. A simple physical picture or understanding of these side structures in the Hubbard bands is still missing. Note that also in strong-coupling perturbation theory to second order such a shoulder, and hence an asymmetry of the self-energy within the upper Hubbard band, is observed \cite{Kalinowski2002}, whereas the Hubbard-III approximation~\cite{Hubbard64} and the Falicov-Kimball model~\cite{vanDongen1997,Freericks2003} do not show such a shoulder. In agreement with this, Fig.~\ref{fig:mott_u} shows this feature for different values of $U$. Since the scattering in the BSE without $\mathbf k$ and with non-interacting $A_0(\omega)$ is merely rescaled by $U^2$, it is clear from Fig.~\ref{fig:mott_u} that the agreement of the position of the maximal scattering rate between BSE and DMFT in Fig.~\ref{fig:mott_u12_b5} (bottom, black-dotted vs.~red-dashed line) was by chance.
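For orientation, the sheer magnitude of the central feature follows already from the large-$U$ pole quoted above, which gives $1/\tau(\omega)=2|{\rm Im}\,\Sigma(\omega)| = (U^2/2)\,\alpha/(\omega^2+\alpha^2)$; a minimal numerical estimate (taking $\alpha=\pi T$ at face value):
\begin{verbatim}
import numpy as np

# Scattering rate implied by the atomic-limit pole
# Sigma(w) = (U^2/4) / (w + i*alpha):
#   1/tau(w) = 2|Im Sigma(w)| = (U^2/2) * alpha / (w^2 + alpha^2).
U, T = 12.0, 0.2                 # parameters of Fig. mott_u12_b5
alpha = np.pi * T                # Lorentzian broadening ~ pi*T

inv_tau = lambda w: 0.5 * U**2 * alpha / (w**2 + alpha**2)
for w in (0.0, 2.0, 4.0):
    print(f"1/tau({w:.0f}) = {inv_tau(w):6.1f}")
# 1/tau(0) ~ 115 dwarfs the in-band rates, consistent with the peak
# being cut off by the finite y-axis scale in the figure.
\end{verbatim}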
We can conclude that the one-electron scattering rate in a Mott insulator is very different from an impact-ionization picture. It is associated with the formation ($\omega\sim 0$) of the Hubbard bands and even the side structures therein ($\omega\sim 4$ in Fig.~\ref{fig:mott_u12_b5}). The Hubbard bands are created by the interaction of the same electrons we also use as a test charge for calculating the scattering rate. If there is a local extra hole or electron, the Hubbard bands deform locally. This is most noticeable in the filling of the Mott gap, which occurs not only with increasing temperature \cite{Mo2004} but also if we drive the system out of equilibrium \cite{Werner2014,Sorantin2018,wais2018}. If we have an extra electron in a disordered spin background of the DMFT Mott insulator, it can or cannot hop to a neighboring site, depending on the spin orientation of this neighbor. This leads to a large scattering rate without changing the number of double occupations. These processes are included in DMFT but not in the BSE; they do not contribute to impact ionization (they do not change the number of double occupations) or to major energy redistributions. \section{Conclusion}\label{sec:conclusion} We have studied and compared scattering rates using two widely employed methods: BSE and DMFT. We have employed these methods out of their comfort zone, where they cannot be applied with mathematical rigor. For DMFT this is the dimensionality of the systems studied (2D), which is far away from the limit of infinite dimensions where DMFT becomes exact. For the BSE it is the strong-interaction regime of the Mott insulator, where a rate equation with perturbatively determined scattering rates cannot safely be applied. We have mitigated the latter in part by using the interacting spectral function instead of the non-interacting DOS as the quasiparticle states whose occupation dynamics (here the scattering rate) is calculated by the BSE. DMFT somewhat underestimates the scattering rates and by construction cannot resolve their momentum dependence, only their energy dependence. This momentum dependence is particularly strong in the middle of the band, where the van Hove singularity is located. The physical reason behind both discrepancies is that the phase space for the scattering of a quasiparticle with another quasiparticle explicitly depends on the available unoccupied states linked by momentum conservation. If we replace the momentum-conserving $\delta$-function by a Gaussian with increasing width, or directly disregard momentum conservation in the BSE without $\mathbf k$, the scattering rates are reduced and the DMFT results are reproduced by the BSE without $\mathbf k$ for the weakly correlated metal ($U=1$ or 2). The biggest challenge for the BSE is the strongly interacting Mott-insulating state. Here the DOS is split into two Hubbard bands, which we take as the starting quasiparticle DOS in the BSE without $\mathbf k$. In the BSE, the scattering rate is due to impact ionization. These processes are well described and in good agreement with DMFT~\cite{wais2018}. However, in DMFT additional scattering processes dominate, which can be associated with the formation of the Hubbard bands and the shoulders therein. The very same electrons that form the Hubbard bands through their interaction are also added as a charge probe, locally disturbing the spectrum. These huge DMFT scattering rates are beyond a BSE description with a static DOS.
Scattering in an interacting band insulator bears no similarity at all to that in the Mott insulator. It is strongly suppressed at low temperatures, since scattering is only possible if there are thermal excitations across the gap. The BSE without $\mathbf k$ and DMFT agree, while the BSE with momentum conservation has, similarly to the weakly correlated metal, somewhat larger scattering rates. The difference to the Mott insulator lies not only in the huge scattering associated with the Hubbard bands, but also in the absence of impact ionization, which dominates the scattering in the BSE for a Mott insulator. Impact ionization and Auger processes are only possible in a band insulator through processes of higher order in $U$, through quite small Coulomb matrix elements beyond the Kanamori interaction, or through a sizable hybridization between valence and conduction band. This strongly suggests that Mott insulators are better suited than band insulators for increasing the efficiency of solar cells through impact ionization \cite{Manousakis2010,Assmann2013,Werner2014,Sorantin2018,wais2018,Maislinger2020,Kauch2020a}. \begin{acknowledgments} We thank M. Eckstein and P. Werner for discussions, and acknowledge financial support from the Austrian Science Fund (FWF) through the Doctoral School W1243 Solids4Fun (Building Solids for Function; M.W.) and project P30997 (M.W., J.K., K.H.), and from Nanyang Technological University, NAP-SUG (M.B.). Calculations have been done in part on the Vienna Scientific Cluster (VSC). \end{acknowledgments}
\subsection{Results on ImageNet derivatives} \input{fig_text/tbl_cifar.tex} The miniImageNet dataset~\cite{NIPS2016_6385} is a standard benchmark for recent few-shot learning algorithms. It consists of 100 classes randomly sampled from ImageNet; each class contains 600 downsampled images of size 84$\times$84. We follow the widely used splitting protocol proposed in~\cite{ravi2017}, which uses 64 classes for meta-training, 16 classes for meta-validation, and the remaining 20 classes for meta-testing. The tieredImageNet dataset~\cite{ren2018metalearning} is another subset of ImageNet but has more classes (608 classes). These classes are first grouped into 34 higher-level categories, which are further divided into 20 training categories (351 classes), 6 validation categories (97 classes), and 8 testing categories (160 classes). Such a construction ensures that the training set is distinct enough from the testing set, making the problem more challenging. \noindent\textbf{Results.} During meta-testing, we evaluate our method with 3 runs, where in each run the accuracy is the mean accuracy of $1000$ randomly sampled tasks. We report the median of the 3 runs in Table~\ref{tab:miniImagenet}. Our simple baseline with ResNet-12 is already comparable to the state-of-the-art MetaOptNet~\cite{lee2019meta} on miniImageNet, and outperforms all previous works by at least 3\% on tieredImageNet. The network trained with distillation further improves over the simple baseline by 2-3\%. We notice that previous works~\cite{Qiao_2018_CVPR,rusu2018metalearning,NIPS2018_7352,Sun_2019_CVPR} have also leveraged the standard cross-entropy pre-training on the meta-training set. In~\cite{NIPS2018_7352,rusu2018metalearning}, a wide ResNet (WRN-28-10) is trained to classify all classes in the meta-training set (or the combined meta-training and meta-validation set), and then frozen during the meta-training stage. \cite{Dhillon2019ABF} also conducts pre-training, but the model is fine-tuned using the support images in the meta-testing set, achieving $57.73 \pm 0.62$. We adopt the same architecture and get $61.1 \pm 0.86$; hence fine-tuning on a small set of samples makes the performance worse. Another work~\cite{NIPS2018_7352} adopts a multi-task setting by jointly training on the standard classification task and the few-shot classification (5-way) task. In another work~\cite{Sun_2019_CVPR}, the ResNet-12 is pre-trained before mining hard tasks for the meta-training stage. In this work, we show that standard cross-entropy pre-training is sufficient to generate strong embeddings, without meta-learning techniques or any fine-tuning. \subsection{Results on CIFAR derivatives} The CIFAR-FS dataset~\cite{bertinetto2018meta} is a derivative of the original CIFAR-100 dataset, obtained by randomly splitting the 100 classes into 64, 16, and 20 classes for training, validation, and testing, respectively. The FC100 dataset~\cite{NIPS2018_7352} is also derived from the CIFAR-100 dataset, in a similar way to tieredImageNet. This results in 60 classes for training, 20 classes for validation, and 20 classes for testing. \noindent\textbf{Results.} Similar to the previous experiments, we evaluate our method with 3 runs, where in each run the accuracy is the mean accuracy of 3000 randomly sampled tasks. Table~\ref{tab:CIFAR} summarizes the results, which shows that our simple baseline is comparable to Prototypical Networks~\cite{NIPS2017_6996} and MetaOptNet~\cite{lee2019meta} on the CIFAR-FS dataset, and outperforms both of them on the FC100 dataset.
Our distillation version achieves the new state-of-the-art on both datasets. This verifies our hypothesis that a good embedding plays an important role in few-shot recognition. \section{Related works} \label{sec:related} \paragraph{Metric-based meta-learning.} The core idea in metric-based meta-learning is related to nearest-neighbour algorithms and kernel density estimation. Metric-based methods embed input data into fixed-dimensional vectors and use them to design proper kernel functions. The predicted label of a query is the weighted sum of the labels over the support samples. Metric-based meta-learning aims to learn a task-dependent metric. \cite{Koch2015} used a Siamese network to encode image pairs and predict a confidence score for each pair. Matching Networks \cite{NIPS2016_6385} employed two networks for query samples and support samples, respectively, and used an LSTM with read-attention to encode a full-context embedding of the support samples. Prototypical Networks \cite{NIPS2017_6996} learned to encode query samples and support samples into a shared embedding space; the metric used to classify query samples is the distance to the prototype representations of each class. Instead of using distances of embeddings, Relation Networks \cite{sung2018learning} leveraged a relational module to represent an appropriate metric. TADAM \cite{NIPS2018_7352} proposed metric scaling and metric task conditioning to boost the performance of Prototypical Networks. \paragraph{Optimization-based meta-learning.} Deep learning models are neither designed to train with very few examples nor to converge very fast. To fix that, optimization-based methods aim at learning from only a few examples. Meta-learner \cite{ravi2017} exploited an LSTM to satisfy two main desiderata of few-shot learning: quick acquisition of task-dependent knowledge and slow extraction of transferable knowledge. MAML \cite{pmlr-v70-finn17a} proposed a general optimization algorithm; it aims to find a set of model parameters such that a small number of gradient steps with a small amount of training data from a new task will produce large improvements on that task. In that paper, first-order MAML was also proposed, which ignores the second-order derivatives of MAML. It achieved results comparable to the complete MAML with orders-of-magnitude speedup. To further simplify MAML, Reptile \cite{Nichol2018OnFM} removed the re-initialization for each task, making it a more natural choice in certain settings. LEO \cite{rusu2018metalearning} proposed that it is beneficial to decouple optimization-based meta-learning algorithms from high-dimensional model parameters. In particular, it learned a stochastic latent space from which the high-dimensional parameters can be generated. MetaOptNet \cite{lee2019meta} replaced the linear predictor with an SVM in the MAML framework; it incorporated a differentiable quadratic programming (QP) solver to allow end-to-end learning. For a complete list of recent works on meta-learning, we refer readers to \cite{weng2018metalearning}. \paragraph{Towards understanding MAML.} To understand why MAML works in the first place, many efforts have been made, either from an optimization perspective or from a generalization perspective. Reptile \cite{Nichol2018OnFM} showed that a variant of MAML works even without re-initialization for each task, because it tends to converge towards a solution that is close to each task's manifold of optimal solutions.
In \cite{raghu2019rapid}, the authors analyzed whether the effectiveness of MAML is due to rapid learning of each task or to reusing high-quality features. They concluded that feature reuse is the dominant component in MAML's efficacy, which is reaffirmed by the experiments conducted in this paper. \paragraph{Meta-learning datasets.} Over the past several years, many datasets have been proposed to test meta-learning or few-shot learning algorithms. Omniglot~\cite{lake2015} was one of the earliest few-shot learning datasets; it contains thousands of handwritten characters from the world's alphabets, intended for a one-shot ``visual Turing test''. In~\cite{Lake2019TheOC}, the authors reported the 3-year progress for the Omniglot challenge, concluding that human-level one-shot learnability is still hard for current meta-learning algorithms. \cite{NIPS2016_6385} introduced miniImageNet, which is a subset of ImageNet~\cite{imagenet_cvpr09}. In~\cite{ren2018metalearning}, a large portion of ImageNet was used for few-shot learning tests. Meta-Dataset~\cite{triantafillou2019meta} summarized recent datasets and tested several representative methods in a uniform fashion. \paragraph{Knowledge distillation.} The idea of knowledge distillation (KD) dates back to \cite{Bucilua2006}. The original idea was to compress the knowledge contained in an ensemble of models into a single smaller model. In \cite{knowledgedistillation}, the authors generalized this idea and brought it into the deep learning framework. In KD, knowledge is transferred from the teacher model to the student model by minimizing a loss in which the target is the distribution of class probabilities induced by the teacher model. It was shown in \cite{yim2017} that KD has several benefits for optimization and knowledge transfer between tasks. BAN \cite{FurlanelloLTIA18} introduced sequential distillation, which also improved the performance of teacher models. In natural language processing (NLP), BAM \cite{clark2019bam} used BAN to distill from single-task models to a multi-task model, helping the multi-task model surpass its single-task teachers. Two other related works are \cite{mobahi2020self}, which provides a theoretical analysis of self-distillation, and CRD~\cite{tian2020contrastive}, which shows that distillation improves the transferability across datasets. \subsection{Embeddings from self-supervised representation learning} \label{sec:self-sup} \input{fig_text/tbl_self_supervised} \input{fig_text/tbl_ablation.tex} \input{fig_text/fig_exp_distill.tex} Using unsupervised learning~\cite{wu2018unsupervised,tian2019contrastive,He2019MomentumCF,tian2020makes} to improve the generalization of meta-learning algorithms~\cite{NIPS2016_6408} removes the need for data annotation. In addition to using embeddings from supervised pre-training, we also train a linear classifier on embeddings from self-supervised representation learning. Following MoCo~\cite{He2019MomentumCF} and CMC~\cite{tian2019contrastive} (both inspired by InstDis~\cite{wu2018unsupervised}), we train a ResNet50~\cite{He2015DeepRL} (without using labels) on the merged meta-training set to learn an embedding model. We compare the unsupervised ResNet50 to a supervised ResNet50. From Table~\ref{tab:self-sup}, we observe that using embeddings from the self-supervised ResNet50 is only slightly worse than using embeddings from the supervised ResNet50 (in the 5-shot setting, the results are comparable). This observation shows the potential of self-supervised learning in the scenario of few-shot learning.
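To make the evaluation pipeline concrete---a frozen embedding followed by a linear classifier---the sketch below runs one synthetic 5-way 1-shot episode. The Gaussian features are placeholders for embeddings from a pre-trained (supervised or self-supervised) network; the scikit-learn call mirrors the base classifier used throughout this paper.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

# One synthetic N-way K-shot episode: fit a linear classifier on
# L2-normalized support embeddings, evaluate on query embeddings.
# The Gaussian features stand in for the frozen network's output.
rng = np.random.default_rng(0)
n_way, k_shot, n_query, dim = 5, 1, 15, 640

protos = rng.normal(size=(n_way, dim))            # class "centers"
support = protos.repeat(k_shot, 0) + 0.5*rng.normal(size=(n_way*k_shot, dim))
query = protos.repeat(n_query, 0) + 0.5*rng.normal(size=(n_way*n_query, dim))
y_s = np.arange(n_way).repeat(k_shot)
y_q = np.arange(n_way).repeat(n_query)

norm = lambda z: z / np.linalg.norm(z, axis=1, keepdims=True)
clf = LogisticRegression(max_iter=1000).fit(norm(support), y_s)
print(f"episode accuracy: {clf.score(norm(query), y_q):.3f}")
\end{verbatim}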
\subsection{Ablation experiments} In this section, we conduct ablation studies to analyze how each component affects the few-shot recognition performance. We study the following four components of our method: (a) we choose logistic regression as our base learner and compare it to a nearest-neighbour classifier with Euclidean distance; (b) we find that normalizing the feature vectors onto the unit sphere, e.g., via $\mathcal{L}$-2 normalization, can improve the classification of the downstream base classifier; (c) during meta-testing, we create 5 augmented samples from each support image to alleviate the data-insufficiency problem, and use these augmented samples to train the linear classifier; (d) we distill the embedding network on the training set by following the sequential distillation~\cite{FurlanelloLTIA18} strategy. Table~\ref{table:ablation} shows the results of our ablation studies on miniImageNet, tieredImageNet, CIFAR-FS, and FC100. In general, logistic regression significantly outperforms the nearest-neighbour classifier, especially for the 5-shot case; $\mathcal{L}$-2 normalization consistently improves the 1-shot accuracy by 2\% on all datasets; augmenting the support images leads to a marginal improvement; and even with all these techniques, distillation can still provide a 2\% extra gain. \subsection{Effects of distillation}\label{sec:exp_distill} \input{fig_text/tbl_backbone.tex} \input{fig_text/tbl_backbone_cifar.tex} We can use sequential self-distillation to obtain an embedding model, similar to the one in Born-again networks~\cite{FurlanelloLTIA18}. We therefore investigate the effect of this strategy on the performance of downstream few-shot classification. In addition to the logistic regression and nearest-neighbour classifiers, we also look into a cosine-similarity classifier, which is equivalent to the nearest-neighbour classifier but with normalized features (denoted as ``NN+Norm.''). The plots of the 1-shot and 5-shot results on miniImageNet and CIFAR-FS are shown in Figure~\ref{fig:exp_distill}. The 0-th generation (or root generation) refers to the vanilla model trained with only the standard cross-entropy loss, and the ($k$-$1$)-th generation is distilled into the $k$-th generation. In general, the few-shot recognition performance keeps getting better for the first two or three generations. After a certain number of generations, the accuracy starts decreasing for logistic regression and nearest-neighbour. Normalizing the features can significantly alleviate this problem. In Table~\ref{tab:miniImagenet}, Table~\ref{tab:CIFAR}, and Table~\ref{table:ablation}, we evaluate the model of the second generation on the miniImageNet, CIFAR-FS, and FC100 datasets; we use the first generation on tieredImageNet. Model selection is done on the validation set. \subsection{Choice of base classifier} \label{sec:choice-base-learner} One might argue that, in the 1-shot case, a linear classifier should behave similarly to a nearest-neighbour classifier. However, in Table~\ref{table:ablation} and Figure~\ref{fig:exp_distill}, we find that logistic regression is clearly better than nearest-neighbour. We argue that this is caused by the scale of the features. After we normalize the features by the $\mathcal{L}$-2 norm, logistic regression (``LR+Norm'') performs similarly to the nearest-neighbour classifier (``NN+Norm.''), as shown in the first row of Figure~\ref{fig:exp_distill}.
However, when increasing the size of the support set to 5, logistic regression is significantly better than nearest-neighbour, even after feature normalization. \subsection{Comparisons of different network backbones} \input{fig_text/tbl_meta_dataset} Better backbone networks generally produce better results; this also holds in few-shot learning and/or meta-learning (as shown in Table~\ref{tab:miniImagenet}). To further verify our assumption that the key to the success of few-shot learning algorithms is the quality of the embeddings, we compare three alternatives in Table~\ref{tab:backbone} and Table~\ref{tab:backbone_cifar}: a ConvNet with four convolutional layers (64, 64, 64, 64); a ResNet12 as in Table~\ref{tab:miniImagenet}; and a ResNet12 with squeeze-and-excitation~\cite{hu2018squeeze} modules. For each model, we have four settings: training on the meta-training set; training and distilling on the meta-training set; training on the meta-training set and meta-validation set; and training and distilling on the meta-training set and meta-validation set. The results consistently improve with more data and better networks. This is in line with our hypothesis: embeddings are the most critical factor for the performance of few-shot learning/meta-learning algorithms; better embeddings will lead to better few-shot testing performance (even with a simple linear classifier). In addition, our ConvNet model also outperforms other few-shot learning and/or meta-learning models using the same network. This verifies that in both the small-model regime (ConvNet) and the large-model regime (ResNet), few-shot learning and meta-learning algorithms are \emph{no better} than learning a good embedding model. \subsection{Multi-task vs multi-way classification?} \label{sec:multitask} We are interested in understanding whether the efficacy of our simple baseline is due to multi-task or multi-way classification. We compare to training an embedding model through \emph{multi-task} learning: a model with a shared embedding network and different classification heads is constructed, where each head classifies only the corresponding category; then we use the embedding model to extract features as we do with our baseline model. This achieves $58.53\pm 0.8$ on the miniImageNet 5-way 1-shot case, compared to our baseline model, which achieves $62.02\pm 0.63$. So we argue that the specialty of our setting, where the few-shot classification tasks are mutually exclusive and can be merged together into a single \emph{multi-way} classification task, makes the simple model effective. \section{Experiments}\label{sec:exp} We conduct experiments on four widely used few-shot image recognition benchmarks: miniImageNet~\cite{NIPS2016_6385}, tieredImageNet~\cite{ren2018metalearning}, CIFAR-FS~\cite{bertinetto2018meta}, and FC100~\cite{NIPS2018_7352}. The first two are derivatives of ImageNet~\cite{ILSVRC15}, while the last two are reorganized from the standard CIFAR-100 dataset~\cite{Krizhevsky09learningmultiple,Torralba2008}. Additional results on Meta-Dataset~\cite{triantafillou2019meta} are presented in~\S\ref{sec:meta-dataset}. \input{section/exp-implementation.tex} \input{section/exp-main-results.tex} \input{section/exp-ablation-analysis.tex} \section{Results on Meta-Dataset}\label{sec:meta-dataset} Meta-Dataset~\cite{triantafillou2019meta} is a new benchmark for evaluating few-shot methods in large-scale settings. Compared to miniImageNet and tieredImageNet, Meta-Dataset provides more diverse and realistic samples.
\textbf{Setup.} The ILSVRC (ImageNet) subset consists of 712 classes for training, 158 classes for validation, and 130 classes for testing. We follow the setting in Meta-Dataset~\cite{triantafillou2019meta}, where the embedding model is trained solely on the ILSVRC training split. We use ResNet-18~\cite{He2015DeepRL} as the backbone network. The input size is 128$\times$128. In the pre-training stage, we use the SGD optimizer with a momentum of 0.9. The learning rate is initially 0.1 and decayed by a factor of 10 every 30 epochs. We train the model for 90 epochs in total. The batch size is 256. We use standard data augmentation, including randomly resized crops and horizontal flips. In the distillation stage, we set $\alpha=0.5$ and $\beta=1.0$. We perform distillation twice and use the model from the second generation for meta-testing. We do not use test-time augmentation in meta-testing. In addition to logistic regression (LR), we also provide results of a linear SVM for completeness. We select the best results from~\cite{triantafillou2019meta} for comparison -- for each testing subset, we pick the best accuracy over 7 methods and 3 different architectures, including a 4-layer ConvNet, a Wide ResNet, and a ResNet-18. As shown in Table~\ref{tab:meta-dataset}, our simple baselines clearly outperform the best results from~\cite{triantafillou2019meta} on 9 out of 10 testing datasets, often by a large margin. Our baseline method using LR outperforms the previous best results by more than $7\%$ on average. Also, self-distillation improves \texttt{max(LR, SVM)} in 7 out of the 10 testing subsets. Moreover, we notice empirically that logistic regression (LR) performs better than the linear SVM. \subsection{Setup} \input{fig_text/tbl_imagenet.tex} \textbf{Architecture.} Following previous works~\cite{mishra2017simple,NIPS2018_7352,lee2019meta,Ravichandran_2019_ICCV,Dhillon2019ABF}, we use a ResNet12 as our backbone: the network consists of 4 residual blocks, where each has 3 convolutional layers with a 3$\times$3 kernel; a 2$\times$2 max-pooling layer is applied after each of the first 3 blocks; and a global average-pooling layer is on top of the fourth block to generate the feature embedding. Similar to~\cite{lee2019meta}, we use DropBlock as a regularizer and change the number of filters from (64,128,256,512) to (64,160,320,640). As a result, our ResNet12 is identical to that used in~\cite{Ravichandran_2019_ICCV,lee2019meta}. \textbf{Optimization setup.} We use the SGD optimizer with a momentum of 0.9 and a weight decay of $5\times 10^{-4}$. Each batch consists of 64 samples. The learning rate is initialized as $0.05$ and decayed by a factor of $0.1$ three times for all datasets, except for miniImageNet, where we only decay twice, as the third decay has no effect. We train 100 epochs for miniImageNet, 60 epochs for tieredImageNet, and 90 epochs for both CIFAR-FS and FC100. During distillation, we use the same learning schedule and set $\alpha=\beta=0.5$. \textbf{Data augmentation.} When training the embedding network on the transformed meta-training set, we adopt random crops, color jittering, and random horizontal flips as in \cite{lee2019meta}. For the meta-testing stage, we train an $N$-way logistic regression base classifier. We use the implementations in scikit-learn \cite{sklearn} for the base classifier. \section{Introduction} Few-shot learning measures a model's ability to quickly adapt to new environments and tasks. This is a challenging problem because only limited data is available to adapt the model.
Recently, significant advances \cite{Wang-2016-4848,NIPS2016_6385,Triantafillou2017FewShotLT,pmlr-v70-finn17a,NIPS2017_6996,sung2018learning,Wang2018LowShotLF,NIPS2018_7352,rusu2018metalearning,YeHZS2018Learning,lee2019meta,li2019finding} have been made to tackle this problem using the ideas of meta-learning or ``learning to learn''. Meta-learning defines a family of tasks, divided into disjoint meta-training and meta-testing sets. Each task consists of limited training data, which requires fast adaptability~\cite{NIPS2018_7293} of the learner (e.g., the deep network that is fine-tuned). During meta-training/testing, the learner is trained and evaluated on a task sampled from the task distribution. The performance of the learner is evaluated by the average test accuracy across many meta-testing tasks. Methods to tackle this problem can be cast into two main categories: optimization-based methods and metric-based methods. Optimization-based methods focus on designing algorithms that can quickly adapt to each task, while metric-based methods aim to find good metrics (usually kernel functions) to side-step the need for inner-loop optimization for each task. Meta-learning is evaluated on a number of domains such as few-shot classification and meta-reinforcement learning. Focusing on few-shot classification tasks, a question that has been raised in recent work is whether it is the meta-learning algorithm or the learned representation that is responsible for the fast adaptation to test-time tasks. \cite{raghu2019rapid} suggested that feature reuse is the main factor for fast adaptation. Recently, \cite{Dhillon2019ABF} proposed transductive fine-tuning as a strong baseline for few-shot classification; and even in a regular, inductive, few-shot setup, they showed that fine-tuning is only slightly worse than state-of-the-art algorithms. In this setting, they fine-tuned the network on the meta-testing set and \emph{used} information from the testing data. Besides, \cite{chen19closerfewshot} shows that an improved fine-tuning model performs only slightly worse than meta-learning algorithms. In this paper, we propose an extremely simple baseline that suggests that good learned representations are more powerful for few-shot classification tasks than the current crop of complicated meta-learning algorithms. Our baseline consists of a \emph{linear} model learned on top of a pre-trained embedding. Surprisingly, we find this outperforms \emph{all other meta-learning algorithms} on few-shot classification tasks, often by large margins. The differences between our approach and that of \cite{Dhillon2019ABF} are: we \emph{do not} utilize information from testing data (since we believe that inductive learning is more generally applicable to few-shot learning); and we use a fixed neural network for feature extraction, rather than fine-tuning it on the meta-testing set. The findings in the concurrent works~\cite{Chen2020ANM,Huang2019AllYN} are in line with our simple baseline. Our model learns representations by training a neural network on the entire meta-training set: we merge all meta-training data into a single task, and a neural network is asked to perform either ordinary classification or self-supervised learning on this combined dataset. The classification task is equivalent to the pre-training phase of TADAM \cite{NIPS2018_7352} and LEO \cite{rusu2018metalearning}. After training, we keep the pre-trained network up to the penultimate layer and use it as a feature extractor.
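Keeping the network up to the penultimate layer amounts to replacing the classification head with an identity map; below is a minimal PyTorch sketch, where torchvision's ResNet-18 is used as a stand-in for our ResNet-12 (which torchvision does not provide).
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen feature extractor: drop the classification head and keep the
# network up to the penultimate (global average pooling) layer.
# torchvision's resnet18 is a stand-in for the ResNet-12 of the paper.
encoder = models.resnet18(num_classes=64)  # 64 meta-training classes
# ... load the pre-trained weights here ...
encoder.fc = nn.Identity()                 # remove the classifier head
encoder.eval()                             # no fine-tuning at meta-test

with torch.no_grad():
    support = torch.randn(5, 3, 84, 84)    # a 5-way 1-shot support set
    emb = encoder(support)                 # -> (5, 512) embeddings
print(emb.shape)
\end{verbatim}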
During meta-testing, for each task, we fit a linear classifier on the features extracted by the pre-trained network. In contrast to \cite{Dhillon2019ABF} and \cite{raghu2019rapid}, we \emph{do not} fine-tune the neural network. Furthermore, we show that self-distillation on this baseline provides an additional boost. Self-distillation is a form of knowledge distillation \cite{knowledgedistillation}, where the student and teacher models are \emph{identical} in architecture and task. We apply self-distillation to the pre-trained network. \paragraph*{Contributions.} Our key contributions are: \begin{itemize}[itemsep=0pt] \item A surprisingly simple baseline for few-shot learning, which achieves the state-of-the-art. This baseline suggests that many recent meta-learning algorithms are \emph{no better} than simply learning a good representation through a proxy task, e.g., image classification. \item Building upon the simple baseline, we use self-distillation to further improve performance. \item Our combined method achieves an average of $3\%$ improvement over the previous state-of-the-art on widely used benchmarks. On the new benchmark Meta-Dataset~\cite{triantafillou2019meta}, our method outperforms previous best results by more than $7\%$ on average. \item Beyond supervised training, we show that representations learned with state-of-the-art self-supervised methods achieve similar performance as fully supervised methods. Thus we can ``learn to learn'' simply by learning a good self-supervised embedding. \end{itemize} \section{Architectures} \input{fig_text/fig_arch.tex} \noindent The architectures of ResNet-12 and SEResNet-12 are shown in Figure~\ref{fig:arch}. \section{More Training Details} For SEResNet-12, we use the same training setup as for ResNet-12 on all four benchmarks, as described in Sec.~4.1. For the 4-layer ConvNet, we also use the same training setup as for ResNet-12 on tieredImageNet, CIFAR-FS, and FC100. For miniImageNet, we train for 240 epochs, with the learning rate decayed at epochs 150, 180, and 210 by a factor of 0.1. We found that using the logit layer as the feature results in slightly better accuracy ($\leq1\%$) on miniImageNet, so we report this number in Table~\ref{tab:backbone} for miniImageNet. \section{Unsupervised Learning Details} We adapt the first layer of a standard ResNet-50 to take images of size $84\times84$ as input. We train only on the meta-training set of the miniImageNet dataset (we do not use the meta-validation set). We follow the training recipe in CMC~\cite{tian2019contrastive} and MoCo~\cite{He2019MomentumCF} (which also follows InstDis~\cite{wu2018unsupervised}), except for two differences. The first one is that we only use $2048$ negatives for each positive sample, as miniImageNet contains fewer than $40$k images in total. The second difference is that we train for $2000$ epochs, with a learning rate initialized as $0.03$ and decayed by cosine annealing. \section{Method} We establish preliminaries about the meta-learning problem and related algorithms in \S\ref{sec:formulation}; then we present our baseline in \S\ref{sec:baseline}; finally, we introduce how knowledge distillation helps few-shot learning in \S\ref{sec:kd}. For ease of comparison to previous work, we use the same notation as \cite{lee2019meta}. \subsection{Problem formulation} \label{sec:formulation} The collection of meta-training tasks is defined as $\mathcal{T} = \{(\mathcal{D}^{train}_i, \mathcal{D}^{test}_i)\}^I_{i=1}$, termed the meta-training set.
The tuple $(\mathcal{D}^{train}_i, \mathcal{D}^{test}_i)$ describes a training and a testing dataset of a task, where each dataset contains a small number of examples. Training examples $\mathcal{D}^{train}=\{(\textbf{x}_t, y_t)\}^T_{t=1}$ and testing examples $\mathcal{D}^{test}=\{(\textbf{x}_q, y_q)\}^Q_{q=1}$ are sampled from the same distribution. A base learner $\mathcal{A}$, which is given by $y_* = f_\theta(\textbf{x}_*)$ ($*$ denotes $t$ or $q$), is trained on $\mathcal{D}^{train}$ and used as a predictor on $\mathcal{D}^{test}$. Due to the high dimensionality of $\textbf{x}_*$, the base learner $\mathcal{A}$ suffers from high variance. So training examples and testing examples are mapped into a feature space by an embedding model $\boldsymbol{\Phi_*} = f_\phi(\textbf{x}_*)$. Assuming the embedding model is fixed while training the base learner on each task, the objective of the base learner is \begin{equation} \label{baselearner} \begin{split} \theta &= \mathcal{A}(\mathcal{D}^{train}; \phi) \\ &=\argmin_\theta\mathcal{L}^{base}(\mathcal{D}^{train}; \theta, \phi) + \mathcal{R}(\theta), \end{split} \end{equation} where $\mathcal{L}^{base}$ is the loss function and $\mathcal{R}$ is the regularization term. The objective of the meta-learning algorithms is to learn a good embedding model, so that the average test error of the base learner on a distribution of tasks is minimized. Formally, \begin{equation} \label{metalearner} \begin{split} \phi = \argmin_{\phi} \mathbb{E}_{\mathcal{T}}[\mathcal{L}^{meta}(\mathcal{D}^{test}; \theta, \phi)], \end{split} \end{equation} where $\theta= \mathcal{A}(\mathcal{D}^{train}; \phi)$. Once meta-training is finished, the performance of the model is evaluated on a set of held-out tasks $\mathcal{S} = \{(\mathcal{D}^{train}_j, \mathcal{D}^{test}_j)\}^J_{j=1}$, called the meta-testing set. The evaluation is done over the distribution of the test tasks: \begin{equation} \mathbb{E}_{\mathcal{S}}[\mathcal{L}^{meta}(\mathcal{D}^{test}; \theta, \phi)], \quad \text{where}~\theta= \mathcal{A}(\mathcal{D}^{train}; \phi). \end{equation} \subsection{Learning embedding model through classification} \label{sec:baseline} \begin{figure}[t!] \centering \includegraphics[width=1.0\columnwidth]{./fig/meta-train} \caption{In meta-training, we train on an image classification task on the merged meta-training data to learn an embedding model. This model is then re-used at meta-testing time to extract embeddings for a simple linear classifier.} \label{fig:meta-train} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=1.6\columnwidth]{./fig/meta-test} \caption{We show a meta-testing case for a 5-way 1-shot task: 5 support images and 1 query image are transformed into embeddings using the fixed neural network; a linear model (logistic regression (LR) in this case) is trained on the 5 support embeddings; the query image is tested using the linear model.} \label{fig:meta-test} \end{figure*} As we show in \S\ref{sec:formulation}, the goal of meta-training is to learn a transferable embedding model $f_\phi$ which generalizes to any new task. Rather than designing new meta-learning algorithms to learn the embedding model, we propose that a model pre-trained on a classification task can generate powerful embeddings for the downstream base learner.
To that end, we merge tasks from the meta-training set into a single task, which is given by \begin{equation} \label{mergetask} \begin{split} \mathcal{D}^{new} &= \{(\textbf{x}_i, y_i)\}^K_{i=1} \\ &= \cup \{\mathcal{D}^{train}_1, \ldots, \mathcal{D}^{train}_i, \ldots, \mathcal{D}^{train}_I\}, \end{split} \end{equation} where $\mathcal{D}^{train}_i$ is the training set of the $i$-th task from $\mathcal{T}$. The embedding model is then \begin{equation} \label{phi} \begin{split} \phi = \argmin_\phi \mathcal{L}^{ce} (\mathcal{D}^{new}; \phi), \end{split} \end{equation} where $\mathcal{L}^{ce}$ denotes the cross-entropy loss between predictions and ground-truth labels. We visualize the task in Figure~\ref{fig:meta-train}. As shown in Figure~\ref{fig:meta-test}, for a task $(\mathcal{D}^{train}_j, \mathcal{D}^{test}_j)$ sampled from the meta-testing distribution, we train a base learner on $\mathcal{D}^{train}_j$. The base learner is instantiated as multivariate logistic regression. Its parameters $\theta=\{\boldsymbol{W},\boldsymbol{b}\}$ include a weight term $\boldsymbol{W}$ and a bias term $\boldsymbol{b}$, given by \begin{equation} \label{w} \begin{split} \theta = \argmin_{\{\boldsymbol{W},\boldsymbol{b}\}} \sum_{t=1}^T\mathcal{L}^{ce}_t (\boldsymbol{W}f_{\phi}(\textbf{x}_t)+\boldsymbol{b}, y_t) + \mathcal{R}(\boldsymbol{W},\boldsymbol{b}). \end{split} \end{equation} We also evaluate other base learners, such as a nearest-neighbour classifier with $\mathcal{L}$-2 distance and/or cosine distance, in \S\ref{sec:choice-base-learner}. In our method, the crucial difference between meta-training and meta-testing is that the embedding model parameterized by $\phi$ is carried over from meta-training to meta-testing and kept unchanged when evaluated on tasks sampled from the meta-testing set. The base learner is re-initialized for every task and trained on $\mathcal{D}^{train}$ of the meta-testing task. Our method is the same as the pre-training phase of the methods used in \cite{rusu2018metalearning,NIPS2018_7352}. Unlike other methods \cite{Dhillon2019ABF,raghu2019rapid}, we \emph{do not} fine-tune the embedding model $f_\phi$ during the meta-testing stage. \subsection{Sequential self-distillation} \label{sec:kd} \begin{figure}[t!] \centering \includegraphics[width=1.0\columnwidth]{./fig/self-distill} \caption{Sequential self-distillation: a vanilla model, termed \emph{Generation 0}, is trained with the standard cross-entropy loss; then, the $k$-th generation is learned with knowledge distilled from the ($k$-1)-th generation.} \label{fig:self-distill} \end{figure} Knowledge distillation \cite{knowledgedistillation} is an approach to transfer knowledge embedded in an ensemble of models to a single model, or from a larger teacher model to a smaller student model. Instead of using the embedding model directly for meta-testing, we distill the knowledge from the embedding model into a new model with an identical architecture, trained on the same merged meta-training set. The new embedding model parameterized by $\phi^\prime$ is trained to minimize a weighted sum of the cross-entropy loss between the predictions and ground-truth labels and the Kullback--Leibler (KL) divergence between the predictions and the soft targets predicted by $f_{\phi}$: \begin{equation} \begin{split} \phi^\prime = \argmin_{\phi^\prime} & (\alpha \mathcal{L}^{ce} (\mathcal{D}^{new}; \phi^\prime) + \\ & \beta KL(f(\mathcal{D}^{new}; \phi^\prime), f(\mathcal{D}^{new};\phi))), \end{split} \end{equation} where usually $\beta = 1-\alpha$.
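A minimal PyTorch sketch of this distillation objective (the temperature, usually applied to both logits in practice, is fixed to 1 here for brevity):
\begin{verbatim}
import torch
import torch.nn.functional as F

# Weighted sum of cross-entropy w.r.t. ground truth and KL divergence
# w.r.t. the frozen teacher, as in the equation above (temperature = 1).
def distill_loss(student_logits, teacher_logits, labels,
                 alpha=0.5, beta=0.5):
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits, dim=1),
                  F.softmax(teacher_logits.detach(), dim=1),
                  reduction="batchmean")   # teacher is not updated
    return alpha * ce + beta * kl

# toy usage
s = torch.randn(8, 64, requires_grad=True)
t = torch.randn(8, 64)
y = torch.randint(0, 64, (8,))
loss = distill_loss(s, t, y)
loss.backward()
print(float(loss))
\end{verbatim}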
We exploit the Born-again~\cite{FurlanelloLTIA18} strategy to apply knowledge distillation (KD) sequentially, generating multiple generations, as shown in Figure~\ref{fig:self-distill}. At each step, the embedding model of the $k$-th generation is trained with knowledge transferred from the embedding model of the $(k-1)$-th generation:
\begin{equation}
\begin{split}
\phi_k = \argmin_\phi & (\alpha \mathcal{L}^{ce} (\mathcal{D}^{new}; \phi) + \\
& \beta KL(f(\mathcal{D}^{new}; \phi), f(\mathcal{D}^{new};\phi_{k-1}))).
\end{split}
\end{equation}
Assuming we repeat the operation $K$ times, we use $\phi_K$ as the embedding model to extract features for meta-testing. We analyze the effects of sequential self-distillation in \S\ref{sec:exp_distill}.

\section{Discussion}
We have proposed a simple baseline for few-shot image classification in the meta-learning context. This approach has been underappreciated in the literature thus far. We show with numerous experiments that such a simple baseline outperforms the current state of the art on four widely-used few-shot benchmarks. Combined with self-distillation, the performance improves further by 2-3\%. Even when meta-training labels are unavailable, it may be possible to leverage state-of-the-art self-supervised learning approaches to learn very good embeddings for meta-testing tasks.

\noindent1. What is the intuition of this paper? \\
\textbf{A:} We hope this paper will shed new light on few-shot classification. We believe representations play an important role. As shown by our empirical experiments, a linear model can generalize well as long as a good representation of the data is given.

\noindent2. Why does this simple baseline work? Is there anything that makes few-shot classification special? \\
\textbf{A:} Few-shot classification is a special case of meta-learning in terms of compositionality of tasks. Each task is a $K$-way classification problem, and on current benchmarks the classes, even between tasks, are all mutually exclusive. This means we can merge all $N$ of the $K$-way classification tasks into a single but harder $NK$-way classification task. Our finding is that training an embedding model on this new $NK$-way task turns out to transfer well to the meta-testing set. On the other hand, we also find that a self-supervised embedding, which does not explicitly require this $NK$ compositionality, achieves a similar level of performance. A concurrent work~\cite{Du2020FewShotLV} studies the representations for few-shot learning from a theoretical point of view.

\noindent3. Does your work negate recent progress in meta-learning? \\
\textbf{A:} No. Meta-learning is much broader than just few-shot classification. Although we show that a simple baseline outperforms other complicated meta-learning algorithms in few-shot classification, methods like MAML may still be favorable in other meta-learning domains (e.g., meta-reinforcement learning).

\noindent4. Why does distillation work? What does it suggest? \\
\textbf{A:} The soft labels \cite{knowledgedistillation} from the teacher model capture the fact that some classes are closer to each other than others. For example, a white goat is much more similar to a brown horse than to an airplane. But the one-hot label does not capture this. After being regularized by soft labels, the network learns to capture this metric distance. From a theoretical perspective, \cite{phuong2019towards} provides an analysis of the linear case. Ongoing work~\cite{Mobahi2020SelfDistillationAR} argues that self-distillation amplifies regularization in a Hilbert space.
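As a schematic illustration of the sequential self-distillation loop of \S\ref{sec:kd}, the following PyTorch sketch trains generation $k$ against generation $k-1$. The toy architecture, random data, optimizer settings, and the temperature $T$ with its customary $T^2$ rescaling are illustrative assumptions, not our actual training recipe.
\begin{verbatim}
# One Born-again generation: the student minimizes alpha*CE + beta*KL
# against the previous generation's softened predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_generation(teacher, student, data, alpha=0.5, beta=0.5, T=4.0):
    opt = torch.optim.SGD(student.parameters(), lr=0.05, momentum=0.9)
    teacher.eval()
    for x, y in data:
        with torch.no_grad():
            t_logits = teacher(x)
        s_logits = student(x)
        loss = (alpha * F.cross_entropy(s_logits, y)
                + beta * T * T * F.kl_div(
                      F.log_softmax(s_logits / T, dim=1),
                      F.softmax(t_logits / T, dim=1),
                      reduction="batchmean"))
        opt.zero_grad(); loss.backward(); opt.step()
    return student

make_model = lambda: nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                   nn.Linear(64, 10))
data = [(torch.randn(16, 32), torch.randint(0, 10, (16,)))
        for _ in range(8)]
models = [make_model()]        # Generation 0 (plain CE training omitted)
for k in range(1, 4):          # Generations 1..3 distill from k-1
    models.append(distill_generation(models[k - 1], make_model(), data))
\end{verbatim}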
\section{Introduction}
\subsection{Random Matrix Theory in disordered and complex systems: brief overview}
The idea of Wigner \cite{10.2307/1970079} to describe complex physical systems by treating their Hamiltonian matrices as random has since found a wide variety of applications. One of the main interests and challenges of modern theoretical physics to which random matrix theory has been very successfully applied is the description of interacting many-particle systems subject to a certain degree of randomness. Physically, this randomness is often caused by true physical disorder, originating for instance from irregularities in a crystal lattice or from the presence of impurities. One can also have auxiliary, phenomenological randomness representing the fact that the interactions in the system are too complicated to be described in microscopic detail, which is the case, for instance, for heavy nuclei. Further, the quantum noise induced when a system is in contact with an external bath is a source of temporal randomness.

Random matrix theory (RMT) allows one to deal with such problems on a phenomenological level. This theory cannot answer questions about the microscopic details of a system, but focuses instead on {\it universal} relations and scaling properties of relevant quantities. Indeed, one of the main results of RMT is the existence of universality classes (see \cite{tao2012random} for a survey), in which the symmetry of the system determines the class and, consequently, the statistical properties of the energy spectrum.

RMT models disordered and/or complicated Hamiltonians as matrices with random elements distributed according to a certain probability distribution. Certain general physical symmetries (like time-reversal symmetry) provide restrictions on how the matrix elements are correlated. This leads to different classes of random matrices \cite{dyson}; see the classic book by Mehta \cite{mehta} and a contemporary overview of RMT by Forrester \cite{Forrester:1315169}. Here, we will consider ensembles of Hermitian or unitary matrices, in particular their eigenvalue statistics. A prominent random matrix ensemble (RME) is the Gaussian unitary ensemble (GUE), an ensemble of Hermitian random matrices ${\bf H}$ with a Gaussian weight function. This entails that its matrix elements are distributed according to a $U(N)$-invariant Gaussian probability distribution $P(H) \sim \exp[- \alpha \mbox{Tr}V({\bf H})]$, where $V({\bf H})={\bf H}^{2}$ and $\alpha$ is a real positive parameter. Other classes correspond to ensembles of real symmetric matrices, with the probability measure being invariant under orthogonal transformations, or self-dual Hermitian matrices with a probability distribution invariant under symplectic transformations, known as the GOE and GSE, respectively \cite{mehta}. Another notable generalization is the notion of a circular RME, where the eigenvalues are distributed on the complex unit circle instead of the real line. The circular analogues of the GOE, GUE, and GSE are known as the COE, CUE, and CSE, respectively. We will only be considering unitary ensembles here. Further, although many properties are common to the Gaussian and circular ensembles, certain objects are easier to calculate in the circular case, which is why these ensembles are the focus of this paper.
For {\it typical} systems, which obey the so-called Eigenstate Thermalization Hypothesis (see \cite{D_Alessio_2016}, \cite{Deutsch_2018} for recent reviews), almost every energy level contains ``seeds'' of thermal behavior (even for isolated systems), leading to the chaotic nature of the RMT statistics. Quantum states of this type are therefore called {\it ergodic}. In disordered systems, the delocalized or chaotic phase is described by Wigner-Dyson statistics, in which case the level spacing distribution is given by $p(s)\sim s^{\beta}e^{-a_\beta s^{2}}$, where $s$ is the difference between consecutive energy levels, $\beta=1,2,4$ for the orthogonal, unitary and symplectic cases, respectively, and $a_\beta$ is a constant. As the strength of randomness increases, there can occur a transition to the situation where the states of a system are localized in {\it some} basis. This could be a basis of states relevant for the description of localization in real space (Anderson localization) or in Hilbert space (many-body localization). Deep inside a localized phase, the behavior of the system is nonergodic and the level statistics follows a Poisson distribution, $p(s)\sim e^{-s}$. This type of statistics is usually found in quantum {\it integrable} systems, where a sufficient number of conserved charges significantly constrains the dynamics.

\subsection{Intermediate statistics and corresponding RMT approaches}
Quantum systems whose classical counterparts are somewhere in between ordered and chaotic have spectral statistics that exhibit a mixture of Wigner-Dyson and Poissonian features, which we will refer to as \textit{intermediate statistics}. An important example of such a system is given by disordered conductors, where increasing the disorder strength leads to greater deviation from Wigner-Dyson universality. At the point of transition between the extended and localized regimes the wave functions are \textit{multifractal} \cite{eversmirlin}, which entails that intersecting the wave function at various amplitudes gives a set of varying fractal dimensions depending on the amplitude. A natural question occurs: is it possible to unveil some universality, perhaps based on RMT, for the {\it ergodic-to-nonergodic transition} itself for a broad range of systems? Some works in the literature hint at this possibility. There have been several proposals in this direction \cite{shklovskii}, \cite{MNS}, \cite{hofstetter}, \cite{ALTSHULER1997487}, \cite{muttens}, \cite{Kravtsov-tsvelik}, \cite{varga}, \cite{Bogomolny}. Since the Anderson transition occurs in real space, the RME symmetry should be broken in some way: this is a general feature required for an RMT description of the transition. One obvious class of RMTs should therefore have a {\it manifestly} broken symmetry. A notable example of such theories is given by the banded, non-invariant RMTs \cite{eversmirlin}. Here, the probability distribution $P(H)\sim\exp(-\sum_{i,j}|H_{ij}|^{2}/A_{ij})$ is defined by the variance matrix $A_{ij}\sim [1+(i-j)^{2}/B^{2}]^{-1}$, which is clearly non-invariant with respect to unitary transformations of the form $H\to UHU^{\dag}$. It was explicitly demonstrated that this ensemble exhibits intermediate statistics and multifractal wave functions \cite{PhysRevE.54.3221}. However, one can also have intermediate statistics in ensembles where the symmetry is not explicitly broken, i.e. for which the measure is invariant with respect to the transformations from the corresponding group. We focus on these ensembles here.
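As a point of reference for the two limiting statistics between which the intermediate ensembles interpolate, the following Python sketch (our own illustration, independent of any particular physical model) samples Haar-random unitary matrices, i.e. the CUE discussed below, and compares their nearest-neighbour spacings with uncorrelated Poissonian levels. For the unitary class, the Wigner surmise reads $p(s)=\frac{32}{\pi^{2}}s^{2}e^{-4s^{2}/\pi}$, so small spacings are strongly suppressed.
\begin{verbatim}
# Level repulsion in CUE spectra versus Poissonian statistics.
import numpy as np
from scipy.stats import unitary_group

def cue_spacings(n_dim=200, n_samples=50, seed=0):
    spacings = []
    for i in range(n_samples):
        u = unitary_group.rvs(n_dim, random_state=seed + i)
        phases = np.sort(np.angle(np.linalg.eigvals(u)))
        s = np.diff(phases)
        spacings.append(s / s.mean())    # unfold: mean spacing -> 1
    return np.concatenate(spacings)

s = cue_spacings()
# CUE: p(s) ~ s^2 for small s, so P(s < 0.2) is tiny;
# Poisson: p(s) = exp(-s), so P(s < 0.2) = 1 - exp(-0.2) ~ 0.18.
print("P(s<0.2), CUE sample:", np.mean(s < 0.2))
print("P(s<0.2), Poisson   :", 1 - np.exp(-0.2))
\end{verbatim}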
Generally speaking, one can classify ensembles according to the asymptotic behaviour of the confining potential. Let us consider a power-law asymptotic scaling, $V(h) \sim |h|^\alpha$ for $|h|\gg 1$. If the exponent $\alpha$ satisfies $\alpha>1$, we speak of \emph{steep confinement}. When, on the other hand, $\alpha<1$, we deal with a \emph{weakly confined} RME. A particular weakly confined RME may be obtained from the generic one by a limiting procedure. Consider a potential of the form $V_\alpha(h) =\gamma^{-1} \alpha^{-2} (|h|^\alpha-1)^2$ for large~$|h|$. In the limit $\alpha\rightarrow0$ at fixed~$h$, we find the following confining potential
\begin{equation}
V({\bf H}) = \frac{1}{\gamma} \log^2 {\bf H},\quad |{\bf H}| \gg 1~,
\label{eq:LogSquaredPotentialDef}
\end{equation}
which shall be called the \emph{log-Gaussian} critical RME (or a $\log^2$-RME)~\cite{kravtsov2009random}. It was realized that several classes of {\it invariant} RMTs, such as \eqref{eq:LogSquaredPotentialDef}, exhibit intermediate statistics in terms of their eigenvalues and {\it multifractal} behavior in terms of the statistics of their eigenfunctions. Remarkably, both the spectral statistics and the eigenvector multifractality at the mobility edge were found to match the matrix ensemble prediction at the exact same value of $q$ \cite{canali}. This behavior is somewhat reminiscent of the spontaneous symmetry breaking conjectured in \cite{canali}, \cite{kravtsov-muttalib}. The intermediate RME also exists in a `circular' guise, i.e. one where the matrices under consideration are unitary instead of Hermitian, so that their eigenvalues lie on the complex unit circle. In this case, the potential is given by
\begin{equation}
V(z) = \log \left[ \prod_{j=1}^\infty (1+q^{j-1/2}z)(1+q^{j-1/2}z^{-1})\right]~,
\end{equation}
which, upon exponentiating, is proportional to the third Jacobi theta function. Again, due to the fact that certain expressions are more tractable in the circular case, we focus on this representation.

\subsection{Connection to topological field and string theories}
The intermediate RME described above was also found in a completely different context, namely, as a matrix model of $U(N)$ Chern-Simons theory on $S^3$ \cite{marinocs}. Chern-Simons theory is topological; indeed, Witten famously showed that its Wilson line expectation values are given by knot and link invariants \cite{wittenjones}. We suspect that it is not a coincidence that the matrix model of a topological theory exhibits the intermediate statistics characteristic of ergodic-to-nonergodic transitions. Indeed, the absence of a natural local order parameter in ergodic-to-nonergodic transitions suggests that it is natural to use topological tools for their characterization. There is, in fact, a relation between strongly Anderson-localized systems and noninteracting topological states \cite{Ryu_2010}. One of the most notable features of topological states of matter is the existence of propagating edge states, which are robust with respect to the application of arbitrarily strong perturbations at the boundary that break translational symmetry (e.g. disorder). The existence of extended, gapless degrees of freedom in strongly random fermionic systems is unusual, because of the phenomenon of Anderson localization. Thus, the degrees of freedom at the boundary of topological insulators (superconductors) must be of a very special kind, in that they entirely evade the phenomenon of Anderson localization.
The problem of classifying all {\it noninteracting} topological insulators in $d$ spatial bulk dimensions is equivalent to a classification problem of Anderson localization at the $(d-1)$-dimensional boundary. Therefore, the 10-fold classification scheme of noninteracting topological insulators \cite{10-fold} is equivalent to the Altland-Zirnbauer classification of ({\it noninteracting}) Anderson insulators \cite{AZ}. This correspondence, however, does not describe the transition from ergodic to nonergodic phases. This begs the question: {\it can nonergodic phases and ergodic-to-nonergodic phase transitions be generally related to certain {\bf interacting} topological states of matter?} Indeed, $U(N)$ Chern-Simons theory is such an interacting topological system which describes ergodic-to-nonergodic transitions. We conjecture that it is representative of a broader correspondence, and that the appropriate tools for the description of ergodic-to-nonergodic transitions are available in the topological sector of string theory. This provides a potential new bridge (apart from the AdS/CMT duality) between string theory and quantum many-body theory, from which a fruitful exchange of ideas can arise. This is the main motivation of the present work.

To further substantiate our conjecture, we note that close inspection of the matrix model potentials which have appeared in the context of topological strings (see e.g. \cite{dijkgraaf2009toda}, \cite{Sulkowski_2010}, \cite{Ooguri_2011}) shows that, as far as the authors are aware, all of them belong to the class of weak confinement potentials. As described above, weak confinement is a signature of intermediate statistics. On the other hand, it appears that many, if not all, of the known intermediate invariant one-matrix models that have appeared in the condensed matter literature and which exhibit a multifractal spectrum are described by some variant of {\it topological string theory}. In the simplest case of the Chern-Simons matrix model, the connection to string theory arises from the finding due to Witten \cite{Witten:1992fb} that $U(N)$ Chern-Simons theory on $S^3$ describes open topological strings on the cotangent space $T^*S^3$, in the presence of $N$ D-branes wrapping $S^3$. Later, Gopakumar and Vafa \cite{Gopakumar:1998vy}, \cite{Gopakumar:1998ki} found that these models correspond to closed topological strings on other spaces, called conifolds. This correspondence was named a geometric transition between the so-called A- and B-models and is one of the manifestations of {\it gauge-gravity duality} (see \cite{Auckly_2007} for an extensive review). In the $N\to \infty $ limit, which we focus on here, $U(N)$ Chern-Simons theory on $S^3$ undergoes a so-called {\it crystal melting transition} \cite{Okounkov:2003sp}, which is related to topological strings on certain Calabi-Yau manifolds \cite{okuda}. We conjecture that matrix models with a similar origin in topological string theory, such as those of $U(N)$ Chern-Simons theories on general lens spaces or of ABJM theory, also exhibit intermediate statistics.

\subsection{Summary of main results}
To clarify the connection between the intermediate RME and topological string theory, we calculate the asymptotic spectral form factor (SFF) for the Chern-Simons matrix model. The SFF is one of the central objects in RMT; it has clear features which differentiate between ergodic and nonergodic behaviors.
While our original motivation was the intermediate Chern-Simons matrix model mentioned above, the techniques we apply have far broader applicability. In particular, they can be applied to any matrix model with unitary matrices of infinite order and a weight function satisfying the assumptions of Szeg\"{o}'s limit theorem \cite{szego}. For this reason, we treat both the general and the specific cases, so that certain sections may be skipped depending on the particular interests of the reader.
\begin{itemize}
\item \textbf{Spectral Form Factor} To calculate the SFF, we express it as a sum over weighted unitary integrals with the insertion of Schur polynomials. These integrals take the form of certain Toeplitz minors \cite{bd}, \cite{trawid}, \cite{GGT1}, \cite{GGT2}. We assume we can write the weight function as $f(z) = E(x;z) E(x;z^{-1} )$ or $f(z) = H(x;z) H(x;z^{-1} )$, where $E(x;z)$ $(H(x;z))$ is the generating function of the elementary (homogeneous) symmetric polynomials, defined in terms of a set of variables $x=(x_1,x_2,\dots)$. We find that the SFF is then given by
\begin{equation}
\frac{1}{N}\avg{\abs{ \text{tr} U^n}^2 } = \left\{ \begin{array}{ll} N^{-1}\left[ n+p_n(x)^2\right] ~~,~~~n/N \leq 1 ~,\\ 1~~~~~~~~~~~~~~~~~~~~~~~~,~~~ n/N \geq 1 ~, \end{array} \right.
\end{equation}
where $p_n(x)$ are the power sum polynomials in terms of $x$. SFFs are typically characterized by what has been termed a {\it dip-ramp-plateau} shape, see e.g. \cite{bhrmt}, \cite{drp}, \cite{drp2}, \cite{forrd}. We find that the {\it dip} arises from the {\it disconnected} SFF, i.e. $\avg{\text{tr} U^n}^2=p_n(x)^2$. The factor $n$, which saturates at $n/N=1$, gives the {\it ramp} and {\it plateau}; this contribution arises from the {\it connected} SFF.
\item \textbf{Trace identities} As an auxiliary result of the calculation of the SFF, it is easy to show that, for $m,n\in \mathds{Z}^+$,
\begin{equation}
\avg{\text{tr} U^m \text{tr} U^{-n} } = m\delta_{mn} + \avg{\text{tr} U^m } \avg{\text{tr} U^{-n} }~.
\label{trace1}
\end{equation}
Further, for a partition $\lambda$ satisfying $\lambda_1+\lambda_1^t-1<n$ for some $n\in \mathds{Z}^+$, we have
\begin{equation}
\avg{\text{tr}_\lambda U \text{tr} U^{-n} } = \avg{\text{tr}_\lambda U } \avg{ \text{tr} U^{-n} } ~.
\label{trace2}
\end{equation}
Consider instead the case where $\lambda$ satisfies $\lambda_1+\lambda_1^t-1\geq n$, and define $m\coloneqq \lambda_1+\lambda_1^t-1-n$. Then, if $m\leq \lambda_1-\lambda_2$ and $m\leq \lambda_1^t-\lambda_2^t$, \eqref{trace2} holds as well.
\item \textbf{Dualities} It is easy to see that, upon replacing $E(x;z)$ by $H(x;z)$, we find exactly the same SFF. Indeed, since the SFF depends on the variables only through $(p_n(x))^2$, any two weight functions of the form $E(x;z) E(x;z^{-1})$ or $H(x;z)H(x;z^{-1})$ whose variables give the same $(p_n(x))^2$ for all $n$ lead to the same SFF. We suspect that this is an example of a larger class of dualities between various intermediate RMEs.
\item \textbf{Application to the Chern-Simons RME} We apply these results to the matrix model with weight function given by the third Jacobi theta function,
\begin{equation} \label{wf}
f(z) = \sum_{n\in \mathds{Z}}q^{n^2/2}z^n = (q;q)_\infty \prod_{k=1}^\infty (1+q^{k-1/2}z)(1+q^{k-1/2}z^{-1})~~,~~~0< \lvert q \rvert <1 ~.
\end{equation}
This is the matrix model described above, which was introduced in \cite{muttens} as a phenomenological model of intermediate statistics, and in \cite{marinomm} as a matrix model of $U(N)$ Chern-Simons theory on $S^3$.
In the latter context, the SFF is given by a topological invariant, specifically the \emph{HOMFLY invariant} \cite{homfly} of $(2n,2)$-torus links with one component in the fundamental and the other in the antifundamental representation. As far as the authors are aware, these invariants have heretofore not appeared in the literature. As for all matrix models considered here, the SFF is given by a linear ramp which saturates at a plateau, plus a disconnected contribution. Since the SFF corresponds to a $(2n,2)$-torus link, it follows that the disconnected contribution is the product of two $(n,1)$-torus knots. Calculating the invariant of an $(n,1)$-torus knot for general $N$, we find that it is given by the $q^n$-deformation of $N$, which simplifies even further upon taking the limit $N\to \infty$. We thus find the following expression for the SFF
\begin{equation}
\frac{1}{N}\avg{\abs{ \text{tr} U^n}^2 } = \left\{ \begin{array}{ll} N^{-1}\left[ n+ (q^{-n/2}-q^{n/2})^{-2}\right] ~~,~~~n/N \leq 1 ~,\\ 1~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~,~~~ n/N \geq 1 ~. \end{array} \right.
\end{equation}
We plot this below for $q =0.9^k$, $k=1,\dots,9$, where we add lines at $x + (q^x+q^{-x}-2)^{-1}$ for continuous $x$ as a guide to the eye. The trace identities in \eqref{trace1} and \eqref{trace2} of course apply to the Chern-Simons matrix model as well, where the latter entails that one can `unlink' an $(n,1)$-torus knot in the fundamental representation and an unknot in representation $\lambda$.
\begin{minipage}{\textwidth}
\begin{center}\vspace{.5cm}
\includegraphics[width=.8\linewidth]{possff}
\captionof{figure}{The SFF given in \eqref{sffcsres} plotted for $n=1,2,\dots,20$, with $q=0.9^k$, $k=1,\dots,9$. The continuous lines are added to guide the eye. For $q$ farther from 0, the disconnected contribution becomes larger, so that the dip is more pronounced and the SFF displays greater deviations from a simple linear ramp. The dashed lines indicate $k n$ = constant, in particular $kn =1,\dots,9$. From \eqref{knconst}, it follows that lines with $k n$ = constant lie at 45 degrees for \emph{any} SFF calculated here, i.e. any SFF given by \eqref{sffres}. Note that these SFFs saturate at a plateau at $n/N=1$, which is, of course, not indicated in this plot. \label{sffplot}}
\end{center}
\end{minipage}
\end{itemize}
\subsection{Outline of the paper}
This paper is organized as follows. In section \ref{rmt}, we set up the general framework of random matrix ensembles and introduce important objects, including the SFF. In section \ref{csmm}, we treat $U(N)$ Chern-Simons theory on $S^3$ and its expression as a matrix model, after which we consider the expression of knot and link invariants as matrix integrals. In section \ref{appkl}, we review the computation of such matrix integrals using their expression in terms of Toeplitz minors. These Toeplitz minors, in turn, are given by symmetric polynomials in terms of variables determined by the weight function. We then express the assumptions of Szeg\"{o}'s theorem as requirements on these symmetric polynomials, in particular the power sum polynomials. Further, we find in this section that, although the expressions in this work are generally valid for $N\to \infty$, in certain cases they are valid for finite $N$ as well. In section \ref{sffsection}, we set out to compute the SFF using the techniques outlined in the previous sections.
Using fundamental relations in the theory of symmetric polynomials, we derive the results for a general weight function outlined in the previous subsection. The specific case of the SFF of the Chern-Simons matrix model is worked out in section \ref{sfftheta}. We then consider the broader implications of these calculations in the concluding remarks. In the appendices, the reader can find more details on $q$-deformations and symmetric polynomials, with special attention given to Schur polynomials.

\section{Random matrix theory}
\label{rmt}
We will consider random matrix ensembles, which have partition functions in the form of a matrix integral,
\begin{equation}
\int dM P(M) ~.
\end{equation}
Here, $P(M)$ is the probability density function associated with $M$. Consider first the case where the matrices $M$ are Hermitian, so that they can be diagonalized by a unitary transformation. Integrating over $U(N)$ leads to an eigenvalue expression of the form \cite{mehta}
\begin{equation}
Z = C_N \int \prod_{i=1}^N \frac{d x_i}{2\pi} f(x_i) \prod_{i<j} (x_i-x_j)^2 ~,
\end{equation}
where $C_N$ is a multiplicative constant and $f(x)$ is called the \textit{weight function}. Choosing
\begin{equation}
P(M) \propto \exp(-\alpha \text{tr} M^2 )~,
\end{equation}
where $\alpha$ is a positive numerical constant, leads to the familiar GUE with weight function $f(x) = \exp(-\alpha x^2)$. This ensemble is characterized by fully extended eigenvectors and strong eigenvalue repulsion, which we will collectively refer to as Wigner-Dyson statistics. It was conjectured in the 1980s \cite{cgvg}, \cite{berrmt}, \cite{bgs} that the eigenvalues of quantum systems whose classical counterpart is chaotic exhibit Wigner-Dyson statistics (after an unfolding procedure, which is to say, a rescaling of the energies such that the average inter-energy spacing equals unity). This conjecture has been so extensively corroborated that Wigner-Dyson statistics are nowadays seen almost as a definition of quantum chaos.

We will also consider ensembles whose elements are themselves unitary matrices. Historically, the first example of such an ensemble is the CUE introduced by Dyson \cite{dyson}, mentioned in the introduction. Being unitary, the eigenvalues of these matrices are distributed on the complex unit circle. Such unitary ensembles have a partition function of the form
\begin{equation} \label{circens}
Z= \tilde{C}_N \int \prod_{i=1}^N \frac{d \phi_i}{2\pi } f(\phi_i)\prod_{i<j} \abs{e^{-i\phi_i}-e^{-i\phi_j}}^2 ~,
\end{equation}
where we denote the matrices under consideration by $U$. For $f(\phi_i)=$ constant, \eqref{circens} reduces to Dyson's CUE. In the limit $N\to \infty$, the CUE and GUE exhibit the same bulk statistics after unfolding, i.e. the CUE also describes systems whose classical counterpart is chaotic \cite{mehta}, \cite{haake}.

While the Wigner-Dyson ensembles described above provide excellent phenomenological descriptions of quantum chaotic systems, they naturally fail to describe systems with intermediate spectral statistics. An example of such a system consists of disordered electrons at the mobility edge of the Anderson localization transition \cite{anderson}, \cite{eversmirlin}. Muttalib and collaborators introduced a family of random matrix ensembles \cite{muttens} depending on a parameter $0\leq q \leq 1 $. This matrix ensemble appears in two guises, analogous to the GUE and CUE.
In case the matrices under consideration are Hermitian, the weight function is of the following ``log-squared'' form
\begin{equation} \label{mutl2}
f(x)~ \propto \exp \left( - \frac{1}{2 g_s} \log^2 x \right)~~,~~~\abs{x} \gg 1~.
\end{equation}
In the expression above, we define $q \coloneqq e^{-g_s}$, where $g_s$ is the string coupling constant in the manifestation of Chern-Simons theory as a topological string theory on the cotangent space. The domain of $f(x)$ in \eqref{mutl2} is the positive real line. In case the matrices we consider are themselves unitary, the weight function is given by
\begin{equation} \label{mutth3}
f(e^{i \phi})=\Theta_3(e^{i\phi} ;q ) = \sum_{n\in\mathds{Z}} q^{n^2/2}e^{in\phi}~.
\end{equation}
That is, the weight function is given by Jacobi's third theta function, which is defined on the complex unit circle.

\subsection{Density of states and spectral form factor}
An important object in random matrix theory is the density of states, given by
\begin{equation}
\rho(\phi) = \frac{1}{N} \sum_{i=1}^N \delta(\phi-\phi_i)= \frac{1}{2\pi N}\sum_{i=1}^N \sum_{n\in \mathds{Z}} e^{in(\phi-\phi_i)} = \frac{1}{2\pi N}\sum_{n\in \mathds{Z}} \text{tr} U^n e^{i n \phi}~,
\end{equation}
where we used the fact that
\begin{equation}
\text{tr} U^n = \sum_{i=1}^N e^{-i n \phi_i}~.
\end{equation}
The density of states, averaged over the matrix ensemble, gives the probability of finding an eigenvalue at $\phi$. From these level densities, we can construct the $n$-point density correlation functions for $n=2,3,\dots$ and various related quantities. An important example thereof, which is often used to characterize the eigenvalue statistics of various ensembles, is the SFF, which is the Fourier transform of the two-point level density correlation function \cite{mehta}. The two-point correlation function is given by
\begin{equation}
\langle \rho(\theta) \rho(\phi)\rangle = \frac{1}{N^2}\sum_{k,l \in \mathds{Z}} \langle \text{tr} U^k \text{tr} U^l \rangle e^{ik\theta+il\phi} -1 ~.
\end{equation}
The SFF is then defined as the expansion coefficients of $e^{in(\theta -\phi)}$, $n \in \mathds{Z}^+$, rescaled by a factor $N$ \cite{mehta}, \cite{haake},
\begin{equation}
K(n) = \frac{1}{N} \langle \abs{\text{tr} U^n}^2 \rangle~.
\end{equation}
The choice of normalization is made so that the CUE SFF saturates at unity. For future convenience, we also define the connected part of the SFF,
\begin{equation} \label{sffcon}
K(n)_c = K(n) - \frac{1}{N} \avg{\text{tr} U^n}^2~.
\end{equation}
For the CUE and GUE, the SFF is characterized by a linear ramp which saturates at $n=N$. For intermediate statistics, $K(n)$ displays deviations from this behavior, which can be seen in figure \ref{sffplot} and which will be further detailed below.

\section{Chern-Simons matrix model and knot/link invariants}
\label{csmm}
\subsection{Knot operator formalism}
We review the construction of Chern-Simons partition functions and knot invariants using Heegaard splitting \cite{wittenjones} and knot operators \cite{knotop}. Heegaard splitting provides a way to calculate the Chern-Simons partition functions of certain three-manifolds, which we denote by $M$. We construct $M$ by taking two separate three-manifolds $M_1$ and $M_2$ which share a common boundary $\Sigma$, i.e. $\partial M_1 \simeq \Sigma \simeq \partial M_2 $. $M$ is then constructed by acting on the common boundary $\Sigma$ with some homeomorphism $f$ and then gluing $M_1$ and $M_2$ together, which we write as
\begin{equation}
M = M_1 \bigcup_f M_2 ~.
\end{equation}
In this construction, we take the boundaries of $M_1$ and $M_2$ to have opposite orientation, so that $M$ is a closed manifold. Writing the Hilbert space of $\Sigma$ as $\mathcal{H}(\Sigma)$ and its conjugate as $\mathcal{H}^*(\Sigma)$, performing the path integral over $M_1$ gives a state $\ket{\Psi_{M_1}} \in \mathcal{H}(\Sigma)$, whereas performing the path integral over $M_2$ gives a state $\bra{\Psi_{M_2}}$ in the conjugate Hilbert space $ \mathcal{H}^*(\Sigma)$, due to the fact that the boundaries of $M_1$ and $M_2$ have opposite orientation. The homeomorphism $f$ induces a map $U_f$ on $\mathcal{H}(\Sigma)$, whose action we denote by
\begin{equation}
U_f : \mathcal{H}(\Sigma) \to \mathcal{H}(\Sigma)~.
\end{equation}
The partition function is then given by
\begin{equation}
Z(M) = \obket{\Psi_{M_1}}{U_f}{\Psi_{M_2}}~.
\end{equation}
In a seminal paper \cite{wittenjones}, Witten found that $\mathcal{H}(\Sigma)$ is given by the space of conformal blocks of the corresponding Wess-Zumino-Novikov-Witten (WZNW) model on $\Sigma$ at level $k$. In case there are no marked points on $\Sigma$ where Wilson lines are cut, i.e. if all Wilson lines can be embedded on $\Sigma$, $\mathcal{H}(\Sigma)$ is given by the characters of the WZNW model on $\Sigma$. We will be considering only the latter case. A relatively simple example of a Heegaard splitting is given by the division of $S^3$ into two three-balls that share a boundary $\Sigma = S^2$. The only knot that can be embedded on $S^2$ is the unknot, which is the trivial example of an unknotted circle. We therefore do not consider this example any further.

Let us instead consider the case where $M_1$ and $M_2$ are given by solid tori $S^1 \times D^2$ which share a boundary torus $\partial M_1 = S^1 \times S^1 = \partial M_2$. The manifolds which can be constructed via such a Heegaard splitting along a torus are known as lens spaces \cite{geom}. The simplest example of a lens space is found by taking $f$ to be the identity map. In this case, we glue the two copies of $D^2$ along their boundaries to form $S^2$, so that the resulting space is given by $S^2 \times S^1$. We normalize the Chern-Simons partition function for $S^2 \times S^1$ to unity. Let us consider an example where we act on $T^2$ with a nontrivial homeomorphism. The group of homeomorphisms of $T^2$ is given by $SL(2;\mathds{Z})$, which consists of matrices of the form
\begin{equation}
\begin{pmatrix} a & b \\ c & d \end{pmatrix} ~, ~~~ ad-bc =1 ~, ~~~ a,b,c,d \in \mathds{Z}~.
\end{equation}
$SL(2;\mathds{Z})$ is generated by the modular $S$- and $T$-transformations. Representing the 1-cycles of the torus by basis vectors $\begin{pmatrix}1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix}0 \\ 1 \end{pmatrix}$, the $S$- and $T$-transformations can be written as
\begin{equation}
S= \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} ~, ~~ T= \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} ~.
\end{equation}
That is, $S$ interchanges the 1-cycles and reverses the orientation of the torus, while $T$ cuts open the torus along a 1-cycle to form a cylinder, twists one end of the cylinder by $2\pi$, and glues the two ends of the cylinder back together. Consider the case where we glue two solid tori $M_{1,2}$ along their boundaries after acting with an $S$-transformation. Since $S$-transformations exchange the 1-cycles on the torus, the contractible cycle of $M_1$ is glued to the non-contractible cycle of $M_2$ and vice versa.
We thus find a closed three-manifold with no non-contractible cycles which, by the Poincar\'{e} conjecture, is homeomorphic to $S^3$. The construction of torus knots is analogous to the construction of lens spaces in the sense that, if we insert a Wilson line corresponding to an unknot on the boundary torus, we can act with an arbitrary $SL(2;\mathds{Z})$ transformation on the torus which turns the unknot into a non-trivial torus knot. Let us denote the torus knot operators, to be defined more precisely below, by $\mathcal{W}_\lambda^{(p,q)}$, where $\lambda$ labels the irreducible representation of the Wilson line and $p$ and $q$ are integers which count the winding of the knot around the non-contractible and contractible cycles of the torus, respectively. Note that $p$ and $q$ are coprime for torus knots, whereas for $p$ and $q$ not coprime we would get a torus link, which is a generalization of a torus knot with more than one component (i.e. more than one knotted piece of string). The number of components of a torus link equals the greatest common divisor of $p$ and $q$. From the definition of the $S$- and $T$-transformations, it is clear that they act on torus knots as follows
\begin{align*}
S^{-1} \mathcal{W}^{(p,q)} S &= \mathcal{W}^{(q,-p)} ~, \\
T^{-1} \mathcal{W}^{(p,q)} T & = \mathcal{W}^{(p,q+p)}~.
\end{align*}
For example, if we insert an unknot around the non-contractible cycle of the torus and act $n$ times with the $T$-transformation, we get a knot which still winds around the non-contractible cycle once but which now also winds around the contractible cycle $n$ times. Note that this is topologically still an unknot; the additional winding around the contractible cycle only gives rise to a multiplicative framing factor. Similar knots will play an important role in the comparison with random matrix theory, to be outlined below. It is easy to see that modular transformations map the set of torus knots into itself, as these transformations do not change the number of components. Indeed, for any pair of coprime integers $(p,q)$, one can easily see that $(p,q+p)$ are also coprime, so that the number of components is unchanged under modular transformations. Further, due to B\'{e}zout's lemma \cite{tignol}, there is an $SL(2;\mathds{Z})$-transformation corresponding to any pair of coprime integers, so that we can construct any torus knot by acting on an unknot with an $SL(2;\mathds{Z})$-transformation.
\begin{center}\vspace{1cm}
\includegraphics[width=0.7 \linewidth]{toruslinks}
\captionof{figure}{Two examples of $(2n,2)$-torus links. The Hopf link, on the left, is the $(2,2)$-torus link. On the right, we have the $(4,2)$-torus link. \label{knotslinks}}
\end{center}\vspace{1cm}
The explicit form of the knot operators mentioned above was found by Labastida, Llatas, and Ramallo \cite{knotop}, using the relation to WZNW-models previously found by Witten \cite{wittenjones}. Let us summarize the salient points of the knot operator formalism. As mentioned above, $\mathcal{H}(\Sigma)$ is given by the conformal blocks of the corresponding WZNW-model on $\Sigma$ with group $G$ at level $k$. In the case of $\Sigma = T^2$ without marked points, which we will be considering henceforth, $\mathcal{H}(\Sigma)$ consists of the characters of integrable representations of the corresponding WZNW-model. We denote the set of fundamental weights by $\{ v_i \}$ and the Weyl vector by $\rho = \sum_i v_i$.
A representation with highest weight $\Lambda$ is integrable if $p \coloneqq \rho + \Lambda = \sum_i p_i v_i$ lies in the fundamental Weyl chamber, that is,
\begin{equation}
\sum_i p_i < k+ y ~~, ~~~ p_i > 0 ~, ~~ \forall ~i~,
\end{equation}
where $y$ is the dual Coxeter number of $G$, which equals $N$ for $G=U(N)$ and $N-1$ for $G=SU(N)$. Remember that an irrep with highest weight $\Lambda = \sum_i \Lambda_i v_i$ corresponds to a Young tableau where the length of the $i^\text{th}$ row is given by
\begin{equation}
\Lambda_i + \Lambda_{i+1} + \dots +\Lambda_{I}~,
\end{equation}
where $I$ equals $N$ in the case of $U(N)$ and $N-1$ in the case of $SU(N)$. See appendix \ref{sympol} or e.g. section 13.3.2 of \cite{difran} for more background information on partitions and their role in representation theory. From now on we will take $G=U(N)$, so that $y=N$. We will denote the ket state corresponding to $p$ by $\ket{p}$; these states can be chosen in such a way that they form an orthonormal basis. The vacuum state, that is, the state without any Wilson line inserted, is given by $\ket{\rho} \eqqcolon \ket{0}$. If we act with a knot operator corresponding to an unknot in the representation corresponding to $\Lambda$, the result is \cite{knotop}
\begin{equation} \label{unknot}
\mathcal{W}_\Lambda^{(1,0)} \ket{\rho} = \ket{\rho + \Lambda} = \ket{p}~.
\end{equation}
The only further ingredients we need are the explicit expressions for the Hilbert space operators induced by the modular transformations. We simply state these here; further details may be found in \cite{knotop}:
\begin{align} \label{modtrafo}
T_{pp'} & = \delta_{p,p'} e^{2\pi i (h_p - c/24)}~, \notag\\
S_{pp'} & = \frac{i^{N(N-1)/2}}{N^{N/2}} \left(\frac{N}{k+N}\right)^{\frac{N-1}{2}} \sum_{w \in W} \epsilon(w) \exp\left( \frac{-2\pi i p \cdot w(p')}{k+N}\right) ~.
\end{align}
In the above expressions, $W$ is the Weyl group, $\epsilon(w)$ is the signature of the Weyl reflection $w$, $c$ is the central charge of the WZNW-model, and $h_p$ is the conformal weight of the primary field corresponding to $p$, which is given by
\begin{equation}
h_p = \frac{p^2-\rho^2}{2(k+y)} ~.
\end{equation}

\subsection{Chern-Simons matrix model}
\label{sectcsmm}
Let us consider how the matrix model description of Chern-Simons theory arises. As explained above, $S^3$ can be constructed via a Heegaard splitting along a torus on which we act with an $S$-transformation. We thus find that the Chern-Simons partition function on $S^3$ is given by
\begin{equation}
Z(S^3) = \obket{0}{S}{0} = S_{0 0} ~.
\end{equation}
We plug in the expression for $S_{00}$ from equation \eqref{modtrafo} and use Weyl's denominator formula,
\begin{equation} \label{wdf}
\sum_{w \in W} \epsilon(w) e^{w(p)} = \prod_{\alpha > 0 } 2 \sinh( \alpha/2)~,
\end{equation}
where $\alpha$ are the positive roots of $U(N)$. Expressing the roots of $U(N)$ in terms of Dynkin coordinates $x_i$, we find
\begin{equation}
Z(S^3) = \frac{e^{- \frac{g_s}{12}N(N^2-1)}}{N!} \int \prod_{i=1}^N \frac{dx_i}{2 \pi} e^{-x_i^2 /2g_s } \prod_{i<j}\left( 2 \sinh \frac{x_i-x_j}{2} \right)^2 ~.
\end{equation}
Lastly, we define a new set of variables $ y_i \coloneqq e^{Ng_s + x_i}$, in which the partition function is given by \cite{marinomm}
\begin{equation} \label{log2}
Z(S^3) = \frac{e^{- \left( 7N^3 g_s /12 + N^2 g_s /2 - Ng_s /24 \right) }}{N!} \int_0^\infty \prod_{i=1}^N \frac{dy_i}{2\pi} \prod_{i<j} (y_i-y_j)^2 \exp \left(- \frac{1}{2g_s } \sum_i \log^2(y_i) \right) ~.
\end{equation}
Alternatively, we can use the following expression
\begin{equation} \label{thetaid}
q^{n^2/2}=\int_0^{2\pi}\frac{d\phi}{2\pi}\Theta_3(e^{i\phi};q) e^{-in \phi}~,
\end{equation}
where we repeat the definition of the third Jacobi theta function
\begin{equation}
\Theta_3(e^{i\phi};q)=\sum_{n\in\mathds{Z}} q^{n^2/2}e^{in \phi}~.
\end{equation}
This gives
\begin{align}
\sum_{w \in W} \epsilon(w) q^{\frac{1}{2} (w(\rho)-\rho)^2} & = \frac{1}{\abs{W}} \sum_{w,w'\in W} \epsilon(w)\epsilon(w') q^{\frac{1}{2}( w(\rho)-w'(\rho))^2} \notag\\
& = \frac{1}{\abs{W}} \int \prod_{i=1}^N \frac{d \phi_i}{2\pi} \Theta_3 (e^{i\phi_i};q) \sum_{w,w' \in W} \epsilon(w)\epsilon(w') e^{-i( w(\rho)-w'(\rho))\cdot \phi }~,
\end{align}
where we added another summation over the Weyl group in the first equality and applied \eqref{thetaid} in the second. Lastly, the Weyl group $W$ is isomorphic to the symmetric group $S_N$, so that $\abs{W} = N!$. Using the Weyl denominator formula again leads to \cite{okuda}, \cite{dolivet}
\begin{equation} \label{theta3}
Z = \frac{1}{N!} \int_0^{2\pi} \prod_{j=1}^N \frac{d\phi_j}{2\pi} \Theta_3(e^{i\phi_j};q) \prod_{j<k} \abs{e^{i\phi_j} -e^{i\phi_k}}^2 ~.
\end{equation}
Note that \eqref{log2} and \eqref{theta3} correspond precisely to the matrix ensembles introduced in \cite{muttens}, given in \eqref{mutl2} and \eqref{mutth3}, respectively. Further, using the Jacobi triple product formula, $\Theta_3$ can be written as a specialization of $E(x;z)$, the generating function of the elementary symmetric polynomials. We can also replace $E(x;z)$ by $H(x;z)$ at the cost of transposing all representations involved in the calculation; this amounts to replacing $\Theta_3(z;q)$ by $\frac{1}{\Theta_3(-z;q)}$. Since the SFF is invariant under transposition of all representations (see e.g. \eqref{sff}), the calculation of the SFF done below is also valid for the case where the weight function is of the form $\frac{1}{\Theta_3}$. Indeed, the above argument applies to any specialization, i.e. to any choice of variables $x_i$. We will therefore use $E(x;z)$ and $H(x;z)$ interchangeably in the computations below.

\subsection{Computing torus knot and link invariants in the Chern-Simons matrix model}
We now consider knot and link invariants and their computation in the Chern-Simons matrix model. First, we treat the multiplication properties of knot operators. If we take $\mathcal{W}^\mathcal{K}_\lambda$ to be a knot operator corresponding to a knot $\mathcal{K}$ in representation $\lambda$, we can write
\begin{equation} \label{wwk}
\mathcal{W}^\mathcal{K}_{\lambda} \mathcal{W}^\mathcal{K}_{\mu} = \sum_\nu N_{\lambda \mu }^\nu \mathcal{W}^\mathcal{K}_\nu ~ .
\end{equation}
The coefficients $N_{\lambda \mu }^\nu$ in \eqref{wwk} are the fusion coefficients of the WZNW-model. When both $k$ and $N$ are much larger than any of the representations under consideration, the $N_{\lambda \mu }^\nu$ are given by Littlewood-Richardson coefficients. This allows us to construct the invariants of torus links. We label a torus link by $P,Q \in \mathds{Z}$, where the number of components is given by $S = \gcd (P,Q)$ and the representations are labelled by $j \in \{ 1, \dots ,S\}$.
These links are given by \cite{knotop}, \cite{isidro}, \cite{torlink}
\begin{equation}
\prod_{j = 1}^S \mathcal{W}^{P/S , Q/S}_{\lambda_j} = \sum_\mu N_{\lambda_1 , \dots , \lambda_S}^\mu \mathcal{W}^{P/S , Q/S}_{\mu} ~,
\end{equation}
where $N_{\lambda_1 , \dots , \lambda_S}^\mu$ are the generalized Littlewood-Richardson coefficients appearing in the product of representations $\lambda_1 \otimes \dots \otimes \lambda_S$. We now outline the computation of torus knot and link invariants using the matrix model for $U(N)$ Chern-Simons theory on $S^3$. The simplest knot, the unknot, is given by the ensemble average of the matrix trace in the corresponding representation \cite{bem}. That is,
\begin{equation}
W_\lambda \coloneqq \avg{ \mathcal{W}^{(1,0)}_\lambda } = \avg{\text{tr}_\lambda U}~.
\end{equation}
If we diagonalize a matrix $U$ to give diag$(d_1,d_2,\dots,d_N)$, it is well known that
\begin{equation} \label{trasl}
\text{tr}_\lambda U = s_\lambda (d_1,d_2,\dots,d_N) = s_\lambda (d) ~,
\end{equation}
where $s_\lambda(d)$ is the \textit{Schur polynomial} corresponding to representation $\lambda$ in terms of the variables $d_i$. The reader can consult appendix \ref{schurapp} or the books by Macdonald \cite{mcd} and Stanley \cite{stanley} for more information on Schur polynomials. In the remainder of this work, we will often write traces without specified representations, in which case the trace is understood to be in the fundamental representation.

In general, we can assign an orientation to a knot or component of a link, which corresponds to a continuous non-zero tangent vector along $\mathcal{K}$. When we project a knot or link into the plane, we can assign a sign $+$ or $-$ to each crossing, as in figure \ref{cross}.
\begin{center}\vspace{1cm}
\includegraphics[width=0.6 \linewidth]{arrows}
\captionof{figure}{After projecting a knot or link into the plane, crossings are given a sign in the way indicated above. \label{cross}}
\end{center}\vspace{1cm}
We denote by $\overline{\lambda}$ the representation conjugate to $\lambda$. We then have \cite{marinocs}
\begin{equation} \label{truinv}
\text{tr}_\lambda U^{-1}=\text{tr}_{\overline{\lambda}} U~.
\end{equation}
In the language of knot theory, taking $\text{tr}_\lambda U$ to $\text{tr}_\lambda U^{-1}$ corresponds to inverting the orientation of the component carrying representation $\lambda$. Of course, for the unknot, this does not matter, as reverting the orientation can be compensated by a simple parity transformation. The same is true for the Hopf link, as overcrossings can be freely changed into undercrossings. To convince oneself of this point, one can assign an orientation to both components of the Hopf link in figure \ref{knotslinks}, and rotate one component along an axis parallel to the projection plane whilst keeping the other component fixed. For more complicated knots or links, such as the $(4,2)$-torus link on the right hand side of figure \ref{knotslinks}, overcrossings can no longer be turned into undercrossings, and inverting the orientation of one component will generally lead to a different expectation value.
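As a quick numerical aside (an illustration of ours, using the standard bialternant determinant formula for $s_\lambda$, reviewed in appendix \ref{schurapp}), one can check \eqref{trasl} for small cases, as well as the hook-shaped expansion of $\text{tr}\, U^n$ derived next.
\begin{verbatim}
# Check tr_lambda U = s_lambda(d) via the bialternant formula
# s_lambda(d) = det(d_i^{lambda_j + N - j}) / det(d_i^{N - j}).
import numpy as np

def schur(lam, d):
    N = len(d)
    lam = list(lam) + [0] * (N - len(lam))      # pad with zero rows
    num = np.array([[di ** (lam[j] + N - 1 - j) for j in range(N)]
                    for di in d])
    den = np.array([[di ** (N - 1 - j) for j in range(N)] for di in d])
    return np.linalg.det(num) / np.linalg.det(den)

rng = np.random.default_rng(2)
d = np.exp(1j * rng.uniform(0, 2 * np.pi, 4))   # eigenvalues in U(4)

# Fundamental representation: s_(1)(d) = tr U.
print(schur((1,), d), d.sum())

# Alternating sum over hooks reproduces tr U^n (see the next equation).
n = 3
hooks = sum((-1) ** r * schur((n - r,) + (1,) * r, d) for r in range(n))
print(hooks, (d ** n).sum())
\end{verbatim}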
Let us consider more complicated objects involving integer powers of $U$. Generally, any product of traces of a $GL(N,\mathds{C})$ matrix $U$,
\begin{equation}
S_\alpha = (\text{tr}U)^{\alpha_1} (\text{tr}U^2)^{\alpha_2} \dots (\text{tr}U^s)^{\alpha_s} ~ ~,~~~ \alpha_i \in \mathds{Z}^+~,
\end{equation}
can be expanded in characters of $GL(N,\mathds{C})$, denoted by $\chi_\lambda(U)$, with characters of the symmetric group $S_l$ as expansion coefficients, where $l=\sum_i i \, \alpha_i$ \cite{ltw}. If $U \in U(N)$, the characters are given by Schur polynomials; see appendix \ref{schurapp} for more background. We then have
\begin{equation} \label{trunalph}
\text{tr}U^{\alpha_1} \text{tr}U^{\alpha_2} \dots \text{tr}U^{\alpha_k } = \sum_R \chi_R (C(\vec{k} )) \text{tr}_R U ~,
\end{equation}
where $\sum_R$ is a sum over all Young tableaux with total number of boxes equal to $l=\alpha_1 + \dots + \alpha_k$, and $ \chi_R (C(\vec{k} ))$ is the character of the symmetric group $S_{l}$ in representation $R$ evaluated at the conjugacy class of $S_{l}$ given by cycle lengths $\alpha_1, \alpha_2, \dots, \alpha_k$. Despite its concise notation, \eqref{trunalph} is generally rather difficult to compute due to the sum over partitions of $l$. However, in certain cases the above expression can be calculated. Taking $U\in U(N)$ with eigenvalues $d_i$ and choosing $\alpha_1 = n$ and $\alpha_i=0$ for $ i\neq 1$, we find \cite{ltw}
\begin{equation} \label{traun}
\text{tr} U^n = \sum_i d_i^n= \sum_\lambda \chi^\lambda_{(n)} s_\lambda (U)= \sum_{r = 0}^{n-1} (-1)^r s_{(n-r,1^r)}(U) ~,
\end{equation}
where we used the fact that the characters of the symmetric group satisfy
\begin{equation}
\chi^\lambda_{(n)}= \begin{cases} (-1)^r~, & \text{if $\lambda = (n-r,1^r)$}~,\\ 0~~~~~~~, & \text{otherwise}~. \end{cases}
\end{equation}
In words, \eqref{traun} states that $\text{tr} U^n$ is given by the sum over hook-shaped irreps with $n$ boxes, which appear with alternating signs. One may recognize from \eqref{traun} that this is the expression of the $n\textsuperscript{th}$ power sum polynomial in terms of Schur polynomials. For $n=4$, one can express \eqref{traun} in terms of Young diagrams as follows.\\
\begin{center}
\ydiagram{4} ~ $-$ ~ \ydiagram{3,1} ~ $+$ ~ \ydiagram{2,1,1} ~ $-$ ~ \ydiagram{1,1,1,1}
\end{center}
\vspace{.4cm}
One can show \cite{bem}, \cite{stevan}, \cite{gias} that $\avg{\text{tr} U^n}$ gives the invariant of an $(n,1)$-torus knot \cite{jonesrosso}, which differs from any $(n,m)$-torus knot only by a framing factor. Equation \eqref{traun} gives an expansion of $\text{tr} U^n$ in terms of Schur polynomials. Explicit expressions for its expectation value can be found in section \ref{sfftheta}. As noted above, an $(n,1)$-torus knot is topologically equivalent to an unknot and differs only due to framing \cite{bem}. However, terms of the form $\avg{\text{tr} U^{ n} \text{tr} U^{- n} } $, such as appear in the SFF, give $(2n,2)$-torus links, which are not topologically trivial for any $n\in \mathds{Z}\setminus \{0\}$.

\section{Matrix integrals and Toeplitz minors}
\label{appkl}
We review the computation of unitary group integrals over Schur polynomials using a method outlined in \cite{GGT1} and \cite{GGT2}, which in turn draw from results derived by Bump and Diaconis \cite{bd} and Tracy and Widom \cite{trawid}, among others. We start from an absolutely integrable function on the unit circle in $\mathds{C}$,
\begin{equation} \label{fun}
f(e^{i\theta}) = \sum_{k\in\mathds{Z}} d_k e^{ik\theta}~.
\end{equation}
We will specifically be considering the case where $d_k = d_{-k}$, so that $f(e^{i\theta})$ is real-valued. We further require that $f(e^{i\theta})$ satisfies the assumptions of Szeg\"{o}'s theorem. That is, we write $f(e^{i\theta})$ as
\begin{equation}
f(e^{i\theta}) = \exp\left(\sum_{k\in\mathds{Z}}c_k e^{ik\theta}\right)~,
\end{equation}
and demand that
\begin{equation} \label{szegreq}
\sum_{k\in \mathds{Z}} \abs{c_k} <\infty ~~,~~~ \sum_{k \in \mathds{Z} } \abs{k}\abs{c_k}^2 < \infty ~.
\end{equation}
From the Fourier coefficients of $f$, we construct a \textit{Toeplitz matrix}, which is a matrix that is constant along its diagonals,
\begin{equation}
T(f) = (d_{j-k} )_{j,k \geq 1} ~.
\end{equation}
We denote by $T_N(f)$ the $N$ by $N$ principal submatrix of $T(f)$, i.e. the matrix obtained from $T(f)$ by taking its first $N$ rows and columns and neglecting the remainder. We will see that various matrix integrals with weight function $f$ can be expressed as minors of $T_N(f)$, that is, as determinants of matrices obtained from $T_N(f)$ by removing a (necessarily equal) number of rows and columns. For a unitary matrix $U$ with eigenvalues $e^{i\theta_1},e^{i\theta_2},\dots$, we write
\begin{equation} \label{fu}
\tilde{f}(U) = \prod_{k=1}^N f(e^{i\theta_k} ) ~.
\end{equation}
We employ Weyl's integral formula \cite{weylint} to express the integral of $\tilde{f}(U)$ over $U(N)$ with respect to the Haar measure as
\begin{equation}
\int \tilde{f}(U)dU = \frac{1}{N!}\int_0^{2\pi} \prod_{j<k}\abs{e^{i\theta_j}-e^{i\theta_k}}^2 \prod_{k=1}^N f(e^{i\theta_k})\frac{d\theta_k}{2\pi}~,
\end{equation}
where the angles satisfy $0 \leq \theta_k<2\pi$. The expression for the Vandermonde determinant in \eqref{vdmdet} allows us to use an identity due to Andrei\'{e}f, sometimes referred to as the Heine or Gram identity \cite{andreief}. Take $g_j$ and $h_j$, $j\in \{1,2,\dots,N\}$, to be two sequences of integrable functions on some measure space with measure $\mu$; then
\begin{equation}
\frac{1}{N!}\int \det(g_j(x_k))_{j,k=1}^N \det(h_j(x_k))_{j,k=1}^N \prod_{k = 1}^Nd\mu(x_k) = \det\left(\int g_j(x)h_k(x)d\mu(x)\right)_{j,k=1}^N ~.
\end{equation}
Choosing $g_j(e^{-i\theta})=e^{i(N-j)\theta}=h_j(e^{i\theta})$ and $d\mu(e^{i\theta})=f(e^{i\theta})\frac{d\theta}{2\pi}$, we find
\begin{equation} \label{ptfct}
\int \tilde{f}(U)dU = \det(d_{j-k})_{j,k=1}^N~,
\end{equation}
where $d_k$ are again the Fourier coefficients of $f$,
\begin{equation}
d_k= \frac{1}{2 \pi} \int_0^{2 \pi} f(e^{i\theta}) e^{-i k \theta} d\theta ~.
\end{equation}
Now let $\lambda=(\lambda_1,\dots,\lambda_m)$ and $\mu= (\mu_1,\dots,\mu_n)$ be partitions of $\abs{\lambda} = \sum_i^{\ell(\lambda)} \lambda_i$ and $\abs{\mu} = \sum_j^{\ell(\mu)} \mu_j$, respectively. Here, $\lambda_i, ~\mu_j \in \mathds{Z}^+$ and $\ell(\cdot)$ is the length of the partition. Ordering the parts as $\lambda_i\geq \lambda_{i+1}$, and similarly for $\mu_j$, these partitions label Young tableaux in the standard way. One then obtains a \textit{Toeplitz minor} $T_N^{\lambda,\mu}(f)$ via the following procedure:
\begin{itemize}
\item We start from $T_{N+\kappa}(f)$, where $\kappa = \max\{ \lambda_1,\mu_1 \}$.
\item If $\lambda_1-\mu_1 >0$, we remove the first $\lambda_1-\mu_1$ columns from $T_{N+\kappa}(f)$; otherwise, we remove the first $\mu_1-\lambda_1$ rows.
\item We then keep the first row and remove the next $\lambda_1-\lambda_2$ rows, after which we again keep the first row and remove the next $\lambda_2-\lambda_3$ rows, and so on and so forth.
\item We repeat the third step with $\lambda_i$ replaced by $\mu_i$, removing columns instead of rows.
\end{itemize}
Note that the second step ensures that the resulting matrix $T_N^{\lambda,\mu}(f)$ is of order $N$. We write $s_\lambda(U) = s_\lambda( e^{i\theta_1},e^{i\theta_2},\dots)$, where $s_\lambda$ are Schur polynomials, which we review in appendix \ref{schurapp}. The determinant of $T_N^{\lambda,\mu}(f)$ can then be expressed as \cite{bd}, \cite{adlervmbk}
\begin{align} \label{toepdn}
D_N^{\lambda,\mu} (f) & \coloneqq \det T_N^{\lambda,\mu}(f) = \int_{U(N)}s_\lambda(U^{-1}) s_\mu(U) \tilde{f}(U) dU \notag \\
& = \frac{ 1}{N! (2\pi)^N} \int_0^{2\pi} s_\lambda(e^{-i\theta_1} ,\dots, e^{-i\theta_N}) s_\mu(e^{i\theta_1} ,\dots, e^{i\theta_N}) \prod_{j=1}^N f(e^{i\theta_j}) \prod_{1\leq j<k\leq N} \abs{e^{i\theta_j} - e^{i\theta_k}}^2 d\theta_j~,\notag\\
& = \det\left(d_{j-\lambda_j-k+\mu_k}\right)_{j,k=1}^N~.
\end{align}
One can recognize the pattern of striking rows and columns involved in the construction of $T_N^{\lambda,\mu}(f)$, as the index $j$ is shifted to $j-\lambda_j$ and $k$ to $k-\mu_k$. One can easily verify that, for two functions of the form
\begin{equation}
a(e^{i\theta})=\sum_{k\leq 0}a_k e^{ik\theta}~~,~~~ b(e^{i\theta})=\sum_{k\geq 0}b_k e^{ik\theta}~,
\end{equation}
the associated Toeplitz matrices satisfy
\begin{equation}
T(ab) = T(a) T(b)~.
\end{equation}
Let us therefore write $f(e^{i\theta})$ as follows
\begin{equation} \label{fhh}
f(e^{i\theta})=H(x;e^{i\theta})H(y;e^{-i\theta})~,
\end{equation}
where $H(x;z)$ is the generating function of the homogeneous symmetric polynomials $h_k$ given in \eqref{genfuncsymp}, and where we assume that $h_k(x)$ and $h_k(y)$ are square-summable, i.e.
\begin{equation}
\sum_k \abs{h_k(x)}^2 < \infty~,
\end{equation}
and similarly for $h_k(y)$. Gessel \cite{gessel} showed that, for $f$ as in \eqref{fhh},
\begin{equation} \label{gesselid}
D_N(f) = \sum_{\ell(\nu)\leq N} s_\nu (x) s_\nu(y)~,
\end{equation}
where one should note that the sum runs over all partitions $\nu$ with at most $N$ rows. Here, we only consider the case where $y=x \in \mathds{R}$, but the expressions easily generalize to $x \neq y$ and $x,y \in \mathds{C}$, subject to the assumptions of Szeg\"{o}'s theorem. Equation \eqref{gesselid} can then be generalized as \cite{GGT1}, \cite{GGT2}
\begin{equation} \label{thm5}
\int s_\lambda(U^{-1})s_\mu(U)\tilde{f}(U)dU= \sum_{\ell(\nu)\leq N}s_{\nu/\lambda}(x)s_{\nu/\mu}(x)~.
\end{equation}
In the above expressions, we can replace $H(x;z)$ by $E(x;z)$ if we simultaneously transpose all partitions. Let us therefore consider the Jacobi triple product expansion of the third theta function
\begin{align} \label{thetatrpr}
\sum_{n\in \mathds{Z}}q^{n^2/2}e^{in\theta} &= (q;q)_\infty \prod_{k=1}^\infty (1+q^{k-1/2}e^{i\theta})(1+q^{k-1/2}e^{-i\theta})\notag\\
&= (q;q)_\infty E(x;e^{i\theta} )E(x ;e^{-i\theta} )~,
\end{align}
where we define $x=(q^{1/2},q^{3/2},\dots)$ in the last line. Then, $f(e^{i\theta}) = \Theta_3(e^{i\theta};q) = (q;q)_\infty ~ E(x ;e^{i\theta} ) E(x ;e^{-i\theta} )$ is the weight function of the Chern-Simons matrix model. This example is treated extensively in \cite{GGT2}; more details and proofs can be found there. Using \eqref{ptfct} with $d_k=q^{k^2/2}$, we see that the partition function is given by
\begin{equation}
Z_N = \int \tilde{f}(U) dU =\det(q^{(j-k)^2/2})_{j,k=1}^N = q^{\sum_{j=1}^N j^2} \det(q^{-jk})_{j,k=1}^N = \prod_{j=1}^{N-1}(1-q^j)^{N-j}~,
\end{equation}
which is a well-known result.
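Both the triple product expansion \eqref{thetatrpr} and the product formula for $Z_N$ are easy to verify numerically. The following truncation-based sketch is our own illustration and plays no role in the derivations.
\begin{verbatim}
# Checks: (i) Jacobi triple product for Theta_3; (ii) the identity
# Z_N = det(q^{(j-k)^2/2})_{j,k=1..N} = prod_{j=1}^{N-1} (1-q^j)^{N-j}.
import numpy as np

q = 0.3

def theta3_sum(z, nmax=80):
    n = np.arange(-nmax, nmax + 1)
    return np.sum(q ** (n ** 2 / 2.0) * z ** n)

def theta3_product(z, jmax=80):
    j = np.arange(1, jmax + 1)
    qpoch = np.prod(1.0 - q ** j)               # (q;q)_infinity
    return qpoch * np.prod((1 + q ** (j - 0.5) * z)
                           * (1 + q ** (j - 0.5) / z))

z = np.exp(0.7j)                                # a point on |z| = 1
print(theta3_sum(z), theta3_product(z))         # agree to precision

def z_toeplitz(N):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.det(q ** ((j - k) ** 2 / 2.0))

def z_product(N):
    return np.prod([(1.0 - q ** j) ** (N - j) for j in range(1, N)])

for N in (2, 3, 5, 8):
    print(N, z_toeplitz(N), z_product(N))
\end{verbatim}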
\subsection{Infinite $N$} Let us now take the limit $N \to \infty$. From \eqref{thm5} and the fact that [Chapter I.5, example 26 in \cite{mcd}] \begin{equation} \label{mcdssss} \sum_\nu s_{\nu/\mu}(y)s_{\nu/\lambda}(x)=\sum_\nu s_{\lambda/\nu}(y)s_{\mu/\nu}(x) \sum_\kappa s_\kappa(y)s_\kappa(x) ~, \end{equation} where the sums run over all partitions, we have \cite{GGT1}, \cite{GGT2} \begin{equation} \label{largeN} W_{\lambda \mu} \coloneqq \frac{ \int s_\lambda(U^{-1})s_\mu(U)\tilde{f}(U)dU}{ \int \tilde{f}(U)dU}= \sum_\nu s_{\lambda/\nu}(x) s_{\mu/\nu}(x)~. \end{equation} Taking \eqref{largeN} with $\mu=\emptyset$, we see that the matrix integral of a single trace in some representation \eqref{toepdn} takes a particularly simple form: the evaluation of the integral amounts to replacing the eigenvalues appearing in the Schur polynomial by the variables $x_i$ in $f(z) = E(x;z)E(x;z^{-1}) $ or $f(z) = H(x;z)H(x;z^{-1}) $. For $f(e^{i\theta})$ equal to $\Theta_3(e^{i\theta})$ in \eqref{thetatrpr}, $W_{\lambda \mu}$ gives the HOMFLY invariant of the Hopf link \cite{GGT1}, \cite{GGT2}. We see that it is given by the following expression, \begin{equation} \label{largencs} W_{\lambda \mu} = \sum_\nu s_{(\lambda/\nu)^t} (q^{1/2},q^{3/2},\dots ) s_{(\mu/\nu)^t} (q^{1/2},q^{3/2},\dots ) ~, \end{equation} where one should note that the representations are transposed due to the fact that $\Theta_3(e^{i\theta})$ is expressed in terms of $E(x;z)$ rather than $H(x;z)$. Let us now consider what the assumptions of Szeg\"{o}'s theorem imply for a function of the form $f(z) = E(x;z)E(x;z^{-1})$ or $f(z) = H(x;z)H(x;z^{-1})$. Let us consider first the case $f(z) = E(x;z)E(x;z^{-1})$. We repeat the top line of \eqref{genfuncsymp}, \begin{equation} E(x;z) = \sum_{k=0}^\infty e_k(x) z^k = \prod_{k=1}^\infty (1+x_k z) = \exp \left[ \sum_{k=1}^\infty (-1)^{k+1} \frac{ p_k(x)}{k} z^k \right] ~, \end{equation} so that \begin{equation} f(z)=\exp\left( \sum_{k=1}^\infty (-1)^{k+1} \frac{p_k(x)}{k} (z^k +z^{-k} ) \right) ~. \end{equation} Therefore, \begin{equation} c_k = (-1)^{k+1} \frac{p_k(x)}{k} = c_{-k} ~~,~~~ k \neq 0~, \end{equation} and \eqref{szegreq} is written as \begin{equation} \label{pkszeg} \sum_{k=1}^\infty \frac{\abs{p_k(x)}}{k} < \infty ~~, ~~~ \sum_{k=1}^\infty \frac{\abs{p_k(x)}^2}{k}<\infty ~, \end{equation} where we ignore an irrelevant factor of 2. We see that \begin{equation} \label{szegopk} \lim_{k\to \infty } \abs{p_k(x)} = 0~, \end{equation} as $\sum_{k=1}^\infty \frac{\abs{p_k(x)}}{k}$ diverges otherwise. If we take the $x_j$ to be real-valued, as we do in the explicit examples considered here, equation \eqref{szegopk} requires that $\abs{x_j}<1$. The right requirement in \eqref{pkszeg} is strictly weaker than the left, so it does not give rise to any additional restrictions. In the above expressions, if we replace $E(x;z)$ by $H(x;z)$, we have, \begin{equation} c_k = \frac{p_k(x)}{k} = c_{-k} ~~,~~~ k \neq 0~, \end{equation} so that the assumptions of Szeg\"{o}'s theorem are given by \eqref{szegopk} as well. \subsection{Finite $N$} \label{appfinn} Although the expressions given above were derived for $N\to \infty$, some of them can, in fact, be generalized to finite $N$ in case the number of non-zero variables $x_j$ is smaller than $N$.
From equations \eqref{gesselid}, \eqref{thm5}, and \eqref{mcdssss}, we see that, for finite $N$ and $f(z) = H(x;z)H(x;z^{-1})$, \begin{equation} \label{corr} \frac{\int s_\lambda(U) s_\mu(U^{-1}) \tilde{f}(U)dU}{\int \tilde{f}(U)dU} = \frac{\sum_\kappa (s_\kappa(x))^2}{\sum_{\ell(\rho)\leq N}( s_{\rho}(x) )^2} \sum_\nu s_{\lambda/\nu}(x)s_{\mu/\nu}(x) - \frac{\sum_{\ell(\nu)> N } s_{\nu/\lambda}(x)s_{\nu/\mu}(x)}{\sum_{\ell(\rho)\leq N } (s_{\rho}(x))^2}~. \end{equation} Let us denote the number of non-zero variables by $i_{\max}$, i.e. $x_i \neq 0 $ for $i\leq i_{\max}$ and $x_i=0$ for $i>i_{\max}$. In that case, $s_\kappa(x)=0$ for $\ell(\kappa)>i_{\max}$, see equation \eqref{mcdfe}, so that \begin{equation} \frac{\sum_\kappa (s_\kappa(x))^2}{\sum_{\ell(\rho)\leq N}( s_{\rho}(x) )^2}=1~. \end{equation} Indeed, in case $N-\ell(\lambda) > i_{\max} $ and $N-\ell(\mu) > i_{\max} $, we can apply \eqref{mcdfe} again to find \begin{equation} \sum_{\ell(\nu)> N } s_{\nu/\lambda}(x)s_{\nu/\mu}(x) =0 ~. \end{equation} From this we conclude that, for $N-\abs{\lambda} > i_{\max} $ and $N-\abs{\mu} > i_{\max} $, we have \begin{equation} \frac{\int s_\lambda(U) s_\mu(U^{-1}) \tilde{f}(U)dU}{\int \tilde{f}(U)dU} = \sum_\nu s_{\lambda/\nu}(x)s_{\mu/\nu}(x)~, \end{equation} i.e. the asymptotic expression \eqref{largeN} still holds in this case. Again, the above expressions still hold if we replace $H(x;z)$ by $E(x;z)$ and all representations by their transposes. \section{Spectral form factor} \label{sffsection} Although the main focus of this paper is the SFF of the $U(N)$ Chern-Simons matrix model, many of the techniques applied to this particular case carry over to any function $f(z)$ satisfying the assumptions of Szeg\"{o}'s theorem \cite{GGT1}, \cite{GGT2}. We will first keep the treatment general before considering the case $f(z)=\Theta_3(z)$. \subsection{The spectral form factor for general weight function} We repeat for convenience \cite{gias}, \cite{andrews} \begin{equation} \label{trun} \text{tr} U^n = \sum_\lambda \chi^\lambda_{(n)} s_\lambda (U)= \sum_{r = 0}^{n-1} (-1)^r s_{(n-r,1^r)}(U) ~, \end{equation} where we take $n \in \mathds{Z}^+$. It is clear that this also holds when we replace $U$ with $U^{-1}$. Indeed, the expressions given below generalize to all non-zero integers if we replace $n$ by $\abs{n}$. The SFF is given by \begin{equation} \label{sff} N K(n) = \frac{1}{Z_N} \int dU \tilde{f}(U) \sum_{r,s = 0}^{n-1} (-1)^{r+s} s_{(n-r,1^r)}(U^{-1}) s_{(n-s,1^s)} (U)~, \end{equation} where we remind the reader that $(n-r,1^r)$ is a representation corresponding to a hook-shaped Young tableau with $n-r$ boxes in the first row and $r$ further rows with a single box. Writing $ f(e^{i\theta})=H(x;e^{i\theta})H(x;e^{-i\theta})$, we use \eqref{largeN} to find \begin{equation} \label{sffgf} N K(n) =\sum_\nu \sum_{r,s = 0}^{n-1} (-1)^{r+s} s_{(n-r,1^r)/\nu}(x ) s_{(n-s,1^s)/\nu} (x)~~, ~~~ n \in \mathds{Z} \setminus \{0\} ~. \end{equation} The first sum on the right hand side runs over all representations $\nu$ satisfying $\nu \subseteq (n-r,1^r)$ as well as $\nu \subseteq (n-s,1^s)$, so that $\nu=(a,1^b)$ with $a\leq n-r,n-s$ and $b\leq r,s$. We remind the reader that \eqref{sffgf} also holds when we replace $H(x;z)$ by $E(x;z)$ due to the fact that the SFF is invariant under transposition of the representations $(n-r,1^r)$ and $(n-s,1^s)$.
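Before analyzing the individual contributions to \eqref{sffgf}, we note that the hook expansion \eqref{trun} underlying these expressions is easy to check numerically. The short Python sketch below is an illustration we add here (not part of the original derivation); it verifies $\text{tr}\, U^n = \sum_{r=0}^{n-1}(-1)^r s_{(n-r,1^r)}(U)$ for a random unitary matrix, computing the Schur polynomials via the bialternant formula \eqref{schurxi}:
\begin{verbatim}
import numpy as np

def schur(lam, x):
    # Bialternant formula: s_lambda(x) = det(x_j^{N-k+lam_k}) / det(x_j^{N-k})
    N = len(x)
    lam = list(lam) + [0] * (N - len(lam))   # pad the partition with zeros
    num = np.array([[xj ** (N - k - 1 + lam[k]) for k in range(N)] for xj in x])
    den = np.array([[xj ** (N - k - 1) for k in range(N)] for xj in x])
    return np.linalg.det(num) / np.linalg.det(den)

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
U, _ = np.linalg.qr(A)                 # a random unitary matrix
x = np.linalg.eigvals(U)

n = 3                                  # tr U^n as alternating sum over hooks
lhs = np.sum(x ** n)
rhs = sum((-1) ** r * schur([n - r] + [1] * r, x) for r in range(n))
assert np.isclose(lhs, rhs)
\end{verbatim}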
There are three types of skew Schur polynomials $s_{\lambda/\mu}$ which appear in \eqref{sffgf}: \begin{enumerate} \item If $\nu = \lambda$, the skew Schur polynomial $s_{\lambda/\nu } = s_{\lambda/\lambda} = 1$. \item If $\nu$ is the empty partition $\nu = \emptyset$, $s_{\lambda/\nu} =s_\lambda$, i.e. the skew Schur polynomial reduces to the usual (non-skew) Schur polynomial. \item Then there is the case of two non-empty hook-shaped diagrams $\lambda = (n-r,1^r)$ and $\nu = (a,1^b)$ with $n-r> a$ and $r > b$, so that $\lambda/\nu$ consists of a row of $n-r-a$ boxes and a column of $r-b$ boxes. It is clear from equation \eqref{schureh} that the skew Schur polynomial factorizes as \begin{equation} \label{slmu} s_{\lambda/\nu } = s_{(n-r-a)} s_{(1^{r-b})}=h_{n-r-a} e_{r-b}~. \end{equation} This can be made clearer using Young diagrams. Taking $n=6$, $r=2$ and $a=2$, $b=1$, equation \eqref{slmu} is given by the following, where one should keep in mind that the contributions corresponding to the two disconnected Young diagrams are multiplied\\ \begin{center} \ydiagram{4,1,1} { \huge{ / } } \ydiagram{2,1} \hspace{.1cm} { \LARGE = } \ydiagram{2} \hspace{.1cm} \ydiagram{1} \end{center} \vspace{.5cm} \end{enumerate} From the first point listed above, we see that there are $n$ terms in \eqref{sffgf} for which $\lambda=\mu = \nu=(n-r,1^r)$. These terms give the following contribution \begin{equation} \sum_{r= 0}^{n-1} \underbrace{s_{(n-r,1^r)/(n-r,1^r)}(x)^2}_{=1} = n~. \end{equation} Perhaps surprisingly, we see from the above expression that terms satisfying $\lambda=\mu = \nu$ always reproduce the linear ramp of the CUE spectral form factor for $n\leq N$ (see e.g. (5.14.14) in \cite{haake}). It is well known that, for the CUE SFF, the linear ramp saturates at a plateau for $n\geq N$ \cite{mehta}, \cite{haake}. Here, too, the linear ramp gives way to a plateau, which comes about as follows. Remember that $s_{\lambda}(x)$ vanishes if the longest column in $\lambda$ contains more boxes than the number of non-zero variables in the set $x$ \eqref{mcdfe}. We saw that we get a contribution equal to unity for every term for which $(n-r,1^r) =\nu = (n-s,1^s)$ for $0\leq r \leq \min(n-1,N-1)$. However, there are only $N$ such reps, as $s_{(a,1^b)}(U) = 0 $ if $b\geq N$. From this, we conclude that the contributions coming from $\lambda = \nu = \mu $ exactly reproduce the ramp and plateau. Let us now consider those terms from which deviations from the linear ramp may arise. If $\nu$ is the empty partition, as in point 2, we recover the disconnected part of the SFF, which is given by the square of \begin{equation} \sum_{r = 0}^{n-1} (-1)^{r} s_{(n-r,1^r)}(x ) = \avg{\text{tr} U^n} ~. \end{equation} The remaining contribution, coming from point 3, is given by the square of \begin{equation} \label{rem1} \sum_{r= 0}^{n-1} \sum_{\substack{\nu \neq \emptyset \\ \nu\neq (n-r,1^r )} } (-1)^{r} s_{ (n-r,1^r )/\nu}( x)~. \end{equation} At first sight, this may seem like a rather complicated expression. Since the full contribution is the square of \eqref{rem1}, let us consider a single such sum for a fixed choice of $\nu=(1)$. Remembering that $s_{ (n-r,1^r )/(1)}=h_{n-r-1}e_{r}$ and using equation (2.6') of \cite{mcd}, we find for a single such sum, \begin{equation} \label{ehn} \sum_{r=0}^{n-1} (-1)^{r} s_{ (n-r,1^r )/(1)} = \sum_{r=0}^{n-1} (-1)^{r} h_{n-r-1} e_r = 0~.
\end{equation} Taking $n=4$, the above identity can be expressed in terms of Young diagrams as follows.\\ \begin{center} \ydiagram{3} \hspace{.1cm} \huge{-} \normalsize \ydiagram{2} \hspace{.1cm} \ydiagram{1} \hspace{.1cm} \large{+} \normalsize \ydiagram{1} \hspace{.1cm} \ydiagram{1,1} \hspace{.1cm} \huge{-} \normalsize \ydiagram{1,1,1} { \LARGE{ =} \Large{0} } \end{center} \vspace{.1cm} The identity $\sum_{r=0}^{n-1}(-1)^r h_{n-r-1} e_r=0$ can be seen from $H(x;t)E(x;-t)=1$, see equation \eqref{genfuncsymp}. Equation \eqref{ehn} can then be found by collecting the coefficient of each order of $t$ in $H(x;t)E(x;-t)$. One can see from these considerations that any term corresponding to a single choice of $\nu$ in \eqref{rem1} is equal to zero. The contribution for general $\nu=(a,1^b)\subset (n-r,1^r)$ with $\nu\neq \emptyset$ and $\nu \neq (n-r,1^r)$ is given by \begin{equation} \label{heab} \sum_{r=b}^{n-a}(-1)^r h_{n-r-a}e_{r-b} = (-1)^b \sum_{r=0}^{n-b-a} (-1)^r h_{n-b-a-r}e_r = 0 ~. \end{equation} In short, the contribution arising from $\nu \neq \emptyset, (n-r,1^r)$ is equal to zero. We now compute the explicit expression for the disconnected SFF. Applying \eqref{largeN} with $\mu =\emptyset$, we have, \begin{equation} \label{hookelhom} \avg{\text{tr} U^n} = p_n(x) = \sum_i x_i^n~. \end{equation} The functions $p_n(x)$ are the power-sum polynomials, mentioned in appendix \ref{sympol}. The fact that we get power-sum polynomials should not be surprising due to the statements below equations \eqref{trun} and \eqref{largeN}. Namely, $\text{tr} U^n$ is the $n$-th power sum polynomial in the eigenvalues of $U$, and the evaluation of the matrix integral of a single matrix trace amounts to replacing the eigenvalues by the variables $x_i$, which immediately leads to \eqref{hookelhom}. Below equation \eqref{pkszeg}, we showed that the assumptions of Szeg\"{o}'s theorem require \begin{equation} \lim_{k \to \infty } p_k(x) = 0~, \end{equation} so that the disconnected part of the SFF goes to zero. Hence, we see that the plateau of the SFF is exact, that is \begin{equation} \lim_{n\to \infty} K(n) = 1 ~. \end{equation} We thus find, \begin{equation} \label{sffres} K(n) =\left\{ \begin{array}{ll} \frac{1}{N} \left[ n + p_n(x)^2\right]~~, ~~~ n/N \leq 1 ~,\\ 1~~~~~~~~~~~~~~~~~~~~~,~~~ n/N \geq 1 ~. \end{array} \right. \end{equation} This is the main result of the present work. Let us now give some basic properties of the SFF itself. From the form of \eqref{sffres}, we can give an expression for the behaviour of the SFF upon rescaling the $x_i$. The linear ramp remains unaffected by rescaling as it is independent of the choice of variables $x_i$. Further, since $ \avg{\text{tr} U^n} = \sum_{r=0}^{n-1}(-1)^r s_{(n-r,1^r)}(x) $ is a sum of polynomials of degree $n$ in $x_i$, we have upon rescaling as $x_j \mapsto A x_j$, where $A$ is some number, \begin{equation} p_n(Ax) = A^n p_n(x) ~. \end{equation} Further, if we take $x_j \mapsto (x_j)^k $ with $k\in \mathds{Z}^+$, we have, writing $x^k=(x_1^k,x_2^k,\dots)$, \begin{equation} \label{knconst} p_n(x^k) = p_{kn}(x)~. \end{equation} This naturally generalizes to $k\in \mathds{R}$ if we take the label $n$ of $p_n(x)$ to be a general real number. We plot an example of an SFF in figure \ref{sffplot}. In the figure, we indicate lines with $kn=$constant, which lie at 45 degrees. Although this SFF was computed for a specific choice of weight function, it follows from equation \eqref{knconst} that lines of constant $kn$ always lie at 45 degrees.
The linear ramp then corresponds to $kn \to \infty$. For a finite number of variables, the calculation of the SFF from \eqref{sffres} is rather straightforward. In case we have a very large number of non-zero variables, $p_n(x)$ is generally hard to calculate, except for certain known examples. Let us take $x_k = 1/(k+1)^2$. Using the well-known product expansion of the hyperbolic sine, $ \sinh(\pi t) = \pi t \prod_{k\geq 1 } \left(1 +\frac{t^2}{k^2}\right) $, we have \begin{equation} \label{rmwf} f(z) = \prod_{k=1 }^\infty \left(1 +\frac{z}{(k+1)^2}\right) \left(1 +\frac{z^{-1}}{(k+1)^2}\right) = \frac{\sinh(\pi z^{1/2}) \sinh(\pi z^{-1/2})}{\pi^2(2+z+z^{-1})}~. \end{equation} Further, we have, \begin{equation} p_n(x) = \sum_{k=1}^\infty \frac{1}{(k+1)^{2n}} = \zeta(2 n)-1~, \end{equation} where $\zeta (s)$ is the Riemann zeta function. The SFF for weight function \eqref{rmwf} is therefore given by \begin{equation} N K(n) =\left\{ \begin{array}{ll} n + \left(\zeta(2n)-1\right)^2~~~, ~~~ n\leq N ~,\\ N~~~~~~~~~~~~~~~~~~~~~~,~~~ n \geq N ~. \end{array} \right. \end{equation} \subsubsection{General trace identities} \label{gtraceid} We now consider some expectation values of $\text{tr} U^{n}$ with some more general objects. For example, we can conclude from the arguments leading to \eqref{sffres} that the connected part of $\langle \text{tr} U^n \text{tr} U^{-k} \rangle $, for $k,n \in \mathds{Z}^+$, is given by \begin{equation} \label{kmt} \langle \text{tr} U^n \text{tr} U^{-k} \rangle_c = \sum_{s=0}^{k-1} \sum_{r=0}^{n-1}\sum_{\nu \neq \emptyset } (-1)^{r+s} s_{(k-s,1^s)/\nu} s_{(n-r,1^r)/\nu} = n\delta_{n k}~. \end{equation} In particular, let us take $k<n$. In that case, any $\nu \subseteq (k-s,1^s)$ for all $s \in \{ 0,\dots,k-1\}$ necessarily satisfies $\abs{\nu} \leq k < n$, so that $(n-r,1^r)/\nu \neq \emptyset $ for any partition $(n-r,1^r)$, $r \in \{ 0,\dots,n-1\}$. Using \eqref{heab}, the result is again zero. Note that equation \eqref{kmt} can easily be found for the CUE case by using bosonization \cite{douglas}, \cite{kakoku}. More generally, let us consider expectation values of the form \begin{equation} \label{trantral} \avg{\text{tr} U^{-n} \text{tr}_\lambda U}_c = \sum_{\nu \neq \emptyset} \sum_{r=0}^{n-1} (-1)^r s_{(n-r,1^r)/\nu } ~ s_{\lambda/\nu} ~. \end{equation} Since fixing any $\nu \subseteq (n-r,1^r)$ in \eqref{trantral} with $\nu\neq (n-r,1^r)$ gives zero upon summing over $r$, we only get a nonzero answer for terms for which $\nu = (n-r,1^r) \subseteq \lambda$. That is, \begin{equation} \label{trutrl} \avg{\text{tr} U^{-n} \text{tr}_\lambda U}_c = \sum_{r=\max(0,\, n-\lambda_1)}^{\min( n-1,\,\lambda_1^t-1)} (-1)^r s_{\lambda/(n-r,1^r)} ~, \end{equation} where the boundaries on the sum arise from the fact that we only sum over those representations $(n-r,1^r)$ which satisfy $(n-r,1^r) \subseteq \lambda$. Equation \eqref{trutrl} greatly simplifies certain calculations. For example, consider $(n-r,1^r) \nsubseteq \lambda ~\forall~ r \in \{ 0,\dots, n-1\}$. Another way to write this is that $\lambda_1+\lambda_1^t-1 < n$. We then have, \begin{equation} \label{tralambda} \avg{ \text{tr} U^{-n} \text{tr}_\lambda U}_c =0~. \end{equation} Let us represent $\lambda $ in Frobenius notation as $\lambda = (a_1,\dots ,a_k| b_1,\dots,b_k)$ with $a_i$ and $b_j$ non-negative integers satisfying $a_1> \dots >a_k$ and $b_1>\dots >b_k$.
In this case, $a_1+b_1+1$ gives the number of boxes in the upper left hook of $\lambda$, or, equivalently, the hook-length of the top left box in $\lambda$, labelled by $x=(1,1)$ in the notation of appendix \ref{schurapp}. In this notation, \eqref{tralambda} states that $\avg{ \text{tr} U^{-n} \text{tr}_\lambda U}_c =0 $ if $ a_1+b_1 +1<n$. For the specific case of the Chern-Simons matrix model this identity has an interesting interpretation which we comment on in section \ref{sfftheta}. We can find similar identities for certain representations $\lambda$ with $a_1+b_1 +1 > n$. Define $m \coloneqq a_1+b_1+1-n $ and consider $\lambda=(a|b)=(a_1,\dots,a_k|b_1,\dots,b_k)$ satisfying $m \leq a_1-a_2-1 $ and $m \leq b_1 -b_2-1 $, or, equivalently, $m\leq \lambda_1 -\lambda_2$ and $m\leq \lambda_1^t - \lambda_2^t$, respectively. Let us take $\mu = (a_2,\dots,a_k|b_2,\dots,b_k) $, which is constructed from $\lambda$ by removing the first row and column. For any rep $(n-r,1^r)$ satisfying $(n-r,1^r) \subseteq \lambda$, we then have \begin{equation} \lambda/(n-r,1^r) = (a_1+1,1^{b_1}) /(n-r,1^r) \times \mu = (a_1+1-n+r)\times (1^{b_1-r}) \times \mu~. \end{equation} That is, $ \lambda/(n-r,1^r) $ factorizes as the skew partition of two hook shapes times the partition obtained from $\lambda$ by deleting the top-left hook. In terms of Young diagrams, an example is given by the following.\\ \begin{center} \ydiagram{4,3,1} { \huge{ / } } \ydiagram{3,1} \hspace{.1cm} { \huge = } \normalsize \ydiagram{1} \hspace{.1cm} \ydiagram{2} \hspace{.1cm} \ydiagram{1} \end{center} \vspace{.3cm} Since $(a_1+1,1^{b_1}) /(n-r,1^r)$ is a product of a row and a column, we can again use \eqref{ehn} to find \begin{align} \label{traidlong} \avg{ \text{tr} U^{-n} \text{tr}_\lambda U}_c&= \sum_{r=n-a_1-1}^{b_1} (-1)^r s_{\lambda/(n-r,1^r)} \notag \\ & = (-1)^{n-a_1-1} s_\mu \sum_{k=0}^{m} (-1)^k h_{m-k}e_k =0~. \end{align} \subsection{ The SFF of the Chern-Simons matrix model} \label{sfftheta} As noted before, the SFF of the Chern-Simons matrix model corresponds to a $(2n,2)$-torus link with one component in the fundamental and the other in the antifundamental representation. Whereas expressions for link invariants of the form $\langle \text{tr} U^{n_1}\text{tr} U^{n_2} \dots \rangle $ with $n_i \geq 2$ have appeared in the literature \cite{andrews}, \cite{labmartk}, \cite{labmartl}, expressions with powers of mixed signature, to the best of the authors' knowledge, have not. The expressions presented in the previous section allow us to calculate precisely those objects. In particular, the SFF is again given by \eqref{sffres}. We can easily calculate the non-trivial part of the SFF, $\avg{\text{tr} U^n}^2$, for $\abs{q}<1$, by using the expression in terms of power-sum polynomials. However, it is instructive to see how this arises from the functional form of this object as a function of $N$, before taking $N \to \infty$. This is particularly useful in knot theory, as the expression for general $N$ may allow one to distinguish various knots and links which may have the same invariant when one ignores the dependence on $N$. Let us apply \eqref{qschur} to the hook-shaped representation $(a,1^b)$, which gives the following expression for general $N$ \begin{equation} \label{dimqhook} s_{(a,1^b)}(x_i=q^{i-1}) = q^{\frac{1}{2}b(b+1)} \frac{[N+a-1]!}{[N-b-1]![a-1]![b]![a+b]}~.
\end{equation} We now use \eqref{trun} and \eqref{largencs} to calculate $\avg{\text{tr} U^n}$, which, for the lowest values of $n$, is given by \begin{align} \label{tracesp} \avg{\text{tr} U} &= q^{1/2}\frac{1-q^N}{1-q}=q^{1/2}[N]_q \notag ~,\\ \avg{\text{tr} U^2} &= \frac{q(1-q^{2N})}{1-q^2 }=q[N]_{q^2}~, \notag \\ \avg{\text{tr} U^3} &= \frac{q^{3/2}(1-q^{3N})}{1-q^3 }=q^{3/2}[N]_{q^3} ~. \end{align} One can see a simple pattern emerge in \eqref{tracesp}. Indeed, using \eqref{trun} and taking into account the comments made below \eqref{largeN}, we see that \begin{equation} \label{tranqdef} \avg{\text{tr} U^n} = p_n(x_j=q^{j-1/2}) = q^{n/2} \sum_{j=1}^N q^{n(j-1)} = q^{n/2}\frac{1-q^{nN}}{1-q^n} = q^{n/2}[N]_{q^n}~. \end{equation} That is, the asymptotic $(n,1)$-torus knot invariant is given by the $q^n$-deformation of $N$ times a factor $q^{n/2}$. As far as the authors are aware, this statement has heretofore not appeared in the literature. As mentioned above, as well as in appendix \ref{schurapp}, the limit $N\to \infty$ simplifies these expressions even further. Upon this simplification, the final expression for the SFF is then given by \begin{equation} \label{sffcsres} N K(n) = \begin{cases}n + (q^n+q^{-n}-2)^{-1} ~~,~~~n\leq N~,\\ N~~~~~~~~~~~~~~~~~~~~~~~~~~~,~~~ n \geq N~. \end{cases} \end{equation} The SFF is plotted in figure \ref{sffplot} for $n=1,\dots,20$, with $q=0.9^k$, $k=1,\dots,9$. \subsubsection{General identities for the Chern-Simons matrix model} The identities we derived in section \ref{gtraceid} apply to the Chern-Simons matrix model as well, in which case they have an interpretation in terms of knot and link invariants. For example, take \eqref{tralambda}, which says that, for $\lambda$ satisfying $(n-r,1^r) \nsubseteq \lambda ~\forall~ r \in \{ 0,\dots,n-1\}$, \begin{equation} \label{tralamcs} \avg{ \text{tr} U^{-n} \text{tr}_\lambda U}_c =0~~ \Rightarrow ~~ \avg{ \text{tr} U^{-n} \text{tr}_\lambda U}= \avg{ \text{tr} U^{-n}} \avg{ \text{tr}_\lambda U}~. \end{equation} In terms of knot and link invariants, the above expression entails that the expectation value of the product of an $(n,1)$-torus knot with an unknot in representation $\lambda$ with opposite orientation equals the product of their expectation values. Another trace identity derived in section \ref{gtraceid} is equation \eqref{traidlong}. This equation expresses the fact that a Wilson line in the (anti)fundamental rep winding $n$ times around a particle in some rep $\lambda$ will give a vanishing connected expectation value if $\lambda_1 +\lambda^t_1 - 1 - n \leq \lambda_1 - \lambda_2 $ and $\lambda_1 +\lambda^t_1 - 1 - n \leq \lambda^t_1 - \lambda^t_2 $. Further, it is worth emphasizing that, using \eqref{trutrl}, one can calculate essentially any object of the form \begin{equation} \avg{\text{tr} U^{-n}\text{tr}_\lambda U}~, \end{equation} as all the objects appearing on the right hand side of the above expression are skew Schur polynomials with variables $x_i = q^{i-1/2}$, to which we can apply the $q$-hook length formula in equation \eqref{qschur}. \section{Overview and Conclusions} Here, we put forward a conjecture that many, if not all, examples of invariant one-matrix models which exhibit intermediate statistics are given by matrix models of topological field or string theories. We explicitly support this conjecture by the example of the matrix model introduced in \cite{muttens}, which is the matrix model of $U(N)$ Chern-Simons theory on $S^{3}$.
The latter model is directly related to the A and B topological string models via the Gopakumar-Vafa duality. To calculate the SFF of the Chern-Simons matrix model, we consider general infinite order unitary matrix models with weight functions satisfying the assumptions of Szeg\"{o}'s theorem. We find that the SFFs for these models have a surprisingly concise form, with the connected SFF giving rise to the linear ramp and plateau, while the disconnected part gives rise to a dip. Moreover, from the assumptions of Szeg\"{o}'s theorem, it follows that the dip has to go to zero, so that the plateau is exact. Further, we derive certain identities on expectation values of products of traces, as well as the behavior of the SFF under certain changes of the weight function. We then apply these general results to the matrix model for $U(N)$ Chern-Simons theory on $S^3$, studied by Muttalib and collaborators for its intermediate statistics. The SFF of this model is a topological (link) invariant. In particular, it is given by the HOMFLY invariant of $(2n,2)$-torus links with one component in the fundamental and the other in the antifundamental representation, an explicit expression of which, to the best of the authors' knowledge, has not appeared in the literature before. It displays the hallmark characteristics of intermediate statistics, with a dip that becomes more pronounced as we move further away from the CUE limit, $q \to 0$. One can identify various matrix models which have the same SFF, an immediate example of which is given by replacing $E(x;z)$ by $H(x;z)$. The present work provides the tools to shed more light on the connections between topological field theories and intermediate statistics; we believe that the matrix models which arise in topological string theory are natural tools for describing ergodic-to-nonergodic phase transitions. Indeed, this paper provides a first example of what we suspect to be a broader connection between intermediate statistics and topology. \section{Acknowledgements} We would like to thank Wouter Buijsman, Oleksandr Gamayun, Alex Garkun, and Miguel Tierz for valuable discussions and comments, and Vladimir Kravtsov for inspiring this study and useful discussions. This work is part of the DeltaITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) funded by the Dutch Ministry of Education, Culture and Science (OCW). \section{Appendices} \subsection{$q$-Numbers} \label{qnum} We review some basic facts and useful relations involving $q$-numbers, which are so-called \textit{$q$-deformations} of more familiar (generally complex) numbers. We will only be considering $q$-deformations of positive integers here, which are defined as \begin{equation} [n]_q = (1+q+\dots+q^{n-1}) = \frac{1-q^n}{1-q}~~,~~~ n \in \mathds{Z}^+~ . \end{equation} Other definitions of $[n]_q$, such as $\frac{q^{-n/2}-q^{n/2}}{q^{-1/2}-q^{1/2}}$, also appear in the literature. Their common feature is that \begin{equation} \lim_{q\to 1^-} [n]_q = n~. \end{equation} Note that, for $k,m,n ~\in ~\mathds{Z}^+$ satisfying $\frac{m}{n}=k$, we have \begin{equation} \label{qkmn} \frac{[m]_q}{[n]_q}=[k]_{q^n}~~,~~~ \frac{[r\cdot m]_q}{[r\cdot n]_q}=[k]_{q^{nr}}~; \end{equation} for example, \begin{equation} \frac{[8]_q}{[2]_q}=\frac{1+q+\dots+q^7}{1+q} = 1+q^2+q^4+q^6=[4]_{q^2}~. \end{equation} We will write $[n]_q$ as $[n]$ henceforth and only specify the deformation parameter in case it is different from $q$. $q$-Factorials and $q$-binomials are defined as follows.
For $N, k \in \mathds{Z}^+$ \begin{equation} [N]!= (1+q)(1+q+q^2)\dots(1+q+\dots+q^{N-1})~~,~~~ \begin{bmatrix} N \\ k \end{bmatrix} = \frac{[N]!}{[N-k]! [k]! }~. \end{equation} We then introduce the \textit{$q$-Pochhammer symbol}, which is defined as \begin{equation} (a;q)_k= (1-a)(1-aq)\dots(1-aq^{k-1})~. \end{equation} Note that \begin{equation} (a;q)_n=\frac{(a;q)_\infty}{(aq^n;q)_\infty} ~. \end{equation} Note also that \begin{equation} [n]!=\frac{(q;q)_n}{(1-q)^n}~, \end{equation} from which follows \begin{equation} \begin{bmatrix} N \\ k \end{bmatrix} = \frac{(q;q)_N}{(q;q)_{N-k}(q;q)_k} = \frac{(1-q^N)(1-q^{N-1})\dots(1-q^{N-k+1})}{(1-q)(1-q^2)\dots(1-q^k)}~. \end{equation} We see from this expression that, for $ \abs{q} < 1$, we have \begin{equation} \label{qninf} \lim_{N\to \infty } \begin{bmatrix}N\\k \end{bmatrix} = \frac{1}{(q;q)_k}~. \end{equation} $q$-Pochhammer symbols can be generalized as follows \begin{equation} (a_1,a_2,\dots,a_m;q)_n=\prod_{j=1}^m (a_j;q)_n~. \end{equation} These are rather versatile objects. For example, Jacobi's third theta function can be expressed through the Jacobi triple product as \begin{equation} \label{jtp} \sum_{n\in \mathds{Z}}q^{n^2/2}z^n=(q,-q^{1/2}z,-q^{1/2}/z;q)_\infty~~, ~~~ 0< \abs{q} <1~. \end{equation} Note that the definition in \eqref{jtp} has $q^{n^2/2}$ rather than $q^{n^2}$ as expansion coefficients, following the convention of e.g. \cite{okuda}. This is the origin of the differences with the expressions appearing e.g. in \cite{andrews}, which are related to the expressions given here by taking $q \to q^2$. \subsection{Symmetric polynomials} \label{sympol} We review here some basic aspects of symmetric polynomials in the set of variables $x =( x_1,x_2,\dots )$. The \textit{elementary symmetric polynomials} are then defined as \begin{equation} \label{epol} e_k (x) = \sum_{i_1<\dots<i_k} x_{i_1}\dots x_{i_k}~. \end{equation} Some examples include \begin{align*} e_0& = 1 ~,\\ e_1(x_1)& = x_1 ~,\\ e_1(x_1, x_2 )& = x_1 +x_2~,\\ e_2(x_1, x_2 )& = x_1 x_2~. \end{align*} Closely related are the \textit{complete homogeneous symmetric polynomials}, defined as \begin{equation} \label{hpol} h_k (x) = \sum_{i_1\leq\dots\leq i_k} x_{i_1}\dots x_{i_k} , \end{equation} which contain all monomials of degree $k$. Note the difference in the summation bounds between \eqref{epol} and \eqref{hpol}. Some examples of these include \begin{align*} h_0 & = 1 ~,\\ h_1(x_1)& = x_1 ~,\\ h_1(x_1, x_2 )& = x_1 +x_2~,\\ h_2(x_1, x_2 )& = x_1^2+x_2^2+ x_1 x_2~. \end{align*} Another example is the set of \textit{power-sum symmetric polynomials}, \begin{equation} \label{powersum} p_k(x)= x_1^k+x_2^k+\dots ~. \end{equation} Note that if a matrix $U$ has $d_i$ as its eigenvalues, traces of moments of $U$ are given by power-sum symmetric polynomials, that is, \begin{equation} \text{tr} U^k= p_k(d)~. \end{equation} Defining $z=e^{i \theta}$ as in \eqref{fun}, we have the following relations between the above polynomials \cite{mcd} \begin{align} \label{genfuncsymp} E(x;z) &= \sum_{k=0}^\infty e_k(x) z^k = \prod_{k=1}^\infty (1+x_k z) = \exp \left[ \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} p_k(x) z^k \right] ~,\notag \\ H(x;z) & = \sum_{k=0}^\infty h_k(x) z^k = \prod_{k=1}^\infty \frac{1}{1-x_k z} = \exp \left[ \sum_{k=1}^\infty \frac{1}{k} p_k(x) z^k \right] ~.
\end{align} Consider the example where $x_i = q^{i-1}$, so that (see \cite{mcd} I.2 examples 3 and 4) \begin{equation} E(t) =\prod_{i=0}^{N-1}(1+q^i t) =\sum_{k=0}^N q^{k(k-1)/2}\begin{bmatrix}N\\k \end{bmatrix} t^k ~. \end{equation} Similarly, \begin{equation} H(t) = \prod_{i=0}^{N-1}(1-q^i t)^{-1} =\sum_{k=0}^\infty \begin{bmatrix}N+k-1\\k \end{bmatrix} t^k ~, \end{equation} so that \begin{equation} e_k= q^{k(k-1)/2}\begin{bmatrix}N\\k \end{bmatrix} ~~,~~~ h_k=\begin{bmatrix}N+k-1\\k \end{bmatrix}~. \end{equation} Here, $e_k$ vanishes for $k> N$. From \eqref{qninf}, we see that, for $\abs{q}<1$ and $N\to \infty$, \begin{equation} \label{ehninf} e_k= \frac{q^{k(k-1)/2} }{(q;q)_k} ~~,~~~ h_k=\frac{1}{(q;q)_k}~. \end{equation} \subsection{Schur polynomials} \label{schurapp} A somewhat less straightforward type of symmetric polynomial is the \textit{Schur polynomial}, which reduces to some of the above examples in certain cases. Schur polynomials play an important role as characters of irreducible representations, often referred to as irreps, of general linear groups and subgroups thereof. Irreps can be conveniently classified by partitions, and we use these terms interchangeably in this work. We denote partitions as $\lambda = ( \lambda_1 , \lambda_2,\dots,\lambda_{\ell} )$, which are sequences of non-negative integers ordered as $\lambda_1 \geq \lambda_2 \geq \dots$. Typically, partitions are taken to have a finite number of elements, that is, only a finite number of $\lambda_i$ are non-zero, but we will impose no such restriction. The \textit{weight} of a partition (not to be confused with the highest weight of the corresponding irrep) is given by the sum of its terms $\abs{\lambda}= \sum_i\lambda_i$ and its \textit{length} $\ell(\lambda) $ is the largest value of $i$ for which $\lambda_i \neq 0$. A \textit{semistandard Young tableau} (SSYT) corresponding to $\lambda$ is then given by positive integers $T_{i,j}$ satisfying $1 \leq i \leq \ell(\lambda)$ and $1\leq j \leq \lambda_i$. These integers are required to increase weakly along every row and strictly along every column, i.e. $T_{i,j} \leq T_{i,j+1}$ and $T_{i,j} < T_{i+1,j}$ for all $i,j$. Label by $\alpha_i$ the number of times that the number $i$ appears in the SSYT. We then define \begin{equation} x^T = x_1^{\alpha_1} x_2^{\alpha_2 } \dots~. \end{equation} The \textit{Schur polynomial} $s_\lambda(x) $ is given by \cite{stanley} \begin{equation} \label{sp} s_\lambda(x) =\sum_T x^T~, \end{equation} where the sum runs over all SSYT's corresponding to $\lambda$, i.e. all possible ways to inscribe the diagram corresponding to $\lambda$ with positive integers that increase weakly along rows and strictly along columns. We give an example of an SSYT corresponding to a Young diagram $\lambda=(3,2)$. From \eqref{sp} one can see that the contribution of the SSYT below would be given by $x_1^2x_2 x_3^2$.\\ \begin{center} \begin{ytableau} 1 & 1 & 3 \\ 2 & 3\\ \end{ytableau} \end{center} We can see from the above definition that \begin{equation} \label{schureh} s_{(1^n)}=e_n ~~,~~~ s_{(n)}=h_n ~, \end{equation} i.e. the Schur polynomial of a column or row of $n$ boxes is given by a degree $n$ elementary or homogeneous symmetric polynomial, respectively. Schur polynomials have a natural generalization to so-called \textit{skew Schur polynomials}. In this case we have two diagrams $\lambda$ and $\mu$ such that $\mu \subseteq \lambda$, i.e. $\mu_i \leq \lambda_i ~\forall ~ i$.
We denote by $\lambda/\mu$ the complement of $\mu$ in the diagram corresponding to $\lambda$. Define a semistandard skew Young tableau corresponding to $\lambda/\mu$ similar to the above, namely, as an array of positive integers $T_{ij}$ satisfying $1\leq i \leq \ell(\lambda)$ and $\mu_i< j \leq \lambda_i$ which increase weakly along rows and strictly along columns. We then define the \textit{skew Schur polynomial} corresponding to $\lambda/\mu$ as \begin{equation} \label{ssp} s_{\lambda/\mu}=\sum_T x^T, \end{equation} where the sum again runs over all SSYT's corresponding to $\lambda/\mu$. Note that if $\mu$ is the empty partition, i.e. $\mu_i=0 ~ \forall i$, we have $s_{\lambda/\mu} = s_\lambda$, and if $\lambda = \mu$, $s_{\mu/\mu}=1$. Let us consider $\lambda =(3,2)$ and $\mu= (1)$. Below, we give an SSYT corresponding to the skew partition $\lambda/\mu$, which would contribute $x_1^2 x_2 x_3$ to the skew Schur polynomial.\\ \begin{center} \begin{ytableau} \none & 1 & 3 \\ 1 & 2\\ \end{ytableau} \end{center} Skew Schur polynomials can also be expressed in determinantal form. Using a matrix of the form $\mathcal{M} = (x_j^{N-k})_{j,k=1}^N$, we have the following expression for the Vandermonde determinant \begin{equation} \label{vdmdet} \det(x_j^{N-k})_{j,k=1}^N = \prod_{1\leq j<k \leq N}(x_j-x_k) ~. \end{equation} We then have \begin{equation} \label{schurxi} s_\lambda(U) = s_\lambda(x_j) = \frac{\det\left(x_j^{N-k+\lambda_k}\right)_{j,k=1}^N}{\det\left(x_j^{N-k}\right)_{j,k=1}^N}~. \end{equation} The (skew) Schur polynomials can be expressed in terms of elementary symmetric polynomials $e_k(x)$ or complete homogeneous symmetric polynomials $h_k(x)$ via the following determinantal expressions, known as the Jacobi-Trudi identities, \begin{align} \label{jtid} &s_{(\mu/\lambda)} = \det(h_{\mu_j -\lambda_k -j+k } )_{j,k=1}^{\ell(\mu)} = \det(e_{\mu^t_j -\lambda^t _k -j+k } )_{j,k=1}^{\mu_1} = D^{\lambda,\mu}_N(H(x;z)) ~,\notag\\ &s_{(\mu/\lambda)^t} = \det(e_{\mu_j -\lambda_k -j+k } )_{j,k=1}^{\ell(\mu)} = \det(h_{\mu^t_j -\lambda^t _k -j+k } )_{j,k=1}^{\mu_1} = D^{\lambda,\mu}_N(E(x;z))~, \end{align} where the partition $\lambda^t$ is obtained from $\lambda$ by transposing the corresponding Young diagram. The objects on the right hand side of \eqref{jtid} are explained in section \ref{appkl}. Schur polynomials satisfy various useful identities, including the so-called Cauchy identity and its dual, \begin{equation} \sum_\lambda s_\lambda(x) s_\lambda(y) = \prod_{i, j=1}^\infty \frac{1}{1-x_i y_j} ~~,~~~ \sum_\lambda s_\lambda(x) s_{\lambda^t}(y) = \prod_{i, j=1}^\infty (1+x_i y_j) ~. \end{equation} Other useful identities for our purposes are the following, which can be found in Chapter I.5 of \cite{mcd}, \begin{equation} \label{mcdfe} s_{\lambda/\mu}(x_1,\dots,x_n) = 0 ~\text{ unless }~ 0 \leq \lambda_i^t-\mu_i^t \leq n ~ \text{ for all }~ i \geq 1~. \end{equation} Note that an example of \eqref{mcdfe} is given by the fact that $e_k(x_1,\dots,x_N)=0$ for $ k>N$. We consider some Schur polynomials which are treated in I.3 examples 1-4 of \cite{mcd}.
Schur polynomials with all $N$ variables equal to 1 give the hook-length formula for the dimension of the representation, that is \begin{equation} \label{hookl} s_\lambda(1,\dots,1) = \prod_{x\in \lambda }\frac{N+c(x)}{h(x)} \eqqcolon \text{dim}(\lambda)~, \end{equation} where $c(x)=j-i$ for $x=(i,j)\in \lambda$ is the content of $x\in \lambda$, $h(i,j) = \lambda_i +\lambda_j^t-i-j+1$ is its hook-length, and $n(\lambda) =\sum_i (i-1)\lambda_i $. If, instead, we choose variables as $x_i=q^{i-1}$, we get the following $q$-deformation of the dimension of $\lambda$ \begin{equation} \label{qschur} s_\lambda(x_i=q^{i-1}) = q^{n(\lambda)} \prod_{x\in \lambda} \frac{[N+c(x)] }{[h(x)]} \eqqcolon q^{n(\lambda)} \dim_q (\lambda) ~. \end{equation} The quantity $\dim_q (\lambda)$ is known as the \textit{quantum dimension}, or $q$-dimension. It is given by the hook length formula \eqref{hookl} where numbers are replaced by $q$-numbers. If we consider knots and links as consisting of the world lines of anyons carrying some representations $\lambda,\dots$, $\dim_q(\lambda)$ gives the dimension of the Hilbert space of $\lambda$ \cite{kp}. Note that \eqref{qkmn} implies that irreps with the same dimension can have different quantum dimensions. This is why Chern-Simons theory with $0<q<1$ can distinguish between certain (un)knots which give identical results in the limit $q\to 1$ or $q\to 0$. In fact, the above expression simplifies even further. In particular, one can easily see that, for $\abs{q}<1$ and $ N \to \infty$, quantum dimensions for reps with finite column lengths depend only on the hook lengths. This is because $q^{N+c(x)} \to 0$ for finite $c(x)$, so that, for $\lambda$ such that $c(x)$ is finite for all $x\in \lambda$, \begin{equation} \prod_{x\in \lambda} \frac{[N+c(x)] }{[h(x)]} = \frac{1}{(1-q)^{\abs{\lambda}}} \prod_{x\in \lambda} \frac{1 }{[h(x)]} ~. \end{equation} In fact, since the Jacobi triple product expansion is only valid in case $ 0 < \abs{q} <1$ and we take $N \to \infty$ here, we see that the numerical values of the Schur polynomials considered here only depend on the hook-lengths of their components. Of course, one can still use the full functional form involving terms of the form $q^N$ in the context of knot theory, as these functional forms can allow one to distinguish between knots or links which have the same hook-lengths. For example, the quantum dimensions of $(2)$ and $(1^2)$ are different when taking into account their dependence on $N$. On the other hand, these quantum dimensions are identical when we take into account the fact that $q^N \to 0$ for $\abs{q}<1$ and $N\to \infty$. Lastly, one should note that, since the hook-lengths are invariant under transposition, the quantum dimensions involved are invariant under transposition as well. \bibliographystyle{unsrt}
\section*{Introduction} Machines increasingly perform tasks once believed to be the prerogative of biological systems. Yet, advances in automation are mostly limited to large machines such as cars, drones, and industrial robots. The advent of microrobots, artificial objects that convert energy sources such as light or chemicals into directed motion \cite{Popescu2020}, promises to bring automation down to the micro- and nano-scale. Microrobots can transport matter at the microscale, mix and pump fluids without external agitation \cite{Yuan2021}, thus offering tantalizing opportunities for performing autonomous tasks at small scales in applications ranging from targeted drug delivery \cite{Diez2021}, to environmental remediation \cite{Wang2019}, and even energy conversion \cite{Singh2015}. However, significant technological hurdles must first be overcome if microrobots are to realise their potential for real-world applications. Fabrication remains perhaps the greatest challenge to applied microrobotics, as both top-down and bottom-up approaches suffer from significant limitations with respect to scalability and modularity in combining different materials \cite{Wang2020}. Designing novel multi-functional micromachines requires increasingly expensive and specialized equipment, such as for two-photon polymerization \cite{Hu2021}, with implications for the accessibility of scientific research \cite{MaiaChagas2018}. In contrast to the complex microrobots produced by such techniques, Janus microswimmers are arguably the simplest class of synthetic active matter. These rely on surface patches with different physicochemical properties for propulsion in self-generated gradients \cite{Popescu2020}. In particular, Janus chemical swimmers do not need external actuation, requiring only a chemical fuel source to move. Nevertheless, these simple microrobots often suffer from a very low fabrication throughput due to the methodology used to produce the surface patches \cite{Zhang2019}. Janus microswimmers are typically produced by sputter-coating particle monolayers, exploiting line-of-sight vapour-phase deposition and particle self-shadowing \cite{Wang2020}. Metal coatings are thus selectively deposited as a spherical cap, whose extension can be controlled by tilting the monolayer \cite{Pawar2009,Archer2015}. Despite its widespread popularity, this approach has clear downsides, namely a yield on the order of micrograms and a highly inefficient use of metal precursors. Recently, Archer et al. demonstrated the scalable fabrication of Janus microrobots by functionalizing Pickering-wax emulsions with a two-step nanoparticle seeding and film-growth protocol \cite{Archer2018}. This represents a significant advance in the state-of-the-art of microswimmer fabrication, but the technique is specific to platinum and does not provide close control over the film morphology or composition. The constrained material and synthetic options in the literature, although inconsequential for fundamental studies, hamper the progress of applied microrobotics and, in particular, inhibit the development of propulsion mechanisms based on useful chemical reactions. Here, we demonstrate a modular approach to achieve the asymmetric functionalization of microparticles via the toposelective attachment of different nanoparticle thin films, thereby obtaining large (100 mg) quantities of photo-responsive Janus microrobots.
Specifically, commercial nanoparticles are asymmetrically attached to SiO\textsubscript{2} microparticles, which are partially embedded in Pickering-wax emulsion droplets, via a poly(acrylamide) modified with silane and nitrocatechol groups \cite{Serrano2016}. The approach is not only scalable but also connects the vast literature on high-surface-area nanocatalysts \cite{Astruc2005,Leeuwen2020} to the fabrication of microswimmers, extending the range of targetable reactions and enabling the facile introduction of new functionalities. In particular, by utilizing silane and metal oxide-nitrocatechol chelation chemistry, we demonstrate Janus particles functionalized by TiO\textsubscript{2}, SrTiO\textsubscript{3}, and Fe\textsubscript{2}O\textsubscript{3} nanoparticles. Furthermore, using a post-modifiable poly(pentafluorophenylacetate) (pPFPAC) backbone presents the opportunity to exploit other metal-coordination chemistries through the introduction of various functional groups. Focusing on commercial TiO\textsubscript{2} P-25 nanocatalysts, we fabricate photoresponsive microswimmers, which not only self-propel under UV illumination, but also exhibit an interesting orientation-dependent motion that could be exploited for navigation or directed transport \cite{Uspal2019}. \section*{Results} \paragraph*{Polymer-assisted nanoparticle attachment on microspheres} The first step in the fabrication of our photocatalytic microrobots is to identify a protocol for the robust attachment of nanoparticle thin films onto the microparticle supports. Electrostatic attachment is frequently used as a means for colloidal heteroaggregation; however, it cannot withstand the harsh cleaning protocols required to remove the solidified wax in Pickering-wax emulsions. Alternative strategies employing covalent bonds are therefore to be sought. In particular, silane groups are frequently used as anchors for SiO\textsubscript{2}, while a raft of coordination chemistries exists for transition metal complexes. However, combining all these features into a single, facile, and robust protocol presents significant challenges. Here, we overcome these obstacles by means of a polymer bridge. This comprises an acrylamide polymer backbone functionalized with both nitrocatechols and silanes to provide multiple covalent linking sites that can stably bind the nanoparticles to the SiO\textsubscript{2} surface. We base our polymer bridge on poly(pentafluorophenylacetate) (pPFPAC), a post-modifiable polymer containing reactive ester linkages, which can be exchanged with amine-containing functional groups by nucleophilic substitution. To decorate our microparticles with various metal-oxide nanoparticles, we functionalize this pPFPAC backbone with: (i) N-boc-hexanediamine, (ii) nitrocatechols, and (iii) silane-based groups to obtain poly(acrylamide)-g-(1,6-hexanediamine, 3-aminopropyldimethylethoxysilane, nitrodopamine) \cite{Serrano2016}. The N-boc protecting group on the diamine is removed with TFA, exposing protonated amines on the polymer backbone. This electrostatic component promotes a favourable polymer conformation for binding to negatively charged inorganic surfaces. The silane groups covalently bind the polymer to the SiO\textsubscript{2} microparticle support via siloxane bonds, while nitrodopamine facilitates chelation to a range of metals, including titanium and iron oxides \cite{Gulley-Stahl2010,Xu2013}.
In this way, the multifunctional polymer acts as a bridge that provides anchoring between the cores of silica microparticles and oxide nanoparticles in a simple heteroaggregation process. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Figure1.pdf} \caption{Overview of nanoparticle attachment and combination with Pickering-wax emulsions to achieve Janus particles. a) Protocol to attach nanoparticles to microparticle supports from the sequential mixing of the polymer and nanoparticle suspensions. b) SEM image showing the absence of nanoparticles on SiO\textsubscript{2} without the intermediate polymer functionalization step (scale bar 3 $\mu$m). c,d) SEM images showing the successful attachment of TiO\textsubscript{2} P-25 to polymer-functionalized SiO\textsubscript{2} microparticles (scale bars 4 $\mu$m, 0.4 $\mu$m respectively). e) Loading curve of Ti/Si wt\% as a function of polymer added with constant TiO\textsubscript{2} concentration. The amount of added polymer is limited by its solubility in water. f) Loading curve of Ti/Si wt\% as a function of TiO\textsubscript{2} added with constant polymer concentration. g) Pickering-wax emulsion protocol to toposelectively attach pre-synthesized nanoparticles. Wax is used as a temporary mask to prevent full coverage of the SiO\textsubscript{2} microparticles. h) SEM image of SiO\textsubscript{2}-wax colloidosomes produced after emulsification (scale bar 5 $\mu$m). i) SEM image showing TiO\textsubscript{2} nanoparticle-functionalized SiO\textsubscript{2} microparticles on SiO\textsubscript{2}-wax colloidosomes (scale bar 1 $\mu$m). j,k) SEM images of Janus particles obtained after removal of the wax (scale bars 1 $\mu$m).} \end{figure} We optimize the attachment of uniform nanoparticle films focusing on TiO\textsubscript{2} P-25 as our benchmark nanoparticle system, in line with its status as a reference material in the field of photocatalysis. The process, schematized in Fig. 1a, starts with the activation of the surface of the SiO\textsubscript{2} microparticles by an initial cleaning step, using a hot ammonia and hydrogen peroxide bath to provide available hydroxyl groups to form siloxane bonds with the polymer. The polymer is first dispersed at varying concentrations at $50^\circ$C overnight. Cleaned particles are then added dropwise to the polymer solutions under magnetic stirring, and left to mix overnight at a final concentration of 0.1 w/v\%. After stirring, the SiO\textsubscript{2} is then washed by centrifugation with double-distilled water to remove excess polymer from solution. The SiO\textsubscript{2} particles retain a yellow color from the polymer, due to the presence of nitrodopamine. The functionalized polymer-SiO\textsubscript{2} particles are then redispersed in a pH 7.0 PBS buffer solution, adapting the protocol of Serrano et al. for flat substrates \cite{Serrano2016}. The particles obtain a pinkish hue, likely arising from conformational changes of nitrodopamine in alkaline environments \cite{Cooper1936}. TiO\textsubscript{2} P-25 at varying concentrations is then added dropwise to a stirred 0.1 w\% suspension of the polymer/SiO\textsubscript{2} microparticles and left mixing overnight. Finally, the P-25/SiO\textsubscript{2} microparticles are washed extensively with alternating sonication and centrifugation steps to remove any excess TiO\textsubscript{2} P-25 not bound to the SiO\textsubscript{2}. The produced P-25/SiO\textsubscript{2} microparticles are imaged using SEM.
We find that in the absence of the polymer functionalization step, no TiO\textsubscript{2} P-25 nanoparticles are bound to the SiO\textsubscript{2} microparticles after washing (Fig. 1b), even at the highest TiO\textsubscript{2} concentrations investigated. We also verify this with ICP-OES elemental analysis, and find negligible quantities of Ti in the sample (Fig. 1e), highlighting the effectiveness of the cleaning protocol and the requirement for the polymer to bind the TiO\textsubscript{2} P-25 nanoparticles (Figs. 1b-e). The TiO\textsubscript{2} loading can be effectively controlled by varying the amount of polymer and TiO\textsubscript{2} P-25 added, and is retained after washing (Figs. 1e,f). We observe a linear growth in the Ti/Si ratio with increasing polymer concentration (Fig. 1e), which is limited by the solubility of the polymer in water. The Ti/Si ratio would naturally saturate with increasing TiO\textsubscript{2} P-25 (Fig. 1f), but, at the highest TiO\textsubscript{2} concentrations we explore, aggregation of the nanoparticles starts to occur. \paragraph*{Toposelective nanoparticle attachment} Having established the success of the bulk surface modification, we combine it with a Pickering-wax emulsion approach to produce asymmetrically functionalized SiO\textsubscript{2} microparticles, en route to realizing photocatalytic microrobots \cite{Hong2006}. This strategy consists in decorating the surface of molten wax droplets in an aqueous medium with SiO\textsubscript{2} microparticles. The particles are irreversibly adsorbed at the water-wax interface and are immobilized when the wax solidifies upon cooling. The surface of the particles immersed in the wax is then protected from surface modifications that are carried out in the aqueous medium. In particular, we prepare our Pickering emulsions by adapting the methodology described by Perro et al. \cite{Perro2009}. Cleaned particles are dispersed in didodecyldimethylammonium bromide (DDAB) solutions with concentrations corresponding to an approximate surfactant monolayer coverage on all particles \cite{Kalai2019}. Wax is added to the suspension, which is heated to $75^\circ$C, and then subjected to a two-step vigorous stirring protocol \cite{Lebdioua2018} (see experimental section for more details). The hot emulsion is then rapidly cooled in an ice-bath to obtain solidified SiO\textsubscript{2}-wax colloidosomes (Figs. S1a-d). The colloidosomes are then washed consecutively by gravitational sedimentation with distilled water, a 0.1 M NaCl solution to remove the DDAB cationic surfactant, then water once more to remove the salt. At this stage, we add the multifunctional polymer to coat the exposed surface of the SiO\textsubscript{2} particles. The colloidosomes are dispersed in the polymer solution, gently agitated overnight with an orbital mixer, then washed once more with distilled water to remove excess polymer, redispersed in a pH 7.0 PBS solution, and gently mixed with the nanoparticles overnight. Finally, the colloidosomes are collected and dried, before mixing and filtering with chloroform to remove the wax (Fig. 1g). For straightforward functionalization, the colloidosomes should be denser than water (Fig. S3). Colloidosomes with a mean diameter of 33.4 $\mu$m are produced using 2.12 $\mu$m particles (Fig. S2). A particle concentration of 5 w/v\% in water is emulsified with wax in a 1:10 wax:water volumetric ratio.
The process can be readily tuned to produce colloidosomes with SiO\textsubscript{2} particles of varying sizes (Figs. S1a-d). Microparticle size provides a convenient handle on controlling physical properties of the final microrobots such as the rotational diffusion coefficient, which could be exploited for e.g. enhanced mixing \cite{Lin2011} or navigation \cite{Fernandez-Rodriguez2020}. However, unlike some previous reports, we were not able to tune particle penetration into the wax with surfactant concentration; instead, we found that the surfactant concentration only determined whether or not it was possible to obtain Pickering emulsions \cite{Kalai2019}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure2.pdf} \caption{HR-TEM and elemental mapping of asymmetric TiO\textsubscript{2} P-25 nanoparticle thin films on SiO\textsubscript{2} microparticles. a-c) HAADF-STEM and EDS mapping of Janus particles: the asymmetric functionalization with TiO\textsubscript{2} is clearly visible (scale bars 500 nm). d-f) HR-TEM of the nanoparticle films, indicating direct attachment of porous aggregate structures (scale bars 750 nm, 200 nm, 50 nm respectively).} \end{figure} Our process gives an approximate 50:50 TiO\textsubscript{2}/SiO\textsubscript{2} surface coating (Figs. 1i-k). Ti loading is also confirmed by ICP-OES, which gives an approximate 50\% coverage for the Janus microrobots, as observed with SEM (Figs. 1e,f). The value contrasts with the expected nanoparticle coverage based on the penetration of the SiO\textsubscript{2} microparticles into the wax (approximately $0.36R$ from direct measurement of the three-phase contact angle) and on previous findings using Pickering-wax emulsions \cite{Archer2018,Hong2006}. We hypothesize that the closely packed monolayers of SiO\textsubscript{2} form an effective barrier to the transport of TiO\textsubscript{2} P-25 nanoparticles to the regions of the silica surface that are not protected by the wax, thereby preventing a higher surface coverage. The rotated particles in Fig. 1i, likely a result of subsequent filtration steps after nanoparticle attachment, also suggest this shadowing effect. The Janus morphology is retained after the harsh cleaning protocols necessary to remove excess nanoparticles and wax, and the redispersal of the dried Janus microrobots in water (Figs. 1j,k), indicating the durability of the polymer linkage. We confirm the Janus distribution of TiO\textsubscript{2} by elemental mapping, namely HAADF-EDS (Figs. 2a-c). Using TEM, we are also able to visualise the morphology of the thin nanoparticle films. The thin films are networks of attached nanoparticle clusters formed from multiple TiO\textsubscript{2} primary particles rather than individual nanoparticle structures. This is in agreement with the expected morphology of commercial TiO\textsubscript{2} P-25 \cite{Ohno2001}. The formation of such porous nanoparticle structures is favourable due to their enhanced surface area compared to dense films \cite{Gao2015}, thereby increasing catalytic activity and thus swimming speeds \cite{Choudhury2015}. Utilizing nitrocatechol chelation chemistry extends the applicability of our polymer-based nanoparticle attachment to a range of metal oxides, and could therefore be exploited to obtain composite thin films of functional nanoparticles on microparticle supports.
To this end, and to demonstrate the generality of our method, we also attach different phases of TiO\textsubscript{2} (amorphous, anatase, and rutile), Fe\textsubscript{2}O\textsubscript{3}, and SrTiO\textsubscript{3} using the same protocols developed for TiO\textsubscript{2} P-25 (Fig. 3). Fe\textsubscript{2}O\textsubscript{3} imparts both photocatalytic and magnetic properties, and could be combined with TiO\textsubscript{2} P-25 for enhanced speeds \cite{Maric2020} and controlled steering \cite{Sridhar2020a}. SrTiO\textsubscript{3} is a perovskite photocatalyst widely studied for its water-splitting properties \cite{Goto2018}, and therefore is promising as the basis for a fuel-free microswimmer. Toposelective nanoparticle attachment is thus a promising modular route to obtaining large quantities of microrobots with a range of functionalities that can be tuned by selection of the starting materials. \begin{figure} \centering \includegraphics[width=\textwidth]{figure3.pdf} \caption{Symmetric and asymmetric functionalization of SiO\textsubscript{2} microparticles with various commercial, pre-synthesized nanoparticles, highlighting the versatility of the proposed approach (scale bars 500 nm).} \end{figure} \paragraph*{Microrobots from micro- and nano-particles swimming in 3D} To investigate the autonomous, photocatalytic motion of our TiO\textsubscript{2} P-25 microrobots, we perform single-particle tracking studies with bright-field microscopy, using in-house particle-tracking scripts (Fig. S8), under different illumination conditions and H\textsubscript{2}O\textsubscript{2} concentrations. TiO\textsubscript{2} is known to degrade H\textsubscript{2}O\textsubscript{2} under UV light, and the Janus distribution of TiO\textsubscript{2} P-25 nanoparticles on the SiO\textsubscript{2} core's surface thus leads to the formation of asymmetric gradients around the microswimmers. These gradients in turn generate flow fields, which result in self-diffusiophoresis of the particles \cite{Anderson1989} (Figs. 4a,b). We first confirmed the photo-responsive behavior of the microrobots by alternating off-on UV illumination cycles and found that the UV illumination is a necessary pre-condition for motion (Figs. 4c,d), with no evidence of memory or photo-charging effects \cite{Sridhar2020}. We then evaluated the instantaneous velocities of the microrobots under different illumination strengths and wavelengths (Fig. S8). The trajectories of microswimmers are typically described with a 2D active Brownian motion model \cite{Dietrich2017}, where a constant propulsion velocity is randomized in 2D by rotational diffusion. More recently, there has also been a focus on fabricating and studying microswimmers with unbounded 3D active motion \cite{Yasa2018}. However, our microrobots demonstrate more complex behavior, which does not follow the 2D and 3D active Brownian motion equations \cite{Loewen2020}. Specifically, their motion is mostly confined to the 2D plane, with interspersed short and rapid periods of out-of-plane motion (Figs. 4e,h). The predominantly 2D motion may be explained by hydrodynamic interactions with the glass substrate, which favor in-plane motion \cite{Uspal2015a}. Competing effects from out-of-plane rotational diffusion and angular velocity arising from a non-uniform nanoparticle coating and wall-induced flows \cite{Ruhle2018} could cause the observed random out-of-plane ballistic segments. 
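For orientation, the 2D active Brownian motion model referenced above couples a constant propulsion speed to rotational diffusion of the swimming direction. A minimal Python sketch of such a trajectory follows; the speed and diffusion coefficients are illustrative assumptions rather than values fitted to our data.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions, not fitted values)
v   = 6.5e-6    # propulsion speed [m/s]
D_t = 2.0e-13   # translational diffusion coefficient [m^2/s]
D_r = 0.15      # rotational diffusion coefficient [rad^2/s]
dt, n_steps = 0.1, 3000

rng = np.random.default_rng(0)
x, y = np.zeros(n_steps), np.zeros(n_steps)
theta = rng.uniform(0.0, 2.0 * np.pi)
for i in range(1, n_steps):
    # The orientation is randomized by rotational diffusion ...
    theta += np.sqrt(2.0 * D_r * dt) * rng.standard_normal()
    # ... while the position advances at constant speed along the
    # current orientation, plus translational Brownian noise.
    x[i] = x[i-1] + v * np.cos(theta) * dt \
           + np.sqrt(2.0 * D_t * dt) * rng.standard_normal()
    y[i] = y[i-1] + v * np.sin(theta) * dt \
           + np.sqrt(2.0 * D_t * dt) * rng.standard_normal()
\end{verbatim}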
To characterise this 3D motion, we first confirm that in the absence of chemical fuel the particles are able to rotate in 3D (Video S2) and measure their rotational diffusion in solvents of varying viscosity in line with theory (Table S1) \cite{Anthony2006}. We then measure their 3D active trajectories via a simple approach making use of the changing diffraction patterns of the particles as they swim in and out of the focal plane (see Supplementary Text for details). \begin{figure} \centering \includegraphics[width=\textwidth]{figure4.pdf} \caption{Overview of particle motion under UV illumination in fuel-rich media. a) Schematic of particle motion. Under the decomposition of hydrogen peroxide, the microrobots swim with the functionalized cap forwards. b) Wide-field image with superimposed trajectories (shown for 3 s) under UV illumination (scale bar 20 $\mu$m). c) Distribution of instantaneous velocities during off (black) and on (violet) cycles. d) Mean instantaneous velocities of particles during alternating off-on cycles of UV illumination. e) Example of a microrobot trajectory in 3D, with the magnitude of the velocity vector color-coded to illustrate the occurrence of an orientation-dependent velocity. f) Plot of particle velocity vs. out-of-plane orientation angle. Particles swim faster out of plane, and the fastest motion is observed when the particle swims towards the glass substrate from above ($\phi = -90^\circ$). g) Distribution of out-of-plane orientation angles ($\phi$) across all particle trajectories. h) Distribution of displacements for all particles in x, y, and z. The distributions in x and y (in-plane) are Gaussian and similar, while the distribution of z displacements shows pronounced tails. Error bars correspond to 95\% confidence intervals, which were obtained with bootstrapping.} \end{figure} From the analysis of individual 3D trajectories, like the one reported in Fig. 4e, we measure median instantaneous velocities ranging from 2--13 $\mu$m\,s\textsuperscript{-1} on a per particle basis, and note that the particles swim cap first (Fig. 4a, Video S1). Under the conditions found to maximize swimming speeds (3v\% H\textsubscript{2}O\textsubscript{2}, 340 nm, 9.4 mW\,mm\textsuperscript{-2}), we find a median instantaneous velocity of approximately 6.5 $\mu$m\,s\textsuperscript{-1} on a per particle basis (Fig. S8). Lowering the fuel concentration leads to lower swimming speeds \cite{Howse2007}, and we also note a difference in swimming speed depending on the wavelength of illumination used. By plotting the instantaneous velocities as a function of particle orientation, we observe a significant asymmetry in the particle motion (Figs. 4f-h). In particular, the microrobots exhibit orientation-dependent velocities, displaying faster motion as their orientation is increasingly directed out of plane. The effect persists over multiple frames, indicating that it is not a tracking artifact. Moreover, contrary to expectations where shadowing dictates particle motion \cite{Singh2018}, the observed faster segments are not uni-directional. Furthermore, the previously discussed on-off photo-responsiveness of the particles excludes a memory or charging effect \cite{Sridhar2020}, which might have explained the bi-directional fast swimming segments (Figs. 4c,d). 
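As context for the Table S1 comparison, the theoretical benchmark for the rotational diffusion of a sphere is the Stokes-Einstein-Debye relation, $D_r = k_B T/(8\pi\eta R^3)$. A minimal Python sketch follows; the temperature and the viscosity values are illustrative assumptions.
\begin{verbatim}
import numpy as np

k_B, T = 1.380649e-23, 298.15   # Boltzmann constant [J/K], temperature [K]
R = 0.5 * 2.12e-6               # particle radius [m], for 2.12 um spheres

def rotational_diffusion(eta):
    """Stokes-Einstein-Debye: D_r = k_B*T / (8*pi*eta*R^3), in rad^2/s."""
    return k_B * T / (8.0 * np.pi * eta * R**3)

for eta in (0.89e-3, 1.5e-3, 3.0e-3):  # solvent viscosities [Pa s], assumed
    print(f"eta = {eta:.2e} Pa s  ->  D_r = "
          f"{rotational_diffusion(eta):.3f} rad^2/s")
\end{verbatim}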
Based on the fact that the transmission of 340 nm wavelength light through 2 $\mu$m fused SiO\textsubscript{2} is on the order of 90\% (even after accounting for Fresnel losses), we hypothesise that the SiO\textsubscript{2} core does not block the UV light. This implies that even when the catalytic cap is completely ``shadowed'', upwards swimming is observed, and we attribute the fastest motion towards the substrate (cap down) to the direct illumination of the catalytic cap and the concurrent effect of gravity. The slower in-plane motion could be caused by the orientation of the propulsion direction, and the near-wall hydrodynamic interactions which increase drag forces \cite{Goldman1967}. Although statistics for 2D active Brownian trajectories with orientation-dependent velocities have been reported \cite{Sprenger2020}, their extension to the 3D case reported here presents potential for future theoretical developments. \section*{Discussion} We have demonstrated a modular approach to fabricate large (100 mg) quantities of photo-responsive microrobots from the asymmetric attachment of commercial nanoparticles. Our method for toposelective nanoparticle attachment provides a versatile material platform which can be extended to various functional nanoparticles. We envision that it will offer an off-the-shelf modular approach to obtain large quantities of Janus particles with mixed composite films for a range of applications. The described protocol does not require specialised equipment beyond that found in typical synthesis laboratories, lending itself to widespread application. Moreover, the motion of our photo-responsive microrobots highlights several avenues for further research. Analysis of the observed 3D swimming behaviour requires a new theoretical framework not currently found in the literature. We also note that the TiO\textsubscript{2} P-25 microrobots self-propel more slowly at higher illumination wavelengths, which we attribute to TiO\textsubscript{2}'s large energy band gap. This demonstrates the opportunities to exploit the wealth of catalysis literature \cite{Etacheri2015} to inform the design of microrobots with desirable attributes (e.g. with faster swimming speeds or visible light activation) by appropriate selection of nanocatalytic materials. Finally, the ballistic out-of-plane motion of our photocatalytic microrobots suggests a photoreactor design where the competition between gravity and activity is exploited. Realizing motion control in 3D, for example by dynamic light patterning \cite{Arrieta2019}, could induce complex flows, which in turn can enhance overall reactor efficiencies in traditionally difficult-to-mix settings. Such control would not only be favorable for mixing in microfluidic channels \cite{Ward2015}, which have similar dimensions to the observed z displacements of the microswimmers, but also in flat-panel reactors where mass transfer is a key limiting factor \cite{Takanabe2017}. Increased reaction rates arising from microswimmer motion have been previously demonstrated \cite{Orozco2013}, but applicability has been limited by materials and scalability. By targeting societally relevant reactions, such as water splitting, we hypothesise that scalable microrobots could be exploited in a novel photoreactor concept where the particles possess a dual catalyst-stirrer functionality. 
Therefore, incorporating aspects of chemical reaction engineering and soft matter physics could help overcome the four-phase mass-transfer limitations inherent to current photocatalytic systems. \section*{Materials and Methods} \paragraph*{Nitrodopamine Synthesis} Nitrodopamine was synthesized following well-established protocols \cite{Napolitano1992}. Briefly, dopamine (5 g, 32.6 mmol) and sodium nitrite (6.3 g, 91.3 mmol) were dissolved in 150 mL water and cooled to $0^\circ$ C under stirring in an ice bath. 25 mL of sulfuric acid (20 v/v\%) was added dropwise and the mixture was left stirring at room temperature overnight. The product mixture was cooled once more to $0^\circ$ C, filtered, and washed with copious amounts of double-distilled water at $0^\circ$ C and then ethanol at $0^\circ$ C. The resulting nitrodopamine hydrogen sulfate was then dried under high vacuum overnight. \paragraph*{poly(acrylamide)-g-(1,6-hexanediamine, 3-aminopropyl-dimethyloxysilane, nitrodopamine) synthesis} Synthesis of the pentafluorophenyl acrylate monomer and its polymerization were performed following previously published protocols \cite{Serrano2016}. Briefly, pentafluorophenol (87.21 g, 0.47 mol) was dissolved in 150 mL of dichloromethane (DCM) at $0^\circ$ C and 2,6-dimethylpyridine (60.55 mL, 0.52 mol) was added slowly through a dropping funnel, which was afterwards rinsed with another 150 mL of DCM. This second portion was added to the reaction mixture. Acryloyl chloride (42.14 mL, 0.52 mol) was then added dropwise to the reactor, still under cooling, and left to react overnight under N\textsubscript{2} atmosphere at room temperature. The resulting 2,6-dimethylpyridine hydrochloride salt was removed by filtration and the residual solution was washed three times with 100 mL of water, dried with magnesium sulfate, and the solvent evaporated under reduced pressure. The product monomer was purified by distillation (in two portions) under reduced pressure (10 mbar) to give a colorless liquid (97.09 g, 78\%). The monomer pentafluorophenyl acrylate (14.31 g, 60.13 mmol), the initiator AIBN (23.83 mg, 0.15 mmol) and the chain-transfer agent 2-(dodecylthiocarbonothioylthio)-2-methylpropionic acid (158.45 mg, 0.43 mmol) were dissolved in 15 mL of toluene inside a Schlenk tube. The solution was degassed via three freeze-pump-thaw cycles and left to react under a nitrogen atmosphere at $80^\circ$C in an oil bath for 18 h. After the RAFT polymerization was completed, the mixture was left to cool to room temperature and the resulting polymer (pPFPAC) isolated by precipitation in methanol and dried under vacuum for 48 h (12.90 g, 90\%). Likewise, the post-modification steps were carried out as outlined in the work by Serrano et al., with the exception that the (poly)pentafluoroacetate (pPFPAC) backbone was not first PEGylated, and instead was only post-modified with the binding side groups (N-boc-hexanediamine, 3-aminopropyldimethylethoxysilane, nitrodopamine). Briefly, N-boc-hexanediamine (227 mg, 1.05 mmol) was dissolved in 6.4 mL dimethylformamide (DMF) with an excess of triethylamine (318 mg, 3.15 mmol). The mixture was added dropwise to pPFPAC (500 mg, 2.1 mmol) dissolved in 5.07 mL DMF and left stirring overnight at $50^\circ$ C. A new solution containing 84.5 mg (0.525 mmol) of 3-aminopropyldimethylethoxysilane and triethylamine (160 mg, 1.58 mmol) was dissolved in 7.4 mL of DMF and added dropwise to the previous solution, still at $50^\circ$ C and under stirring overnight. 
An excess of nitrodopamine was dissolved separately (154.4 mg, 0.525 mmol) in 7.4 mL of DMF with 160 mg of triethylamine (1.58 mmol). The latter solution was added slowly to the polymer solution and left stirring overnight at $50^\circ$ C. DMF was evaporated under reduced pressure, the mixture re-dissolved in DCM (40 mL, 4 equivalents) and trifluoroacetic acid (TFA, 10 mL, 1 equivalent) and left to react under stirring overnight. The resulting mixture was again evaporated under reduced pressure and re-dissolved in twice-distilled water (80 mL). This solution was purified by dialysis against water for two days using a membrane with a MWCO of 3,500 Da and subsequently freeze-dried to obtain the yellow-brown poly(acrylamide)-g-(1,6-hexanediamine, 3-aminopropyl-dimethyloxysilane, nitrodopamine). \paragraph*{Polymer-assisted nanoparticle attachment onto microspheres} Polymer solutions were prepared by dispersing the dry poly(acrylamide)-g-(1,6-hexanediamine, 3-aminopropyl-dimethyloxysilane, nitrodopamine) in $50^\circ$ C water overnight under magnetic stirring. The maximum polymer concentration is limited by the solubility of the polymer (approximately 60 mg/L). 1 w/v\% SiO\textsubscript{2} microparticle suspensions were first added to a bubbling $70^\circ$ C H\textsubscript{2}O\textsubscript{2}/NH\textsubscript{4}OH solution (1:1:1 volumetric ratio) under magnetic stirring for 10 minutes to activate the SiO\textsubscript{2} surface with reactive hydroxyl groups. The cleaned particles were then added dropwise under magnetic stirring to the prepared polymer solutions and left stirring overnight (final SiO\textsubscript{2} concentration 0.1 w/v\%). The polymer-SiO\textsubscript{2} particles were then washed by centrifugation to remove excess polymer and redispersed in phosphate-buffered saline (PBS pH 7.0). Nanoparticle suspensions of varying concentrations (in PBS pH 7.0 media) were then added dropwise to the polymer-SiO\textsubscript{2} suspensions under magnetic stirring and left mixing overnight (final SiO\textsubscript{2} concentration 0.1 w/v\%). Finally, the nanoparticle functionalized SiO\textsubscript{2} microparticles were washed extensively with alternating sonication and centrifugation steps to remove any excess nanoparticles not bound to the SiO\textsubscript{2} microparticles. \paragraph*{Microrobot Fabrication} Wax-SiO\textsubscript{2} Pickering Emulsions were prepared by adapting the methodology used by Perro et al. \cite{Perro2009}. Suspensions containing 5 w/v\% SiO\textsubscript{2} particles, 10.8 mg/L didodecyldimethylammonium bromide (DDAB) \cite{Lebdioua2018}, and a 1:10 molten wax:water volumetric ratio were heated to $75^\circ$ C, then stirred for 15 minutes at 3000 RPM before vigorous mixing at 13500 RPM for 160 s using an IKA T-25 Digital Ultraturrax \cite{Chu2020}. After the emulsification step, the Pickering emulsion was immediately placed in an ice bath to rapidly solidify the colloidosomes. The emulsion was then washed in a 0.1 M NaCl solution to remove surfactants, before further washing in deionized water. The SiO\textsubscript{2}-Wax colloidosomes were dispersed overnight by gentle agitation in an aqueous solution of a post-modified (poly)pentafluoroacetate (pPFPAC) polymer. The pPFPAC-colloidosomes were then washed thoroughly in deionized water before redispersion in a phosphate-buffered saline (PBS) pH 7.0 suspension containing the functional metal-oxide nanoparticles. 
After gentle mixing overnight, the nanoparticle functionalized colloidosomes were collected by filtration and the wax was finally removed with chloroform. \paragraph*{Light-controlled motion experiments} Stock H\textsubscript{2}O\textsubscript{2} (30 v/v\%, manufacturer) was added to dilute particle suspensions of the microrobots to obtain 300 $\mu$L of the desired H\textsubscript{2}O\textsubscript{2} concentrations. 280 $\mu$L thereof was then pipetted into a flow-through cell (cell 137-QS; Hellma Analytics) with a light path length of 1 mm. Particles were imaged on an inverted microscope under Köhler illumination using a 40x objective (CFI S Plan Fluor ELWD 40XC) with adjustable collar (set to 1 mm), and videos were taken at 10 fps on a Hamamatsu C14440-20UP digital camera. UV illumination of the particles (340/380 nm) was achieved using a Lumencor SPECTRA X light engine as the excitation source through the objective. \paragraph*{Particle Tracking} Videos were first pre-processed with Fiji (ImageJ) for conversion to 8-bit and cropped to obtain one particle per field of view. Particle centres were tracked using a custom script combining the MATLAB implementation of the Crocker and Grier IDL particle tracking method \cite{Crocker1996}, and the radial symmetry approach outlined by Parthasarathy \cite{Parthasarathy2012}. The first invariant moment (inertia) of masks around the particle centres was determined using a MATLAB implementation of Hu's 7 invariant moments formulation \cite{Hu1962}. Inertia look-up-tables (LUTs) were first obtained for stationary and diffusive particles. The evolution of inertia with Z was then inverted, before fitting of a cubic polynomial using an inbuilt MATLAB non-linear regression function (nlinfit). From this cubic functional form, the Z values of active particles could be determined with prediction intervals from the inertia of their masks (see Supplementary Text for more details).
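The calibration underlying this $z$-localization can be summarized with a minimal Python sketch, with \texttt{numpy.polyfit} standing in for MATLAB's \texttt{nlinfit}; the calibration curve below is a synthetic stand-in for the measured inertia look-up-tables.
\begin{verbatim}
import numpy as np

# Synthetic stand-in for the stationary-particle LUT: mask inertia
# (first invariant moment) recorded at known focal offsets z.
z_cal = np.linspace(-10.0, 10.0, 41)                   # offsets [um]
inertia_cal = 0.35 + 0.02 * z_cal + 1e-4 * z_cal**3    # monotonic toy curve

# Invert the LUT by fitting a cubic polynomial z(inertia)
coeffs = np.polyfit(inertia_cal, z_cal, deg=3)
z_of_inertia = np.poly1d(coeffs)

# Estimate z for newly measured mask inertias of an active particle
inertia_meas = np.array([0.30, 0.36, 0.42])
print(z_of_inertia(inertia_meas))
\end{verbatim}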
\section{Introduction} Focused ion beam scanning electron microscopy (FIB-SEM) is an automated imaging technique capable of probing structure on the micrometer to nanometer scale. FIB-SEM consists of dual beam instrumentation in which a scanning electron microscope works in combination with high-energy ion beams that mill away, \emph{in situ}, a resin-encased specimen. In this setup, the microscope directly observes the newly revealed ultrathin layer of the sample as it is milled away. The serial images are then collected and aligned to produce a 3D volume image stack. This approach led to unprecedented $z$-axis resolution in the range of tens of nanometers, resulting in isotropic (i.e. $z$-axis sectional thickness is equal to $x$- and $y$-axis pixel size), full image resolutions and user-specified site-specific selections of 4\si{\nano\meter}$^3$-sized voxels, achieved in a pioneering study of aldehyde-fixed mouse brain tissue images \cite{wu2017contacts, xu2021open, heinrich2021whole}. This represents an order of magnitude improvement in $z$-axis resolution over the best current SEM-only techniques, which alternate between imaging and cutting away the surface with diamond-tipped knives \cite{hayworth2006automating, wanner2015challenges}. These and similar cutting strategies were significant advancements for biological imaging, representing the first instances of a fully automated, \emph{in situ} volumetric EM approach performed by cutting material layers off \cite{denk2004serial}. However, serial block-face methods lose consistency when cutting less than 25\,\si{\nano\meter} deep \cite{nalla2005ultrastructural, giannuzzi1999review}, thus limiting both resolution and the acquisition volume. In replacing the diamond knife with the ion beam, FIB-SEM approaches, but does not reach, the state-of-the-art resolution for nanometer-scale 3D imaging of $\sim$1\si{\nano\meter}-sized voxels, achieved by transmission electron microscopy (TEM) where electrons are transmitted through the medium or sample. State-of-the-art cryo-TEM imaging is typically limited to sample thicknesses of less than $\sim$ 300 \si{\nano\meter}. FIB-SEM, however, allows for far greater imaging depth and unparalleled levels of automation, constrained only by the availability of material, time, and operational costs. Before being applied in biological settings, FIB-SEM was first proposed and implemented in soft matter physics and has long proven invaluable to material scientists, leading to effective analysis and classification of microstructures in porous media \cite{holzer2004three, kelly2016assessing, wu2019analyses, gostick2019porespy}, characterization of nanopores in coal and other reservoir rocks \cite{fang2019methodology, garum2020micro}, the reconstruction of polymer films \cite{vcalkovsky2021comparison, roding2021three}, and better optimization and design of controlled drug release coatings \cite{fager2020optimization}. FIB-SEM dual beam setups have since been explored for biological imaging, beginning with a FIB used as a cutting device for exposing tissue and gland cells \emph{in situ} to subsequent SEM imaging \cite{drobne2004focused}. Though imaging resolution was limited to relatively shallow depths on the scale of dozens of micrometers, this was a major step forward for FIB-SEM technology and marked the beginning of modern volumetric microscopy \cite{kizilyaprak2019volume}. 
More recently, research developments in FIB-SEM methodology have matured the technique into an essential tool in the study of cellular biology \cite{hoffman2020correlative, muller20213d, weigel2021er, kizilyaprak2019volume, drobne2004focused, drobne2005electron, vidavsky2016cryo}. These developmental breakthroughs include correcting for anisotropic data slicing with optical flow interpolation \cite{gonzalez2022optical}, accelerating image acquisition times via adaptive sampling that leverages multiple low-dose, quicker scans \cite{dahmen2016feature}, software improvements incorporating back scattered electron detectors with positive sampling bias \cite{xu2017enhanced}, and considerable improvements in hardware and reliability of continuous, long-term operational sessions, upwards of several months \cite{xu2017enhanced, hayworth2015ultrastructurally}. In total, the improvements in FIB-SEM imaging have led to a vast increase in the amount of high quality cellular data to analyze. This is perhaps best exemplified in the recent open-source availability on the web repository ``OpenOrganelle'' \cite{heinrich2021whole} of numerous whole cell atlas and tissue 3D images of 4\,\si{\nano\meter} isotropic voxel resolution, with some volumes measuring greater than $100,000$ \si{\micro\meter}$^3$ in size \cite{xu2021open}. While FIB-SEM data requires little in terms of post-processing and alignment, the sheer amount of data produced is a major bottleneck as the ever-increasing need for laborious, manual curation of expert annotations and analyses is only exacerbated. To overcome this barrier, there has been a major push towards automation in image analysis pipelines which will be crucial in furthering the understanding of cellular structures and subcellular components \cite{perez2014workflow}. The rapid advances in machine learning approaches can not only accelerate microscopy data analysis but also offer researchers tools that remain significantly more tolerant to noise than traditional computer vision techniques \cite{andrew2018quantified}. Recent applications of machine learning in electron tomography have helped identify ribosomes, proteomes, mitochondria, Golgi, and other contrast rich objects \cite{zeng2019aitom, bauerlein2021towards, li2019automatic, gubins2020shrec, moebel2021deep, de2022convolutional}. Chromatin structures in the nucleus, however, have not been extensively investigated, either with traditional user-dependent approaches or with machine learning. In this study, we apply machine learning tools for the automated delineation and pixel-by-pixel segmentation of intracellular structures in FIB-SEM-acquired image volumes of {\it Caenorhabditis elegans} ({\it C. elegans}) reproductive eggs. The data is considerably large and detailed, with a FIB-SEM state-of-the-art resolution of $4\times4\times4$ \si{\nano\meter}$^3$ for volumes measuring upwards of $11,250$ \si{\micro\meter}$^3$. This resolution and volume allow us to identify and segment \emph{all} sub-nuclear cellular structures, including the nucleolus, chromosomes, chromosome-encased synaptonemal complexes, and nuclear membrane. Machine learning network classifier training is performed on a small subset of manually curated labels consisting of only 0.1\% of \emph{all} pixels in the full dataset. To the authors' best knowledge, this full nuclear segmentation has not been performed at this resolution in an automated fashion. 
The remainder of this manuscript is organized as follows: Sect.~\ref{sect:data} describes the {\it C. elegans} sample preparation and imaging process; Sect.~\ref{sect:methods} details the data pre-processing, neural networks, training parameters, and evaluation metrics used for analysis; Sect.~\ref{sect:workflow} walks the reader through the network training and image segmentation workflow; Sect.~\ref{sect:results} presents machine learning-based segmentation results; and Sect.~\ref{sect:discussion} offers a discussion on future works and the scope of the manuscript. \section{The Data} \label{sect:data} The data used in this study are volumetric images of {\it C. elegans} gonads at three different stages of meiosis I pachytene: early, mid, and late. {\it C. elegans} is generally seen as an exemplary and prototypical organism in the investigation of developmental biology \cite{brenner1974genetics, hodgkin1977mutations}. {\it C. elegans} was in fact the first multicellular organism to have its entire genome sequenced \cite{coulson1986toward, c1998genome}. Studies of {\it C. elegans} have helped behavioral scientists map the neural circuitry that controls touch-induced locomotion \cite{chalfie1985neural}, deduce the functions of certain touch circuitry neurons \cite{chalfie1985neural}, investigate clues related to the evolutionary development of the circadian clock \cite{banerjee2005developmental}, and discover nested neurological dynamics/activity patterns that govern a behavioral hierarchy of motor actions across multiple time scales \cite{kaplan2020nested}. {\it C. elegans} remains the only organism to have the \emph{entirety} of its nervous system, also known as the connectome, mapped out \cite{white1986structure, towlson2013rich, cook2019whole}. \subsection{FIB-SEM Sample Preparation} One Durcupan-embedded {\it C. elegans} gonad sample was first mounted to the top of a 1 mm copper post, which was in contact with the metal-stained sample for better charge dissipation, as previously described in \cite{xu2017enhanced}. The vertical sample post was first trimmed to a small block of $95 \times 80 \times 150$ \si{\micro\meter}$^3$ containing two Regions of Interest (ROI 1-2) from top to bottom. The sample block has a width perpendicular to the ion beam, and a depth in the direction of the ion beam. After FIB-SEM imaging of ROI1 and ROI2, the sample was then trimmed to a second block of $80 \times 80 \times 100$ \si{\micro\meter}$^3$ containing ROI3. The trimming was guided by X-ray tomography data obtained by a Zeiss Versa XRM-510 and optical inspection under a microtome. Thin layers of conductive material of 10-nm gold followed by 100-nm carbon were coated on the trimmed samples using a Gatan 681 High-Resolution Ion Beam Coater. The coating parameters were 6 keV, 200 nA on both argon gas plasma sources, 10 rpm sample rotation with 45-degree tilt. \subsection{FIB-SEM 3D large volume imaging} One FIB-SEM prepared {\it C. elegans} gonad sample was imaged sequentially by a customized Zeiss FIB-SEM system (Gemini 500) previously described in \cite{xu2017enhanced}, \cite{xu2020enhanced}, and \cite{xu2021open}. The block face of each ROI was imaged by a 250 pA electron beam with 0.9 keV landing energy at 200 kHz scanning rate. The $x$-$y$ pixel resolution was set at 4 nm. A subsequently applied focused Ga+ beam of 15 nA at 30 keV strafed across the top surface and ablated away 4 nm of the surface. The newly exposed surface was then imaged again. 
The ablation-imaging cycle continued about once every 75 seconds for 9 and 6 days to complete FIB-SEM imaging of ROI1 and ROI2, respectively. This cycle was extended to once every 135 seconds for 14 days to image ROI3. The acquired image stack formed a raw imaged volume, followed by post-processing of image registration and alignment via local feature matching using a Scale Invariant Feature Transform (SIFT)-based algorithm \cite{lowe1999object, lowe2004distinctive, yang2018high}. The aligned stack consists of final isotropic volumes of $10 \times 20 \times 30$ \si{\micro\meter}$^3$, $10 \times 20 \times 20$ \si{\micro\meter}$^3$, and $25 \times 15 \times 30$ \si{\micro\meter}$^3$ for ROI1, ROI2, and ROI3, respectively. The voxel size of $4 \times 4 \times 4$ \si{\nano\meter}$^3$ was maintained for each sample throughout the entire volumes, which can be viewed in arbitrary orientations. \begin{figure}[!htb] \minipage{0.33\textwidth} \centering \includegraphics[width=5cm,height=10cm]{images/ROI1.png} \endminipage\hfill \minipage{0.33\textwidth} \centering \includegraphics[width=5cm,height=10cm]{images/ROI2.png} \endminipage\hfill \minipage{0.33\textwidth}% \centering \includegraphics[width=5cm,height=10cm]{images/ROI3.png} \endminipage \caption{Individual FIB-SEM cross sectional slices of \emph{Caenorhabditis elegans}. Pictured are (a) ROI1, (b) ROI2, and (c) ROI3 regions of interest.} \label{fig:fib-sem} \end{figure} \section{Methods}\label{sect:methods} \subsection{Data Preprocessing} \label{sect:preprocessing} The FIB-SEM images are stored in TIFF format. ROI1 has 6801 color-scale images of size $5000\times2500\times3$, ROI2 has 5000 images of the same size, and ROI3 has 8837 images of size $3750\times6250\times3$. The nuclei are extracted sequentially from each TIFF file using traditional computer vision techniques. Since the nuclear membrane is not continuous due to the presence of nuclear pore complexes, which show different contrast compared to the membrane, contour detection cannot be applied directly to the raw images to isolate the nuclei from the 3D volume stack. Instead, we perform image pre-processing by first minimizing image noise using a Gaussian blur with an $11\times11$ kernel. Gaussian blur enhances image structure by smoothing out pixel intensities \cite{singhal2017study}, as pixel intensity and brightness are not uniform across the entirety of an image. Second, we perform Gaussian adaptive thresholding to help alleviate remaining inhomogeneous pixel intensities. Thresholding helps extract the margins of nuclei by producing a binary image of the nuclei outline. To fill the gaps in the nuclei boundaries, we first create a $7\times7$ elliptical kernel, which is then used to calculate the morphological gradient \cite{serra1982image, rivest1993morphological}; this operation highlights stark contrasts in neighboring pixel intensities to form object outlines using the difference between the dilation and erosion morphological operators \cite{lee1987morphologic, na2019filter}. The complete outlines will be fully recognized once the broken edge gaps have been filled. To accomplish this, the approximation method is set to chain approximation, and the retrieval method is set to RETR\textunderscore TREE. The chain approximation method used here stores all the boundary points of the contour \cite{etemadi1992robust, akinlar2011edlines}; a minimal sketch of this pre-processing chain is given below. 
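The sketch below uses Python/OpenCV and follows the kernel sizes stated above; the adaptive-threshold block size and offset, the contour-area bounds, and the file name are illustrative assumptions.
\begin{verbatim}
import cv2

gray = cv2.imread("slice_0001.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical name

# 1) Suppress noise with an 11x11 Gaussian blur
blur = cv2.GaussianBlur(gray, (11, 11), 0)

# 2) Gaussian adaptive thresholding against inhomogeneous intensities
bw = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY_INV, 51, 2)

# 3) Morphological gradient (dilation minus erosion), 7x7 elliptical kernel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
grad = cv2.morphologyEx(bw, cv2.MORPH_GRADIENT, kernel)

# 4) Full contour hierarchy; CHAIN_APPROX_NONE keeps all boundary points
contours, hierarchy = cv2.findContours(grad, cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_NONE)

# 5) Keep roughly nucleus-sized outlines (area bounds are assumptions)
nuclei = [c for c in contours if 5000 < cv2.contourArea(c) < 500000]
\end{verbatim}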
The tree retrieval method retrieves all of the contours by traversing over the `chains' of boundary pixels, then reconstructs a full hierarchy of nested contours. These are then filtered to recover elliptical outlines within a specific size range. After filtering, the centroid of each observed contour is used to cluster nuclei occurring at the same location. Throughout the process of clustering nuclei based on position, a mapping of nuclei position and ID is retained. All of the nuclei collected are stored as JPEG files with a constant size of $1700\times1700$ pixels. \begin{figure}[!htb] \centering \minipage{0.5\textwidth} \centering \includegraphics[width=7cm,height=7cm]{images/orgimg.png} \endminipage \minipage{0.5\textwidth} \centering \includegraphics[width=7cm,height=7cm]{images/annotations.png} \endminipage \caption{(a) Image of an extracted nucleus from ROI1 and (b) corresponding annotations. Here, green represents the nucleolus, blue represents the synaptonemal complex, and yellow represents the chromosome annotations. }\label{fig:annotations} \end{figure} Figure~\ref{fig:annotations} shows the manual annotations that are created for one of the ROI1 images. The annotations are created using the apeer web application (CITE apeer). All nucleoli are labeled as 1 (green), chromosomes as 2 (yellow), and the synaptonemal complex as 3 (blue), with the background as 0 for every image. For the training data, 260 images are randomly sampled and annotated from the generated images of nuclei of ROI1. The generated annotation files are shuffled and processed to generate new labels for the rest of the nuclei to improve the performance of the network. 10\% of the data is used as a testing set for cross-validation purposes. \subsection{Convolutional Neural Networks} Convolutional neural networks (CNNs) are feed-forward, deep learning architectures made up of several connected convolutional layers \cite{fukushima1982neocognitron, lecun1998gradient, simard2003best}. CNNs approximate some underlying unknown function that maps input data to some target domain. In the case of this study, the mapping to be approximated is that of raw input image data to the classification of each pixel to a particular label (chromosome, nucleolus, etc.); i.e. supervised semantic segmentation. As information passes through the network, each convolutional layer convolves the preceding layer's output with an increasing number of two-dimensional convolutional filters, resulting in an intermediate feature map that is passed along as input to the next layer or operation. The additional operations used between adjacent convolutional layers typically consist of nonlinear activation functions and normalization layers which help expedite the learning process, and max pooling to introduce translation invariance \cite{scherer2010evaluation} and reduce computational costs via spatial coordinate downsampling. Imperative to the CNN learning process are the convolutional filters. Each filter, typically of size $3\times3$ or $5\times5$, consists of weights to be learned during network training and acts as a smaller receptive field of view that houses some learned feature from the overall image set, which may then be re-used and applied to more-complex image reconstruction tasks in the later network layers. 
This allows for a more-global learning paradigm and deeper, more parameter-efficient neural network architectures than those of traditional fully-connected neural networks (FCNNs) \cite{rosenblatt1958perceptron, goodfellow2016deep, xu2019overfitting}, in which each individual pixel and connected node is assigned a learnable weight, and individual learned features remain localized to the single spatial coordinates in which they were found, not to be reused anywhere else in the network. \paragraph{U-Net} \label{sect:u-net} The main neural network model we implement for nucleolus segmentation is the U-Net \cite{u_net}, a deep convolutional network first used for pixel-by-pixel segmentation of biomedical images \cite{u_net}. Inspired by convolutional autoencoders \cite{mcclelland1987parallel, demers1992non}, the U-Net, pictured in Fig.~\ref{fig:unet_schematic}, is a symmetric encoder-decoder system made up of two distinct halves: the beginning contractive encoder-half on the left of Fig.~\ref{fig:unet_schematic} aims to capture contextual information and detect important image features with an increase in the number of convolutional channels and corresponding filters, while the expansive decoder-half on the right projects the learned features back into the higher resolution image space to reconstruct the input and predict a pixel-by-pixel semantic segmentation. Resulting from the encoder's contractive operations and partitioning the two halves is a compressed, lower-dimensional ``bottleneck'', which forces the network to learn a compressed representation of the overall data and retain those features most imperative to the decoder's reconstruction predictions. \begin{figure}[h] \centering \includegraphics[width=.95\textwidth]{images/UNet_FIBSEM.pdf} \caption{Schematic of a four-layer deep U-Net showing individual operations and intermediate feature map dimensions. In the left encoder-half, max pooling operations (red dotted arrows) halve spatial dimensions and convolutional operations (blue arrows) double the number of channels and filters at each subsequent layer. In the right decoder-half, convolutions decrease the number of channels and transposed convolutions (green dashed-dotted arrows) upsample the spatial dimensions. Skip connections (horizontal dashed arrows) join the two network halves. Lastly, a single convolution with filter size $1 \times 1$ (purple long dashed arrows) reduces the network output to any desired $n$-number of channels, which in this case is four (one for background and three for the nucleolus, chromosomes, and synaptonemal complex). } \label{fig:unet_schematic} \end{figure} The U-Net remains popular in a number of current segmentation applications due to its robustness, simplicity, and ability to more readily propagate contextual information through the entirety of the network \cite{cciccek20163d, punn2022modality}. This is accomplished through three means: ~a) an increase in the number of convolutional channels over traditional FCNNs, largely due to the depth that U-Nets achieve, ~b) successive max-pooling of the data between network layers, which renders local features more easily correlated with behavior and context at differing length scales \cite{noh2019scale}, and ~c) channel-wise concatenations of encoder feature map outputs to the decoder layers. 
These long-reaching concatenations, known as skip connections, decouple the encoder and decoder halves, allowing for an aggregation of multi-scale feature representation at different network stages \cite{kumar2018u, drozdzal2016importance, noh2019scale} and helping alleviate the vanishing gradient problem which plagues deeper networks \cite{ioffe2015batch}. \paragraph{Mixed-Scale Dense Networks} \label{sect:msdnet} While U-Net architectures remain popular, common implementations often require several million trainable parameters, which can lead to overfitting problems and harm network robustness, especially in applications where the amount of training data is low \cite{goodfellow2016deep, srivastava2014dropout}. In response, the MSDNet \cite{pelt2018mixed, pelt2018improving} architecture, depicted in Fig.~\ref{fig:msdnet}, was developed as a deep learning framework containing fewer trainable parameters (typically two to three orders of magnitude \emph{fewer}) than U-Nets. This is accomplished by densely connecting \emph{all} network layers to encourage maximum reusability of image features and by replacing the typical scaling operations found in encoder-decoder networks with dilated convolutions \cite{yu2015multi} in order to probe images at different length scales. By assigning a specific dilation to each MSDNet layer, the network can learn which dilation combinations are most effective. As a result, the number of network layers and the maximum integer dilation to cycle through are the most significant hyperparameters to tune, drastically simplifying network design. Additionally, the dense connections among intermediate feature maps create skip connections of \emph{all} possible lengths. Lost spatial information is more readily recovered with the inclusion of these dense skip connections, which furthermore helps alleviate the vanishing gradient problem that plagues deep networks \cite{ioffe2015batch}. \begin{figure} [h] \centering \includegraphics[width=.95\textwidth]{images/MSDNet_FIBSEM.pdf} \caption{Schematic of a three-layer mixed-scale dense network (MSDNet). Blue, green, and red lines above represent $3\times3$ dilated convolutions between each possible pair of input and intermediate layers. Different dilations are assigned to each color. Black dotted lines below represent the $1\times 1$ convolutional operator between the output and all other layers, amounting to a linear combination with learnable weights of all input and intermediate feature maps.} \label{fig:msdnet} \end{figure} \subsection{Training Parameters} During the model training phase, we use the multi-class cross entropy loss metric to measure how well the models classify each pixel to its respective class. The popular ADAM optimizer \cite{kingma2014adam} was chosen to minimize the loss and update the neural network weights accordingly. As for the network learning rates, all neural networks were trained for 200 epochs with an initial rate of $10^{-1}$ that was dropped by a factor of ten midway through training. For each trained model, a subset of 10\% of training data was set aside as a validation set for cross-validation purposes and to monitor model overfitting. The network weight set corresponding to the epoch with the lowest validation set loss was chosen. Lastly, each model was trained on a single Nvidia RTX 3090 GPU with $24$ GB memory capacity and $936$ GB/second bandwidth using a 10 core/20 thread Intel i9-10900X CPU. 
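For concreteness, a minimal PyTorch sketch of this training configuration follows. It uses generic \texttt{torch} calls rather than the \emph{pyMSDtorch} API, and the model and data loaders are assumed to be defined elsewhere.
\begin{verbatim}
import copy
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, device="cuda"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()   # multi-class cross entropy
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-1)
    # Drop the learning rate by a factor of ten midway through training
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                step_size=100, gamma=0.1)
    best_loss, best_weights = float("inf"), None
    for epoch in range(200):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
        scheduler.step()
        # Keep the weights from the epoch with the lowest validation loss
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)),
                                     y.to(device)).item()
                           for x, y in val_loader) / len(val_loader)
        if val_loss < best_loss:
            best_loss = val_loss
            best_weights = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_weights)
    return model
\end{verbatim}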
\subsection{Evaluation Metrics} \label{sect:evaluation} To gauge model segmentation predictions in both the training and validation data sets, we use the F1 score, a popular measure of classifier performance \cite{chinchor1993muc}. It is defined as the harmonic mean of the model prediction's precision and recall, given by: \begin{equation} \label{eq:f1} \text{F1} = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}. \end{equation} \noindent To further elaborate on this metric, it is useful to focus on the model predictions within each of the individual classes; we denote TP$_i$ (true positive) as the number of pixels within a class $i$ correctly identified by the model, FP$_i$ (false positive) as the number of pixels incorrectly predicted to belong in class $i$, and FN$_i$ (false negative) as the number of pixels belonging to class $i$ that are misclassified by the model. The confusion matrix diagrams these entities in Fig.~\ref{fig:confusion}. The model precision and recall within a single class $i$ is then given by: \begin{equation} \label{eq:precision_recall} \text{precision}_i = \frac{\text{TP}_i }{\text{TP}_i + \text{FP}_i}, \quad\quad\quad\quad \text{recall}_i = \frac{\text{TP}_i }{\text{TP}_i + \text{FN}_i}. \end{equation} Precision and recall are often at odds with each other, as increasing recall (the ratio of how many instances within a particular ground truth class were correctly predicted) often reduces precision (the accuracy among all model predictions made of a single class), and vice-versa. For example, an overzealous classifier may over-predict, correctly identifying most instances of a certain class but erroneously producing many more false positives, leading to suitable recall but poor precision. To alleviate this, the F1 metric offers a suitable balance between the two. \begin{figure} [!htb] \centering \includegraphics[width=.3\textwidth]{images/precisionVrecall.pdf} \caption{Confusion matrix of model predictions vs. the actual ground truth labels. Constituents of precision are highlighted vertically in purple, while constituents of recall are highlighted horizontally in red.} \label{fig:confusion} \end{figure} To calculate the F1 score for the entire model, individual F1 scores are calculated for each individual class from their respective precision and recall metrics in Eq.~\ref{eq:precision_recall}. The full model F1 score, our target evaluation metric in Eq.~\ref{eq:f1}, then results from averaging each individual class score. To adjust for class imbalance, we compute the micro F1 score, which aggregates the class scores by weighting each one according to its relative size, i.e., the ratio of pixels belonging to each class to the total number of pixels in the dataset. \subsection{Software Availability} All scripts used in this project are available on GitHub (\url{https://github.com/nirajmg/FIBSEM_segmentation}). Information on how to use each file is provided in the \texttt{README} file that is part of this repository. Additionally, all neural networks were implemented using the Python-based \emph{pyMSDtorch} deep learning software library (\url{https://pymsdtorch.readthedocs.io}). On the \emph{pyMSDtorch} platform, U-Nets and MSDNets are enhanced by allowing the specification of network architecture-defining hyperparameters, such as the number of network layers, the initial number of channels, the convolutional channel growth rate, and custom sets of MSDNet dilations; this level of user-defined customization makes it easy to tune network hyperparameters to optimize performance. 
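Returning to the metric of Sect.~\ref{sect:evaluation}, the per-class and size-weighted F1 computation can be sketched in a few lines of Python; integer-valued ground-truth and prediction label maps are assumed.
\begin{verbatim}
import numpy as np

def f1_scores(y_true, y_pred, n_classes=4):
    """Per-class F1 plus an aggregate weighted by relative class size."""
    y_true, y_pred = y_true.ravel(), y_pred.ravel()
    f1 = np.zeros(n_classes)
    support = np.zeros(n_classes)
    for i in range(n_classes):
        tp = np.sum((y_pred == i) & (y_true == i))
        fp = np.sum((y_pred == i) & (y_true != i))
        fn = np.sum((y_pred != i) & (y_true == i))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall    = tp / (tp + fn) if tp + fn else 0.0
        f1[i] = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
        support[i] = np.sum(y_true == i)
    return f1, float(np.sum(f1 * support / support.sum()))
\end{verbatim}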
\section{Overall Workflow} \label{sect:workflow} The ROI1, ROI2, and ROI3 nuclei TIFF stacks were divided into single-nucleus JPEG files. These images were pre-processed and annotated according to Sect.~\ref{sect:preprocessing}, as diagrammed in the left half of the workflow chart in Fig.~\ref{fig:workflow}. Only nuclei images from ROI1 are used for training the various networks. This was due to the contrast between the three stacks being remarkably similar. \begin{figure} [!htb] \centering \includegraphics[width=.99\textwidth]{images/workflowdraft.jpg} \caption{Complete end-to-end workflow for the segmentation of FIB-SEM data. } \label{fig:workflow} \end{figure} The segmentation process was split into three network classifiers for extracting different groups of intracellular structures. Separating the classifiers improved segmentation accuracy and yielded better results than training a single network to segment all classes. Included in the training data for the first extractor model are four total annotations: one for the nucleolus, another for the chromosomes, and two background annotations, namely the interior nucleus background and exterior background located outside of the nucleus. The inclusion of two background classes greatly improved nucleolus and chromosome segmentation, as the exterior background class had a vastly different contrast and could be ignored in the gradient calculations and model parameter update steps. For the second segmentation model, the synaptonemal complex class was predicted, with prior synaptonemal complex annotations added to the aforementioned four annotations. Identifying the synaptonemal complex structures proved difficult, as their contrast was strikingly more homogeneous with the nucleus background than that of the other structures (blue structures in Fig.~\ref{fig:annotations}). For the third and final network, the nucleolus and nuclear membrane were the only background elements in the third batch of training data. Lastly, once all network training was complete and new segmentation predictions were inferred from the trained networks, we performed a post-processing step using skimage \cite{van2014scikit}, a Python-based open-source image library, to filter out small, superfluous objects from each individual class. More specifically, small objects with volumes below 1000, 2000, and 3000 voxels were filtered out for the synaptonemal complex, chromosome, and nucleolus classes, respectively. \section{Results}\label{sect:results} We perform a parameter sweep on the U-Net architecture-governing hyperparameters to find the best performing neural network classifiers for segmenting the target cell structures consisting of the nucleolus, chromosomes, synaptonemal complex, and the cellular membrane. The hyperparameters considered in this analysis were the depth of the U-Net, number of initial base channels, interlayer growth rate of convolutional channels, and batch size. Results of the U-Net model sweep for the three separate classifiers are shown in Table \ref{tab:table1}, where for each column, the same hyperparameters are used for the three networks. Alternatively, the parameter sweep results for the single classifier accommodating all classes are shown in Table \ref{tab:table2}. For both, the best performing networks with respect to the micro F1 evaluation score, referenced and described in Sect.~\ref{sect:evaluation}, are highlighted in gray. 
\begin{table} [!htb] \small \parbox{.47\linewidth}{ \centering \begin{tabular}{c|cccc>{\columncolor[gray]{0.8}}c} \hline \\[-1em] Model & 0 & 1 & 2 & 3 & 4 \\ \hline \hline\\[-1em] Depth & 4 & 4 & 4 & 5 & 5 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}} Base channels \end{tabular} & 32 & 64 & 64 & 32 & 64 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}}Growth rate \end{tabular} & 2 & 2 & 1.5 & 2 & 2 \\ \hline \hline\\[-1em] \begin{tabular}{@{}c@{}}Parameter \\ count ($10^6$)\end{tabular} & 2.14 & 8.56 & 2.57 & 8.63 & 34.51 \\ \hline \hline\\[-1em] \begin{tabular}{@{}c@{}}Training \\ loss ($10^{-2}$)\end{tabular} & 2.86 & 2.94 & 2.97 & 2.73 & 2.78 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}}Validation \\ loss ($10^{-2}$)\end{tabular} & 3.19 & 3.30 & 3.42 & 3.40 & 3.45 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}}Micro F1 \\ score \end{tabular} & .902 & .902 & .896 & .898 & .904 \\ \hline \end{tabular} \vspace{10pt} \caption{Summary of hyperparameter sweep results for U-Nets trained on data with nucleolus and chromosome \emph{only}. The best performing network according to the F1 evaluation metric is highlighted in gray. \label{tab:table1}} } \hfill \parbox{.47\linewidth}{ \begin{tabular}{c|c>{\columncolor[gray]{0.8}}cccc} \hline\\[-1em] Model & 0 & 1 & 2 & 3 & 4 \\ \hline \hline\\[-1em] Depth & 4 & 5 & 4 & 4 & 5 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}} Base channels \end{tabular} & 32 & 32 & 64 & 64 & 64 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}}Growth rate \end{tabular} & 2 & 2 & 1.5 & 2 & 2 \\ \hline \hline\\[-1em] \begin{tabular}{@{}c@{}}Parameter \\ count ($10^6$)\end{tabular} & 2.14 & 8.63 & 2.57 & 8.56 & 34.51 \\ \hline \hline\\[-1em] \begin{tabular}{@{}c@{}}Training \\ loss ($10^{-2}$)\end{tabular} & 3.617 & 3.82 & 3.89 & 3.55 & 3.41 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}}Validation \\ loss ($10^{-2}$)\end{tabular} & 3.99 & 3.88 & 4.11 & 3.97 & 3.87 \\ \hline\\[-1em] \begin{tabular}{@{}c@{}}Micro F1 \\ score \end{tabular} & .766 & .781 & .7445 & .769 & .781 \\ \hline \end{tabular} \vspace{10pt} \caption{Summary of hyperparameter sweep results for the U-Nets trained on \emph{all} labeled classes, including synaptonemal complex. The best performing network is once again highlighted in gray. \label{tab:table2}} } \end{table} Upon inspection, the variation in depth or base channels has little effect on training and validation losses in the multi-network classifiers in Table \ref{tab:table1}. According to the F1 scores, all U-Net configurations perform within 1\% of each other, which is certainly a testament to the robustness of U-Net segmentation schemes. With a training F1 score of 0.904, the U-Net Model 4 with 64 base channels and a depth of 5 produces the best results. In contrast, for the single-network classifiers in Table \ref{tab:table2}, variation in the number of base channels and depth has a more pronounced impact on the variability of segmentation results. F1 scores are within 5\% of each other. Here, with a score of 0.781, Model 1 with 32 base channels and depth 5 is the best performing configuration. No sweep was performed for the MSDNet classifier, as memory constraints limited this study to an MSDNet layer depth of 40 and a maximum dilation setting of 15, considerably lower than the 100- and 200-layer networks implemented in the original paper and subsequent applications \cite{pelt2018improving, pelt2018mixed, zeegers2020task}. 
Despite this constraint, MSDNet performance was satisfactory, with a validation F1 score of 0.8773. However, this mark falls generally below the U-Net classifiers in Table~\ref{tab:table1}. We focus primarily on results from the multi-network classifiers described in Sect.~\ref{sect:workflow}, particularly the Model 4 configuration in Table \ref{tab:table1}. All of the networks are primarily trained on ROI1 images using random samples from various nuclei in ROI1. This is done to guarantee the full range of image contrast is represented during the network training phase. Figure \ref{fig:ROI1} depicts the segmentation results for ROI1 from the Model 4 U-Net configuration. The left-most graphic depicts a 3D volumetric image of the first two network classifiers' results, one trained to segment the cell chromosomes (in red) and nucleolus, and another trained to segment the synaptonemal complex (in blue). The middle subimage displays only the chromosome segmentations, while the right-most subimage shows the synaptonemal complex segmentations, independent of their chromosome encasing. The synaptonemal complex segmentation remains particularly impressive, as the contrast of these structures is nearly homogeneous with the nuclear background. This result highlights the strength and generalizability of neural network-based segmentation. While the ROI1 segmentation in Fig.~\ref{fig:ROI1} results from a U-Net trained on a small subset of 160 images from ROI1 itself, no images from ROI2 were present in the training data or network learning process. The networks were completely blind to ROI2 data. The resulting segmentation of ROI2 nucleolus, chromosomes, and synaptonemal complexes is shown in Figure \ref{fig:ROI2}. \begin{figure}[!htb] \minipage{0.33\textwidth} \centering \includegraphics[width=5cm,height=5cm]{images/ROI1_fullnuclie.png} \endminipage\hfill \minipage{0.33\textwidth} \centering \includegraphics[width=5cm,height=5cm]{images/chromosome.png} \endminipage\hfill \minipage{0.33\textwidth}% \centering \includegraphics[width=5cm,height=5cm]{images/sc.png} \endminipage \caption{ROI1 stack segmentation results. The first image displays the 3D orientation of the nucleolus, chromosomes, and synaptonemal complexes in a nucleus. The synaptonemal complexes are seen in the third image, whereas the second image shows a three-dimensional representation of the chromosomes.}\label{fig:ROI1} \end{figure} \begin{figure}[!htb] \minipage{0.33\textwidth} \centering \includegraphics[width=5cm,height=5cm]{images/ROI2_fullnuclie.png} \endminipage\hfill \minipage{0.33\textwidth} \centering \includegraphics[width=5cm,height=5cm]{images/chromosomeROI2.png} \endminipage\hfill \minipage{0.33\textwidth}% \centering \includegraphics[width=5cm,height=5cm]{images/ROI2pattern.png} \endminipage \caption{ROI2 stack segmentation results. The first image displays the 3D orientation of the nucleolus, chromosomes, and synaptonemal complexes in a nucleus. The synaptonemal complexes are seen in the third image, whereas the second image shows a three-dimensional representation of the chromosomes.}\label{fig:ROI2} \end{figure} The third network classifier identifies the nucleus membrane, pictured in Fig.~\ref{fig:membrane}. Gaps in the membrane walls appear, though these are expected artefacts resulting from the imaging post-processing steps. Additionally, Fig.~\ref{fig:chromosomes} displays individually connected chromosomes that are labeled using 3D connected components. 
By setting the connectivity to 26, which determines the decision tree used for labeling decisions, and delta to 10, which dictates that any neighboring voxel value differing by less than 10 is treated as part of the same component, the twelve biggest components of the segmented chromosomes can be identified using the cc3d library \cite{rosenfeld1966sequential, sutheebanjard2012decision, silversmith2021cc3d}. \begin{figure}[!htb] \begin{minipage}{.45\textwidth} \centering \includegraphics[width=7.2cm,height=7.2cm]{images/membrane.jpg} \caption{Segmentation results of the nuclear membrane from ROI1.} \label{fig:membrane} \end{minipage} \hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=7.2cm,height=7.2cm]{images/single_chromosome.png} \caption{Extraction of an individual chromosome from ROI1 using 3D connected components.} \label{fig:chromosomes} \end{minipage} \end{figure} \section{Discussion and Future Work} \label{sect:discussion} In this study, we apply machine learning-based automated segmentation to large swaths of FIB-SEM nucleus data of {\it C. elegans} gonad eggs, providing pixel-by-pixel segmentation of all sub-nuclear cellular structures, including the nucleolus, chromosomes, synaptonemal complexes enveloped in chromosome pairs, and the nucleus membrane. We achieve an impressive micro F1 classification score of 0.904 and provide valuable morphological and contextual insights with regard to cellular processes. We employ two deep learning architectures: U-Net and mixed-scale dense networks. Particularly impressive is the networks' ability to learn the synaptonemal complex pixels; these structures had strikingly similar contrast to that of the nucleus background. We hypothesize that the networks learned not only from the complexes' visual patterns, but also from the added context that the complexes were almost entirely encased in dual chromosome strands. In total, we analyzed a sizable amount of isotropic volumetric data: 6000 cross-sectional slices of size $2500\times5000$ pixels spread across three regions of interest (ROIs), though the machine learning classifiers were trained on smaller cross-sectional image slices encapsulating single nuclei at varying depths, each sized at $1700\times1700$ pixels. Impressively, this data was resolved to voxels of size $4 \times 4 \times 4$ \si{\nano\meter}$^3$. In order to generalize the training and segmenting of such large images onto smaller platforms, our approaches offer batch segmentation of images to reduce memory costs. For future work, we envision deep learning models incorporating full 3D volumes of images. Though memory intensive, 3D deep learning methods may be able to better contextualize local information given the extra neighboring information in higher dimensionality. A challenge here remains, namely the difficulty in curating 3D masks and labels for a large and representative training set of images. However, this difficulty may be alleviated with the use of mixed-scale dense networks (MSDNets, detailed in Sect.~\ref{sect:msdnet}). 
Though MSDNets performed sub-optimally compared to U-Nets in this study, their densely connected architecture and maximal reuse of data were specifically designed to perform better on smaller training data sets and sparse labeling \cite{pelt2018mixed, pelt2018improving}, and their ability to accommodate 3D volumes of images with little change to the network architecture is advantageous. Alternatively, patch-based (or tile-based) data augmentation schemes, in which smaller, overlapping subsets of images are drawn from the original data \cite{innamorati2019learning, cui2019deep}, may be generalized to 3D volumes of data, allowing 3D U-Nets and MSDNets to learn from significantly smaller sets of labels and training data, as evidenced by similar overlap-averaging techniques \cite{pielawski2020introducing}. The contrast disparity between different ROI stacks remains a difficulty, as the network performs sub-optimally on ROI3. This necessitates either retraining the network on a sample of images or pre-processing the images to match their contrast with that of the training images. In this instance, one of the methods utilized to match the ROI3 contrast with ROI1 and ROI2 was histogram matching. To generalize our workflow to cryo-electron microscopy (cryo-EM) data, we intend to use similar techniques. Cryo-EM, a new biophysical method for determining the structure of protein complexes, is becoming more and more popular. Recognizing sophisticated molecular features in medium-resolution cryo-EM density maps is difficult, though experimental cryo-EM structures could be segmented more efficiently by applying deep learning to them. \section{Funding} This work was partially funded by NIH award number 5R00GM132544-04 and University of Colorado, Boulder start-up funds belonging to V. K. Further support originates from the National Institute of General Medical Sciences of the National Institutes of Health (NIH) under Award 5R21GM129649-02 and from the Laboratory Directed Research and Development Program of Lawrence Berkeley National Laboratory under U.S. Department of Energy Contract number DE-AC02-05CH11231. \section{Author Contributions} VK and PHZ supervised the project; NG, EJR, SP, CSX, HFH, and VK wrote the manuscript (original draft); VK, PHZ, NG, and EJR reviewed and edited the manuscript; CSX, SP, and HFH collected FIB-SEM data; FW, DG, and VK prepared C. elegans samples; FW and AD provided C. elegans strains and advised on the study; NG and EJR performed data analysis and machine learning architecture design and training; EJR and PHZ designed the pyMSDtorch machine learning software suite; NG and VK uploaded data and provided the workflow to the GitHub repository; PHZ proposed the machine learning solutions; VK proposed the biological questions and conceived the study. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:introd} Our understanding of the cosmological world relies on two fundamental assumptions: 1) The validity of General Relativity, and 2) conservation of matter since the Big Bang. Both assumptions yield the concordance cosmological model (CCM), according to which an initial inflationary period is followed by (exotic, i.e., non-baryonic) dark-matter (DM) structures forming and then accreting baryonic matter, which fuels star formation in the emerging galaxies, and according to which dark energy (represented by a cosmological constant $\Lambda$) drives the acceleration of the Universe at a later epoch. One important way to test assumption~(1) is to compare the phase-space properties of the nearest galaxies with the expectations of the CCM. These tests are the focus of the present contribution. The possibility of the existence of DM was considered more than 85 years ago \citep{Einstein21,Oort32,Zwi33}, and has been under heavy theoretical and experimental scrutiny \citep{Bertone05} since the discovery of non-Keplerian galactic rotation curves by \cite{RF70} and their verification and full establishment by \cite{Bosma81}. The existence of DM is popularly assumed because it complies with the General Theory of Relativity, and therefore Newtonian dynamics, in the weak-field limit. Newtonian dynamics is the simplest form of gravitational dynamics, given that the equations of motion are linear in the potential, and is thus readily accessible to numerical simulations of cosmic evolution, upon which the concordance scenario of structure formation is based \citep{BFPR84}. The concordance bottom-up scenario of structure formation involving the repeated accretion of clumps of cold dark matter (CDM) is assumed to operate throughout the Universe on all scales. CDM particles with masses of the order of $100\,$GeV are the preferred candidates to account for constraints placed on the matter density, $\Omega_M$, of thermal relics with realistic cross-sections (see, e.g., eq.~28 of \citealt{Bertone05}). For lighter particle candidates, the damping scale becomes too large: for instance, a hot DM (HDM) particle candidate ($m_{\rm HDM} \approx$ few eV) would have a free-streaming length of $\approx 100\,$Mpc, leading to too little power at the small-scale end of the matter power spectrum. The existence of galaxies at redshift $z \approx 6$ implies that the coherence scale should have been smaller than $100\,$kpc or so, meaning that warm DM (WDM) particles with mass $m_{\rm WDM}\approx 1 - 10\,$keV are close to being ruled out \citep{Peacock03}. CDM is a concept that, together with the cosmological constant ($\Lambda$), has been motivated primarily by large-scale observations of, e.g., the cosmic microwave background (CMB) radiation (WMAP, \citealt{Spergel07, Komatsu09}), the accelerating universe (\citealt{Riess98, Perlmutter99}), or the power spectrum of density perturbations from the SDSS \citep{Tegmark04} and the 2dF galaxy redshift survey \citep{Cole05}, all of which serve as empirical benchmarks for calibrating and constraining theoretical scenarios and cosmological models.
This concordance $\Lambda$CDM model is consistent with observations on the~Gpc to~Mpc scales \citep{Reyesetal2010}, but it implies that the Universe evolves towards an infinite energy content\footnote{One may refer to this issue as the ``cosmological energy catastrophe'' in allusion to the black body UV catastrophe, which led Max Planck to heuristically introduce an auxiliary ($=$ {\sl \uline{H}ilfsgr\"o{\ss}e} in German) number $h$, to reproduce the black body spectrum.} due to the creation of vacuum energy from dark-energy-driven accelerated expansion (e.g. \citealt{Peacock99})\footnote{Energy conservation is a problematical issue in General Relativity (GR). The stress-momentum-energy tensor is a pseudo tensor and so is not invariant under a transformation to a different coordinate system and back. This may perhaps indicate that GR is not complete.}. Less problematically perhaps, but nevertheless noteworthy, the DM particle cannot be contained in the Standard Model of particle physics without implying a significant revision of particle physics (e.g. \citealt{Peacock99}). Strong evidence for the existence of DM has been suggested by observations of the interacting galaxy-cluster pair 1E0657-56 (the ``Bullet cluster'', \citealt{Clowe06}). The velocity of the sub-cluster relative to the large cluster has since been calculated to be about 3000~km\,s$^{-1}$ so that the observed morphology can arise \citep{MB08}. But according to \cite{AM08} and \cite{LeeKomatsu10}, such high velocities between a sub-cluster and a main galaxy cluster are virtually excluded in the CCM. Near the centre of lens-galaxies, the observed delay times between the multiple images of strongly lensed background sources cannot be understood if the galaxy has a standard (NFW or isothermal) DM content and if, at the same time, the Hubble constant has a classical value of 70 km\,s$^{-1}$\,Mpc$^{-1}$: the solution is either to decrease the Hubble constant (in disagreement with other observations), or to consider the known baryonic matter (with constant mass-to-light ratio) as the one and only source of the lensing \citep{KS04}. On Local Volume scales (within about 8~Mpc), it has been pointed out that the Local Void contains far fewer dwarf galaxies than expected if the CCM were true. At the same time, there are too many large galaxies in the less crowded parts, such that the arrangement of massive galaxies in the Local Volume is less than 1~per cent likely in the CCM \citep{PN10}. This discussion highlights that there are important unsolved issues in the CCM. This clearly means that substantial effort is required to understand the problems, to perhaps distill additional clues from the data that can provide solutions, and to improve the theory. Galaxy formation and evolution is a process that happens on scales much smaller than 1~Mpc. Ironically, a major limitation of our ability to develop a physically consistent model of how galaxies evolved out of the dark comes from incomplete knowledge of the Local Group, in particular from the lack of understanding of the structure and distribution of dwarf satellite galaxies. But, over the past few years, a steady flow of new results from nearby galaxies including the Milky Way (MW) and the improving numerical resolution of computational studies of galaxy formation have allowed ever more rigorous tests of the CCM. According to the DM hypothesis, galaxies must have assembled by means of accretion and numerous mergers of smaller DM halos.
Therefore, galaxies such as the MW should be swarmed by hundreds to thousands of these halos \citep{Mooreetal99,Diemand08}, whereby the number of sub-halos is smaller in WDM than in CDM models \citep{Knebeetal08}. Furthermore, the triaxial nature of the flow of matter at formation would make it impossible to destroy halo substructure by violent relaxation \citep{Boilyetal04}. These sub-halos should be distributed approximately isotropically about their host, and have a mass function such that the number of sub-halos in the mass interval $M_{\rm vir}, M_{\rm vir}+dM_{\rm vir}$ is approximately $dN \propto M_{\rm vir}^{-1.9}\,dM_{\rm vir}$ \citep{Gaoetal04}. In contrast to this expectation, only a few dozen shining satellites have been found around both the MW and Andromeda (M31), while the next largest disc galaxy in the Local Group, M33, has no known satellites. The MW hosts the 11 ``classical'' (brightest) satellites, while 13 additional ``new'' and mostly ultra-faint satellite galaxies have been discovered in the past 15 years primarily by the Sloan Digital Sky Survey (SDSS)\footnote{For convenience, the 11~brightest satellite galaxies are here referred to as the ``classical'' satellites because these were known before the SDSS era. These include the LMC and the SMC with the others being dwarf spheroidals. The other, more recently discovered satellites are fainter than the faintest ``classical'' satellites (UMi and Draco), and these are called the ``new'' or the ``ultra-faint'' satellites or dwarfs (see Table~\ref{tab:satellites}).}. While the MW satellites are distributed highly anisotropically (e.g. \citealt{Klimentowskietal09}), observations of the internal kinematics (velocity dispersion) of the satellites suggest they are the most DM dominated galaxies known (e.g. fig.~15 in \citealt{SimonGeha07}). That is, the velocity dispersions of their stars seem to be defined by an unseen mass component: the stars are moving faster than can be accounted for by their luminous matter. The known satellites may therefore be the luminous ``tip of the iceberg'' of the vast number of dark sub-halos orbiting major galaxies such as the MW. Much theoretical effort has been invested in solving the problem that the number of luminous satellites is so much smaller than the number of DM-halos predicted by the currently favoured concordance $\Lambda$CDM hypothesis: stellar feedback and heating processes limit baryonic growth, re-ionisation stops low-mass DM halos from accreting sufficient gas to form stars, and tidal forces from the host halo limit growth of the DM sub-halos and lead to truncation of DM sub-halos \citep{DekelSilk86, DekelWoo03, MKM, Koposovetal09, OF09, Kirby09, Shaya2009, Busha09, Maccioetal09}. This impressive and important theoretical effort has led to a detailed quantification of the DM-mass--luminosity relation of MW satellite galaxies. Moreover, the discovery of new (ultra-faint) dSph satellites around the MW suggests the validity of the ``tip of the iceberg'' notion. These lines of reasoning have generally led to the understanding that within the $\Lambda$CDM cosmology, no serious small-scale issues are apparent (e.g. \citealt{Tollerudetal2008,Primack09}). 
In this contribution we test whether the CCM can be viewed as a correct description of the Universe by studying generic properties of the Local Group\footnote{Useful reviews of the Local Group are provided by \cite{Mateo98} and \cite{vandenBergh99}.}, which is a typical environment for galaxies -- the Local Group properties {\sl must} conform to the CCM if it is to be valid universally. To test this hypothesis, we critically examine state-of-the-art models, developed by a number of independent research groups within the CDM and WDM frameworks to explain the properties of the faint satellite galaxies, by comparing them with the following observations: the mass-luminosity relation for dSph satellites of the Milky Way (Sect.~\ref{sec:ML}); the mass-distribution of luminous-satellite halo-masses (Sect.~\ref{sec:mfn}); and the observed relation between the bulge mass of the host galaxy and the number of satellites (Sect.~\ref{sec:origin}). The question of whether the Disc-of-Satellites (DoS) exists, and whether the latest MW satellite discoveries follow the DoS or instead challenge its existence, is addressed in Sect.~\ref{sec:DoS}. In Sect.~\ref{sec:DoS}, the observed invariance of late-type baryonic galaxies is also discussed in the context of the Local Group. In these sections it emerges that the CCM has problems relating to the observed data. In Sect.~\ref{sec:tdgs} the problems are interpreted as clues to a possible solution of the origin of the satellite galaxies. The implications of testing the CCM on the Local Group for gravitational theories are also discussed. Conclusions regarding the consequences of this are drawn in Sect.~\ref{sec:concs}. \section{The satellite mass -- luminosity relation (problem~1)} \label{sec:ML} Our understanding of the physical world relies on some fundamental physical principles. Among them is the conservation of energy. This concept implies that it is increasingly more difficult to unbind sub-components from a host system with increasing host binding energy. Within the DM hypothesis, the principle of energy conservation therefore governs how DM potentials fill up with matter. There are two broadly different physical models exploring the consequences of this, namely models of DM halos based on internal energy sources (mostly stellar feedback), and models based on external energy input (mostly ionisation radiation). In the following, the observational mass--luminosity data for the known satellite galaxies are discussed, and the data are then compared to the theoretical results that are calculated within the CCM. \subsection{The observational data} \label{ssec:obsLM} Based on high quality measurements of individual stellar line-of-sight velocities in the satellite galaxies, \cite{Strigari08} (hereinafter S08) calculate dynamical masses, $M_{\rm 0.3kpc}$, within the inner~0.3~kpc of~18 MW~dSph satellite galaxies over a wide range of luminosities ($10^3 \simless L/L_\odot \simless 10^7$). The LMC and SMC are excluded, as is Sagittarius because it is currently experiencing significant tidal disturbance. S08 significantly improve on previous works by using larger stellar data sets and more than doubling the number of dwarf galaxies, and by applying more detailed mass modelling.
Their results confirm the earlier suggestion by \cite{Mateoetal93}, \cite{Mateo98}, \cite{Giletal07}, and \cite{ Penarrubia08} that the satellites share a common DM mass scale of about $10^7\,M_\odot$, ``and conclusively establish'' (S08) this common mass scale. The finding of S08 can be quantified by writing \begin{equation} {\rm log}_{10}M_{\rm 0.3 kpc}= {\rm log}_{10}M_0 + \kappa\,{\rm log}_{10}L, \label{eq:ML} \end{equation} and by evaluating the slope, $\kappa$, and the scaling, $M_0$. S08 derive $\kappa=0.03\pm0.03$ and $M_0 \approx 10^7\,M_\odot$. Using the Dexter Java application of \cite{Demleitneretal01}, a nonlinear, asymmetric error weighted least squares fit to the S08 measurements reproduces the common mass and slope found by S08, as can be seen from the parameters listed in Table~\ref{tab:fits}. By excluding the least luminous dSph data point, one obtains the same result (Table~\ref{tab:fits}). It follows from Eq.~\ref{eq:ML} that \begin{eqnarray} (M_{\rm 0.3 kpc})^{1/\kappa} & = & M_0^{1/\kappa}\,L \quad (\kappa \ne 0), \nonumber \\ M_{\rm 0.3 kpc} & = & M_0 \quad\quad\; (\kappa=0). \label{eq:exp} \end{eqnarray} This central mass of the DM halo can be tied by means of high-resolution CDM simulations to the total halo virial mass before its fall into the host halo (S08, see also Sect.~\ref{sec:mfn}), \begin{equation}\label{eqn:Mh=M0.3} M_{\rm vir} = (M_{\rm 0.3 kpc})^{1/0.35}\times10^{-11}M_\odot, \label{eq:bullock} \end{equation} yielding $M_{\rm vir}=10^9\,M_\odot$ for $M_{\rm 0.3 kpc}=10^7\,M_\odot$ (the common-mass scale for $\kappa=0$). Thus, substituting $M_{\rm 0.3 kpc}$ into Eq.~\ref{eq:bullock} using Eq.~\ref{eq:exp} with $\kappa\ne 0$, leads to \begin{equation}\label{eqn:MhLv} (M_{\rm vir})^{0.35/\kappa} = M_0^{1/\kappa}\times10^{-(11\times0.35)/\kappa}\,L. \label{eq:etakappa} \end{equation} This value of the halo mass near $10^9\,M_\odot$ for the satellites in the S08 sample is confirmed by a new analysis, in which \cite{Wolfetal09} show that the mass can be derived from a velocity dispersion profile within the deprojected 3D half light profile with minimal assumptions about the velocity anisotropy. In this way they obtain a robust mass estimator. The observed 5$\sigma$ lower value for $0.35/\kappa \equiv \eta$ is thus~$\eta = 2.06$ (with $\kappa=0.02+5\times0.03$ from Table~\ref{tab:fits}). \subsection{Model type A: Internal energy sources} \cite{DekelSilk86} and \cite{DekelWoo03} studied models according to which star formation in DM halos below a total halo mass of $M_{\rm vir} \approx 10^{12}M_\odot$ is governed by the thermal properties of the inflowing gas, which is regulated primarily by supernova feedback. These models demonstrate that the mass-to-light ratio of sub-halos follows $M_{\rm vir}/L \propto L^{-2/5}$ (eq.~24 of \citealt{DekelWoo03}; see also eq.~33 of \citealt{DekelSilk86}). This approximately fits the observed trend for dSph satellite galaxies \citep{Mateo98}. These models thus imply that \begin{equation}\label{eq:DS} \left(M_{\rm vir}\right)^{\eta_{\rm th}} = \zeta \; L, \end{equation} where $L$ is the total luminosity, $M_{\rm vir}$ is the virial DM halo mass, $\eta_{\rm th}=5/3$, and $\zeta$ is a proportionality factor. In essence, this relation states that more-massive halos have a larger binding energy such that it becomes more difficult to remove matter from them than from less massive halos. 
Comparing with Eq.~\ref{eq:etakappa} and with its resulting $\eta$ value as given at the end of Sect.~\ref{ssec:obsLM}, it follows that the observed 5$\sigma$ lower value $\eta = 0.35/\kappa = 2.06$ is in conflict with Eq.~\ref{eq:DS}, where $\eta_{\rm th}=5/3=1.67$. \begin{table} \begin{center} \caption[]{The slope of the DM-mass--luminosity relation of dSph satellite galaxies. Fitted parameters for Eq.~\ref{eq:ML}.\label{tab:fits}} \begin{tabular}{lccc} \hline\hline\\[-3mm] data &$\kappa$ &radius &$M_0$\\ & & [pc] & $[10^7\,M_\odot]$\\ \hline {\bf Observational:}\\ 1 & $+0.02\pm0.03$ &300 &$1.02\pm0.39$\\ 2 & $+0.02\pm0.03$ &300 &$1.01\pm0.40$\\ 3 & $+0.01\pm0.03$ &300 &$1.09\pm0.44$\\ *4 & $-0.03\pm0.05$ &600 &$6.9\pm4.9$\\ \hline {\bf DM Models:}\\ A:$\;$ feedback &$0.21$ &300 &---\\ B1:$\;$ re-ionisation, SPS &$0.15\pm0.02$ &300 &$0.24\pm0.06$\\ B2:$\;$ re-ionisation &$0.17\pm0.01$ &300 &$0.18\pm0.02$\\ C:$\;$ SAM &$0.42\pm0.02$ &300 &$2.0\pm0.9$\\ *D:$\;$ Aq-D-HR &$0.17\pm0.02$ &600 &$0.41\pm0.14$\\ E1:$\;$ 1keV(WDM) &$0.23\pm0.04$ &300 &$0.069\pm0.045$\\ E2:$\;$ 5keV(WDM) &$0.12\pm0.02$ &300 &$0.43\pm0.081$\\ F:$\;$ Aq-infall &$0.13\pm0.01$ &300 &$0.32\pm0.022$\\ \hline \end{tabular} \end{center} Notes to the table: Fits to $\kappa=0.35/\eta$: data 1--4 are observational values, data A--F are models (see Sect.~\ref{sec:ML}). Notes: 1: our fit to S08 (who give central 300~pc masses, 18 satellites, their fig.~1). 2: our fit to S08 without Seg.1 (faintest satellite, i.e. 17 satellites, their fig.~1). 3: our fit to S08 without Seg.1 and without Hercules (i.e. 16 satellites, their fig.~1). 4: our fit to the observational data plotted by \cite{OF09} (who give central 600~pc masses, only 8 satellites, their fig.~1). A: \cite{DekelSilk86,DekelWoo03}, stellar feedback (Eq.~\ref{eq:DS}). B1: our fit to \cite{Busha09}, their SPS model. B2: our fit to \cite{Busha09}, inhomogeneous re-ionisation model. C: our fit to \cite{Maccioetal09}, semi-analytical modelling (SAM), fit is for $L_V>3\times10^5\,L_{V,\odot}$. D: our fit to \cite{OF09} (Aq-D-HR). E1: our fit to the $1\,$keV WDM model of \cite{MF09}. E2: our fit to the $5\,$keV WDM model of \cite{MF09}. F: our fit to the Aquarius sub-halo-infall models of \cite{Cooper10}. *: the entries with an asterisk are for the central 600~pc radius region. \end{table} \subsection{Model type B1, B2: External energy source} \cite{Busha09} follow a different line of argument to explain the dSph satellite population by employing the DM halo distribution from the {\sl via Lactea} simulation. Here the notion is that re-ionisation would have affected DM halos variably, because of an inhomogeneous matter distribution. A given DM halo must grow above a critical mass before re-ionisation to form stars or accrete baryons. Thus the inhomogeneous re-ionisation model (\citealt{Busha09}, their fig.~6) implies, upon extraction of the theoretical data and using the same fitting method as above, theoretical $\kappa$-values of 0.15--0.17. These disagree, however, with the observational value of~0.02 with a significance of more than 4~$\sigma$, i.e. the hypothesis that the observational data are consistent with the models can be discarded with a confidence of 99.99~per cent (Table~\ref{tab:fits}).
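To make the fitting procedure concrete, the following minimal sketch extracts $\kappa$ and $M_0$ of Eq.~\ref{eq:ML} from mass--luminosity data by an error-weighted least-squares fit in log space. The arrays are placeholders (not real measurements), and symmetric errors are assumed for simplicity, whereas the fits in Table~\ref{tab:fits} were obtained with the Dexter application using asymmetric error weighting.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data -- NOT real measurements. Fill in the S08
# luminosities L [L_sun], central masses M(0.3 kpc) [M_sun],
# and symmetrised mass errors.
L       = np.array([1.0e3, 1.0e4, 1.0e5, 1.0e6, 1.0e7])
M03     = np.array([1.1e7, 0.9e7, 1.0e7, 1.2e7, 0.8e7])
M03_err = 0.3 * M03

# The relation log10 M_0.3kpc = log10 M_0 + kappa * log10 L:
def log_mass(log_L, log_M0, kappa):
    return log_M0 + kappa * log_L

sigma_log = M03_err / (M03 * np.log(10.0))  # error propagated to log10 M
popt, pcov = curve_fit(log_mass, np.log10(L), np.log10(M03),
                       sigma=sigma_log, absolute_sigma=True)
log_M0, kappa = popt
kappa_err = np.sqrt(pcov[1, 1])

# Observed 5-sigma lower bound on eta = 0.35/kappa (cf. the eta
# bound derived in the text):
eta_lower = 0.35 / (kappa + 5.0 * kappa_err)
print(kappa, kappa_err, eta_lower)
\end{verbatim}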
\cite{Busha09} suggest that adding scatter into the theoretical description of how DM halos are filled with luminous baryons would reduce the discrepancy, but it is difficult to see how this can be done without violating the actual scatter in the observed $M_{\rm 0.3 kpc}-L$ relation. \subsection{Model type C: Semi-analytical modelling (SAM)} Filling the multitude of DM halos with baryons given the above combined processes was investigated by \cite{Maccioetal09}. They semi-analytically modelled (SAM) DM sub-halos based on $N$-body merger tree calculations and high-resolution recomputations. The authors state ``We conclude that the number and luminosity of Milky Way satellites can be naturally accounted for within the ($\Lambda$)Cold Dark Matter paradigm, and this should no longer be considered a problem.'' Their theoretical mass--luminosity data are plotted in their fig.~5, and a fit to the redshift $z=0$ data for $L_V>3\times10^5\,L_{V,\odot}$ satellites is listed in Table~\ref{tab:fits}. The theoretical SAM data set shows a steep behaviour, $\kappa=0.42$. Given the observational data, this model is ruled out with a confidence of more than ten~$\sigma$. \subsection{Model type D: High-resolution baryonic physics simulations (Aq-D-HR)} The satellite population formed in a high-resolution $N$-body $\Lambda$CDM re-simulation with baryonic physics of one of the MW-type ``Aquarius'' halos is studied by \cite{OF09}. The treatment of baryonic processes includes time-evolving photoionisation, metallicity-dependent gas cooling and photo-heating, supernova (SN) feedback, and chemical enrichment by means of SN~Ia and~II and AGB stars. Re-ionisation is included and the galactic winds driven by stellar feedback are assumed to have velocities proportional to the local velocity dispersion of the dark-matter halo. In these models 100~per cent of the SNII energy is deposited as thermal energy. Galactic winds are thus produced even for the least-massive dwarf galaxies. Winds are observed in strong starbursts induced through interactions rather than in self-regulated dwarf galaxies, which may pose a problem for this ansatz \citep{Ott05}. The details of the simulations are provided by \cite{Okamotoetal09}. The resultant sub-halo population with stars can, as claimed by the authors, reproduce the S08 common-mass scale. Following the same procedure as for the above models, this claim is tested by obtaining $\kappa$ from their fig.~1 (upper panel, red asterisks) and comparing it to the observational data also plotted in their fig.~1 (note that \citealt{OF09} plot the masses within 600~pc rather than 300~pc as used above). From their plot of the observational data, which only includes central-600~pc masses for the eight most luminous satellites, it follows that $\kappa_{\rm obs, OF} = -0.03\pm0.05$. This is nicely consistent with the full S08 sample (18~satellites) discussed above. However, for their model data one finds that $\kappa=0.17\pm0.02$, i.e. the model can be discarded with a confidence of 3$\sigma$ or 99.7~per cent. \subsection{Model type E1, E2: WDM} \label{ssec:WDM} \cite{MF09} present theoretical distributions of satellite galaxies around a MW-type host halo for different cosmological models, namely $\Lambda$CDM and WDM with three possible DM-particle masses of $m_{\rm WDM}=1$, 2, and 5~keV. They perform numerical structure formation simulations and apply semi-analytic modelling to associate the DM sub-halos with luminous satellites.
They suggest that the luminosity function and mass--luminosity data of observed satellites are reproduced by the WDM models, implying a possible lower limit to the WDM particle mass of $m_{\rm WDM}\approx 1\,$keV. The model and observational mass--luminosity data are compared in their fig.~5 for $m_{\rm WDM}=1$ and~5~keV. The slopes of these model data are listed in Table~\ref{tab:fits}. From Table~\ref{tab:fits} it follows that the WDM model with $m_{\rm WDM} \approx 1$~keV is ruled out with very high confidence (4$\sigma$ or 99.99~per cent), and also has too few satellites fainter than $M_V\approx -8$ (their fig.~4). WDM models with $m_{\rm WDM}\approx 5$~keV are excluded with at least 3$\sigma$ or 99.7~per cent confidence, and, as is evident from their fig.~4, the models contain significantly too few satellites brighter than $M_V =-11$. \subsection{Model type F: Infalling and disrupting dark-matter satellite galaxies} \cite{Cooper10} study CDM model satellites in individual numerical models of dark matter halos computed within the Aquarius project. Semi-analytical modelling is employed to fill the sub-halos with visible matter, and the orbits of the infalling satellites are followed. General agreement with the observed satellites is claimed. Much as in the other models above, in this numerical CDM model of substructure and satellite formation in a MW-type host halo, the MW sub-halos fall in stochastically and therefore do not agree with the observed phase-space correlated satellites, i.e. with the existence of a rotating DoS (Sect.~\ref{sec:DoS} below). Furthermore, the presented model mass-luminosity data (their fig.~5) lead to too steep a slope (Table~\ref{tab:fits}) compared to the observations, and the DM-based model is excluded with a confidence of at least 99.7~per cent. In addition, fig.~5 of \cite{Cooper10} shows a significant increase in the number of model satellites with a similar brightness as the faintest known satellite (Segue~1, hereinafter Seg.\,1). This is in contradiction with the failure to find any additional satellites of this luminosity in the most recent data mining of the existing northern SDSS data, as discussed in Sect.~\ref{ssec:sub} below. Indeed, observations suggest that Seg.~1 is a star cluster rather than a satellite galaxy \citep{Niederste09}, worsening this problem. \subsection {Discussion} \label{ssec:ABC} In Fig.~\ref{fig:fits}, the latest theoretical ansatzes~A--F to solve the cosmological substructure problem are compared with the latest observational limit on the slope $\kappa$ of the DM-mass--luminosity relation of dSph satellite galaxies (Eq.~\ref{eq:ML}). \begin{figure} \includegraphics[angle=0,scale=0.43]{kappa.eps} \vspace{0mm} \caption{The slope of the mass--luminosity relation, $\kappa$ (Eq.~\ref{eq:ML}), for the models listed in Table~\ref{tab:fits}. The observational constraints with confidence intervals are depicted as hatched regions (1, 2, and 3$\sigma$ region). Satellites with a larger dark-matter mass are on average more luminous such that the mass--luminosity relation has $\kappa>0$. However, the observational constraints lie in the region $\kappa\approx 0$ (see Table~\ref{tab:fits}). The hypothesis that the data are consistent with any one of the models can be discarded with very high (at least 3$\sigma$, or more than 99.7~per cent) confidence. \label{fig:fits}} \end{figure} The theoretical results always lead to a trend of luminosity with halo mass as a result of energy conservation.
But the observed satellites do not show an increasing trend of luminosity with DM mass, according to \cite{Mateo98}, \cite{Penarrubia08}, and \cite{Strigari08}. From Fig.~\ref{fig:fits} we note that seven $\Lambda$CDM models of the satellites deviate $4 \sigma$ or more from the data, while only one (the WDM model E2 with $m_{\rm WDM}=5\,$keV, Table~\ref{tab:fits}) deviates more than $3 \sigma$ from the data. The likelihood\footnote{The {\sl likelihood} $=1-$(confidence in per cent)$/100$ gives an indication of how well the data can be accounted for by a given model. The {\sl confidence}, as used throughout this text, is the probability level at which a model can be discarded.} that any of the DM models describes the data is thus less than 0.3~per cent. As a caveat, the observed absence of a DM-mass--luminosity relation partially depends on the data for the ultra-faint dwarfs: indeed, for the classical (most luminous) dSphs, \cite{Serra09} argue that there may be a trend, $\kappa>0$, essentially because of their proposed increase in the mass of the Fornax dSph satellite. It is on the other hand plausible that the ultra-faint dwarfs do not possess any dark halo (see Sect.~\ref{sec:tdgs}), and that the enclosed mass derived is due to observational artifacts. In that case they should not be used as a possible improvement for the missing satellite problem. This, however, would pose a problem for the DM hypothesis. \cite{Adenetal09b} suggest that for the Hercules dSph satellite interloper stars need to be removed from the observational sample, which would require a revision of the mass within 300~pc to the value $M_{\rm 0.3 kpc}=1.9^{+1.1}_{-1.6}\times 10^6\,M_\odot$ (instead of the value $M_{\rm 0.3 kpc}=7.2^{+0.51}_{-0.21}\times 10^6\,M_\odot$ derived by S08). This new mass measurement, however, now lies more than $4\,\sigma$ away from all $\Lambda$CDM-models considered above (Table~\ref{tab:fits}). Hercules can thus not be understood in terms of a DM-dominated model. \cite{Adenetal09b} do state that DM-free models cannot be excluded (note also Fig.~\ref{fig:hercules} below), or that Hercules may be experiencing tidal disturbances in its outer parts. Tidal disturbance, however, would have to be very significant for its inner structure to be affected, because if one required conformity with the theoretical DM-models its $M_{\rm 0.3 kpc}$ mass would have to have been much higher and similar to the value derived by S08 ($\approx 10^7\,M_\odot$). Given the current Galactocentric distance of Hercules of 130~kpc and the result that the inner region of a satellite is only affected by tides after significant tidal destruction of its outer parts \citep{Kazantzidisetal04}, this scenario is physically implausible. There are therefore three possibilities: (i)~Hercules is a DM-dominated satellite. This, however, then implies that no logically consistent solution within the CDM framework is possible because its mass--luminosity datum would lie well away from the theoretical expectation. (ii)~Hercules has no DM. This implies that it cannot be used in the mass-luminosity data analysis above and would also imply there to exist an additional type of DM-free satellites, which, however, share virtually all observable physical characteristics with the putatively DM-filled satellites. (iii)~Hercules has been significantly affected by tides.
This case is physically implausible because of its large distance, but it would imply that Hercules cannot be used in the mass-luminosity analysis above (just as Sagittarius is excluded because of the significant tidal effects it is experiencing). Omitting Hercules from the data leads to a revised observational slope $\kappa=0.01\pm0.03$ such that none of the conclusions reached above about the performance of the DM-models are affected. A point of contention for DM models of dSph satellite galaxies is that the DM halos grow at various rates and are also truncated variously due to tidal influence. The highly complex interplay between dark-matter accretion and orbit-induced accretion truncation leads to the power-law mass function of DM halos, and at the same time would imply that the outcome in which all luminous DM sub-halos end up having the same DM mass is incompatible with the DM-theoretical expectations (see Sect.~\ref{sec:mfn}). {\sl Summarising Sect.~\ref{sec:ML}}, while the theoretical results always lead to a trend of luminosity with halo mass, the observed satellites do not show this trend. The hypothesis that the CCM accounts for the data can be discarded with more than 99.7~per cent significance. \section{The mass function of CDM halo masses (problem~2)} \label{sec:mfn} One of the predictions of the $\Lambda$CDM hypothesis is the self-similarity of DM-halos down to (at least) the mass range of dwarf galaxies, i.e. that massive halos contain sub-halos of lower mass, with the same structure in a statistical sense (\citealt{Mooreetal99}; for a major review see \citealt{DelPopolo2007}). The mass function of these sub-halos is, up to a critical mass $M_{\rm{crit}}$, well approximated by \begin{equation} \xi_{\rm{sub}}(M_{\rm{vir}})=\frac{dN}{dM_{\rm{vir}}} \propto M_{\rm{vir}}^{-1.9}, \label{eq:subhaloMF} \end{equation} where $dN$ is the number of sub-halos in the mass interval $M_{\rm{vir}}, M_{\rm{vir}}+dM_{\rm{vir}}$ \citep{Gaoetal04}, and $M_{\rm{crit}}$ is given by $M_{\rm{vir}} \approx 0.01 M_{\rm{h}}$ with $M_{\rm{h}}$ being the virial mass of the hosting CDM-halo. The virial mass, $M_{\rm{vir}}$, is defined by \begin{equation} M_{\rm{vir}}=\frac{4 \pi}{3} \Delta_{\rm{vir}} \rho_0 r_{\rm{vir}}^3, \label{eq:VirMass} \end{equation} where $\rho_0$ is the critical density of the Universe and $\Delta_{\rm{vir}}$ is a factor such that $\Delta_{\rm{vir}} \rho_0$ is the critical density at which matter collapses into a virialised halo, despite the overall expansion of the Universe. The virial radius $r_{\rm vir}$ is thereby determined by the density profile of the collapsed CDM-halo. For $M_{\rm{vir}} > 0.01\, M_{\rm{h}}$, the mass function steepens \citep{Gaoetal04}, so that it is effectively cut off at a mass $M_{\rm max}$ (see Eq.~\ref{eq:lumMF} below). It is reasonable to identify $M_{\rm max}$ with the mass of the most massive sub-halo, which must be higher than $M_{\rm crit}$, where the mass function begins to deviate from Eq.~\ref{eq:subhaloMF}, and lower than $M_h$, the mass of the host-halo. Therefore, $M_{\rm crit} < M_{\rm max} < M_h$. Thus, a halo with $M_{\rm{vir}}\approx 10^{12}\, M_{\odot}$, like the one that is thought to be the host of the MW, should have a population of sub-halos spanning several orders of magnitude in mass.
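As an illustration, sub-halo masses following a power law such as Eq.~\ref{eq:subhaloMF} can be drawn by inverse-transform sampling. The sketch below is our own illustration, with the canonical slope and arbitrary mass limits:

\begin{verbatim}
import numpy as np

def sample_subhalo_masses(n, m_min=1e7, m_max=1e11, slope=-1.9, rng=None):
    """Draw n masses with dN/dM proportional to M**slope (slope != -1)
    between m_min and m_max, via inverse-transform sampling."""
    if rng is None:
        rng = np.random.default_rng()
    a = slope + 1.0            # = -0.9 for the canonical slope of -1.9
    u = rng.random(n)
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

# The sample spans decades in mass but is dominated by low-mass halos,
# as expected for a steep mass function.
masses = sample_subhalo_masses(100000)
\end{verbatim}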
It is well known that, in consequence, a steep sub-halo mass function such as Eq.~\ref{eq:subhaloMF} predicts many more low-mass sub-halos than the number of observed faint MW satellites \citep{Mooreetal99,Klypin99}, a finding commonly referred to as the {\sl missing satellite problem}. Efforts to solve this problem rely on physical processes that can either clear CDM-halos of all baryons or inhibit their gathering in the first place, which would affect low-mass halos preferentially (e.g. \citealt{Moore06,Lietal09}; Sect.~\ref{sec:ML}). More specifically, \citet{Lietal09} find that the mass function of luminous halos, $\xi_{\rm{lum}}(M_{\rm{vir}})$, would essentially be flat for $10^7 M_{\odot} \le M_{\rm{vir}} < 10^9 M_{\odot}$. All sub-halos with $M_{\rm{vir}} \ge 10^9 M_{\odot}$ would keep baryons and therefore $\xi_{\rm{lum}}(M_{\rm{vir}})=\xi_{\rm{sub}}(M_{\rm{vir}})$ in this mass range. Thus, the mass function of {\sl luminous sub-halos} can be written as \begin{equation} \xi_{\rm{lum}}(M_{\rm{vir}}) =k k_i M_{\rm{vir}}^{-\alpha_i}, \label{eq:lumMF} \end{equation} with \begin{math} \begin{array}{@{\hspace{-0.6cm}}lll} &&\\[-4pt] \alpha_1 = 0, & \ k_1=1, & \ 10^7 \le \frac{M_{\rm{vir}}}{M_{\odot}} < 10^9,\\[3pt] \alpha_2 = 1.9, & \ k_2=k_1\, (10^9)^{\alpha_2-\alpha_1}, & \ 10^9 \le \frac{M_{\rm{vir}}}{M_{\odot}} \le M_{\rm{max}},\\[-4pt] &&\\ \end{array} \end{math}\\ where the factors $k_i$ ensure that $\xi_{\rm{lum}}(M_{\rm{vir}})$ is continuous where the power changes and $k$ is a normalisation constant chosen such that \begin{equation} \int^{M_{\rm{max}}}_{10^7}\xi_{\rm{lum}}(M_{\rm{vir}}) \, dM_{\rm{vir}}=1. \label{eq:norm} \end{equation} From a mathematical point of view, Eq.~\ref{eq:lumMF} is the probability distribution of luminous sub-halos. We note that the luminous sub-halo mass function proposed in \citet{Moore06} is similar to the one in \citet{Lietal09}. In the high-mass part, it has the same slope as the mass function for all sub-halos and flattens in the low-mass part (cf. fig.~3 in \citealt{Moore06}). The lower mass limit for luminous halos is however suggested to be $M_{\rm vir} \approx 10^8\,M_{\odot}$ in \citet{Moore06}. The mass function of {\sl all sub-halos} has $\alpha_1\approx\alpha_2\approx 1.9$ \citep{Gaoetal04}. \subsection{NFW halos} \label{ssec:NFW} It is well established that the theoretical density profiles of galaxy-sized CDM-halos are similar to a universal law, as proposed by \cite{NFW}. The NFW profile is given by \begin{equation} \label{eq:NFW} \rho_{\rm{NFW}}(r) =\frac{\delta _c\rho _0}{r/r_{\rm{s}}\left( 1+r/r_{\rm{s}}\right) ^2}, \end{equation} where $r$ is the distance from the centre of the halo and $\rho_0$ is the critical density of the Universe, while the characteristic radius $r_{\rm{s}}$ and $\delta_c$ are mass-dependent parameters. By integrating $\rho_{\rm{NFW}}(r)$ over a volume, the total mass of CDM within this volume is obtained. Thus, \begin{equation} M(r)=\int^r_0 \rho (r')4\pi r'^2 \,dr' \label{eq:enclmass1} \end{equation} is the mass of CDM contained within a sphere of radius $r$ around the centre of the CDM-halo, and $M(r)=M_{\rm{vir}}$ for $r=r_{\rm{vir}}$. Performing the integration on the right-hand side of Eq.~\ref{eq:enclmass1} and introducing the concentration parameter $c=r_{\rm{vir}}/r_{\rm{s}}$ leads to \begin{equation} M(r) =\frac{4 \pi \rho_0 \delta _c r_{\rm vir}^3}{c^3} \; \left[ \frac{r_{\rm{vir}}}{r_{\rm{vir}}+c\;r}+\ln \left( 1+\frac{c\;r}{r_{\rm{vir}}} \right) -1\right].
\label{eq:enclmass2} \end{equation} The parameter $\delta_c$ can be expressed in terms of $c$, \begin{equation} \delta _c =\frac{\Delta_{\rm{vir}}}{3} \frac{c^3}{\ln \left( 1+c \right) -c/(1+c)}, \label {eq:delta} \end{equation} as can be verified by setting $r=r_{\rm{vir}}$ in Eq.~\ref{eq:enclmass2} and substituting $M(r_{\rm vir})=M_{\rm vir}$ by Eq.~\ref{eq:VirMass}. If the halo is luminous, it is evident that $M(r)$ is smaller than the total mass included within $r$, $M_r$. However, assuming that the MW satellites are in virial equilibrium and that their dynamics is Newtonian in the weak-field limit, the mass-to-light ratios calculated for them are generally high and imply that they are DM-dominated and thus $M(r)=M_r$ would be a good approximation. This relation is therefore adopted for the present discussion. In this approximation $M(r=0.3 {\rm kpc})=M_{\rm 0.3kpc}$. In principle, the parameters $\rho_0$ \citep{NFW}, $c$ \citep{Bullock01}, and $\Delta_{\rm{vir}}$ \citep{Mainini03} depend on the redshift $z$, but for the purpose of the present paper only $z=0$ needs to be considered, as this is valid for the local Universe. Thus, \begin{equation} \rho_0 =\frac{3H_0^2}{8\pi G}, \label{rhocrit} \end{equation} where the Hubble constant $H_0=71 \, \rm{km} \, \rm{s}^{-1} \, \rm{Mpc}^{-1}$ \citep{Spergel07}, $\Delta_{\rm{vir}} \simeq 98$ for $\Lambda$CDM-cosmology \citep{Mainini03}, and \begin{equation} \log_{10}(\overline{c})=2.31-0.109 \log_{10}\left(\frac{M_{\rm{vir}}}{M_{\odot}}\right), \label{eq:c} \end{equation} where $\overline{c}$ is the expectation value of $c$ as a function of $M_{\rm{vir}}$. Thus, $\overline{c}$ decreases slowly with $M_{\rm{vir}}$, while the scatter in the actual $c$ is rather large, being \begin{equation} \sigma_{\log_{10} c}=0.174 \label{eq:sigc} \end{equation} \citep{Maccio07}. The only caveat here is that the NFW profile is used to integrate the mass, while the now-preferred Einasto profile (\citealt{Navarro09}, Sect.~\ref{sec:introd}) makes only a small difference in the central parts. \subsection{Probing the $\Lambda$CDM hypothesis with $M_{\rm 0.3kpc}$} \begin{figure} \includegraphics[angle=0,scale=0.80]{DMscatter.eps} \vspace{0mm} \caption{The {\sl mass function of luminous satellite problem}. The cumulative distribution function for the mass within the central 300~pc, $M_{\rm 0.3kpc}$, of the MW satellites (solid line) and the cumulative distribution function for $M_{\rm 0.3kpc}$ of a sample of $10^6$ CDM-halos picked from the parent distribution of luminous sub-halos (Eq.~\ref{eq:lumMF}, dashed line). The null hypothesis is that the MW satellite $M_{\rm 0.3 kpc}$ masses are drawn from this parent distribution. The maximum distance between the two curves is 0.333 so that the null hypothesis can be discarded with~98.9~per cent confidence.} \label{fig:DMscatter} \end{figure} S08 use the stellar motions in 18 MW satellites to calculate their mass within the central 300 pc, $M_{\rm 0.3kpc}$. They assume the satellites to be in virial equilibrium and that Newtonian dynamics can be applied to them. The sample from S08 can be enlarged to 20~satellites by including the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC), since \citet{vanderMarel02} estimated the mass of the LMC within the innermost 8.9~kpc, $M_{\rm{LMC}}$, using the same assumptions as S08. This implies that $M_{\rm{LMC}}=(8.7\pm4.3)\times 10^9 \, M_{\odot}$, of which the major part would have to be DM.
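The enclosed NFW mass for a given $M_{\rm vir}$ can be evaluated directly from Eqs.~\ref{eq:VirMass}, \ref{eq:enclmass2}, \ref{eq:delta}, and~\ref{eq:c}. The following sketch (our own illustration, for $z=0$ with $H_0=71\,{\rm km\,s^{-1}\,Mpc^{-1}}$ and $\Delta_{\rm vir}=98$) does this for the expectation value $\overline{c}$; it reproduces the order of magnitude, though not necessarily the exact tabulated values quoted below.

\begin{verbatim}
import numpy as np

H0, DELTA_VIR = 71.0, 98.0                 # km/s/Mpc; virial overdensity
RHO0 = 2.775e11 * (H0 / 100.0) ** 2        # critical density [M_sun/Mpc^3]

def enclosed_mass(r_kpc, m_vir, log10_c=None):
    """NFW mass [M_sun] within radius r_kpc of a halo of virial mass
    m_vir [M_sun]; uses the mean concentration-mass relation,
    log10(c) = 2.31 - 0.109 log10(M_vir), if log10_c is None."""
    if log10_c is None:
        log10_c = 2.31 - 0.109 * np.log10(m_vir)
    c = 10.0 ** log10_c
    # virial radius [Mpc] from M_vir = (4 pi / 3) Delta_vir rho_0 r_vir^3
    r_vir = (3.0 * m_vir / (4.0 * np.pi * DELTA_VIR * RHO0)) ** (1.0 / 3.0)
    delta_c = (DELTA_VIR / 3.0) * c**3 / (np.log(1.0 + c) - c / (1.0 + c))
    x = c * (r_kpc / 1000.0) / r_vir
    bracket = 1.0 / (1.0 + x) + np.log(1.0 + x) - 1.0
    return 4.0 * np.pi * RHO0 * delta_c * (r_vir / c) ** 3 * bracket

# Of order 10^7 M_sun within 0.3 kpc for a 1.2e11 M_sun halo:
print(enclosed_mass(0.3, 1.2e11))
\end{verbatim}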
Equations~\ref{eq:VirMass},~\ref{eq:enclmass2},~\ref{eq:delta} and~\ref{eq:c} have been used to create tabulated expectation values of $M(r)$ for NFW-halos with different $M_{\rm{vir}}$, and it can thereby be seen that for a typical NFW-halo with $M(r=8.9 \, {\rm kpc})=8.7 \times 10^9 \, M_\odot$, $M(r=0.3 \, {\rm kpc})=2.13 \times 10^7 \, M_\odot = M_{\rm 0.3kpc}$, and $M_{\rm vir}=1.2\times 10^{11} \, M_\odot$. We note that the SMC has about 1/10th of the mass of the LMC \citep{Kallivayalil06}, hence the virial mass of its halo can be estimated as $M_{\rm vir}=1.2\times 10^{10} \, M_\odot$, corresponding to $M_{\rm 0.3kpc}=1.51 \times 10^7 \, M_\odot$. To test the distribution of the observed $M_{\rm 0.3kpc}$ values of the MW satellites against the theoretically expected distribution, artificial samples of $10^6 \ M_{\rm 0.3kpc}$ masses are generated in concordance with the $\Lambda$CDM hypothesis, using Monte Carlo simulations. As noted in Sect.~\ref{ssec:NFW}, $M_{\rm 0.3kpc}$ is well approximated by $M(r=0.3 \rm{kpc})$ in a CDM-dominated galaxy. $M(r=0.3 \rm{kpc})$ can be calculated if $M_{\rm{vir}}$ and $c$ are given, and the expectation value for $c$ is a function of $M_{\rm{vir}}$. The first step is therefore to choose a value for $M_{\rm{vir}}$ using uniform random deviates and the probability distribution of luminous halos given in Eq.~\ref{eq:lumMF} (see e.g. chapter~7.2 in \citealt{NumRecipes} for details). The next step is to attribute a value for $\log_{10}(c)$ to the chosen $M_{\rm{vir}}$. This is done by multiplying Eq.~\ref{eq:sigc} with a Gaussian random deviate and adding the result to the value for $\log_{10}(\overline{c})$, which is calculated from Eq.~\ref{eq:c}. After transforming $\log_{10}(c)$ to $c$, $M_{\rm 0.3kpc}=M(r=0.3 \rm{kpc})$ of the given halo can be calculated from Eq.~\ref{eq:enclmass2}, using Eqs.~\ref{eq:VirMass} and~\ref{eq:delta}. These steps are repeated until a sample of $10^6 \ M_{\rm 0.3kpc}$ values is generated. If two samples are given, the maximum distance between their cumulative distribution functions, $D$, can be calculated. Performing the KS-test, this quantity $D$ allows an estimate of how likely it is that they are drawn from the same distribution function. The null hypothesis is that the observed satellite galaxies are drawn from the theoretically calculated mass function of luminous halos; the parent distribution is thus assumed to be the mass function of $M({\rm 0.3 kpc})$ values of luminous sub-halos according to the $\Lambda$CDM hypothesis. Assuming in Eq.~\ref{eq:lumMF} that $M_{\rm{max}} = 10^{11} M_{\odot}$, which is approximately the mass estimated for the CDM halo of the LMC, and taking $M_{\rm min}=10^7\,M_\odot$, leads to $D=0.333$. According to the KS-test, given the parent distribution the probability of an even larger distance is~0.011. This means that the null hypothesis can be excluded with 98.9~per cent confidence. Both cumulative distributions are shown in Fig.~\ref{fig:DMscatter}\footnote{ Monte Carlo experiments are used to quantify the confidence values for the KS-tests: Drawing the corresponding number of sub-halo masses (e.g.~20 as in this case) from Eq.~\ref{eq:lumMF}, $D'$ is calculated. This is repeated $10^5$ times. Counting of $D'$ values gives the fraction of cases when $D'>D$, where $D$ is the actually obtained $D'$ value from the data (e.g. $D=0.333$ in this case).
These fractions are reported here as likelihood values, and are about half as large as the probability values obtained using approximate methods, as, e.g., by \cite{NumRecipes}.}. Omitting the LMC and SMC from the observational sample but keeping $M_{\rm min}=10^7\,M_\odot$ and $M_{\rm max}=10^{11}\,M_\odot$ in the theoretical sample yields $D=0.294$, leading to the exclusion of the null hypothesis with a confidence of 95.5~per cent. In addition, setting $M_{\rm max} = 4\times10^{10}\,M_\odot$, which is the $M_{\rm vir}$ that corresponds to the most massive $M_{\rm 0.3 kpc}$ in the S08 sample (i.e. the most massive remaining sub-halo), yields $D=0.301$, leading to exclusion of the null hypothesis with a confidence of 96.3~per cent. The latter two tests comprise a homogeneous mass-sample of observed satellites as compiled by S08. That the mass function is expected to steepen at $M_{\rm{crit}}=0.01\,M_{\rm{h}}$ even increases the discrepancy between the $\Lambda$CDM hypothesis and the observations. Reinstating the LMC and SMC into the observational sample and cutting off $\xi_{\rm{sub}}(M_{\rm{vir}})$ at $M_{\rm{max}}=10^{10} M_{\odot}$ (with $M_{\rm min}=10^7\,M_\odot$), which would be close to $M_{\rm{crit}}$ for the CDM-halo of the MW (see Sect.~\ref{sec:mfn}), and one order of magnitude below the estimated mass of the CDM-halo of the LMC, implies that $D=0.359$ and an exclusion with 99.5~per cent confidence. On the other hand, setting $M_{\rm{max}}=10^{12} \, M_{\odot}$ (with $M_{\rm{min}}=10^7 \, M_{\odot}$) leads to $D=0.329$ and an exclusion with 98.8 per cent confidence. Any reasonable uncertainty in the actual value of $M_{\rm{max}}$ can therefore be excluded as an explanation of the discrepancy between the observed sample of $M_{0.3\, \rm{kpc}}$ and a sample generated based on the $\Lambda$CDM hypothesis. As a consequence, the same is true for the uncertainty in the actual mass of the halo of the MW, $M_{\rm{h}}$, since $M_{\rm{max}}$ is linked to $M_{\rm{h}}$ (see Sect.~\ref{sec:mfn}). Thus $M_{\rm{max}}$ is kept at $10^{11} \, M_{\odot}$ in the following. Adjusting the lower limit of $\xi_{\rm{lum}}(M_{\rm{vir}})$ from $10^7 \, M_{\odot}$ to $10^8 \, M_{\odot}$ then leads to $D=0.319$ and an exclusion of the null-hypothesis with a confidence of 98.4 per cent. The mass of $10^8 \, M_{\odot}$ is the $M_{\rm{vir}}$ suggested by the lowest $M_{0.3\rm{kpc}}$ in the sample from S08. We note that the likelihood decreases with decreasing $M_{\rm max}$, because the overabundance of $M_{\rm 0.3\,kpc}\approx10^7\,M_\odot$ halos becomes more prominent in the observational sample. S08 suggest that $\xi_{\rm{lum}}(M_{\rm{vir}})$ might even be cut off below a mass of $\approx 10^9 M_{\odot}$, either because halos below that mass do not contain baryons or do not form at all. Indeed, modifying $\xi_{\rm{lum}}(M_{\rm vir})$ given by Eq.~\ref{eq:lumMF} accordingly results in an agreement between the theoretical distribution and the data ($D=0.188$ with an exclusion confidence of only 70~per cent). A $\xi_{\rm{lum}}(M_{\rm{vir}})$ with a lower mass limit of $10^9 \, M_{\odot}$ is however in disagreement with the $\Lambda$CDM hypothesis, since the limiting mass below which all CDM-halos are dark ought to be two orders of magnitude lower according to \citet{Lietal09}. As a final note, the newly derived reduced mass of Hercules (see end of Sect.~\ref{ssec:ABC}) affects neither the calculated likelihoods nor the conclusions reached here.
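Before summarising, the Monte Carlo procedure described in this section can be sketched compactly as follows. This is an illustration only: it assumes scipy, reuses the \texttt{enclosed\_mass} helper from the NFW sketch above, applies the standard two-sample KS $p$-value rather than the Monte Carlo calibration of the footnote, and uses a stand-in array in place of the observed $M_{\rm 0.3kpc}$ values.

\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def sample_mvir(n, m_max=1e11):
    # Luminous sub-halo mass function: flat in [1e7, 1e9) M_sun,
    # proportional to M^-1.9 in [1e9, m_max] M_sun.
    a = -0.9                                # slope + 1 of the steep branch
    w_flat = 1e9 - 1e7                      # integral of the flat branch
    w_steep = 1e9**1.9 * (m_max**a - 1e9**a) / a
    steep = rng.random(n) < w_steep / (w_flat + w_steep)
    m_flat = 1e7 + rng.random(n) * (1e9 - 1e7)
    m_steep = (1e9**a + rng.random(n) * (m_max**a - 1e9**a)) ** (1.0 / a)
    return np.where(steep, m_steep, m_flat)

m_vir = sample_mvir(1_000_000)
# Concentrations with log-normal scatter, sigma_log10(c) = 0.174:
log10_c = (2.31 - 0.109 * np.log10(m_vir)
           + 0.174 * rng.standard_normal(m_vir.size))
m03_theory = enclosed_mass(0.3, m_vir, log10_c)

# Stand-in ONLY: replace with the 20 observed M(0.3 kpc) values
# (the S08 sample plus the LMC and SMC estimates).
m03_obs = np.full(20, 1.0e7)

D, p = stats.ks_2samp(m03_obs, m03_theory)
print(D, p)
\end{verbatim}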
{\sl Summarising Sect.~\ref{sec:mfn}}, the mass distribution of the predicted DM halos of observed satellites is consistent with the $\Lambda$CDM hypothesis with at most 4.5~per cent likelihood. Assuming the dSph satellites are in virial equilibrium, the observationally deduced DM halo masses of the MW satellites show a significant overabundance of $M_{\rm 0.3kpc}\approx 10^7\,M_\odot$ halos and a lack of less-massive values compared to the theoretically calculated distribution for luminous sub-halos, despite much effort to solve the {\sl common-mass-scale problem} (Sect.~\ref{sec:ML}). \section{The bulge mass versus satellite number relation (problem~3)} \label{sec:origin} According to a straightforward interpretation of the CCM, more massive DM host halos have a larger number of luminous satellites because the number of sub-halos above a low-mass threshold increases with host halo mass, given that the host halo mass grows by accreting sub-halos. The sub-halos are accreted mostly individually without a physical link to the processes occurring at the centre of the host halo. There indeed does not appear to be an observed relation between the halo mass and the bulge mass, since pairs of galaxies with and without a bulge (such as M31, \citealt{RF70}, and M101, \citealt{Bosmaetal81}, respectively) but with the same rotation velocity can be found. It would be useful to return to models A--F (Sect.~\ref{sec:ML}) and to include the formation of the host galaxy in the modelling to quantify the degree of correlation between the bulge mass and number of luminous satellites actually expected in the CCM. When doing so, the same type of models will also have to account for the presence of bulge-less galaxies having the same DM-halo mass, as pointed out above. That is, it would {\sl not} suffice to merely demonstrate that some sort of bulge-mass--satellite number correlation emerges in the CCM. The case $M_{\rm bulge}=0$ must emerge naturally within the model, since two-thirds of all bright disk galaxies have no bulge or only a small one \citep{C009b}. On the basis of extragalactic observational data, \cite{Karachentsevetal05} note, but do not quantify, the existence of a correlation between the bulge luminosity and the number of associated satellite galaxies such that galaxies without a bulge have no known dSph companions, such as M101. \cite{Karachentsevetal05} also point out that the number of known dSph satellites increases with the tidal environment. The existence of this correlation can be tested in the Local Group, where good estimates of the number of satellites within the nominal virial radii of the respective hosts and of the stellar bulge masses of the three major galaxies (MW, M31, and M33) exist. Only the satellites brighter than $L_V = 0.2\times10^6\,L_\odot$ ($M_V<-8.44$) are considered, given that the census of fainter satellites is incomplete for the MW (notably in the southern hemisphere), and also for M31 and M33 given their distances. By restricting this analysis to satellites with $L_V>0.2\times10^6\,L_\odot$, the result becomes robust against the discovery of additional satellites, since these would typically be fainter. The result is displayed in Fig.~\ref{fig:correl}: a linear correlation between the bulge mass and the number of early-type satellites is suggested. An error-weighted least squares linear fit to the data yields \begin{equation} N_{\rm dSph} = (4.03\pm 0.04)\times M_{\rm bulge}/(10^{10}\,M_\odot).
\label{eq:bulge} \end{equation} In terms of the present-day stellar mass fraction, the dSph satellites of the MW add up to at most a few times $10^7\,M_\odot$, so that they amount to about 0.15~per cent of the mass of the bulge. Given that Eq.~\ref{eq:bulge} is a linear fit to three data points only, it will be important to check the reality of this correlation by surveying disc galaxies in the Local Volume with different bulge masses for a deep and exhaustive sampling of satellite galaxies. Given the small number of observational data points underlying Eq.~\ref{eq:bulge}, one should not over-interpret this result, but it is legitimate to inquire how significant the empirical correlation between bulge mass and the number of satellites is. In view of the observation by \cite{Karachentsevetal05} noted above, it may be indicative of a physical correlation. The significance of the Local Group bulge--satellite correlation is evaluated by performing a Monte Carlo experiment, the null hypothesis of which is that there is no correlation. This hypothesis would appear to be plausible in the CCM because the number of satellites depends mostly on the host DM halo mass, while the bulge is produced by baryonic processes taking place near the centre of the host DM halo. Three pairs of $M_{\rm bulge}$ and $N_{\rm dSph}$ values are chosen randomly from uniform distributions such that $M_{\rm bulge} \in [0,4.6\times 10^{10}\,M_\odot]$ and $N_{\rm dSph} \in [0,28]$\footnote{The upper bounds of the intervals are the 3$\sigma$ upper values of $M_{\rm bulge}$ and $N_{\rm dSph}$ of M31. The scaling of the axes is, however, irrelevant for the results of the Monte Carlo experiments, because the aim is to test how likely a correlation results, given the null hypothesis.}. For each three-point data set, a linear regression yields a measure of the degree of correlation. This is performed $10^6$ times. The following incidences are counted: 1) the resulting linear relation passes the $(M_{\rm bulge}, N_{\rm dSph}) = (0,0)$ point\footnote{The precise condition here is as follows: Let there be three Monte Carlo pairs $(M_{\rm bulge}, N_{\rm dSph})_i, i=1...3$. A linear regression yields a slope and an axis intersection, both with uncertainties expressed as $\sigma$ values. If the axis intersection lies within $5\sigma$ of the $(0,0)$ point, then this particular set of bulge--satellite pairs is counted. Note that the test does not require the slope to be the same as the observed value.} {\sl and} the slope of the linear relation has a relative uncertainty smaller than a given value; and 2) the slope of the linear relation has a relative uncertainty smaller than a given value. The relative uncertainty in the slope used here is based on the uncertainties in the data. Applying this relative uncertainty to Eq.~\ref{eq:bulge} leads to $N_{\rm dSph}\approx (4\pm 1)\times M_{\rm bulge}/(10^{10}\, M_{\odot})$. Taking the upper and the lower 1$\sigma$ limit of the slope, this equation thereby passes the lower and the upper 1$\sigma$ values of the data (Fig.~\ref{fig:correl})\footnote{The uncertainty in the slope given by Eq.~\ref{eq:bulge} is a measure of how close the data lie to the straight line fitted to them, i.e. very close in the given case. However, the uncertainties on the data suggest that the observed case is rather improbable (although obviously not impossible), even if the correlation between $N_{\rm dSph}$ and $M_{\rm bulge}$ is real.
The Monte Carlo result is that case 1) occurs~$44\,000$ times, while case~2) occurs~$157\,000$ times. Thus, if the correlation evident in Fig.~\ref{fig:correl} were unphysical, then observing it would have a likelihood of~$0.044$ and~$0.157$, respectively. Given the data on the Local Group, the above hypothesis that the bulge mass and number of satellites are not correlated can therefore be discarded with a confidence of~95.6 per cent and~84.3 per cent in case 1) and case 2), respectively.
{\sl Summarising Sect.~\ref{sec:origin}}, the null hypothesis that the bulge mass and the number of satellites are independent quantities is rejected, based on the Local Group data, with a confidence of more than~95.6~per cent. With the absence of a DM-mass--luminosity relation for the observed satellites (Sect.~\ref{sec:ML}), this suggests that our present understanding of how satellite dwarf galaxies form and evolve may need revision. In the formation modelling of satellite galaxies within the CCM it will therefore be necessary also to include the formation of the host galaxy, to quantify the correlation between bulge mass and the number of satellites within the CCM. It will also be essential to refine this correlation using deep observational extragalactic surveys.
\begin{figure} \includegraphics[angle=0,scale=0.43]{Mbulge_NdSph.eps} \vspace{-15mm} \caption{The number of dSph and dE satellite galaxies more luminous than $0.2\times10^6\,L_\odot$ is plotted versus the bulge mass of the host galaxy (MW: \citealt{Zhao96}, M31: \citealt{Kent89}, M33: \citealt{Gebhardtetal01}). Only satellites within a distance of 270~kpc of the MW and M31 are used. The solid line (slope$=4.03$) is Eq.~\ref{eq:bulge}. The upper (slope$=5.03$) and the lower (slope$=3.03$) dotted lines illustrate the relative uncertainty assumed in the Monte Carlo experiment (see Sect.\,\ref{sec:origin}). \label{fig:correl}} \end{figure}
\section{The disc of satellites (DoS) and invariant baryonic galaxies (problems~4 and~5)} \label{sec:DoS}
The DoS is now addressed in view of the new satellite galaxies, and in Sect.~\ref{ssec:baryonic_invariant} the issue that the two major DM halos of the Local Group, which happen to be similar, are occupied by similar disc galaxies is addressed within the context of the CCM. An important constraint for understanding the origin and nature of the observed satellite galaxies comes from their being significantly anisotropically distributed about the MW, and possibly also about Andromeda. The problem of the MW system for the CCM was emphasised by \cite{KTB05}. They pointed out that the observed satellite system of the MW was incompatible at the 99.5~per cent confidence level with the theoretical distribution expected if the satellites were DM sub-halos tracing an isotropic DM host halo. Until then, the prediction within the DM hypothesis was that the distribution of sub-halos ought to be nearly spherical, tracing the shape of the host DM halo. For example, \cite{APC04} show a MW-type DM halo to have an infall asymmetry of only about 15~per cent. The sub-halos enter the host halo along filaments and then phase-mix and virialise within the growing host DM halo. Similar sub-halo distributions are obtained in CDM and WDM models \citep{Knebeetal08}.
The DoS is a pronounced feature of the MW satellite system \citep{Metz09b}, and a similar structure was reported for the Andromeda system \citep{KG06}, for which, however, the distance uncertainties are larger and the satellite population is richer and more complex, including dSph, dE, and dIrr galaxies. In the case of the well-studied MW, the DoS is very pronounced for the classical (11 brightest) satellites, including the LMC and SMC. But how are the new satellites, the ultra-faint ones, distributed? Much hope for the CCM rests on the possibility that the new discoveries may alleviate the DoS problem.
\cite{Watkins09} and \cite{Belokurov10} reported the discovery of two new MW satellite galaxies, Pisces~I and~II, respectively, enlarging the total MW satellite system to 24~satellites. Pisces~I and~II were found in the southern part of the SDSS survey area, making them the first two non-classical satellite galaxies found in the Southern Galactic hemisphere. Furthermore, distances to a number of the already known satellite galaxies have been updated in recent years, most notably the new distance estimate for Boo~II by \cite{Walsh08}, which changes the distance from~60 to 42~kpc. An updated list of all currently known satellites is provided in Table~\ref{tab:satellites}, upon which the following analysis is based.
\begin{sidewaystable*} \begin{center} \caption[]{ Data for the currently known MW satellites.\label{tab:satellites} } \begin{tabular}{lrrrrrrcrrrccc} \hline\hline\\[-3mm] Name & $\alpha_{2000}$ & $\delta_{2000}$ & $r_{\rm{helio}}$ & $l_{\rm{MW}}$ & $b_{\rm{MW}}$ & $r_{\rm{MW}}$ & Ref. & $v_{\rm{GSR}}^{220}$ & $v_{\rm{GSR}}^{250}$ & $\Delta v$ & Ref. & $L_{\rm{V}}$ & Ref. \\ & [h~m~s] & [$^\circ$~m~s] & [kpc] & [$^{\circ}$] & [$^{\circ}$] & [kpc] & & [km\,s$^{-1}$] & [km\,s$^{-1}$] & [km\,s$^{-1}$] & & $[L_{\sun}]$ \\ \hline {\bf New:}\\ Boo & 14 ~ 00 ~ 05 & +14 ~ 30 ~ 21 & $ 64 \pm 2 $ & 357.9 & 76.5 & 61 & 2; 10; 28; 8 & 94.4 & 94.2 & $ \pm 3.4 $ & 21 & $ (2.6 \pm 0.5) \times 10^{ 4 } $ & 2; 18 \\ Boo II & 13 ~ 58 ~ 05 & +12 ~ 51 ~ 36 & $ 45 \pm 2 $ & 348.6 & 78.9 & 43 & 30; 31; 14 & & & & & $ (9.2 \pm 5.4) \times 10^{ 2 } $ & 30; 18; 31 \\ CVn & 13 ~ 28 ~ 04 & +33 ~ 33 ~ 27 & $ 214 \pm 9 $ & 84.2 & 80.0 & 213 & 36; 8; 16; 17 & 64.8 & 69.7 & $ \pm 0.6 $ & 13; 29 & $ (2.0 \pm 0.3) \times 10^{ 5 } $ & 36; 18 \\ CVn II & 12 ~ 57 ~ 10 & +34 ~ 19 ~ 20 & $ 154 \pm 5 $ & 129.4 & 81.3 & 155 & 25; 3; 12; 8 & -97.5 & -93.2 & $ \pm 1.2 $ & 29 & $ (7.5 \pm 3.1) \times 10^{ 3 } $ & 25; 3; 18 \\ CBe & 12 ~ 26 ~ 59 & +23 ~ 54 ~ 37 & $ 43 \pm 2 $ & 202.2 & 75.5 & 44 & 3; 8; 23 & 47.4 & 40.4 & $ \pm 0.9 $ & 29 & $ (3.1 \pm 1.1) \times 10^{ 3 } $ & 3; 18; 22 \\ Her & 16 ~ 31 ~ 04 & +12 ~ 47 ~ 24 & $ 135 \pm 4 $ & 31.2 & 38.2 & 129 & 3; 7; 8; 1; 26 & 128.6 & 140.0 & $ \pm 1.1 $ & 29; 1 & $ (2.9 \pm 0.7) \times 10^{ 4 } $ & 3; 18; 26 \\ Leo IV & 11 ~ 32 ~ 58 & -00 ~ 32 ~ 09 & $ 156 \pm 5 $ & 261.1 & 56.3 & 156 & 3; 20 & 10.6 & -6.0 & $ \pm 1.4 $ & 29 & $ (1.3 \pm 0.3) \times 10^{ 4 } $ & 3; 18; 27; 9 \\ Leo V & 11 ~ 31 ~ 09 & +02 ~ 13 ~ 05 & $ 176 \pm 10 $ & 257.9 & 58.3 & 176 & 4; 9 & 58.5 & 42.8 & $ \pm 3.1 $ & 4 & $ (6.4 \pm 2.4) \times 10^{ 3 } $ & 4; 9 \\ Pis I & 23 ~ 40 ~ 00 & -00 ~ 18 ~ 00 & $ 80 \pm 14 $ & 100.2 & -57.8 & 80 & 32; 15 & 42.1 & 58.0 & & 15 & & \\ Pis II & 22 ~ 58 ~ 31 & +05 ~ 57 ~ 09 & $ 182 \pm 36 ^{*}$ & 84.1 & -47.6 & 181 & 6 & & & & & $ \sim 8.6 \times 10^{ 3 } $ & 6 \\ \textit{Seg I} & 10 ~ 07 ~ 04 & +16 ~ 04 ~ 40 & $ 23 \pm 2 $ & 206.2 & 39.5 & 28 & 3 & 94.5 & 79.3 & $ \pm 1.3 $ & 11
& $ (3.4 \pm 2.7) \times 10^{ 2 } $ & 3; 18 \\ \textit{Seg II} & 02 ~ 19 ~ 16 & +20 ~ 10 ~ 31 & $ 35 \pm 2 $ & 157.0 & -31.1 & 41 & 5 & 54.8 & 67.6 & $ \pm 2.5 $ & 5 & $ (8.6 \pm 2.7) \times 10^{ 2 } $ & 5 \\ UMa & 10 ~ 34 ~ 49 & +51 ~ 55 ~ 48 & $ 100 \pm 4 $ & 162.0 & 51.3 & 105 & 34; 29; 24 & -6.9 & -0.3 & $ \pm 1.4 $ & 29 & $ (1.4 \pm 0.4) \times 10^{ 4 } $ & 18 \\ UMa II & 08 ~ 51 ~ 30 & +63 ~ 08 ~ 22 & $ 30 \pm 5 $ & 159.7 & 30.4 & 37 & 35 & -29.0 & -17.1 & $ \pm 1.9 $ & 29 & $ (3.3 \pm 1.0) \times 10^{ 3 } $ & 35; 18; 22 \\ Wil1 & 10 ~ 49 ~ 22 & +51 ~ 03 ~ 10 & $ 41 \pm 6 $ & 164.4 & 48.8 & 46 & 33; 8 & & & & & $ (1.1 \pm 0.6) \times 10^{ 3 } $ & 33; 18 \\ \hline {\bf Classical:}\\ Car$^{\dagger}$ & 06 ~ 41 ~ 37 & -50 ~ 58 ~ 00 & $ 101 \pm 5 $ & 255.2 & -21.7 & 103 & 19 & 22.5 & -4.9 & $ \pm 3 $ & 19 & $ 4.5 \times 10^{ 4 } $ & 19 \\ Dra$^{\dagger}$ & 17 ~ 20 ~ 19 & +57 ~ 54 ~ 48 & $ 82 \pm 6 $ & 93.5 & 34.6 & 82 & 19 & -112.3 & -87.7 & $ \pm 2 $ & 19 & $ 2.8 \times 10^{ 5 } $ & 19 \\ For$^{\dagger}$ & 02 ~ 39 ~ 59 & -34 ~ 27 ~ 00 & $ 138 \pm 8 $ & 230.0 & -63.4 & 140 & 19 & -29.2 & -40.4 & $ \pm 3 $ & 19 & $ 1.6 \times 10^{ 7 } $ & 19 \\ Leo I & 10 ~ 08 ~ 27 & +12 ~ 18 ~ 30 & $ 250 \pm 30 $ & 224.7 & 48.6 & 254 & 19 & 179.9 & 165.4 & $ \pm 2 $ & 19 & $ 4.9 \times 10^{ 6 } $ & 19 \\ Leo II & 11 ~ 13 ~ 29 & +22 ~ 09 ~ 12 & $ 205 \pm 12 $ & 217.5 & 66.1 & 208 & 19 & 17.0 & 9.0 & $ \pm 2 $ & 19 & $ 5.9 \times 10^{ 5 } $ & 19 \\ LMC$^{\dagger}$ & 05 ~ 23 ~ 34 & -69 ~ 45 ~ 24 & $ 49 \pm 2 $ & 268.5 & -33.4 & 48 & 19 & 143.3 & 118.6 & & 19 &$2.1\times10^9$ &37 \\ SMC$^{\dagger}$ & 00 ~ 52 ~ 44 & -72 ~ 49 ~ 42 & $ 58 \pm 2 $ & 291.6 & -47.4 & 55 & 19 & 49.6 & 32.5 & & 19 &$5.7 \times 10^8$ &37 \\ Sgr$^{\dagger}$ & 18 ~ 55 ~ 03 & -30 ~ 28 ~ 42 & $ 24 \pm 2 $ & 9.4 & -22.4 & 16 & 19 & 161.1 & 164.0 & $ \pm 5 $ & 19 & $ 2.0 \times 10^{ 7 } $ & 19 \\ Scu$^{\dagger}$ & 01 ~ 00 ~ 09 & -33 ~ 42 ~ 30 & $ 79 \pm 4 $ & 234.6 & -81.9 & 79 & 19 & 77.9 & 73.8 & $ \pm 3 $ & 19 & $ 2.4 \times 10^{ 6 } $ & 19 \\ Sex & 10 ~ 13 ~ 03 & -01 ~ 36 ~ 54 & $ 86 \pm 4 $ & 237.8 & 40.8 & 89 & 19 & 76.9 & 56.4 & $ \pm 3 $ & 19 & $ 5.4 \times 10^{ 5 } $ & 19 \\ UMi$^{\dagger}$ & 15 ~ 09 ~ 11 & +67 ~ 12 ~ 54 & $ 66 \pm 3 $ & 114.2 & 43.2 & 68 & 19 & -92.9 & -71.7 & $ \pm 2 $ & 19 & $ 3.1 \times 10^{ 5 } $ & 19 \\ \hline \end{tabular} \end{center}
Notes to the table: Data for the MW satellites used for fitting the DoS. Seg~1 and~2 (marked in \textit{italics}) are included in this list for reference, but they have not been included in the fitting because they appear to be diffuse star clusters \citep{Niederste09}. The positions are given both in Heliocentric coordinates (right ascension $\alpha_{2000}$, declination $\delta_{2000}$, and Heliocentric distance $r_{\rm{helio}}$ for epoch J2000.0) and in Galactocentric coordinates assuming the Sun to have a distance of 8.5~kpc from the MW centre. $l_{\rm{MW}}$ gives the Galactic longitude with $0^\circ$ pointing from the Galactic centre to the Sun. $b_{\rm{MW}}$ is the latitude as seen from the Galactic centre and $r_{\rm{MW}}$ the radial distance from the centre of the MW. The coordinates were obtained using data from the references listed in the column labelled Ref.; where more than one source is given, the distances to the satellites were obtained by error-weighted averaging over the available measurements.
The satellites' line-of-sight velocities with respect to the Galactic standard of rest (GSR) are calculated assuming the Sun to move into the direction $l = 90^{\circ}$, $b = 0^{\circ}$ (in Heliocentric, Galactic coordinates) with a velocity of either 220~km\,s$^{-1}$ ($v_{\rm{GSR}}^{220}$) or 250~km\,s$^{-1}$ ($v_{\rm{GSR}}^{250}$). The measurement uncertainties for the radial velocities reported in the respective papers (referred to in column Ref.) are reproduced in the column labelled $\Delta v$. Finally, $L_{\rm{V}}$ gives the satellite luminosities in the photometric V-band; again, uncertainty-weighted averages are quoted when more than one reference is given in column Ref. Data marked with $\dagger$ have measured proper motions, listed in table~1 of \cite{Metz08}. $^*$:~As no distance uncertainties for Pisces~II are available in the literature, the error is estimated to be 20~per cent of the distance.
{\sl References}: 1: \cite{Adenetal09}, 2: \cite{Belokurov2006}, 3: \cite{Belokurovetal07}, 4: \cite{Belokurov2008}, 5: \cite{Belokurov2009}, 6: \cite{Belokurov10}, 7: \cite{Coleman2007}, 8: \cite{deJong2008}, 9: \cite{deJong2010}, 10: \cite{DallOra2006}, 11: \cite{Geha2009}, 12: \cite{Greco2008}, 13: \cite{Ibata2006}, 14: \cite{Koch2009}, 15: \cite{Kollmeier2009}, 16: \cite{Kuehn2008}, 17: \cite{Martin2008a}, 18: \cite{Martin2008b}, 19: \cite{Mateo98}, 20: \cite{Moretti2009}, 21: \cite{Munoz2006}, 22: \cite{Munoz2009}, 23: \cite{Musella2009}, 24: \cite{Okamoto2008}, 25: \cite{Sakamoto2006}, 26: \cite{Sand2009a}, 27: \cite{Sand2009b}, 28: \cite{Siegel2006}, 29: \cite{SimonGeha07}, 30: \cite{Walsh2007}, 31: \cite{Walsh08}, 32: \cite{Watkins09}, 33: \cite{Willman2005a}, 34: \cite{Willman2005b}, 35: \cite{Zucker2006a}, 36: \cite{Zucker2006b}, 37: \cite{vandenBergh99}. \end{sidewaystable*}
\cite{Metz07} and \cite{Metz09} employed a sophisticated fitting routine to find the DoS. Here, an intuitive plane-fitting algorithm and a new disc-test are introduced. The plane-fitting algorithm leads to perfect agreement with the results obtained by Metz et al., and the new test allows an assessment of how discy the satellite distribution is.
\subsection{Parameters of the DoS} \label{ssec:DoSparameters}
A simple and straightforward method is described to calculate the DoS parameters $l_{\rm MW}$, $b_{\rm MW}$, $D_{\rm P}$, and $\Delta$, which are, respectively, the direction of the DoS-normal vector in Galactic longitude and latitude, the smallest distance of the DoS plane to the Galactic centre, and the root-mean-square height (half the thickness) of the DoS. The positions of satellites on the sky and their radial distances (compiled for convenience in Table~\ref{tab:satellites}) are transformed into a Galactocentric, cartesian coordinate system, assuming the distance of the Sun to the centre of the MW to be 8.5~kpc. The $z$-coordinate points into the direction of the Galactic North Pole and the Sun lies in the MW disc plane. The 3D coordinates are projected into two dimensions, plotting $z$ against a projection onto a plane defined by the Galactic longitude $l_{\rm{MW}}$. This resembles a view of the MW satellite system as seen from infinity and from within the MW disc plane. The view of the satellite system is rotated in steps of $1^\circ$. For each step, a linear fit is made to the projected satellite distribution. The linear fit is determined using the least squares method, demanding that the distances of the satellites, measured perpendicularly to the fitted line, be minimised.
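This rotate--project--fit loop is compact enough to be sketched explicitly. The following Python fragment is a minimal illustration, not the analysis code used here: it assumes the Galactocentric cartesian positions are already given as an $(N,3)$ array in kpc, and it implements the perpendicular least-squares fit as an orthogonal regression via the principal axes of the projected points:
\begin{verbatim}
import numpy as np

def fit_edge_on(xyz, l_deg):
    # view the satellite system edge-on along Galactic longitude l_deg:
    # keep z and one in-(MW-)plane coordinate
    l = np.radians(l_deg)
    u = xyz[:, 0] * np.cos(l) + xyz[:, 1] * np.sin(l)
    pts = np.column_stack([u, xyz[:, 2]])
    # orthogonal regression: the best-fit line passes through the
    # centroid; its normal is the minor principal axis
    ctr = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - ctr)
    normal = vt[1]
    d = (pts - ctr) @ normal          # signed perpendicular distances
    delta = np.sqrt(np.mean(d ** 2))  # RMS height in this projection
    d_p = abs(ctr @ normal)           # line distance from the MW centre
    return delta, d_p

def thinnest_disc(xyz):
    # scan half a rotation in 1-degree steps; the rest mirrors it
    return min((fit_edge_on(xyz, l)[0], l) for l in range(180))
\end{verbatim}
The Monte Carlo over the distance uncertainties described below simply wraps this scan, re-drawing the radial distances before each call.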
The fitted line constitutes a plane seen edge-on in the current projection. The two free parameters of the fit are the closest distance from the MW centre, $D_{\rm{P}}$, and the inclination $b_{\rm{MW}}$ of the normal vector to the $z$-axis (a polar plane has $b_{\rm MW}=0^\circ$). The plane-normal-vector's longitude is $l_{\rm{MW}}$, given by the projection. The fits are performed for each angle $l_{\rm{MW}}$ between $0^\circ$ and $360^\circ$. After half of a rotation, the view is identical to the one $180^\circ$ earlier, mirrored along the $z$-axis. For each angle $l_{\rm{MW}}$, the root mean square (RMS) height, $\Delta$, of the satellite distribution around the fitted line is determined. The normal vector to the best-fit disc solution (the DoS) to the full 3-dimensional distribution of the MW satellites is then given by those $l_{\rm{MW}}$ and $b_{\rm{MW}}$ that give the smallest RMS height $\Delta_{\rm{min}}$.
To account for the uncertainties in the distances of the satellites, the major source of error, the procedure is repeated~1000 times. Each time, the radial position of each satellite is randomly chosen from a normal distribution centred on the satellite's measured radial distance, with a standard deviation given by the distance uncertainty of that satellite. Once a realisation with varied radial distances is set up, the coordinate transformation into the Galactic coordinate system is performed. The parameters of the best fits are determined for each realisation. Their final values are determined by averaging the results of all realisations, and the standard deviations of their values are adopted as the uncertainties in the fits.
Fitting all~24 currently known satellite galaxies within a distance of 254~kpc from the MW, the minimum disc height is found to be $\Delta_{\rm{min}} = 28.9 \pm 0.6~\rm{kpc}$. This is more than 14$\sigma$ away from the maximum height of $\Delta_{\rm{max}} = 55.7 \pm 1.3~\rm{kpc}$ obtained at a $90^{\circ}$ projection of the data. {\sl Thus, the DoS is highly significant.} The position of the minimum height gives the best-fit disc, the DoS. The normal vector defining the DoS points to $l_{\rm{MW}} = 156^\circ.4 \pm 1^\circ.8$ and has an inclination of $b_{\rm{MW}} = -2^\circ.2 \pm 0^\circ.6$, i.e. it is nearly perfectly polar. $D_{\rm{P}}$, the closest distance of the DoS from the MW centre, is~$8.2 \pm 1.0~\rm{kpc} \ll \Delta_{\rm min}$.
\begin{figure} \hspace{-5mm} \includegraphics[angle=0,scale=0.70]{satview.eps} \caption{Parameters of the MW DoS: the 3-D distribution of the MW satellite galaxies. The 11~classical satellites are shown as large (yellow) circles, the 13~new satellites are represented by the smaller (green) dots, whereby Pisces~I and~II are the two southern dots. The two open squares near the MW are Seg~1 and~2; they are not included in the fit because they appear to be diffuse star clusters near the MW, but they do lie well in the DoS. The obscuration-region of $\pm 10^{\circ}$ from the MW disc is given by the horizontal grey areas. In the centre, the MW disc orientation is shown by a short horizontal line, on which the position of the Sun is given as a blue dot. The near-vertical solid line shows the best fit (seen edge-on) to the satellite distribution at the given projection, the dashed lines define the region $\pm 1.5 \times \Delta_{\rm{min}}$, $\Delta_{\rm min}$ being the RMS-height of the thinnest DoS ($\Delta_{\rm min}=28.9\;$kpc in both panels). {\sl Upper panel}: an edge-on view of the DoS.
Only three of the 24 satellites are outside of the dashed lines, giving $N_{\rm{in}} = 21$, $N_{\rm{out}} = 3$, and thus a ratio of ${\cal R} = N_{\rm in}/N_{\rm out} = 7.0$. Note the absence of satellites {\sl in large regions of the SDSS survey volume} (upper left and right regions of the upper panel, see also fig.~1 in \citealt{Metz09} for the SDSS survey regions). {\sl Lower panel}: a view rotated by $90^\circ$, the DoS is seen face-on. Now, only 13 satellites are close to the best-fit line, 11~are outside, resulting in ${\cal R} = 1.2$. Note that by symmetry the Southern Galactic hemisphere ought to contain about the same number of satellites as the Northern hemisphere. Thus, {\sl The Stromlo Milky Way Satellite Survey} is expected to find about eight additional satellites in the Southern hemisphere. \label{fig:discfit}} \end{figure}
\subsection{A novel disc test}
Another test to determine whether the satellite galaxies are distributed in a disc can be performed by comparing the number of satellites near the plane to the number further away: let $N_{\rm{in}}$ be the number of all satellites that have a perpendicular distance of less than~1.5 times the minimal disc height $\Delta_{\rm{min}}$ from the line-fit, and let $N_{\rm{out}}$ be the number of all satellites further away. Both $N_{\rm in}$ and $N_{\rm out}$ are determined for each rotation angle, measuring the distances from the line (i.e. the plane viewed edge-on) that best fits the distribution in the given projection. This is illustrated in Fig.~\ref{fig:discfit}, which shows an edge-on view of the best-fit plane, along with a view rotated by $90^{\circ}$. Both views see the disc of the MW edge-on.
Figure~\ref{fig:rfit} shows the ratio of galaxies found within the DoS to those outside (solid black line), ${\cal R} = N_{\rm{in}} / N_{\rm{out}}$, here for the unvaried (i.e. measured) radial distances. If the MW satellites were distributed in a disc, ${\cal R}$ would approach a maximum when looking edge-on and would rapidly decrease once the projection is rotated out of the disc plane. This makes it a good test to discriminate a disc-like distribution from a spheroidal one, since the latter would not lead to much variation in the ratio.
It can be seen that ${\cal R}$ approaches a maximum close to the best-fit $l_{\rm{MW}}$. At the maximum, only two of the 24 satellite galaxies are found outside of the required distance from the disc. The maximum ${\cal R}$ is thus 11.0, situated only a few degrees away from the $l_{\rm{MW}}$ that gives the smallest height. This has to be compared to the broad minimum of ${\cal R}\approx 1$. The disc-signature is obvious, proving the existence of a DoS that incorporates the new satellites found in the SDSS.
\begin{figure} \includegraphics[angle=0,scale=0.55]{rplot.eps} \caption{Testing for the existence of the DoS. The behaviour of ${\cal R}$ is shown for each view of the MW, given by the Galactic longitude of the normal vector of each plane-fit. ${\cal R}=N_{\rm in}/N_{\rm out}$ is the ratio of the number of satellites within $1.5 \times \Delta_{\rm{min}}$ ($\Delta_{\rm min}=28.9\;$kpc), $N_{\rm{in}}$, to those further away from the best-fit line, $N_{\rm{out}}$, calculated for all~24 known satellites, as well as for the fits to the 11~classical and the 13~new satellites separately (taking their respective RMS heights as the relevant $\Delta_{\rm min}$). The disc-like distribution can be clearly seen as a strong peak close to $l_{\rm{MW}} = 150^\circ$. Note that the positions of the peaks are close to each other for both subsamples separately. This shows that the new satellite galaxies independently define the same DoS as the classical satellite galaxies. \label{fig:rfit}} \end{figure}
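The counting test itself reduces to a few lines of code. The sketch below (again Python with NumPy, purely illustrative, assuming the same Galactocentric $(N,3)$ position array as above and a precomputed $\Delta_{\rm min}$) evaluates ${\cal R}$ for a single viewing angle:
\begin{verbatim}
import numpy as np

def disc_ratio(xyz, delta_min, l_deg):
    # R = N_in/N_out at viewing angle l_deg, with the band half-width
    # set to 1.5 times the minimal disc height Delta_min
    l = np.radians(l_deg)
    pts = np.column_stack([xyz[:, 0] * np.cos(l) + xyz[:, 1] * np.sin(l),
                           xyz[:, 2]])
    ctr = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - ctr)   # orthogonal regression
    dist = np.abs((pts - ctr) @ vt[1])    # perpendicular distances
    n_in = int(np.count_nonzero(dist <= 1.5 * delta_min))
    n_out = len(dist) - n_in
    return n_in / n_out if n_out else float("inf")
\end{verbatim}
A disc-like distribution yields a sharply peaked ${\cal R}(l_{\rm MW})$, a spheroidal one a nearly flat curve, as described above.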
\subsection{Classical versus new satellites: is there a DoS in both cases?} \label{ssec:newDoS}
In addition to the above analysis of all~24 known MW satellites, the analysis is also carried out separately for two distinct subsamples: the~11 classical, most-luminous satellite galaxies and the 13~new satellites discovered mostly in the SDSS. Each subsample uses its own minimal height, given by the subsample distribution, in determining ${\cal R}$. If all satellite galaxies follow the same distribution, given by the DoS, a separate fitting should lead to similar parameters. If, on the other hand, the new (mostly ultra-faint) satellites follow a different distribution, then this would challenge the existence of a DoS. {\sl It is worth emphasising that while the brightest satellites in a $\Lambda$CDM model of a MW-type halo may exceptionally form a weak disc-like structure \citep{Libeskindetal09}, none of the existing CCM-based theoretical satellite distributions predict the whole luminous satellite population to be disc-like.} Furthermore, comparing the results for the classical~11 satellites with the ones obtained by the more sophisticated fitting technique used by \cite{Metz07} is a good test of whether the present technique gives reliable results.
The graphs for both subsamples are included in Fig.~\ref{fig:rfit}; the results for the classical satellites are represented by the dashed yellow line, those for the new (SDSS) satellite galaxies by the dashed green line. Both are in good agreement not only with the combined sample, but also with each other. They peak at their best-fit $l_{\rm{MW}}$, with each of them having an $N_{\rm{out}}$ of only one galaxy at the peak.
Applying the technique presented in Sect.~\ref{ssec:DoSparameters} to calculate the DoS parameters, the new satellites have a best-fit disc with a normal vector pointing to $l_{\rm{MW}} = 151^\circ.4 \pm 2^\circ.0$, only five degrees away from the direction that was obtained by considering all known MW satellites. The inclination is $b_{\rm{MW}} = 9^\circ.1 \pm 1^\circ.0$, again an almost perpendicular orientation of the DoS relative to the MW disc, being only 11~degrees away from the value determined before. The derived RMS height is $\Delta_{\rm{min}} = 28.6 \pm 0.5~\rm{kpc}$, essentially identical to the one given by all satellite galaxies. The minimum distance from the MW centre is $D_{\rm{P}} = 18.3 \pm 1.3~\rm{kpc}$.
The fitting to the~11 classical satellites leads to results that are in very good agreement, too. The best-fit position for the 11 classical satellites is $l_{\rm{MW}} = 157^\circ.6 \pm 1^\circ.1$ and $b_{\rm{MW}} = -12^\circ.0 \pm 0^\circ.5$, the height is found to be $\Delta = 18.3 \pm 0.6~\rm{kpc}$, and the closest distance to the MW centre is $D_{\rm{P}} = 8.4 \pm 0.6~\rm{kpc}$. This is in excellent agreement with the results of \cite{Metz07}, who reported $l_{\rm{MW}} = 157^\circ.3$, $b_{\rm{MW}} = -12^\circ.7$, $\Delta_{\rm{min}} = 18.5~\rm{kpc}$, and $D_{\rm{P}} = 8.3~\rm{kpc}$. This illustrates that the results are extremely accurate despite employing a simpler disc-finding technique. The agreement of the fit parameters for the two subsamples {\sl separately} is impressive.
Two populations of MW satellite galaxies (classical versus ultra-faint) with different discovery histories and methods define the same DoS. This shows that the new, faint satellites fall close to the known, classical, DoS ($\equiv$DoS$_{\rm cl}$). Even without considering the classical satellite galaxies, the new satellites define a disc, DoS$_{\rm new}$, that has essentially the same parameters. This confirms the existence of a common DoS$\approx$DoS$_{\rm new}\approx$DoS$_{\rm cl}$.
\subsection{The DoS -- Discussion} \label{ssec:DoSDisc}
A pronounced DoS is therefore a physical feature of the MW system. But what is its origin? Is the existence of both the classical-satellite DoS$_{\rm cl}$ and the new-satellite DoS$_{\rm new}$, such that DoS$_{\rm new}\approx\;$DoS$_{\rm cl}$, consistent with the CCM?
It has been suggested that the highly anisotropic spatial satellite distribution maps a highly prolate DM halo of the MW that would need to have its principal axis oriented nearly perpendicularly to the MW disc \citep{Hartwick00}. However, there is still much uncertainty and disagreement as to the shape and orientation of the MW DM halo: \cite{Fellhaueretal06} used the bifurcation of the Sagittarius stream to constrain the shape of the MW DM halo to within about 60~kpc, finding it to be close to spherical. The measurement of the shape of the DM halo of the MW within 60~kpc by \cite{LMJ09}, also based on the Sagittarius stream, suggests that the DM halo is triaxial, but with major and minor axes lying within the plane of the MW disc. The DM halo of the MW would therefore not trace a similar three-dimensional structure as the satellites, unless the major axis of the MW halo changes its orientation by about 90~degrees beyond~60~kpc and becomes essentially disc-like (i.e. highly oblate). \cite{LM10} find a new, slightly oblate solution to the MW DM halo shape at distances from 20~to~60~kpc. In this solution, the minor axis points along the line Sun--MW-centre, suggesting an orientation of this extra potential similar to that of the DoS. The authors emphasise that this model is not strongly motivated within the current CDM paradigm; it merely serves as a ``numerical crutch''. Given this disagreement about the shape and orientation of the MW DM halo, a significant future observational and theoretical effort to clarify the situation is needed.
An additional issue for the CCM is that the normal to the DoS is defined mostly by the outermost satellites, while the direction of the average orbital angular momentum vector is defined by the innermost satellites, for which proper motions have been measured. Both the normal and the average orbital angular momentum vector are nearly co-aligned, implying a strong degree of {\sl phase-space correlation} between the satellites, such that the DoS is rotating \citep{Metz08}. This rotating DoS is not expected if the satellites merely trace the MW DM halo, because they would have independent infall histories and would therefore not be correlated in phase space.
This phase-space feature has been addressed by \cite{Libeskindetal09}. In a thorough analysis of structure formation on MW-galaxy scales, they show that the MW constitutes an improbable but possible constellation of CDM-dominated satellites about a MW-type disc galaxy, the satellites having (of course) independent infall and accretion histories.
They analyse an N-body sample of~30\,946 MW-mass DM host halos with masses in the range $2\times 10^{11}\,M_\odot$ to $2\times 10^{12}\,M_\odot$ for the properties of their substructure distribution. They first select from this sample only those halos that host a galaxy of a luminosity similar to that of the MW (specifically, galaxies more luminous in the V-band than $M_V=-20$). From this remaining sample of 3201 (10~per cent) hosts, they select those that contain at least~11 luminous satellites, leaving~436 (1.4~per cent) host halos. In this sample of 436 systems, about~30~per cent have~6~luminous satellites with orbital angular momenta aligned to a degree similar to that of the MW system. Thus, only 0.4~per cent of all existing MW-mass CDM halos would host a MW-type galaxy with the right satellite spatial distribution. As the authors point out, this probability of $4\times 10^{-3}$ that the DM model accounts for the observed MW-type satellite system would be lower still if proper motion measurements of additional satellites affirm the orbital angular momentum correlation highlighted by \cite{Metz08}, or if the satellites that may be discovered in the southern hemisphere by the {\sl Stromlo Milky Way Satellite Survey} \citep{Jerjen10}\label{note1}\footnote{http://www.mso.anu.edu.au/$\sim$jerjen/SMS\_Survey.html} also lie within the DoS. All 13~new satellites define the same DoS as the~11 classical ones, and furthermore, the latest additions in the southern Galactic hemisphere also lie in the DoS (Sect.~\ref{ssec:newDoS}), {\sl suggesting that the likelihood that the DM hypothesis can account for the MW satellite system in MW-type DM halos is much smaller than~0.4~per cent}.
\cite{Li08} and \cite{DOnghia08} propose an interesting alternative solution to the {\sl satellite phase-space correlation problem}: they suggest that the correlation is caused by the infall of groups of DM-dominated dwarf galaxies. Unfortunately, this proposition is challenged by all known nearby groups of dwarf galaxies being spatially far too extended to account for the thinness of the DoS \citep{Metz09b}. It may be thought that the groups that have fallen in correspond to compact dwarf groups that no longer exist because they have subsequently merged. But this is compromised by the observation that their putative merged counterparts in the field do not seem to exist \citep{Metz09b}. Indeed, \cite{Klimentowskietal09} model a MW-type system and deduce ``... that such a disc is probably not an effect of a group infall unless it happened very recently'' (their section~4.2.2). Furthermore, this notion would seem to imply that dwarf galaxy groups are full of dSph galaxies, while the pristine (before group infall) MW halo would have formed none, in conflict with the observed morphology-density relation (e.g. \citealt{OT00}).
It needs to be emphasised that the DM-based models have so far not addressed the issue that the DoS lies nearly perpendicularly to the MW disc; DM-based models need to {\sl postulate} that this occurs, and it may indeed simply be chance. The combined probability that a DM-based model accounts for the observed MW-type system, which has the properties that the satellites have correlated angular momenta and form a DoS highly inclined to the baryonic disc of the host galaxy, cannot currently be assessed but is, by logical implication, smaller than $4\times 10^{-3}$. But perhaps the MW is a very special system, an outlier within the DM-based framework?
This possibility can be assessed by considering the nearest MW-similar DM halo. It hosts a similar disc galaxy, Andromeda, which has a satellite system similar to that of the MW but richer and more complex, and which has a larger bulge mass than the MW (Fig.~\ref{fig:correl}). Andromeda may also have a DoS (\citealt{KG06}, see also fig.~4 in \citealt{Metz09})\footnote{Note that the rich satellite system of M31 may have a sub-population of satellites in a disc-like structure \citep{Metz09}.}, suggesting that these satellite distributions may not be uncommon among MW-type DM halos. Thus, a Local Group consisting of two dominant DM halos of similar (MW-type) mass would have a likelihood of 0.4~per cent times 1.4~per cent, i.e. $5.6\times 10^{-5}$, to appear with two MW-type disc galaxies, one of them having a pronounced rotating DoS with~11 or more luminous satellites, and the other having at least~11 luminous satellites.
\subsection{Invariant baryonic galaxies} \label{ssec:baryonic_invariant}
The \cite{Libeskindetal09} analysis, described in Sect.~\ref{ssec:DoSDisc}, also shows that about 10~per cent of MW-type DM halos would host a MW-luminous galaxy; the other 90~per cent would presumably host galaxies with lower luminosities, suggesting a large variation between DM halo and luminous galaxy properties. This, however, appears to be a problem considering the properties of observed disc galaxies. By using a principal component analysis on hundreds of disc galaxies, \cite{Disneyetal08} demonstrate that observed disc galaxies are simple systems defined by one underlying parameter, rather than by the roughly six parameters expected if the galaxies were immersed in DM halos. Citing additional similar results, \cite{vandenBergh08} and \cite{Gavazzi09} reach the same conclusion, as do \cite{Gentileetal09} and \cite{Milgrom09}. This is further supported by an entirely independent study of star-forming galaxies, which again shows a remarkably small variation of behaviour \citep{PAK09b}. The discovery that the ratio of DM mass to baryonic mass within the DM core radius is constant for galaxies (Sect.~\ref{ssec:nonNewt} below) is another statement of the same effect.
The small amount of variation for disc galaxies thus appears to be very difficult to reconcile with the large variation inherent in the DM model, as quantified by the \cite{Libeskindetal09} analysis: 90~per cent of MW-mass DM halos would have disc galaxies that differ substantially in luminosity from the MW in the CCM, and yet the closest neighbour, Andromeda, is similar to the MW. This is the {\sl invariant-baryonic-galaxy problem}.
{\sl Summarising Sect.~\ref{sec:DoS}}, the CCM is highly significantly challenged by the spatial distribution of MW satellite galaxies and by the similarity of rotationally supported galaxies. The {\sl phase-space correlation problem} of the classical satellites is enhanced significantly after the inclusion of the new ultra-faint satellites, and the Local Group enhances the {\sl invariant baryonic galaxy problem}.
\section{The origin of dSph and dE galaxies: The Fritz Zwicky Paradox, an alternative proposition and deeper implications} \label{sec:tdgs}
What has been learned so far? The DM-mass--luminosity data of MW dSph satellite galaxies appear to be in conflict with the CCM results, and the mass function of DM masses of the dSph satellites is not in good agreement with the mass function of luminous sub-halos calculated within the CCM.
The correlation between bulge mass and satellite number is tentative (being based on only three data points) but will very likely pass the test of time because the error bars allow for a conclusive significance test. The two quantities appear to be physically related, as indicated strongly by the Local Group data and also by extragalactic surveys, but clearly much more work needs to be done both observationally and theoretically to refine the implied correlation.
The highly pronounced phase-space correlation of the MW satellites means that any formation mechanism must have involved correlated orbital angular momenta of the satellites. Given that the formation of a bulge involves highly dissipative processes, it emerges that a single dissipative event may have formed both the bulge and the correlated orbital angular momenta of the satellites. This leads to the real possibility that the origin of both the MW bulge and its satellite population is related to a galaxy--galaxy encounter. Indeed, it is well known and documented that galaxy encounters lead to the formation of bulges {\sl and} tidal arms that can host the formation of tidal-dwarf galaxies (TDGs). These are then naturally correlated in phase space. Since the bulge and the satellites of the MW are about 11~Gyr old, we are led to the scenario that the proto-Galaxy may have had a major encounter or merger about 11~Gyr ago, during which the bulge and the satellites formed \citep{Pawlowskietal2010}. \cite{Wetzsteinetal07} demonstrated in a series of numerical models that the number of TDGs indeed increases with the gas fraction of the pre-collision galaxy. This is relevant for galaxy encounters at high redshift, where such encounters are expected to have been frequent.
Noteworthy is that a scenario for the origin of dSph satellite galaxies along the above lines had been suggested already before the DM hypothesis was widely accepted, namely that they may be ancient TDGs \citep{LyndenBell76,LyndenBell83,Kunkel79}. This proposition can naturally account for their correlated phase-space distribution in the form of a rotating disc-like distribution (Sect.~\ref{sec:DoS}), and it would lend a new viewpoint to the difficulty of understanding the properties of the MW dSph satellites as DM sub-halos documented above. Indeed, in a famous conjecture, Fritz Zwicky (\citealt{Zw56}, p.~369) states that new dwarf galaxies form when galaxies interact. As shown here, this leads to a contradiction with observational data when it is combined with his other famous conjecture, according to which the masses of galaxies are dominated by Dark Matter \citep{Zw56}. This contradiction is referred to as the Fritz Zwicky Paradox.
\subsection{The evolution of TDGs} \label{ssec:tdgsevol}
A natural way to explain the satellite phase-space correlation as well as the bulge--satellite relation is thus to identify the dSph satellite galaxies of the MW with a population of ancient TDGs that probably formed during a gas-rich encounter between the early MW and another galaxy. But if they all formed at the same time, how can the different chemical properties and star-formation histories of the different dwarf galaxies then be explained within this scenario? If the DM hypothesis is not viable for the MW satellite population, how can the high mass-to-light ratios of the satellites be explained?
It is known that the satellite galaxies all have ancient populations of an indistinguishable age \citep{Grebel08}, perhaps being created when the TDGs were born.
Or, the ancient population comes from the precursor host galaxy. TDGs may also form with globular clusters as long as the star-formation rate surpasses a few~$M_\odot$/yr for~10~Myr \citep{WKL04}. The chemo-dynamical modelling by \cite{Recchietal07} has shown that once a TDG (without DM) forms, it is not natural for it to blow out its gas rapidly. Rather, the rotationally supported small gas-rich discs of young TDGs begin to evolve through self-regulated star formation, either until their gas is consumed or removed through ram-pressure stripping. Consequently, their internal evolution through star formation can be slow and individual, such that TDGs that formed during one encounter event can exhibit different chemical properties many~Gyr after their formation. Removal of the interstellar medium from the TDG through ram-pressure takes about half to a few orbital times, which is typically one to a few Gyr after formation. This time scale is consistent with the observed cessation of star formation in most MW~dSph satellites \citep{Grebel99}. The TDGs that have remained at large distances from their hosts retain their gas and appear as dIrr galaxies \citep{Hunteretal00}. Once formed, TDGs cannot fall back onto their hosts and merge, since dynamical friction is insignificant for them. A TDG may be dispersed (but not accreted) if it happens to be on a near-radial orbit, which, however, is unlikely given the torques acting on the tidally expelled material from which the TDG forms during the encounter.
If the dSph satellites are ancient TDGs, understanding their internal kinematics nevertheless remains a challenge, because TDGs do not contain significant amounts of DM \citep{BarnesHernquist92, Wetzsteinetal07, Bournaud07b, Gentile07, Milgrom07}. However, the inferred large $M/L$ ratios of dSph satellites (and especially of the ultra-faints) may not be physical values but may instead be a misinterpretation of the stellar phase-space distribution within the satellite. If this were the case, then the absence of a ``DM-mass''--luminosity relation (Sect.~\ref{sec:ML}) for dSph satellites would be naturally understood. The following gedanken-experiment illustrates how this could come about: an unbound population of stars on similar orbits, each slightly inclined relative to the others, will reconfigure at apogalacticon, where an observer would see a stellar phase-space density enhancement and would also measure a velocity dispersion. The $M/L$ ratio calculated from this velocity dispersion would, however, not be a true physical $M/L$ ratio. Models related to this idea have been studied by \cite{Kuhn93}. Moreover, resonant orbital coupling can periodically inflate kinematically measured $M/L$ values \citep{KM89,KSH96}. Fully self-consistent Newtonian N-body models have demonstrated that unphysically high $M/L$ ratios arise indeed if TDGs are allowed to orbit a host galaxy sufficiently long such that the remaining stellar population within the ancient TDG adopts a highly non-isotropic phase-space distribution function \citep{Kroupa97,KlKr98,MK07}. These models suggest that it may be wrong to use an observed velocity dispersion to calculate a mass for a dSph satellite. Thus, tidal shaping of TDGs over a Hubble time can produce remnant objects that have internal highly anisotropic stellar phase-space distributions that would be falsely interpreted by an observer as corresponding to a high $M/L$ ratio, as explicitly shown by \cite{Kroupa97}.
Intriguingly, these models reproduce the gross physical parameters of dSph satellites well \citep{MK07} and thus constitute the simplest available stellar-dynamical solutions of dSph satellites, constructed without fine-tuning. It is indeed remarkable how model RS1-5 of \cite{Kroupa97}, shown here as a snapshot (Fig.~\ref{fig:hercules}), is an essentially perfect match to the dSph satellite Hercules (see fig.~2 in \citealt{Coleman2007}) discovered~10 years later by \cite{Belokurovetal07}. The half-light radius is 180~pc in the model and 168~pc for Hercules, RS1-5 has a velocity dispersion of about 2.8~km\,s$^{-1}$ (table~2 in \citealt{Kroupa97}), while Hercules has a measured velocity dispersion of $3.72\pm0.91$~km\,s$^{-1}$ \citep{Adenetal09}, and the inferred mass-to-light ratio that one would deduce from velocity dispersion measurements based on the assumption of equilibrium is about $200$ in both cases. Both RS1-5 and Hercules have luminosities agreeing within one order of magnitude (the model being the brighter one), yet RS1-5 has no DM.
\begin{figure} \hspace{-5mm} \includegraphics[angle=0.5,scale=0.499]{hercules.eps} \caption{Model RS1-5 from \cite{Kroupa97} (on the kpc grid) is plotted over the surface brightness contours of Hercules by \cite{Coleman2007} (celestial coordinate grid). The dashed and dotted curves are, respectively, the past and future orbit of RS1-5. \label{fig:hercules}} \end{figure}
The TDG models for dSph satellites proposed by \cite{LyndenBell76,LyndenBell83} and \cite{Kunkel79} and calculated by \cite{Kroupa97} and \cite{KlKr98}, which are based on observed properties of TDGs, thus lead to a population of ancient TDGs that are in reasonable agreement with the observed luminosities, dimensions, and $M/L$ ratios of dSph satellites \citep{MK07}. These model-dSph satellites require no fine-tuning of parameters but only assume the formation, about~10~Gyr ago, of TDGs with masses of about $10^7\,M_\odot$ composed purely of baryonic matter. This theoretical framework of satellite galaxies does not imply any relation between luminosity and (wrongly inferred) ``dynamical mass'', in agreement with the lack of this relation (Sect.~\ref{sec:ML}). And it would naturally explain why the mass function of luminous DM sub-halos cannot account for the observations (Sect.~\ref{sec:mfn}). Within Newtonian dynamics, this dynamical modelling over many orbits around the MW DM halo has demonstrated that even low-mass satellites do not easily disrupt unless they are on virtually radial orbits \citep{Kroupa97, MK07}.
{\sl Summarising Subsect.~\ref{ssec:tdgsevol}}, the physics of TDG formation and evolution is sufficiently well understood to conclude that 1) {\sl once formed at a sufficient distance from the host, TDGs will take an extremely long time to dissolve, if at all}; and 2) the TDGs formed will naturally lead to a population of ancient TDGs that resemble dSph satellites. A bulge-mass--satellite-number correlation and a DoS arise naturally in this scenario.
\subsection{On the substructure problem} \label{ssec:sub}
The MW dSph satellites can therefore be understood as ancient TDGs that formed within a DM universe. But on the other hand, the extensive modelling within the CCM strictly implies, if DM is cold or warm (but not hot), that MW-luminous galaxies must be accompanied by hundreds (with a slight dependence on the cold or warm nature of DM) of shining albeit faint satellites, which are not of tidal origin \citep{Knebeetal08,Maccioetal09,Busha09,Koposovetal09}.
For example, \cite{Tollerudetal2008} conjecture that ``there should be between $\sim$300 and~600 satellites within $D=400$~kpc of the Sun that are brighter than the faintest known dwarf galaxies and that there may be as many as~1000, depending on assumptions.'' Deep follow-up observations of the low-S/N ultra-low-luminosity satellite candidates introduced by \cite{Walshetal09} show that these are not dSphs as a population. These results show that there is not a significant number of missing, ultra-low-luminosity satellites ($M_V > -2, D < 40\,$kpc) in the SDSS footprint, i.e. an area covering half of the Northern hemisphere (Jerjen et al., in prep.). This may be a problem because of the $\Lambda$CDM prediction that there should be a dozen additional satellites ($M_V<0, D<40$~kpc) in a quarter celestial sphere (e.g. fig.~4 in \citealt{Koposovetal09}; see also \citealt{Cooper10}).
If the dSph satellites are ancient TDGs stemming from an early gas-rich encounter involving the proto-MW and probably contributing a collision product to the MW bulge (see Sect.~\ref{sec:origin}), then this would mean that the MW would have a severe substructure problem, as there would not be any satellites with DM halos less massive than about $10^{10}\,M_\odot$ with stars, in conflict with DM predictions provided by, e.g., \cite{Knebeetal08}, \cite{Diemand08}, \cite{Busha09}, \cite{Maccioetal09}, and \cite{Koposovetal09}.
Perhaps a few dSph satellites are ancient TDGs, such as the classical or the nine brightest satellites, and the remainder are the DM-dominated sub-halos? This possibility is unlikely, because the new satellites span the same DoS (Sect.~\ref{ssec:newDoS}) and because they do not form a population with physical properties that differ distinctly from those of the classical satellites (e.g. \citealt{Strigari08}).
{\sl Summarising Subsect.~\ref{ssec:sub}}, based purely on the existence of the satellite phase-space correlation and on the formation and survival of TDGs in a hierarchical structure-formation framework, the Fritz Zwicky Paradox emerges and the validity of the DM hypothesis must be questioned, because the dSph satellites cannot be two types of object at the same time, namely DM-dominated sub-structures and ancient DM-free TDGs.
\subsection{Early-type galaxies} \label{ssec:dE}
But if TDGs account for the dSph satellites of the MW, would they then not also be an important population in other environments? The production of TDGs in the CCM has been calculated by \cite{OT00}. Intriguingly, they find that TDGs naturally match the observed number of dE galaxies in various environments. The result of \cite{OT00} is rather striking, since they find that within the CCM framework only one to two long-lived (i.e., bright) TDGs need to be produced on average per gas-dissipational encounter to cater for the population of dwarf elliptical (dE) galaxies and for the density--morphology relation in the field, in galaxy groups, and in clusters\footnote{Note that \cite{OT00} write: ``Adopting the galaxy interaction scenario proposed by Silk \& Norman, we find that if only a few dwarf galaxies are formed in each galaxy collision, we are able to explain the observed morphology-density relations for both dwarf and giant galaxies in the field, groups of galaxies, and clusters of galaxies.'' They also state ``The formation rate of TDGs is estimated to be~$\sim1-2$ in each galaxy interaction.'' and proceed to compare this number with the actually observed number of TDGs born in galaxy encounters.
This statement is at odds with the quotation in \cite{Bournaud09}.}. Viewing dE galaxies as old TDGs would be consistent with them deviating from the mass--radius, $M(r)$, relation of pressure-supported (early-type) stellar systems. The dE and dSph galaxies follow a $r\propto M^{1/3}$ sequence reminiscent of tidal-field-dominated formation. {\sl All} other pressure-supported galactic systems (elliptical galaxies, bulges, and ultra-compact dwarf galaxies) with stellar mass $M>10^6\,M_\odot$ follow instead the relation $r\propto M^{0.60\pm0.01}$ (see fig.~2 in \citealt{DHK08}, see also fig.~7 in \citealt{Forbesetal08} and fig.~11 in \citealt{GrahamWorley08}), which may result from opacity-limited monolithic collapse \citep{Murray09}. Viewing dE galaxies as TDGs would also be consistent with the observation that they have essentially stellar mass-to-light ratios, similar to globular clusters \citep{Benderetal92,Gehaetal03,DHK08,Forbesetal08}.
If dE (baryonic mass $>10^8\,M_\odot$) and dSph (baryonic mass $<10^8\,M_\odot$) galaxies are old TDGs, why do they appear as different objects? That the dE and dSph galaxies differ in terms of their baryonic-matter density may be a result of the finding that below~$10^8\,M_\odot$, spheroidal objects on the $r\propto M^{1/3}$ relation cannot hold their warm gas and consequently must expand \citep{PAK09}, becoming more susceptible to tides from their host.
dE galaxies are pressure-supported stellar systems, while young TDGs are rotationally supported \citep{Bournaudetal08}. For masses of less than typically $10^9\,M_\odot$, the velocity dispersion of their stellar populations becomes comparable to their rotational velocity (of the order of $30$~km\,s$^{-1}$). That a sizeable fraction of dE galaxies show rotation, some even with spiral structure \citep{Jerjen00, Barazzaetal02,Gehaetal03,Ferrarese06,Chilingarian09, Beasleyetal09}, is thus also consistent with their origin as TDGs. For an excellent review of dE galaxies the reader is referred to \cite{Lisker09}.
One is thus led to the following seemingly logical impasse, i.e. to the Fritz Zwicky Paradox. In the CCM, TDGs are formed, and their number and distribution are calculated to match the number and distribution of observed dE galaxies in the different environments. Within the CCM, the observed luminous dwarf sub-structures are thus naturally accounted for by TDGs. But the dE galaxies cannot be both DM sub-halos {\sl and} TDGs at the same time.
{\sl Summarising Subsect.~\ref{ssec:dE}}, the physical processes at play during structure formation in the CCM imply that dE galaxies ought to be identified as ancient TDGs. Thus, there would be no room for shining DM substructures.
\subsection{Deeper implications: gravitational dynamics} \label{sec:gravdyn}
In Sects.~\ref{ssec:sub} and~\ref{ssec:dE} it has been shown that the DM hypothesis leads to the Fritz Zwicky Paradox when accounting for the number of satellite and dE galaxies, because the formation of TDGs is an intrinsic outcome of structure formation. In Sects.~\ref{sec:ML} to \ref{sec:DoS} it has also been shown that the CCM seems to have a rather major problem accounting for the observed Galactic satellites and their internal properties. This situation suggests that alternative ideas should be considered to help us understand the origin of these problems, and indeed to repeat the steps that had led to a full-fledged DM framework of structure formation, but with a different outlook.
Since structure formation in the DM framework relies on Newtonian gravitation in the weak-field limit, one is naturally led to relax the insistence on Newtonian dynamics in the weak-field limit and to consider modified gravitational theories, which remain compatible with General Relativity in the strong-field regime and with the observed large-scale structure. We note that adopting non-Newtonian dynamics in the weak-field limit would {\sl not} necessarily rule out the existence of DM: on the scale of galaxy clusters DM might still be needed, but instead of being warm or cold, it would be {\sl hot} \citep{Angusetal09}.
\subsubsection{Non-Newtonian weak-field gravity} \label{ssec:nonNewt}
Alternatives to Newtonian dynamics in the weak-field limit have been studied in great detail. The increasingly popular modified-Newtonian-dynamics (MOND) approach rests on a modification of the Newtonian acceleration in the weak-field limit, i.e. when the Newtonian acceleration $a$ is smaller than a threshold $a_0$ \citep{Milgrom83,BekensteinMilgrom84, SM02,Bekenstein04, FB05, Fameyetal07, Sanders07,Sanders08, McGaugh08,Nipetal08, TC08, Brunetonetal09}. A modified-gravity (MOG) theory adding a Yukawa-like force in the weak-field limit has also been under investigation (\citealt{MT09,MT09b}, and references therein). In addition, an extension of the General Theory of Relativity to a class of alternative theories of gravity without DM, based on generic functions $f(R)$ of the Ricci scalar curvature $R$, has been developed and successfully applied to the problem of galactic rotation curves (e.g. \citealt{Cap09}). For a brief review of MOND and MOG and of Milgrom's proposition on the possible physical origin of $a_0$, the reader is directed to the Appendix.
Both the MOND and MOG approaches have been applied to the satellite galaxy problem with appreciable success \citep{Milgrom95,BM00,Angus08,MT08,Hernandezetal10,McGaughWolf10}. It has already been conclusively demonstrated that spiral galaxy rotation curves are well recovered in MOND purely by the baryon distribution without any parameter adjustments \citep{SM02,McGaugh04,McGaugh05,Sanders07b}, and MOG is reported to also do well on this account \citep{BM06}. In contrast, the DM approach can only poorly reproduce the vast variety of rotation curves, and it cannot explain the amazing regularities found in them \citep{McGaugh04,McGaughetal07,Gentileetal09,Milgrom09}. Notably, the realisation \citep{Gentileetal09, Milgrom09} that the ratio of DM mass to baryonic mass within the DM core radius is constant, despite the large variation in the DM--to--baryonic-matter ratio globally within galaxies, cannot be understood within the DM hypothesis. A constant ratio within that radius implies that the distribution of baryonic matter is indistinguishable from that of the supposedly present DM (as already found by \citealt{Bosma81}). This implies a hitherto unpredicted near-exact coupling between DM and baryonic matter that does not arise naturally in the CCM, while outside that radius the effects of DM should become noticeable \citep{McGaugh10}. The only way to physically couple DM and baryons with each other to this degree would be to postulate the existence of an unknown dark force that acts only between DM particles and baryons. The modified DM cosmology would then comprise inflation, dark matter, a dark force, and dark energy. In MOND~models, this behaviour of gravity comes naturally.
That the rotation curves would be purely defined by the baryonic matter distribution in non-DM models would indeed naturally explain the later finding, based on a large sample of galaxies, by \cite{Disneyetal08}, \cite{Gentileetal09}, and \cite{Milgrom09} that disc galaxies appear to be governed by a single parameter. Furthermore, the high galaxy-cluster--galaxy-cluster velocities required to obtain the features of the Bullet cluster have been shown to be extremely unlikely in the CCM (Sect.~\ref{sec:introd}), but these velocities are found to occur naturally in MOND \citep{AM08}. Last but not least, the {\sl time-delay problem} of the CCM mentioned in Sect.~\ref{sec:introd} would disappear naturally.
\subsubsection{A consistency check} \label{ssec:nonnewton}
If it were true that the physical Universe is non-Newtonian in the weak-field limit, then a simple test would provide a consistency check: high dynamical mass-to-light ratios, $(M/L)_{\rm dyn}$ (derived assuming Newtonian dynamics), would not be due to DM but due to the dynamics being non-Newtonian in the weak-field limit and/or due to objects being unbound non-equilibrium systems (Sect.~\ref{ssec:tdgsevol}). Thus, taking MOND to be a proxy for non-Newtonian dynamics in the weak-field limit (MOND is, arguably, the simplest currently available alternative to Newtonian dynamics in the weak-field limit), all systems with non-stellar $(M/L)_{\rm dyn}$ values (as derived in Newtonian gravity) would have to have internal accelerations roughly below the MONDian value\footnote{Note that this statement is approximately true for all non-Newtonian gravitational theories since they must account for the same non-Newtonian phenomena in the weak-field limit.} $a_0= 3.9$~pc$/$Myr$^2$. That is, all pressure-supported (spheroidal) stellar systems that appear to be dominated dynamically by DM would need to have an internal acceleration $a<a_0$. Note that the emphasis here is on pressure-supported systems, since rotationally supported systems have been extensively and successfully studied in non-Newtonian gravitational theories, and because dSph and dE galaxies are mostly pressure-supported objects.
Figure~\ref{fig:accel} shows the acceleration, \begin{equation} a(r_e) = G\,\frac{M}{r_e^2} = G\, \frac{0.5\, \Upsilon\, L_V}{r_e^2}, \label{eq:a} \end{equation} that a star inside a pressure-supported system experiences at the effective radius, $r_e$, of its host system with luminosity spanning $10^4$ to $10^{12}\,L_\odot$. Here, $M=0.5\,\Upsilon \, L_V$ is the stellar mass within $r_e$ and $L_V$ is the absolute V-band luminosity in solar units. The stellar mass-to-light ratio in the V-band is $\Upsilon\approx3$ for collisionless systems (two-body relaxation time longer than a Hubble time), while $\Upsilon\approx1.5$ for collisional systems, i.e. for systems that have evaporated a significant fraction of their faint low-mass stars by means of energy equipartition \citep{KL08,KM09}. Values of $(M/L)_{\rm dyn}$ as high as~10 can be expected for purely baryonic systems if these retain their stellar remnants and hot gas. For example, the mass of an E~galaxy may be comprised of only 30~per cent or less of stars, the rest consisting of stellar remnants and gas that cannot cool to form new stars \citep{PB08,DKB09}, meaning that $\Upsilon=5$ would be an underestimate in that case.
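Equation~\ref{eq:a} is easy to evaluate numerically. The following sketch (Python; the two systems and their $L_V$, $\Upsilon$, and $r_e$ values are purely illustrative round numbers, not catalogue entries, and $G$ has been converted to $\rm pc^3\,Myr^{-2}\,M_\odot^{-1}$) shows where a typical globular cluster and a typical classical dSph fall relative to $a_0$:
\begin{verbatim}
G = 4.498e-3   # gravitational constant in pc^3 Myr^-2 Msun^-1
A0 = 3.9       # MOND threshold acceleration in pc/Myr^2, quoted above

def accel_at_re(L_V, upsilon, r_e):
    # Newtonian acceleration at the effective radius (Eq. a):
    # a = G * (0.5 * Upsilon * L_V) / r_e^2, L_V in Lsun, r_e in pc
    return G * 0.5 * upsilon * L_V / r_e ** 2

# illustrative round numbers, not catalogue values:
print(accel_at_re(1e5, 1.5, 3.0))    # globular cluster: ~37  >> a_0
print(accel_at_re(3e5, 3.0, 300.0))  # classical dSph:  ~0.02 << a_0
\end{verbatim}
A globular cluster thus sits comfortably in the Newtonian regime, while a dSph of comparable luminosity but a hundred times larger $r_e$ falls about two orders of magnitude below $a_0$, which is the pattern visible in the lower panel of Fig.~\ref{fig:accel}.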
Ultra-compact dwarf galaxies, UCDs (sometimes also understood as extremely massive star clusters), have high stellar $M/L$ values perhaps due to a bottom-heavy IMF \citep{MK08} or a top-heavy IMF \citep{DKB09}. \begin{figure} \vspace{3mm} \includegraphics[angle=0,scale=0.48]{acceleration.eps} \vspace{2mm} \caption{{\sl Upper panel}: The dynamical $(M/L)_{\rm dyn}$ ratio (calculated assuming Newtonian dynamics to be valid in the weak-field limit) as a function of the luminosity, $L_V$, for pressure-supported stellar systems following \cite{DHK08}. Note that here dE ($<10^{10}\,L_\odot$) and E ($>10^{10}\,L_\odot$) galaxies are both plotted with the same symbol. {\sl Lower panel}: The Newtonian acceleration (Eq.~\ref{eq:a}) of a star located at the effective radius within the host system as a function of the host luminosity. The dashed line is $a_0$. Note that $(M/L)_{\rm dyn}$ is high in pressure-supported stellar systems only when $a<a_0$. In both panels: UCD$=$ultra-compact dwarf galaxy. Comparing the upper and lower panels shows that evidence of DM ($(M/L)_{\rm dyn}>10$) appears only when $a<a_0$. \label{fig:accel}} \end{figure} By comparing the two panels in Fig.~\ref{fig:accel}, it is indeed evident that only those systems with $a<a_0$ show non-baryonic $(M/L)_{\rm dyn}$ values. This is more clearly shown in Fig.~\ref{fig:a_correl}, where the MOND prediction for the range of dynamical mass-to-light ratios measured by a Newtonist living in a MONDian universe is plotted as a function of the Newtonian acceleration. For this figure, the MOND expectation for the mass-to-light ratio, which an observer who thinks they live in a Newtonian world would deduce, was calculated as follows. Adopting a conservative value of the baryonic mass-to-light ratio $\Upsilon_{\rm bar}$ between 0.7 (for a globular cluster with an old metal-poor population depleted in low-mass stars) and 5 (for an old metal-rich population), the prediction of MOND inside the effective radius is \citep{FB05,Angusetal09} \begin{equation} (M/L)_{\rm dyn \, mond} = 0.5 \times \Upsilon_{\rm bar} \times \left( 1+\sqrt{1+4a_0/a} \right) \ \ . \label{eq:MOND} \end{equation} We note that, writing customarily $x=g/a_0$, where $g$ is the actual full acceleration experienced by a ballistic particle (in MOND)\footnote{In the notation applied here, the MOND formula becomes $a=\mu(x)\,g$, where the Newtonian acceleration $a$ is given by Eq.~\ref{eq:a}.}, Eq.~\ref{eq:MOND} follows from the form of the transition MOND function \citep{Milgrom83} \begin{equation} \mu(x)=x/(1+x), \label{eq:mu} \end{equation} which is valid up to $x\approx10$. The theoretical transition derived by \cite{Milgrom99} and mentioned in the Appendix would yield virtually the same result. The three classical dwarfs that lie outside the predicted MOND range for $(M/L)_{\rm dyn}$ in Fig.~\ref{fig:a_correl} are UMa, Draco, and UMi. UMa may have an anisotropic velocity dispersion \citep{Angus08}; Draco is known to be a long-standing problem for MOND, but the technique of interloper removal developed by \cite{Serra09} could probably solve the problem, although this particular case remains open to debate; UMi is a typical example of a possibly out-of-equilibrium system, as it is elongated with substructure and shows evidence of tidal tails (D. Martinez-Delgado, priv. communication).
Ultra-faint dwarf spheroidals are expected to be increasingly affected by this kind of non-equilibrium dynamics, as shown to be true even for Newtonian weak-field dynamics (\citealt{Kroupa97}, Sect.~\ref{ssec:tdgsevol}), and even more strongly so in MOND \citep{McGaughWolf10}. \begin{figure} \vspace{3mm} \includegraphics[angle=0,scale=0.48]{correlation.eps} \vspace{2mm} \caption{The correlation between the acceleration $a(r_e)$ and the dynamical mass-luminosity ratio $(M/L)_{\rm dyn}$ derived assuming Newtonian dynamics is shown for the same objects as in Fig.~\ref{fig:accel}. The shaded region indicates the range in $(M/L)_{\rm dyn}$ as it follows directly from MOND models (without any parameter adjustments) using Eq.~\ref{eq:MOND}. The graph shows the consistency of the data in a MONDian universe for an observer who interprets observations with Newtonian dynamics. Encircled dwarf spheroidals outside this range (UMa, Dra, and UMi) may indicate non-equilibrium dynamics, either because the whole system is unbound, or because of unbound interloper stars among the member stars (see Sect.~\ref{ssec:nonnewton}). That virtually all pressure-supported stellar systems fall in the shaded MOND region suggests a successful consistency check. That is, stellar dynamics is MONDian rather than Newtonian on galactic scales. \label{fig:a_correl}} \end{figure} {\sl Summarising Subsect.~\ref{sec:gravdyn}}, well-developed non-Newtonian weak-field approaches exist and have been shown to account for galaxy properties much more successfully than the CCM, which would need to be extended by a dark force to account for the observed strong coupling between DM and baryons. All known pressure-supported stellar systems ranging from elliptical to dwarf satellite galaxies behave dynamically as expected in a MONDian universe. In DM cosmology, the association of highly non-stellar $(M/L)_{\rm dyn}$ values with $a<a_0$ would be coincidental, as it is not built into the theory. It is, however, natural in a MONDian universe for observers who interpret weak-field observations with Newtonian dynamics. \section{Conclusions and perspectives} \label{sec:concs} We inhabit a Universe for which physicists seek mathematical formulations. A successful formulation of gravitational physics, the General Theory of Relativity (GR), requires the existence of non-baryonic dark matter (DM) in order to account for the observed rotation curves of galaxies and other dynamical effects in this theory, which has Newtonian dynamics as its weak-field limit. On the other hand, non-Newtonian weak-field gravitational theories have also been formulated to account for the ``DM-effects'' observed in galaxies. Finding a definitive test that distinguishes between these two different solutions to the problem of galactic dynamics and cosmological structure formation is difficult. Both DM and modified gravity are designed to solve similar problems, so the test must rely on subtle differences between the models and the observational data. Thus far, GR$+$DM$+\Lambda$+inflation (the CCM) accounts for the emergence of structure on large scales, and \cite{Reyesetal2010} were able to exclude certain versions of alternative gravitational theories that had already been known by the respective community to be unstable \citep{Contaldietal08}. But, as shown here, the CCM appears to have insurmountable problems on galaxy scales, such that other alternative approaches need to be studied.
A speculative ansatz to perhaps solve the observed near-exact DM--baryon coupling in galaxies within a DM-Newtonian cosmology would be to extend the CCM by postulating the existence of a {\sl dark force} (DF), leading to a GR$+$DM$+$DF$+\Lambda+$inflation cosmology that should perhaps be investigated in more detail in the future. The greatest differences between the two competing approaches (CCM versus non-Newtonian dynamics in the weak-field limit) are expected in the weak gravitational regime where the subtleties of non-Newtonian weak-field dynamics are most pronounced, which is why the constituents of the outer edges of galaxies allow the most stringent tests. This contribution has statistically assessed whether the observed properties of satellite galaxies in the Local Group, which are the result of structure formation in the weak-field limit, are consistent with the CCM. Given that a substantial number of independent research groups working in the traditional CDM and WDM approaches have by now made firm statements about the dwarf satellite galaxies of the MW and Andromeda such that the missing satellite problem is deemed to be solved, the CCM can be further tested sensitively on these scales within the Local Group. Five new problems for the CCM on the scale of the Local Group and dwarf galaxies have been uncovered: (i) the observed absence of a mass-luminosity relation (Sect.~\ref{sec:ML}, the {\sl DM-mass--luminosity problem}); (ii) the mass function of luminous galactic satellites (Sect.~\ref{sec:mfn}, the {\sl mass function of luminous satellite problem}); (iii) the observed relation between the bulge mass and the number of satellites (Sect.~\ref{sec:origin}, the {\sl bulge-satellite correlation problem}); (iv) the accordance of the recently detected ultra-faint dwarfs with the Milky Way's disc-of-satellites (Sect.~\ref{sec:DoS}, the {\sl phase-space correlation problem}); and (v) the low probability that two neighbouring MW-type DM halos contain similar MW-type disk galaxies (Sect.~\ref{ssec:baryonic_invariant}, the {\sl invariant-baryonic-galaxy problem}). It is found that the CCM is consistent with the Local Group data with a combined probability\footnote{Summarising, the likelihoods $p$ that the CCM accounts for the observed data in the Local Group are, in the individual tests: (1)~mass--luminosity data: $p_1<0.3$~per cent (Sec.~\ref{sec:ML}); (2)~mass function of luminous sub-halos: $p_2<4.5$~per cent (Sect.~\ref{sec:mfn}); (3)~bulge--satellite number: $p_3\approx4.4$~per cent (Sect.~\ref{sec:origin}); (4)~a~MW-type galaxy with at least 11~satellites in a DoS: $p_{4}=0.4$~per cent; (5)~a~M31-type galaxy with at least 11 satellites: $p_{5}=1.4$~per cent (Sect.~\ref{ssec:DoSDisc}). Thus, the combined probability that the general CCM framework accounts for the Local Group is $p \ll 3\times 10^{-3}$.} $p\ll 3 \times 10^{-3}$. The five problems thus appear to rather strongly challenge the notion that the CCM successfully accounts for galactic structure, in conflict with a vast volume of reported research (compare with \citealt{Fanelli10}). All these challenges constitute a strong motivation for numerous future observational and theoretical investigations. For instance, the disk of satellites will have to be confirmed by surveys such as Pan-STARRS \citep{Burgett09} and the Stromlo Milky Way Satellite Survey (SMS) \citep{Jerjen10}.
Given the existence of the DoS and by symmetry, the Southern hemisphere ought to also contain about 16 satellites, such that the SMS survey is expected to discover about 8 new southern satellites (Fig.~\ref{fig:discfit}). It will also be essential to refine the correlation between bulge-mass and satellite-number with extragalactic surveys. On the theoretical side, more inclusive modelling is needed to address these challenges within the CCM while, at the same time, existing viable alternatives should be explored with more emphasis. With this contribution, the following clues have emerged suggesting the need for a new scenario for the origin and nature of dSph satellite galaxies. The observed correlation between bulge mass and number of satellites suggests that a link between these two quantities may exist. The phase-space correlation of the classical and ultra-faint satellite galaxies implies that angular momentum conservation played an important role in establishing the satellite distribution. Given that bulges form in dissipational encounters, during which angular-momentum conservation rearranges matter on Galactic scales to be in highly correlated phase-space structures (tidal arms), a natural path opens to understand the likely origin of satellite galaxies. Already in the 1970s, a tidal origin for dwarf spheroidal galaxies was suggested, based on their arrangement around the Milky Way (Sect.~\ref{sec:tdgs}). This solution does imply, however, that the dSph galaxies are ancient TDGs and not DM sub-haloes. Furthermore, by logical implication, dE galaxies would also be TDGs (Sec.~\ref{ssec:dE}). This would imply that the vast majority of $\simless 10^{10}\,M_\odot$ DM sub-halos are unable to make stars. This, however, would be in conflict with all the CCM computations (the Fritz Zwicky Paradox) available to date {\sl to the extent that the CCM would have to be discarded in favour of a universe without cold or warm DM}. In this case, the non-Keplerian rotation curves of galaxies and other DM effects additionally suggest that we live in a non-Newtonian weak-field framework within which galaxies would be purely baryonic objects\footnote{Given that Newton derived the gravitational $1/r^2$ law over a very limited physical range (Solar System), while with the Local Group gravitational physics is probed on a length scale nearly eight orders of magnitude larger and in a much weaker field regime, it need not be surprising that an adjusted gravitational law is required.}. This scenario would naturally solve problems (iii) and (iv), while it would not imply a ``dynamical mass''-luminosity relation if the dwarfs are out of equilibrium, so could possibly solve problem (i). For purely baryonic galaxies, problem (ii) would no longer exist by definition. Problem~(v) would also vanish naturally. What is more, while in the CCM the association of highly non-stellar $(M/L)_{\rm dyn}$ values with $a<a_0$ would be coincidental because it is not built into the theory, it is natural in a non-Newtonian universe for weak-field observers who interpret observations with Newtonian dynamics. Noteworthy is that the same statement can be made for the Tully-Fisher scaling relation for rotationally-supported galaxies \citep{TF77,McGaugh05b,CO09} as well as the newly found scaling relation of \cite{Gentileetal09} and \cite{Milgrom09}. The supposed mass deficit seen in young rotating and gaseous TDGs (such as those of NGC~5291) constitutes independent empirical evidence towards this same statement.
Young tidal dwarf galaxies (TDGs), which should be devoid of collisionless DM, appear to nevertheless exhibit a mass discrepancy in Newtonian dynamics. This is a significant problem for the DM hypothesis, but it is naturally explained by MOND \citep{Gentile07, Milgrom07}. Also, while the high Bullet-cluster velocity is hard to account for in the CCM, it is natural in MOND (Sect.~\ref{sec:introd}, \ref{sec:gravdyn} and~\ref{ssec:nonNewt}). And, it has already been noted by \cite{Sanders99} that the dynamical-mass--baryon-mass discrepancy observed in galaxy clusters is nearly removed in MONDian dynamics. {\sl It would thus appear that within the non-Newtonian weak-field framework a much more complete, self-consistent, and indeed simpler understanding of the Galaxy's satellites as well as of major galaxies may be attained than within the CCM.} However, to affirm this statement, this alternative cosmological scenario will have to be investigated in as much detail as is now available for the CCM, in order to perform tests equivalent to those presented here for the DM hypothesis and to ascertain which of the non-Newtonian weak-field dynamics theories (and which versions of the theories) can most successfully account for the physical world. Models of merging gas-rich disc galaxies need to be computed in MOND, for example, to study how the formation of TDGs proceeds and how the number of satellites thus formed correlates with the bulge that forms as a result of the encounter. These populations of satellites, together with the globular clusters that formed along with them, would naturally appear in (more than one) closely related planes, explaining the \cite{LL95} streams, because a gas-rich galaxy pair undergoes many close encounters in MOND, each spawning some TDGs and globular clusters, before perhaps finally merging. Figure~\ref{fig:mangrove} schematically depicts the structure formation scenario in this non-Newtonian weak-field framework: while purely baryonic galaxies would merge, these events would spawn dwarf galaxies such that a density--morphology relation would be established (more dE galaxies in denser environments, \citealt{OT00}). \begin{figure} \vspace{3mm} \includegraphics[angle=0,scale=0.5]{mangrovetree.eps} \vspace{2mm} \caption{A new cosmological structure formation framework: the mangrove merger tree. In a modified-Newtonian weak-field framework, purely baryonic galaxies merge, thereby spawning new dwarf galaxies and giving rise to the morphology-density relation (adapted from \citealt{Metz08a}). \label{fig:mangrove}} \end{figure} The MONDian modelling by \cite{TC08} and \cite{CT09} has already shown that TDGs are produced during gas-dissipational galaxy mergers, and that the interaction times between galaxies are much longer, while the number of mergers is smaller than in a DM universe. Hence, the number of observed galaxy encounters would be set foremost by the long time scale of merging, and thus by more close galaxy-galaxy encounters per merging event rather than by a high number of mergers. This would imply that compact galaxy groups do not evolve statistically over more than a crossing time. In contrast, assuming DM-Newtonian dynamics to hold, the merging time scale would be about one crossing time because of dynamical friction in the DM halos, such that compact galaxy groups ought to undergo significant merging over a crossing time. The lack of significant evolution of compact groups, if verified observationally, would appear not to be explainable if DM dominates galaxy dynamics.
Analyses of well-studied compact groups indeed indicate this to be the case \citep{Presottoetal10}. Thus, many observational problems may be solved in an uncontrived way by adopting non-Newtonian weak-field dynamics, and perhaps this was, in the end, the most self-evident explanation of the discovery of non-Keplerian rotation curves by \cite{RF70}\footnote{On 19~June~2009, the final day of the conference ``Unveiling the Mass: Extracting and Interpreting Galaxy Masses'' in Kingston, Ontario, in honour of the career of Vera Rubin, PK asked her whether she would be very dismayed if her discovery that galaxies have non-Keplerian rotation curves would turn out not to be due to dark matter but rather to non-Newtonian weak-field dynamics. Prof.~Rubin replied that she would in fact be delighted, since the non-Keplerian rotation curves are an empirical observation of hitherto not understood physics, and one needs to keep an open mind in seeking solutions.}. \acknowledgements{This work was supported by the Alexander von Humboldt Foundation (BF), and by the German Research Foundation (DFG) through grants KR1635/18-1 and HE1487/36-2 within the priority programme 1177 ``Witnesses of Cosmic History: Formation and Evolution of Black Holes, Galaxies and Their Environment'', and a grant from the DAAD-Go8 Germany Australia Joint Research co-operative scheme. We acknowledge useful discussions with Iskren Georgiev, Anton Ippendorf, Antonino Del Popolo and Beth Willman. We thank Jelte de Jong for allowing us to use the image from \cite{Coleman2007} in our Fig.~\ref{fig:hercules}.} \bibliographystyle{aa}
\section{Introduction} Many applied problems require the estimation of a quantity of interest from noisy linear measurements, for instance compressed sensing \cite{candes2006robust,candes2006near,donoho2006compressed,rudelson2005geometric,tsaig2006extensions}, image processing \cite{rudin1992nonlinear,osher1990feature,rudin1994total,chambolle2004algorithm,chambolle1997image,osher2005iterative,xiao2010dual}, matrix completion \cite{cai2010singular,candes2010matrix,candes2009exact,molinari2021iterative}, and various problems in machine learning \cite{shalev2014understanding,moulines2011non,rosasco2014learning,duchi2009efficient,bauer2007regularization,xiao2010dual,yao2007early}. In all these problems, we are interested in finding stable solutions to an equation where the accessible data are corrupted by noise. This is classically achieved by regularization \cite{engl1996regularization}. The most classical procedure in the literature is Tikhonov (or variational) regularization \cite{engl1996regularization}, which consists in minimizing the sum of an error term on the residual of the equation plus a regularizer, which is explicitly added to the objective function. The regularizer encodes some a priori knowledge or some desired property of the solutions that we want to select. A trade-off parameter is then introduced to balance the fidelity term and the regularizer. In practice, this implies that the optimization problem has to be solved many times for different values of the parameter. Finally, a parameter (and the corresponding solution) is chosen according to its performance with respect to some criterion, such as the Morozov discrepancy principle \cite{engl1996regularization} or, a popular technique in machine learning, cross-validation on left-out data \cite{steinwart2008support,golub1979generalized}. \\ An efficient alternative to explicit regularization is offered by iterative regularization, also known as implicit regularization \cite{engl1996regularization,burger2007error,boct2012iterative,bachmayr2009iterative}. The chosen regularizer is minimized under the constraint given by the equation, but with the available data affected by noise. A numerical algorithm to solve the optimization problem is chosen and early stopped: running the iterative procedure until convergence would give an undesired noisy solution. In this setting, the number of iterations plays the role of the regularization parameter. The best performing iterate, according to some a priori criterion (for instance, cross-validation), is then taken as the regularized solution. This procedure is very efficient when compared to explicit regularization, because it requires solving only one optimization problem, and not even until convergence. \ \\ In this paper we are interested in iterative regularization procedures via early stopping. First we focus on linearly constrained minimization problems, where the regularizer is only convex, but not necessarily smooth or strongly convex. The main novelty of this work is the design and analysis of two new iterative regularization methods based on primal-dual algorithms \cite{Chambolle_Pock11,Condat13,Vu13}, which perform one minimization step on the primal variable followed by one on the dual, to jointly solve the primal and the dual minimization problems. Primal-dual algorithms are computationally efficient, as only matrix-vector multiplications and the calculation of a proximity operator are required.
In order to design our algorithms, we adapt the framework presented in \cite{briceno2021random} to the context of inverse problems. The key idea is to reuse the data constraint at every iteration of the primal-dual algorithm, by activating the redundant information available. The first method that we propose is a primal-dual algorithm (\ref{A: PDSP}) with additional activations of the linear equations. We propose different variants of this procedure, depending on the extra activation step. For instance, we are able to exploit the data constraints more than once at every iteration via gradient descent, with fixed or adaptive step size. The second method is a dual-primal algorithm (\ref{A: DPSP}) where a subset containing the dual solutions is activated at each step. This subset is not affected by the noise in the data and is usually determined by a finite number of independent constraints. This formulation may seem artificial or inefficient; however, while maintaining an easy implementation, our methods achieve better numerical performance and considerable speed-ups with respect to the vanilla primal-dual algorithm. We extend to the noisy case the techniques studied in \cite{briceno2019projected,briceno2021random} for the exact case. The assumptions on the noise are the classical ones in inverse problems, see e.g. \cite{matet2017don,calatroni2021accelerated,burger2007error,molinari2021iterative}. We generalize the results in \cite{molinari2021iterative}, by including in the primal-dual procedure a diagonal preconditioning and an extra activation step. Since we are in a regime of non-vanishing noise, it is not reasonable to expect convergence of the iterates to the solution set of the noise-free problem; we therefore provide an early stopping criterion to recover a stable approximation of an ideal solution, in the same spirit as \cite{matet2017don,calatroni2021accelerated,burger2007error,raskutti2014early,zhang2005boosting,yao2007early,blanchard2010optimal,bartlett2007adaboost}. The early stopping rule is derived from theoretical stability bounds and feasibility gap rates for both algorithms, obtaining implicit regularization properties similar to those stated in \cite{molinari2021iterative} and \cite{matet2017don}. Theoretical results are complemented by numerical experiments for robust sparse recovery and total variation, showing that state-of-the-art performance can be achieved with considerable computational speed-ups.\\ \textbf{Related works.} In this section, we briefly discuss the literature about variational and iterative regularization techniques. Tikhonov regularization was introduced in \cite{tihonov1963solution}. See also \cite{engl1996regularization,benning2018modern} and references therein for an extensive treatment of the topic. The most famous iterative regularization method is the Landweber algorithm \cite{landweber1951iteration,engl1996regularization}, namely gradient descent on the least squares problem. Duality theory in optimization gives another interpretation, which sheds light on the regularizing properties of this procedure. Indeed, consider the problem of minimizing the squared norm under the linear constraint. Running gradient descent on its dual problem and mapping back to the primal variable, we obtain exactly the Landweber method. This provides another explanation of why the iterates of the Landweber algorithm converge to the minimal norm solution of the linear equation.
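To make this classical computation explicit: the dual problem of $\min_{x}\frac{1}{2}\|x\|^2$ subject to $Ax=b$ is, up to sign conventions, $\min_{u}\frac{1}{2}\|A^{*}u\|^{2}+\scal{u}{b}$, and the primal and dual variables are linked by the optimality condition $x=-A^{*}u$. Gradient descent on the dual with step-size $\tau$ then reads
\begin{align*}
u^{k+1}=u^{k}-\tau\left(AA^{*}u^{k}+b\right)
\qquad\Longrightarrow\qquad
x^{k+1}=-A^{*}u^{k+1}=x^{k}-\tau A^{*}\left(Ax^{k}-b\right),
\end{align*}
which is exactly the Landweber iteration.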
Stochastic gradient descent on the previous problem is a generalization of the Kaczmarz method \cite{lorenz2008convergence,schlor19}, which consists in applying cyclic or random projections onto single equations of the linear system. Accelerated and diagonal versions are also discussed in \cite{engl1996regularization,neubauer2017nesterov} and \cite{bakushinsky2005iterative,kaltenbacher2008iterative,scherzer1998modified}, respectively. The regularization properties of other optimization algorithms for more general regularizers have also been studied. If strong convexity is assumed, mirror descent \cite{beck2003mirror,nemirovskij1983problem} can also be interpreted as gradient descent on the dual problem, and its regularization properties (and those of its accelerated variant) have been studied in \cite{matet2017don}. Diagonal approaches \cite{bahraoui1994convergence} with a regularization parameter that vanishes along the iterations have been studied in \cite{garrigos2018iterative}; see \cite{calatroni2021accelerated} for an accelerated version. Another common approach relies on the linearized Bregman iteration \cite{yin2008bregman,yin2010analysis, xiao2010dual, osher2005iterative}, which has found applications in compressed sensing \cite{cai2009linearized,osher2010fast,yin2008bregman} and image deblurring \cite{cai2009linearized}. However, this method requires solving non-trivial minimization problems at each iteration. For convex, but not strongly convex regularizers, the regularization properties of a primal-dual algorithm have been investigated in \cite{molinari2021iterative}. \ \\ The rest of the paper is organized as follows. In Section~\ref{sec:NB} we introduce the notation and the mathematical background. In Section~\ref{s: MPA} we present the main problem and propose five classes of algorithms to solve it numerically. In Section~\ref{s: MR} we derive stability and feasibility gap bounds and related early stopping rules. In Section~\ref{s: app} we verify the performance of the algorithm on two numerical applications: the robust sparse recovery problem and image reconstruction by total variation. Finally, we provide some conclusions. \section{Notation and background} \label{sec:NB} First we recall some well-known concepts and properties used in the paper. \ \\ Let $X$, $Y$ be two finite-dimensional real vector spaces equipped with an inner product $\scal{\cdot}{\cdot}$ and the induced norm $\|\cdot\|$. We denote the set of convex, lower semicontinuous, and proper functions on $X$ by $\Gamma_{0}(X)$. The subdifferential of $F\in \Gamma_{0}(X)$ is the set-valued operator defined by \begin{align} \partial F\colon \ X\to 2^{X}, \quad x\mapsto\{u\in X\hspace{0.2cm}|\hspace{0.2cm}(\forall y\in X)\hspace{0.2cm} F(x)+\langle y-x\mid\hspace{0mm} u\rangle\leq F(y)\}. \label{d: subdifferential} \end{align} If the function $F$ is G\^ateaux differentiable at the point $x$, then $\partial F(x)=\{\nabla F(x)\}$\hspace{2mm}\cite[Proposition 17.31 (i)]{bauschke2011convex}. In general, for $F\in \Gamma_{0}(X)$, it holds that $(\partial F)^{-1}=\partial F^{*}$ \hspace{2mm}\cite[Corollary 16.30]{bauschke2011convex}, where $F^{*}\in \Gamma_{0}(X)$ is the conjugate function of $F$, defined by $F^{*}(x):=\sup _{u \in X} \ \scal{x}{u}- F(u)$.
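As a concrete instance of the last definition (a standard example, which reappears for the dual constraints in Section~\ref{s: app}): for $F=\|\cdot\|_{1}$ on $\ensuremath{\mathbb{R}}^{p}$,
\begin{align*}
F^{*}(u)=\sup_{x\in\ensuremath{\mathbb{R}}^{p}}\ \scal{u}{x}-\|x\|_{1}=
\begin{cases}
0 & \text{if } |u_{i}|\leq 1 \text{ for every } i,\\
+\infty & \text{otherwise},
\end{cases}
\end{align*}
since the supremum is finite precisely when no coordinate of $u$ exceeds $1$ in absolute value.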
\ \\ For every self-adjoint positive definite matrix $\Sigma$, we define the proximity operator of $F$ relative to the metric induced by $\|\cdot\|_{\Sigma}^2:=\scal{\cdot}{\Sigma \cdot}$ as $\operatorname{prox}^{\Sigma}_{F}=(\ensuremath{\operatorname{Id}}\,+\Sigma\partial F)^{-1}$. If $\Sigma=\sigma\ensuremath{\operatorname{Id}}\,$ for some real number $\sigma>0$, it is customary to write $\ensuremath{\operatorname{prox}}_{\sigma F}$ rather than $\operatorname{prox}^{\Sigma}_{F}$. The projection operator onto a nonempty closed convex set $C \subseteq X$ is denoted by $P_{C}$. If we define the indicator $\iota_{C}\in\Gamma_{0}(X)$ as the function that is $0$ on $C$ and $+\infty$ otherwise, then $\ensuremath{\operatorname{prox}}_{\iota_{C}}=P_{C}$. Moreover, if $C$ is a singleton, say $C=\{b\}$, we have that $\iota^{*}_{\{b\}}(u)=\scal{u}{b}$. The relative interior of $C$ is $\ensuremath{\operatorname{ri}}(C)=\left\{ x\in C\mid \ensuremath{\mathbb{R}}_{++} (C-x)= \ensuremath{\operatorname{span}} (C-x)\right\},$ where $\ensuremath{\mathbb{R}}_{++}C=\left\{\lambda y\mid (\lambda >0)\wedge(y\in C)\right\}$ and $\ensuremath{\operatorname{span}}(C)$ is the smallest linear subspace of $X$ containing $C$. \ \\ Given $\alpha~\in~]0, 1[ $, an operator $T : \ X \rightarrow X$ is $\alpha$-averaged non-expansive iff $$(\forall x\in X )(\forall y\in X)\hspace{3mm} \|T x - Ty\|^2 \leq \|x - y\|^2 -\frac{1-\alpha}{\alpha}\|(\ensuremath{\operatorname{Id}}\,-T)x- (\ensuremath{\operatorname{Id}}\,-T)y\|^2,$$ and it is quasi-non-expansive iff $$(\forall x\in X )(\forall y\in \ensuremath{\operatorname{Fix}} T)\hspace{3mm} \|T x - y\|^2 \leq \|x - y\|^2,$$ where the set of fixed points of $T$ is defined by $\ensuremath{\operatorname{Fix}} T=\{x\in X\mid Tx=x\}$. For further results on convex analysis and operator theory, the reader is referred to \cite{bauschke2011convex}. \ \\ For a real matrix $A\in\ensuremath{\mathbb{R}}^{d\times p}$, its operator norm is denoted by $\|A\|$ and its adjoint by $A^{*}$. We define the Frobenius norm of $A$ as $\|A\|^{2}_{F}:=\sum_{i=1}^{d}\|a_{i}\|^2$, where, for every $i\in[d]:=\{1,\ldots,d\}$, $a_{i}$ denotes the $i$-th row of $A$. We also denote by $A_i$ the $i$-th column of $A$. We denote by $\ensuremath{\operatorname{ran}}(A)$ and $\ker(A)$ the range and the kernel of $A$, respectively. \section{Main problem and algorithm} \label{s: MPA} Many applied problems require estimating a quantity of interest $x\in\ensuremath{\mathbb{R}}^p$ based on linear measurements $b=Ax$, for some matrix $A\in\ensuremath{\mathbb{R}}^{d \times p}$. For simplicity, we carry out the analysis in this finite-dimensional case, but note that it can easily be extended to the infinite-dimensional setting. A standard approach to obtain the desired solution is to assume that it is a minimizer of the following linearly constrained optimization problem: \begin{align} \min_{x\in \ensuremath{\mathbb{R}}^p} J(x) \hspace{4mm} \text{s.t.} \hspace{4mm} Ax=b, \tag{$\mathcal{P}$} \label{P: problem} \end{align} where $J\in\Gamma_{0}(\ensuremath{\mathbb{R}}^p)$ encodes a priori information on the solution and is usually hand-crafted. Typical choices are: the squared norm \cite{engl1996regularization}; the elastic net regularization \cite{matet2017don}; the $\ell^{1}$-norm \cite{candes2006robust,candes2006near,donoho2006compressed,tsaig2006extensions}; the total variation \cite{rudin1992nonlinear,osher1990feature,rudin1994total,chambolle2004algorithm}.
Note that, in the previous examples, the first two regularizers are strongly convex, while the second two are just convex and non-smooth. \\ \\ If we use the indicator function of $\{b\}$, \eqref{P: problem} can be written equivalently as \begin{align} \min_{x\in \ensuremath{\mathbb{R}}^p} J(x)+\iota_{\{b\}}(Ax). \label{P: Pc} \end{align} We denote by $\mu$ the optimal value of $\eqref{P: problem}$ and by $\ensuremath{{\mathcal S}}$ the set of its minimizers. We assume that $\ensuremath{{\mathcal S}}\neq\emptyset$. In order to build our regularization procedure, we consider the Lagrangian functional for problem $\eqref{P: problem}$: \begin{equation} \label{e:saddle point} \mathcal{L}(\prim,\dal):=J(\prim)+\scal{\dal}{A\prim-b}. \end{equation} This approach allows us to split the contribution of the non-smooth term $J$ and that of the linear operator $A$, without requiring the computation of the projection onto the set $C:=\{x\in\ensuremath{\mathbb{R}}^{p}\mid Ax=b\}$. We define the set of saddle points of $\mathcal{L}$ as \begin{equation} \mathcal{Z}=\left\{(\prim, \dal)\in \ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d: \ \mathcal{L}(\prim,v)\leq \mathcal{L}(\prim,\dal)\leq \mathcal{L}(y,\dal) \ \ \forall(y,v)\in \ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d \right\}. \end{equation} The set $\mathcal{Z}$ is characterized by the first-order optimality condition: \begin{align} \mathcal{Z}= \left\{(x,u)\in \ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d: 0\in\partial J(x)+A^{*}u\hspace{2mm}\text{ and }\hspace{2mm}Ax=b\right\}. \end{align} In the following, we always assume that $\mathcal{Z}\neq \emptyset.$ \ \\ \begin{remark}[Saddle points and primal-dual solutions] The set of saddle points is ensured to be nonempty when some qualification condition holds (see \cite[Proposition 6.19]{bauschke2011convex} for special cases), for instance when \begin{align} b\in \ensuremath{\operatorname{ri}}\left(A\left(\ensuremath{\operatorname{dom}} J\right)\right). \label{e: qualication conditions} \end{align} Observe that the objective function of \eqref{P: problem} is the sum of two functions in $\Gamma_{0}(\ensuremath{\mathbb{R}}^p)$, one of which is composed with a linear operator. This formulation is suitable for applying Fenchel-Rockafellar duality. Recalling that $\iota^{*}_{\{b\}}(u)=\scal{u}{b}$ \cite[Example 13.3(i)]{bauschke2011convex}, the dual problem of \eqref{P: problem} is given by \begin{align} \min_{u\in \ensuremath{\mathbb{R}}^d} J^{*}(-A^*u)+\scal{u}{b}. \label{P: Pd} \tag{$\mathcal{D}$} \end{align} We denote its optimal value by $\mu_{*}$ and by $\ensuremath{{\mathcal S}}^{*}$ its set of minimizers. Then, $\mathcal{Z}\subseteq\ensuremath{{\mathcal S}}\times \ensuremath{{\mathcal S}}^{*}$, and equality holds if \eqref{e: qualication conditions} is satisfied \cite[Proposition 19.21 (v)]{bauschke2011convex}.\ \\ In addition, condition \eqref{e: qualication conditions} implies that problem \eqref{P: Pd} has a solution. Then, under the qualification condition, since we assumed that $\ensuremath{{\mathcal S}}\neq\emptyset$, we also derive that $\mathcal{Z}\neq \emptyset$. \end{remark} \ \\ In practical situations, the exact data $b$ is unknown and only a noisy version is accessible. Given a noise level $\delta\geq0$, we consider a worst-case scenario, where the error is deterministic and the accessible data $b^\delta$ is such that \begin{equation} \|b^{\delta}-b\|\leq\delta. \end{equation} This is the classical model in inverse problems \cite{engl1996regularization,kaltenbacher2008iterative}.
The solution set of the inexact linear system $Ax=b^{\delta}$ is denoted by $C_{\delta}$. Analogously, we denote by $\ensuremath{{\mathcal S}}_\delta$ and $\ensuremath{{\mathcal S}}_\delta^*$ the sets of primal and dual solutions with noisy data. It is worth pointing out that, if $b^\delta\not\in\ensuremath{\operatorname{ran}}(A)$, then $\ensuremath{{\mathcal S}}_\delta\subseteq C_{\delta}= \emptyset$, but our analysis and bounds still hold. \subsection{Primal-Dual Splittings with a priori Information}\label{s:pd} In this section, we propose an iterative regularization procedure to solve problem \eqref{P: problem}, based on a primal-dual algorithm with preconditioning and arbitrary activations of a predefined set of operators. While the use of primal-dual algorithms \cite{chambolle2011first} as iterative regularization methods is somewhat established \cite{molinari2021iterative}, in this paper we focus on the possibility of reusing the data constraints during the iterations. This idea was originally introduced in \cite{briceno2021random}, where the authors studied the case in which the exact data is available, and consists in the activation of extra operators, which encode information about the solution set, to improve the feasibility of the updates. In our setting, we can reuse the data constraints, and we project, in series or in parallel, onto some equations given by the (noisy) linear constraint. We will also show that other interesting choices are possible, such as projections onto the set of dual constraints. \ \\ More formally, for $i\in [m]$, we consider a finite number of operators $T_i\colon \ \ensuremath{\mathbb{R}}^p\to \ensuremath{\mathbb{R}}^p$ or $T_i\colon \ \ensuremath{\mathbb{R}}^d\to \ensuremath{\mathbb{R}}^d$, such that the set of noisy primal solutions is contained in $\ensuremath{\operatorname{Fix}} T_i$ for every $i\in [m]$. We refer to this as redundant a priori information. A list of operators suitable to our setting (and with practical implementation) can be found in Section~\ref{s: app}. \ \\ The primal-dual algorithms with reuse of constraints given in Table~\ref{t:algos} are preconditioned and deterministic versions of the one proposed in \cite{briceno2021random}, applied to the case of linearly constrained minimization. \begin{table}[ht!] 
\centering \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{c c} \hspace{-5mm} \begin{tabular}{|m{65mm}|} \hline Primal-Dual splitting with activations \\ \hline\vspace{2mm} \textbf{Input}: $(\bar{p}^0,p^0,\dal^0)\in\ensuremath{\mathbb{R}}^{2p}\times\ensuremath{\mathbb{R}}^{d}$.\\\vspace{-0mm} \begin{flushleft}\textbf{For} $k=1,\ldots,\text{N:}$\end{flushleft}\vspace{-4mm}\\ \vspace{-10mm} \begin{align} \begin{array}{l} \dal^{k+1}= \dal^k+\Gamma( A\bar{p}^k-b^\delta)\\ \prim^{k+1}=\ensuremath{\operatorname{prox}}^{\Sigma }_{J}(p^k-\Sigma A^*\dal^{k+1})\\ p^{k+1}=T_{\epsilon_{k+1}}\prim^{k+1}\\ \bar{p}^{k+1}= p^{k+1}+ \prim^{k+1}-p^{k}, \end{array} \label{A: PDSP}\tag{PDA}\end{align}\vspace{-4mm}\\ \vspace{0mm}\begin{flushleft}\textbf{End}\end{flushleft} \\ \hline \end{tabular} & \hspace{-4mm} \begin{tabular}{|m{65mm}|} \hline Dual-Primal splitting with activations \\ \hline\vspace{2mm} \textbf{Input}: $(\prim^{0},\bar{\prop}^{0},\prop^0)\in \ensuremath{\mathbb{R}}^{p}\times\ensuremath{\mathbb{R}}^{2d}$.\\\vspace{-0mm} \begin{flushleft}\textbf{For} $k=1,\ldots,\text{N:}$\end{flushleft}\vspace{-4mm}\\ \vspace{-10mm} \begin{align} \begin{array}{l} \prim^{k+1}=\ensuremath{\operatorname{prox}}^{\Sigma }_{J}(\prim^k-\Sigma A^*\Bar{\prop}^{k})\\ \dal^{k+1}= \prop^k+\Gamma( A\prim^{k+1}-b^\delta)\\ \prop^{k+1}=T_{\epsilon_{k+1}}\dal^{k+1}\\ \bar{\prop}^{k+1}= \prop^{k+1}+ \dal^{k+1}-\prop^{k}, \end{array} \label{A: DPSP}\tag{DPA}\end{align}\vspace{-4mm}\\ \vspace{0mm}\begin{flushleft}\textbf{End}\end{flushleft} \\ \hline \end{tabular} \\ \end{tabular}}\caption{Proposed algorithms for iterative regularization.} \label{T:Alg}\label{t:algos} \end{table} We first focus on the Primal-Dual splitting. It consists of four steps, performed in series. The first step is the update of the dual variable, in which the residuals of the linear equation $Ax=b^\delta$ are accumulated after preconditioning by the operator $\Gamma$. The second step is an implicit prox-step, with function $J$ and norm $\|\cdot\|_{\Sigma^{-1}}$, on the primal variable. The third is the activation, on the primal variable, of the operator related to the reuse of the data constraint. Finally, the last step is an extrapolation, again on the primal variable. Notice that, if no operator is activated (i.e. $T_{\epsilon_{k+1}}=\ensuremath{\operatorname{Id}}\,$), the extrapolation corresponds simply to $\bar{p}^{k+1}= 2 \prim^{k+1}-\prim^{k}$, which is the classical update in the primal-dual algorithm. On the other hand, the Dual-Primal splitting algorithm, except for a permutation of the order of the steps, differs from the previous one in that the operator is activated not on the primal variable but on the dual one. Indeed, Lemma \ref{L: PD=DP} establishes that, without the activation of the operator, there is an equivalence between the primal variables generated by \ref{A: PDSP} and the ones generated by \ref{A: DPSP}. \\ \begin{remark} As already mentioned, our analysis can easily be extended to infinite-dimensional problems. In particular, note that the primal-dual algorithms above can be formulated exactly in the same way for infinite-dimensional problems. The convergence guarantees of the plain methods in Hilbert and Banach spaces have been studied in \cite{Condat13,Vu13,silveti2021stochastic}. Another possible extension of the algorithm, which we do not analyse explicitly in this work, is the stochastic version of primal-dual; see \cite{chambolle2018stochastic,alacaoglu2019convergence,gutierrez2021convergence}. 
On the other hand, note that in \eqref{A: PDSP} the redundant activation of the data constraint is arbitrary. In particular, it can be chosen in a stochastic way at every iteration. \end{remark} \ \\ In the following, we list the assumptions that we require on the parameters and the operators involved in the algorithm. \begin{assumption}\label{A: structured error1} Consider the setting of \ref{A: PDSP} or \ref{A: DPSP}: \begin{enumerate} \item[($A1$)]\label{A: structured error1a} The preconditioners $\Sigma\in\ensuremath{\mathbb{R}}^{p\times p}$ and $\Gamma\in\ensuremath{\mathbb{R}}^{d\times d}$ are two diagonal positive definite matrices such that \begin{align} 0<\alpha:=1-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2. \label{c: ConditionL D1} \end{align} \item[($A2$)]\label{A: structured error1b} For every $k\in\mathbb{N}$, $\epsilon_k\in[m]$. \end{enumerate} Consider the setting of \ref{A: PDSP}: \begin{enumerate} \item[($A3$)]\label{A: structured error2} $\left\{T_{i}\right\}_{i\in [m]}$ is a family of operators from $\ensuremath{\mathbb{R}}^{p}$ to $\ensuremath{\mathbb{R}}^{p}$ and for every $i\in [m]$: \begin{enumerate} \item $\ensuremath{\operatorname{Fix}} T_{i}\supseteq\ensuremath{{\mathcal S}}_{\delta}\neq\emptyset$; \item there exists $e_i\geq 0$ such that, for every $\prim\in\ensuremath{\mathbb{R}}^p$ and $\bar{\prim}\in \ensuremath{{\mathcal S}}$, \begin{align} \label{A: pitagoras error 1} \|T_i\prim-\bar{\prim}\|_{\Sigma^{-1}}^2\leq \|\prim-\bar{\prim}\|_{\Sigma^{-1}}^2+e_i\delta^{2}.\end{align} We denote $e=\max_{i\in[m]} e_i$. \end{enumerate} \label{c: ConditionL D3} \end{enumerate} Now consider the setting of \ref{A: DPSP}: \begin{enumerate} \item[($A4$)\label{A: structured error3}] $\left\{T_{i}\right\}_{i\in [m]}$ is a family of operators from $\ensuremath{\mathbb{R}}^{d}$ to $\ensuremath{\mathbb{R}}^{d}$ and for every $i\in [m]$: \begin{enumerate} \item $\ensuremath{\operatorname{Fix}} T_{i}\supseteq\ensuremath{{\mathcal S}}^{*}_{\delta}\neq\emptyset$; \item for every $u\in\ensuremath{\mathbb{R}}^d$ and $\bar{u}\in \ensuremath{{\mathcal S}}^{*}_{\delta}$, \begin{align} \label{A: pitagoras error 2} \|T_iu-\bar{u}\|_{\Gamma^{-1}}^2\leq \|u-\bar{u}\|_{\Gamma^{-1}}^2.\end{align} \end{enumerate} \label{c: ConditionL D4} \end{enumerate} \end{assumption} \begin{remark}[Hypothesis about the operators] If Assumption A3-(a) holds and $\delta=0$, Assumption A3-(b) is implied by quasi-non-expansivity of $T_i$ on $\ensuremath{{\mathcal S}}$. This is a weaker condition than the one proposed in \cite{briceno2021random}, where, due to the generality of the setting, $\alpha$-averaged non-expansive operators are needed. A similar reasoning applies to Assumption A4. \end{remark} \section{Main results} \label{s: MR} In this section, we present and discuss the main results of the paper. We derive stability properties of the primal-dual and dual-primal splittings for linearly constrained optimization with a priori information. \ \\ First, we define the averaged iterates and the squared weighted norm induced by $\Sigma$ and $\Gamma$ on $\ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d$, namely \begin{align} \left(\Prim^n,\Dal^n\right):=\frac{\sum_{k=1}^{n}z^{k}}{n}\hspace{1mm} \text{ and }\hspace{1mm} V(z):=\frac{\|\prim\|_{\Sigma^{-1}}^2}{2}+\frac{\|\dal\|_{\Gamma^{-1}}^2}{2}, \label{D: Wnorm} \end{align} where $z^{k}:=(\prim^{k},\dal^{k})$ is the $k$-th iterate and $z:=(\prim,\dal)$ is a primal-dual variable. 
We also recall the definition of the Lagrangian: $\mathcal{L}(\prim,\dal):=J(\prim)+\scal{\dal}{A\prim-b}$. The first result establishes the stability properties of algorithm \ref{A: PDSP}, both in terms of Lagrangian and feasibility gap. We recall that here we use activation operators based on the noisy feasibility constraints in the primal space, namely the set $C_\delta$. \begin{theorem} \label{Th:PPD} Consider the setting of \ref{A: PDSP} under Assumptions A1, A2, and A3. Let $(\bar{p}^0,p^{0},\dal^{0})\in\ensuremath{\mathbb{R}}^{2p}\times\ensuremath{\mathbb{R}}^{d}$ be such that $p^0=\bar{p}^{0}$. Then, for every $z~=~(\prim,\dal)~\in~\mathcal{Z}$ and for every $N\in\ensuremath{\mathbb N}$, we have \begin{align} \mathcal{L}\left(\Prim^{N},\dal\right)- \mathcal{L}\left(\prim,\Dal^{N}\right)\leq& \frac{V(z^{0}-z)}{N}+\frac{2N\|\Gamma^{\frac{1}{2}}\|^2\delta^{2}}{\alpha}+\delta\|\Gamma^{\frac{1}{2}}\|\left(\frac{2 V(z^{0}-z)}{\alpha}\right)^{\frac{1}{2}}\nonumber\\ &+\delta\|\Gamma^{\frac{1}{2}}\|\left(\frac{ N e\delta^2}{\alpha}\right)^{\frac{1}{2}}+\frac{e\delta^2}{2} \hspace{2mm}\label{e: DG} \end{align} and \begin{align} \|A\Prim^N-b\|^2\leq&\frac{16N\|\Gamma\|\|\Gamma^{-1}\|\delta^{2}}{\alpha^{2}}+8\delta\|\Gamma^{-1}\|\left(\frac{2\|\Gamma\| V(z^{0}-z)}{\alpha^3}\right)^{\frac{1}{2}}+8\delta^{2}\|\Gamma^{-1}\|\left(\frac{ \|\Gamma\|e N}{\alpha^3}\right)^{\frac{1}{2}}\nonumber\\ &+\frac{8\|\Gamma^{-1}\|V(z^{0}-z)}{N\alpha}+2\delta^{2}+\frac{4\|\Gamma^{-1}\|e\delta^2}{\alpha}, \label{e: RN} \end{align} where we recall that the constants $\alpha$ and $e$ are defined in Assumptions A1 and A3, respectively. \end{theorem} The proof of Theorem~\ref{Th:PPD} is given in the Appendix, Section \ref{Proof:PPD}. The proof combines and extends the techniques developed in \cite{briceno2021random} and \cite{molinari2021iterative}, based on the firm non-expansivity of the proximal point operator and on a discrete Bihari lemma to deal with the error; see also \cite{rasch2020inexact}. In the next result, we establish upper bounds for the Lagrangian and feasibility gap analogous to those proposed in Theorem~\ref{Th:PPD}, but for algorithm \ref{A: DPSP}. The main difference is that now the activation step is based on a priori information in the dual space $\ensuremath{\mathbb{R}}^d$, and not on $C_{\delta}$. This set is represented by the intersection of fixed point sets of a finite number of operators and encodes some knowledge about the dual solution. \begin{theorem} \label{Th:PDP} Consider the setting of \ref{A: DPSP} under Assumptions A1, A2, and A4. Let $(\prim^{0},\bar{\prop}^{0},\prop^{0})\in \ensuremath{\mathbb{R}}^{p}\times\ensuremath{\mathbb{R}}^{2d}$ be such that $\prop^0=\bar{\prop}^{0}$. Then, for every $z~=~(\prim,\dal)~\in~\mathcal{Z}$ and for every $N\in \ensuremath{\mathbb N}$, we have that\\ \scalebox{0.99}{\parbox{\linewidth}{\begin{align} \label{B: Dual-lagrangian} \mathcal{L}\left(\Prim^{N},\dal\right)- \mathcal{L}\left(\prim,\Dal^{N}\right)\leq& \frac{V(z^{0}-z)}{N}+2\|\Gamma^{\frac{1}{2}}\|^{2} N\delta^{2}+\|\Gamma^{\frac{1}{2}}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}, \hspace{2mm}\end{align}}}\\ and \\\scalebox{0.99}{\parbox{\linewidth}{\begin{align} \label{B: Dual-feasibility} \|A\Prim^N-b\|^2\leq& \frac{8\|\Gamma^{\frac{1}{2}}\|^{2}\|\Gamma^{-1}\| N\delta^{2}}{\alpha}+\frac{4\|\Gamma^{\frac{1}{2}}\|\|\Gamma^{-1}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}}{\alpha}\nonumber\\&+\frac{4\|\Gamma^{-1}\|V(z^{0}-z)}{N\alpha}+2 \delta^{2}. 
\end{align}}} \\ where we recall that the constant $\alpha$ is defined in Assumption A1. \end{theorem} The proof is given in the Appendix, Section \ref{Proof:PDP}.\\ \\ First, we comment on the chosen optimality measures. If the penalty is strongly convex, the Bregman divergence is an upper bound of the squared norm of the difference between the reconstructed and the ideal solution, while if $J$ is only convex, the Bregman divergence gives only limited information. As discussed in \cite{rasch2020inexact}, the Lagrangian gap is equivalent to the Bregman distance of the iterates to the solution, and in general it is a very weak convergence measure. For instance, in the exact case, a vanishing Lagrangian gap does not imply that cluster points of the generated sequence are primal solutions. However, as can be derived from \cite{molinari2021iterative}, a vanishing Lagrangian gap coupled with a vanishing feasibility gap implies that every cluster point of the primal sequence is a solution of the primal problem. In both theorems, the established result ensures that the two optimality measures can be upper bounded by the sum of two terms. The first one, which can be interpreted as an optimization error, is of the order $\mathcal{O}(N^{-1})$ and goes to zero as $N$ tends to $+\infty$. Note that, in the exact case $\delta=0$, only this term is present and both the Lagrangian and the feasibility gap are indeed vanishing, guaranteeing that every cluster point of the sequence is a primal solution. The second term, which can be interpreted as a stability control, collects all the errors due to the perturbation of the exact datum and also takes into account the presence of the activation operators $T$, when the reused data constraint is noisy. It is an increasing function of the number of iterations and of the noise level $\delta$. \begin{remark} Theorems~\ref{Th:PPD} and \ref{Th:PDP} are an extension of \cite{briceno2021random}, where the authors prove that the sequence generated by the algorithms converges to an element in $\mathcal{Z}$ when $\delta=0$, but neither convergence rates nor stability bounds were given. In this work, we fill this gap for linearly constrained convex optimization problems. \ \\ Moreover, in the noise-free case, our assumptions on the additional operators $T$ are weaker than those proposed in \cite{briceno2021random}, where $\alpha$-averagedness is required. For the noisy case, without the activation operators (so with $e=0$), our bounds are of the same order as \cite{molinari2021iterative} in the number of iterations and the noise level. \end{remark} As mentioned above, in \eqref{e: DG} and \eqref{e: RN}, when $\delta>0$ and $N\rightarrow +\infty$ the upper bounds for the \ref{A: PDSP} iterates tend to infinity and the iteration may not converge to the desired solution. The same comment can be made for the \ref{A: DPSP} iterates, based on \eqref{B: Dual-lagrangian} and \eqref{B: Dual-feasibility}. In both cases, to obtain a minimal reconstruction error, we need to impose a trade-off between convergence and stability. The next corollary introduces an early stopping criterion, depending only on the noise level and leading to a stable reconstruction. \begin{corollary}\label{ESPDA} (Early stopping). Under the assumptions of Theorem \ref{Th:PPD} or Theorem~\ref{Th:PDP}, choose $N={c}/{\delta}$ for some $c>0$. 
Then, for every $z~=~(\prim,\dal)~\in~\mathcal{Z}$, there exist constants $C_1$, $C_2$, and $C_3$ such that \begin{align} \mathcal{L}\left(\Prim^{N},\dal\right)- \mathcal{L}\left(\prim,\Dal^{N}\right)\leq& C_1\delta\nonumber\\ \|A\Prim^N-b\|^2\leq& C_2\delta+ C_3\delta^{2}. \label{e: earlystoppingp} \end{align} \end{corollary} The early stopping rule prescribed above is computationally efficient, in the sense that the number of iterations is proportional to the inverse of the noise level. In particular, if the error $\delta$ is small then more iterations are useful, while if $\delta$ is large, it is convenient to stop sooner. So, the number of iterations plays the role of a regularization parameter. Using the early stopping strategy proposed above, we can see that the error in the data transfers to the error in the solution with the same noise level, which is the best that one can expect for a general operator $A$. \begin{remark}\textbf{Comparison with Tikhonov regularization.} The reconstruction properties of our proposed algorithms are comparable to the ones obtained using Tikhonov regularization \cite{engl1996regularization}, with the same dependence on the noise level \cite{benning2011error}. We underline that in the latter work only the Bregman divergence is considered, and not the feasibility. One main difference between Tikhonov and iterative regularization techniques is the fact that the Tikhonov parameter $\lambda$ is a continuous regularization parameter, while the iteration counter is a discrete one. This may be seen as a disadvantage, but in practice it can be mitigated by choosing a smaller step-size in the algorithm. On the other hand, iterative regularization is far more efficient from the computational point of view, as it requires the solution of only one optimization problem, while explicit regularization amounts to solving a family of problems indexed by the regularization parameter. Let us also note that, when $\delta$ is unknown, any principle used to determine a suitable $\lambda$ can be used to determine the stopping time. \end{remark} \section{Implementation details} \label{s: app} In this section we discuss some possible standard choices to construct non-expansive operators $T$ that satisfy our assumptions and encode some redundant information on the solution set. We first present examples for \ref{A: PDSP}, and later for \ref{A: DPSP}. To define the operators, we first recall the projection onto a single row of the linear system. For every $j\in [d]$ we denote by $a_j$ the $j$-th row of $A$ and by $P_j$ the projection onto the $j$-th linear equation; namely, \begin{align} P_{j}\colon\mathbb{R}^p \to \mathbb{R}^p, \hspace{2mm} \prim \mapsto \prim+\frac{b_{j}-\scal{a_{j}}{\prim}}{\|a_{j}\|^2}a_{j}^{*}. \label{d: averaged} \end{align} Analogously, for every $j\in [d]$, we denote by $P^\delta_{j}$ the projection operator as in the previous definition but with the noisy data $b^\delta$ instead of $b$. We proceed to define the four families of operators proposed in this paper for \ref{A: PDSP}. \begin{definition}\label{O: Operators} The operator $T\colon \ensuremath{\mathbb{R}}^{p}\to\ensuremath{\mathbb{R}}^{p}$ is a \begin{enumerate} \item \textbf{Serial projection} if \begin{align} T=P^{\delta}_{\beta_{l}}\circ\cdots\circ P^{\delta}_{\beta_{1}}, \label{d: averaged1} \end{align} where, for every $j\in [l]$, $\beta_{j}\in [d]$. 
\item \textbf{Parallel projection} if \begin{align} T=\sum\limits_{j=1}^{l}\alpha_{j} P^{\delta}_{\beta_{j}}, \label{d: averaged2} \end{align} where, for every $j\in [l]$, $\beta_{j}\in [d]$ and $\left(\alpha_{j}\right)_{j=1}^{l}$ are real numbers in $[0,1]$ such that $\sum\limits_{j=1}^{l}\alpha_{j}=1$. \item \textbf{Landweber operator} with parameter $\alpha$ if \begin{align} T\colon\mathbb{R}^p \to \mathbb{R}^p, \hspace{2mm} \prim \mapsto \prim-\alpha A^{*}(A\prim-b^\delta), \label{d: averaged3} \end{align} where $\alpha\in ]0,\frac{2}{\|A\|^2}[$. \item \textbf{Landweber operator with adaptive step} and parameter $M$ if \begin{align} T\colon\mathbb{R}^p \to \mathbb{R}^p, \hspace{2mm} \prim \mapsto \left\{ \begin{array}{ll} \prim-\beta(x) A^{*}(A\prim-b^\delta) & \text{\ \ \ if } A^{*}A\prim\neq A^{*}b^\delta \\ \prim & \text{\ \ \ otherwise,} \end{array}\right. \label{d: averaged4} \end{align} where, for $M>0$, $\beta(x)=\min\left(\frac{\|A\prim-b^\delta\|^2}{\|A^{*}(A\prim-b^\delta)\|^2},M\right)$. \end{enumerate} \end{definition} The next lemma states that the operators in Definition~\ref{O: Operators} satisfy Assumption A3. \begin{lemma} \label{L: Series Parallel} Let $T\colon\ensuremath{\mathbb{R}}^p\to\ensuremath{\mathbb{R}}^p$ be one of the operators given in Definition~\ref{O: Operators}. Then Assumption A3 holds with \begin{enumerate} \item $e_T=\sum_{j=1}^l \frac{1}{\|a_{\beta_j}\|^2}$, if $T$ is a serial projection; \item $e_T=\sum_{j=1}^l\frac{\alpha_{j}}{\|a_{\beta_j}\|^2}$, if $T$ is a parallel projection; \item $e_T=\frac{\alpha}{2-\alpha\|A\|^2}$, if $T$ is the Landweber operator with parameter $\alpha$; \item $e_T=M$, if $T$ is the Landweber operator with adaptive step and parameter $M$. \end{enumerate} \end{lemma} \begin{remark}\label{R: Parallel-Landweber} \textbf{Relationship between parallel projections and the Landweber operator}. A particular parallel projection is the one corresponding to $l=d$, $\beta_{j}=j$, and $\alpha_{j}=\frac{\|a_j\|^2}{\|A\|_{F}^2}$. Then, \eqref{d: averaged2} reduces to \begin{equation} T(x)=x-\frac{1}{\|A\|_{F}^2}A^{*}(Ax-b^\delta).\label{T:Land-Paralell} \end{equation} Observe that, since $\|A\|\leq \|A\|_{F}$, the previous is a special case of the Landweber operator with $\alpha=\frac{1}{\|A\|_{F}^2}$. \end{remark} \begin{remark}\textbf{Steepest descent}. Let $\bar{\prim}\in \ensuremath{\mathbb{R}}^p$ be such that $A\bar{\prim}=b$. Then, from \eqref{d: averaged4}, we derive (see also equation \eqref{e:Steepest descentbeta} in the Appendix) \begin{align} \|T\prim-\bar{\prim}\|^2&=\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{b^{\delta}-b}{A\prim-b^\delta}-2\beta(x)\|A\prim-b^\delta\|^2\nonumber\\&\hspace{2mm}+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2.\label{e:Steepest descentbeta0} \end{align} If $\delta=0$, then the choice of $\beta(x)$ given in \eqref{d: averaged4} minimizes the right-hand side of \eqref{e:Steepest descentbeta0}, provided that the minimizer is smaller than $M$. In this case, $\beta$ is chosen in order to maximize the contractivity with respect to a fixed point of $T$. While we cannot repeat the same procedure for $\delta > 0$, since we do not know $b$, we still keep the same choice. If $b^\delta\in \ensuremath{\operatorname{ran}}(A)$, then $\sup\limits_{x\in\ensuremath{\mathbb{R}}^p}\frac{\|A\prim-b^\delta\|^2}{\|A^{*}(A\prim-b^\delta)\|^2}<+\infty$. However, in general, if $\delta > 0$, this is not true and $M$ is needed to ensure that $\beta(x)$ is bounded. 
\end{remark} \begin{remark} From a computational point of view, parallel projections and Landweber operators are more efficient than serial projections. In particular, note that the quantity $(Ax^{k}-b^\delta)$ is computed anyway in the other steps of the algorithm. \end{remark} While in the primal space the data constraint that we want to reuse is clearly given by the linear system, in the dual space this is not always the case. In the following we present an example related to the $\ell^1$ norm. A similar implementation can be extended to the case of $1$-homogeneous penalty functions, for which the Fenchel conjugate is the indicator of a closed and convex subset of the dual space \cite[Proposition 14.11 (ii)]{bauschke2011convex}. \begin{example} \label{e:dpl1} Consider the noisy version of problem \ref{P: problem} with $J(x)= \|x\|_1$. Then the dual is given by \[ \min_{u\in\ensuremath{\mathbb{R}}^d} \langle b^\delta, u\rangle \,:\, |(A^*u)_i| \leq 1, \text{ for every $i\in [p]$}. \] For every $i\in [p]$, set $D_i=\{u\in\ensuremath{\mathbb{R}}^d\,:\, |(A^*u)_i| \leq 1\}$ and denote by $T_i$ the projection onto $D_i$. Note that this is trivial to compute, since it is the projection onto the region between two parallel hyperplanes. Clearly Assumption A4 holds. Differently from the primal case, here we project onto exact constraints, independent of the noisy data $b^{\delta}$. \end{example} \section{Numerical results} In this section, to test the efficiency of the proposed algorithms, we perform numerical experiments in two relevant settings: regularization with the $\ell^1$-norm and total variation regularization. For the $\ell^1$-norm regularization, we compare our results with other regularization techniques. In the more complex problem of total variation, we explore the properties of different variants of our procedure. \textbf{Code statement:} All numerical examples are implemented in MATLAB\textsuperscript{\textregistered} on a laptop. In the second experiment we also use the library Numerical tours \cite{peyre2011numerical}. The corresponding code can be downloaded at \href{https://github.com/cristianvega1995/L1-TV-Experiments-of-Fast-iterative-regularization-by-reusing-data-constraints} {https://github.com/cristianvega1995/L1-TV-Experiments-of-Fast-iterative-regularization-by-reusing-data-constraints} \subsection{$\ell^1$-norm regularization} In this section, we apply the routines \ref{A: PDSP} and \ref{A: DPSP} when $J$ is equal to the $\ell^1$-norm. We compare the results given by our method with two state-of-the-art regularization procedures: iterative regularization by vanilla primal-dual \cite{molinari2021iterative}, and Tikhonov explicit regularization, using the forward-backward algorithm \cite{Combettes_Wajs2005}. In addition, we compare to another classical optimization algorithm for the minimization of the sum of two non-differentiable functions, namely Douglas-Rachford \cite{briceno2012douglas}. In the noise-free case, this algorithm is very effective in terms of number of iterations, but at each iteration it requires the explicit projection onto the feasible set. In the noisy case, a stability analysis of this method is not available. We use the four variants of the algorithm \ref{A: PDSP} corresponding to the different choices of the operators $T$ in Definition \ref{O: Operators} and the version of \ref{A: DPSP} described in Example~\ref{e:dpl1}.
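For concreteness, we give below a minimal MATLAB sketch of the four operators of Definition~\ref{O: Operators}. It is only illustrative (the variable and function names are ours), and it assumes that the matrix \texttt{A}, the noisy data \texttt{bdelta}, a current point \texttt{x}, an index list \texttt{beta}, a step-size \texttt{alpha} in $]0,2/\|A\|^{2}[$, and a cap \texttt{M} are given.
\begin{verbatim}
% Projection onto the j-th noisy row equation:
% x + (bdelta_j - <a_j,x>) a_j^* / ||a_j||^2.
Pj = @(x,j) x + (bdelta(j) - A(j,:)*x) / norm(A(j,:))^2 * A(j,:)';

% Serial projection: sweep over the equations indexed by beta.
Tserial = x;
for j = beta(:)'
    Tserial = Pj(Tserial, j);
end

% Parallel projection with the weights of Remark 4 (Parallel-Landweber):
Tparallel = x - A' * (A*x - bdelta) / norm(A,'fro')^2;

% Landweber operator with fixed step alpha:
Tland = x - alpha * (A' * (A*x - bdelta));

% Landweber operator with adaptive step, capped at M:
r = A*x - bdelta; g = A'*r;
if any(g)
    Tadapt = x - min(norm(r)^2/norm(g)^2, M) * g;
else
    Tadapt = x;            % A*Ax = A*bdelta: leave the point unchanged
end
\end{verbatim}
Note that the serial variant needs one inner product per row at each sweep, while the parallel and Landweber variants only reuse the residual $Ax-b^{\delta}$.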
Unless otherwise stated, in all the experiments we use as preconditioners $\Sigma=\Gamma=\frac{0.99}{\|A\|} \ensuremath{\operatorname{Id}}\,$, which both satisfy \eqref{c: ConditionL D1}. Let $d=2260$, $p=3000$, and let $A\in\ensuremath{\mathbb{R}}^{d\times p}$ be such that every entry of the matrix is an independent sample from $\mathcal{N}(0,1)$, then normalized column by column. We set $b:=Ax^{*}$, where $x^{*}\in \ensuremath{\mathbb{R}}^{p}$ is a sparse vector with approximately $300$ nonzero entries uniformly distributed in the interval $[0,1]$. It follows from \cite[Theorem 9.18]{foucart2013invitation} that $x^{*}$ is the unique minimizer of the problem with probability larger than $0.99$. Let $b^\delta$ be such that $b^\delta=b+\|b\| u $, where the vector $u$ is distributed, entry-wise, as $U[-0.2,0.2]$. In this experiment, to test the reconstruction capabilities of our method, we use the exact datum $x^{*}$ to establish the best stopping time, i.e. the one minimizing $\|x^{k}-x^{*}\|$. The exact solution is also used for the other regularization techniques. In a practical situation, if $\delta$ is unknown, we would need to use parameter tuning techniques in order to select the optimal stopping time, but we do not address this aspect here. We detail the used algorithms and their parameters below. \begin{itemize} \item[(Tik)] \textbf{Tikhonov Regularization}: We consider a grid of penalty parameters $$G=\left\{\left(1-\frac{l-1}{5}\right)10^{1-i}\|A^{*}b^\delta\|_{\infty} : \ \ l\in[5], \ i\in [6]\right\}$$ and, for each value $\lambda\in G$, the optimization problem \begin{equation} \label{ProbTyk} \min\limits_{x\in\ensuremath{\mathbb{R}}^{p}}~\left\{\lambda\|x\|_{1}~+~\frac{1}{2}\|Ax-b^\delta\|^2\right\}. \end{equation} We solve each one of the previous problems with $300$ iterations of the forward-backward algorithm, unless the stopping criterion $\|x^{k+1}-x^{k}\|\leq 10^{-3}$ is satisfied earlier. Moreover, to deal efficiently with the sequence of problems, we use warm restart \cite{becker2011nesta}. We first solve problem \eqref{ProbTyk} for the largest value of $\lambda$ in $G$. Then, we initialize the algorithm for the next value of $\lambda$, in decreasing order, with the solution reached for the previous one; and so on. \item[(DR)] \textbf{Douglas-Rachford}: see \cite[Theorem 3.1]{briceno2012douglas}. \item[(PD)] \textbf{Primal-dual}: this corresponds to PDA with $m=1$ and $T_1=\ensuremath{\operatorname{Id}}\,$. \item[(PDS)] \textbf{Primal-dual with serial projections}: at every iteration, we compute a serial projection using all the equations of the noisy system, where the order of the projections is given by a random shuffle. \item[(PDP)]\textbf{Primal-dual with parallel projections}: $m=1$ and $T_1x=x-\frac{1}{\|A\|_{F}^2}A^{*}(Ax-b^{\delta})$, see Remark \ref{R: Parallel-Landweber}. \item[(PDL)]\textbf{Primal-dual Landweber}: $m=1$ and $T_1x=x-\frac{2}{\|A\|^2}A^{*}(Ax-b^{\delta})$. \item[(PDAL)] \textbf{Primal-dual Landweber with adaptive step}: $m=1$, and $T_1x=x-\beta(x)A^{*}(Ax-b^{\delta})$, where $\beta(x)=\min\left(\frac{\|Ax-b^\delta\|^2}{\|A^{*}(Ax-b^\delta)\|^2}, M\right)$ for $M=10^{6}$. \item[(DPS)]\textbf{Dual-primal with serial projections}: at every iteration, we compute a serial projection over every inequality of $\|A^{*}u\|_{\infty}\leq 1$, where the order is given by a random shuffle of the rows of $A^*$.
\end{itemize} \begin{table}[ht!]\begin{center} \begin{tabular}{|l|l|l|l|} \hline & Time [s] & Iterations & \begin{tabular}[c]{@{}l@{}}Reconstruction \\ error\end{tabular} \\ \hline Tik & 1.89 & 109 & 3.07 \\ \hline DR & 3.08 & 5 & 5.01 \\ \hline PD & 0.36 & 14 & 3.11 \\ \hline PDS & 1.41 & 11 & 2.58 \\ \hline PDP & 0.35 & 14 & 3.11 \\ \hline PDL & \textcolor{red}{0.28} & 12 & \textcolor{red}{2.60} \\ \hline PDAL & \textcolor{red}{0.27} & 11 & \textcolor{red}{2.56} \\ \hline DPS & 0.54 & 17 & 2.83 \\ \hline \end{tabular}\caption{Run-time and number of iterations of each method until it reaches its best reconstruction error. We compare the proposed algorithms with Tikhonov regularization (Tik), Douglas-Rachford (DR), and iterative regularization (PD).} \label{table:1} \end{center} \end{table} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{NF_Error.jpg} \caption{ Graphical representation of early stopping. Note that the early iterates are closer to the noise-free solution, while the later ones converge to the noisy solution.} \label{fig: Early stopping} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{Feas.jpg} \caption{ Early stopping with respect to the feasibility gap. Note that the behavior is similar to that of the previous figure.} \label{fig: Early stoppingFeas} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{Tik.jpg} \caption{ Reconstruction error of the Tikhonov method for different penalty parameters.} \label{fig: Tik} \end{figure} In Table \ref{table:1}, we also report the number of iterations needed to achieve the best reconstruction error; note, however, that an iteration of each method has a different computational cost, so the run-time is a more appropriate comparison criterion. Douglas-Rachford with early stopping is the regularization method performing worst on this example, both in terms of time and reconstruction error. This behavior may be explained by the fact that this algorithm converges fast to the noisy solution, which suggests that Douglas-Rachford is not a good choice for iterative regularization. Moreover, since we project onto the noisy feasible set at every iteration, the solution of a linear system is needed at every step, which also explains the cost of each iteration in terms of time. Note in addition that in our example $b^\delta$ is in the range of $A$, so the noisy feasible set is nonempty. Tikhonov regularization performs similarly in terms of time, but it requires many more (cheaper) iterations. The achieved error is smaller than the one of DR, but larger than the minimal one achieved by the other methods. Regarding our proposals, we observe that they perform better than (PD). This supports the idea that reusing the data constraints is beneficial with respect to vanilla primal-dual. The benefit is not evident for (PDP), which achieves the worst reconstruction error among the proposed variants, since $\|A\|^2_F$ is very large and so $T_1$ is very close to the identity. All the other methods give better results in terms of reconstruction error. (PDS) is the slowest, since it requires computing several projections at each iteration in a serial manner. We also observe that (PDL) and (PDAL) have the best performance, improving upon (PD) by 16.4\% and 17.7\% in reconstruction error and by 22.2\% and 25.0\% in run-time, respectively. Figure \ref{fig: Early stopping} empirically shows the existence of the trade-off between convergence and stability for all the algorithms, and therefore the advantage of early stopping.
Similar results were obtained for the feasibility gap. \subsection{Total variation} In this section, we perform several numerical experiments using the proposed algorithms for image denoising and deblurring. As done in the classical image denoising method introduced by Rudin, Osher, and Fatemi in \cite{rudin1992nonlinear}, we rely on the total variation regularizer. See also \cite{rudin1992nonlinear,osher1990feature,rudin1994total,chambolle2004algorithm,chambolle1997image,osher2005iterative,xiao2010dual}. We compare (PD) with the (PDL) and (PDAL) algorithms, which were the algorithms performing best in the previous application. In this section, we use two different preconditioners, which have been proved to be very efficient in practice \cite{pock2011diagonal}. Let $x^{*} \in \mathbb{R}^{N^2}$ represent an image with $N\times N$ pixels in $[0,1]$. We want to recover $x^{*}$ from a blurry and noisy measurement $y$, i.e. from \begin{align} y=Kx^{*}+e, \end{align} where $K$ is a linear bounded blurring operator and $e$ is a random noise vector. A standard approach is to assume that the original image is well approximated by the solution of the following constrained minimization problem: \begin{align} \label{D:ROF} \tag{TV} \min\limits_{u\in \ensuremath{\mathbb{R}}^{N\times N}}&\|Du\|_{1,2}\nonumber\\\text{s.t.}&\hspace{2mm}Ku=y\nonumber. \end{align} In the above, \begin{align} \|\cdot\|_{1,2}\colon (\ensuremath{\mathbb{R}}^{2})^{N\times N}\rightarrow \ensuremath{\mathbb{R}}\colon p\mapsto \sum_{i=1}^{N}\sum_{j=1}^{N}\|p_{ij}\|, \end{align} and $D\colon \ensuremath{\mathbb{R}}^{N^2}\rightarrow (\ensuremath{\mathbb{R}}^{2})^{N^2}$ is the discrete gradient operator for images, which is defined as \begin{align} \left(D u\right)_{ij}=&\left((D_{x}u)_{ij},(D_{y}u)_{ij}\right) \end{align} with \begin{align} \left(D_y u\right)_{ij}= &\left\{ \begin{array}{cc} u_{i+1,j}-u_{i,j} & \text{if } 1\leq i\leq N-1 \\ 0 & \text{if } i=N \end{array}\right.\nonumber\\ \left(D_x u\right)_{ij}=&\left\{ \begin{array}{cc} u_{i,j+1}-u_{i,j} & \text{if } 1\leq j\leq N-1 \\ 0 & \text{if } j=N. \end{array}\right.\nonumber \end{align} In order to avoid the computation of the proximity operator of $\| D \cdot\|_{1,2} $, we introduce an auxiliary variable $v=Du \in Y:=\ensuremath{\mathbb{R}}^{2N^2}$. Since the value of each pixel must belong to $[0,1]$, we add the constraint $u\in X:=[0,1]^{N^2}$. In this way, \eqref{D:ROF} becomes \begin{align} \label{D:ROFL1} \tag{TV$'$} \min\limits_{(u,v)\in X\times Y}&\|v\|_{1,2}\nonumber\\\text{s.t.}&\hspace{2mm}Ku=y\nonumber\\ &\hspace{2mm}Du=v\nonumber. \end{align} \subsubsection{Formulation and Algorithms} Problem~\eqref{D:ROFL1} is a special instance of \eqref{P: problem}, with \begin{align} \left\{\begin{array}{l} J\colon \ensuremath{\mathbb{R}}^{N^2}\times \ensuremath{\mathbb{R}}^{2N^2}\to \ensuremath{\mathbb{R}}\cup\{+\infty\}\colon x:=(u,v)\mapsto \|v\|_{1,2}+\iota_{X}(u), \\ \\ A=\left[\begin{array}{cc} K & 0 \\ D & -\ensuremath{\operatorname{Id}}\, \end{array}\right],\hspace{2mm} b^{\delta}=\left[\begin{array}{c} y \\ 0 \end{array}\right], \text{ and } p=d=3N^2. \end{array} \right. \label{S: TV} \end{align} Clearly, $A$ is a linear bounded nonzero operator, and $J\in\Gamma_0(\ensuremath{\mathbb{R}}^{N^2}\times \ensuremath{\mathbb{R}}^{2N^2})$.
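Assuming images are stored as $N\times N$ matrices, the discrete gradient above admits, for instance, the following minimal MATLAB sketch (the function name is ours); the two output matrices collect $D_{x}u$ and $D_{y}u$.
\begin{verbatim}
% Discrete gradient: (Dy u)_{ij} = u(i+1,j)-u(i,j), zero for i = N;
%                    (Dx u)_{ij} = u(i,j+1)-u(i,j), zero for j = N.
function [Dx, Dy] = grad_img(u)
    Dy = [diff(u,1,1); zeros(1,size(u,2))];   % differences along i
    Dx = [diff(u,1,2), zeros(size(u,1),1)];   % differences along j
end
\end{verbatim}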
\begin{table}[ht] \centering \begin{tabular}{|m{120mm}|} \hline Primal-Dual for total variation \\ \hline\vspace{2mm} \textbf{Input}: $(p^0,p^{-1},\prim^{0},v^0)\in\ensuremath{\mathbb{R}}^{3N^2}\times\ensuremath{\mathbb{R}}^{N^2}$ and $(q^0,q^{-1},z^{0},w^0)\in\ensuremath{\mathbb{R}}^{6N^2}\times\ensuremath{\mathbb{R}}^{2N^2}$.\\\vspace{2mm} \begin{flushleft}\textbf{For} $k=1,\ldots,N$:\end{flushleft}\vspace{-4mm}\\ \vspace{-10mm} \begin{align} \begin{array}{l} v^{k+1}= v^k+\Gamma\left( K(p^{k}+ \prim^{k}-p^{k-1})-y\right)\\ w^{k+1}= w^k-\Gamma(q^{k}+ z^{k}-q^{k-1})+\Gamma D(p^{k}+ \prim^{k}-p^{k-1})\\ \prim^{k+1}=P_{X}(p^k-\Sigma K^*v^{k+1}-\Sigma D^{*}w^{k+1})\\ z^{k+1}=\ensuremath{\operatorname{prox}}_{\Sigma \|\cdot\|_{1,2}}(q^k+\Sigma w^{k+1})\\ p^{k+1}=\prim^{k}-\alpha(\prim^{k})\left (K^{*}(K\prim^{k}-y)+D^{*}(D\prim^{k}-z^{k})\right)\\q^{k+1}=z^{k}+\alpha(\prim^{k})\left(D\prim^{k}-z^{k}\right) \end{array} \label{A: PDA-TV}\end{align}\vspace{-4mm}\\ \vspace{0mm}\begin{flushleft}\textbf{End}\end{flushleft} \\ \hline \end{tabular} \caption{General form of the algorithms.} \label{tab: Tabla 1} \end{table} We compare the algorithms listed below. Note that all the proposed algorithms are different instances of the general routine described in Table~\ref{tab: Tabla 1}, and each one of them corresponds to a different choice of $\alpha(x^k)$: \begin{enumerate} \item PD, the vanilla primal-dual algorithm, corresponding to $\alpha(x^k)=0$; \item PPD, the preconditioned primal-dual algorithm, obtained by $\alpha(x^k)=0$ and $\Sigma$ and $\Gamma$ as in \cite[Lemma 2]{pock2011diagonal}; \item PDL, corresponding to $\alpha(x^k)=1/\|A\|^2$; \item PDAL, corresponding to $\alpha(x^k)=\beta(x^k)$ as in \eqref{d: averaged4}. \end{enumerate} Initializing by $p^0=\bar{p}^{0}=x^{0}$ and $q^0=\bar{q}^{0}=z^{0}$, we recover the results of Theorem \ref{Th:PPD} and Corollary \ref{ESPDA}. \begin{remark} In order to implement the algorithm in \ref{A: PDA-TV}, we first need to compute the following operators. \begin{enumerate} \item It follows from \cite[Proposition 24.11]{bauschke2011convex} and \cite[Example 24.20]{bauschke2011convex} that $$\ensuremath{\operatorname{prox}}^{\Sigma}_{\|\cdot\|_{1,2}}(v)=\left(\ensuremath{\operatorname{prox}}^{\Sigma_{i}}_{\|\cdot\|}(v_{i})\right)_{i=1}^{N^2}=\left(\left(1-\frac{\Sigma}{\max\{\Sigma,\|v_{i}\|\}}\right)v_{i}\right)_{i=1}^{N^2},$$ where $v_i\in\ensuremath{\mathbb{R}}^{2}$. Analogously, the projection onto $X$ can be computed as $$P_{X}(u)=\left(P_{[0,1]}(u_i)\right)_{i=1}^{N^2},$$ where $P_{[0,1]}(u_i)=\min\{1,\max\{u_i,0\} \}.$ \item It follows from \cite{chambolle2004algorithm} that \begin{align} \hspace{-6mm}-D^{*}p= \operatorname{div}p=\left\{ \begin{array}{ll} (p_1)_{i,j}-(p_1)_{i-1,j} & \text{if } 1< i<N \\ (p_1)_{i,j} & \text{if } i=1\\ -(p_1)_{i-1,j} & \text{if } i=N \end{array}\right.\hspace{-2mm}+\left\{ \begin{array}{ll} (p_2)_{i,j}-(p_2)_{i,j-1} & \text{if } 1< j<N \\ (p_2)_{i,j} & \text{if } j=1\\ -(p_2)_{i,j-1} & \text{if } j=N. \end{array}\right.\nonumber \end{align} \end{enumerate} \end{remark} \subsubsection{Numerical results} Set $N=256$ and let $x^{*}$ be the image \textquotedblleft boat\textquotedblright\ in the library Numerical tours \cite{peyre2011numerical}. We suppose that $K$ is an operator assigning to every pixel the average of the pixels in a neighborhood of radius 8 and that $e\sim U[-0.025,0.025]^{N^2}$. We use the original image as exact solution.
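The proximity, projection, and divergence operators needed by the algorithm can be sketched in MATLAB as follows. This is only an illustration, where \texttt{sig} is a scalar preconditioner, the two components of a gradient field are stored as $N\times N$ matrices, and the function names are ours.
\begin{verbatim}
% Pixel-wise prox of sig*||.||_{1,2}: group soft-thresholding.
function [Px, Py] = prox_l12(Vx, Vy, sig)
    nrm = max(sqrt(Vx.^2 + Vy.^2), sig);  % max{sig, ||v_i||}, >= sig > 0
    Px  = (1 - sig./nrm) .* Vx;
    Py  = (1 - sig./nrm) .* Vy;
end

% Projection onto X = [0,1]^(N^2), applied entry-wise.
proj_X = @(u) min(1, max(u, 0));

% Divergence div = -D^*, matching the formula of the remark
% (Py is the component differenced in i, Px the one in j).
function d = div_img(Px, Py)
    d = [Py(1,:); diff(Py(1:end-1,:),1,1); -Py(end-1,:)] ...
      + [Px(:,1), diff(Px(:,1:end-1),1,2), -Px(:,end-1)];
end
\end{verbatim}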
For denoising and deblurring, we early stop the procedure at the iteration minimizing the mean square error (MSE), namely $\|x^k-x^*\|^2/N^2$, and we measure the time and the number of iterations needed to reach it. Another option for early stopping could be to consider the image with maximal structural similarity (SSIM). Numerically, in our experiments, this gives the same results. Additionally, we use the peak signal-to-noise ratio (PSNR) to compare the images. Note that the primal-dual algorithm with preconditioning is the method that needs the least time and the fewest iterations among all the procedures. Moreover, due to \cite[Lemma 2]{chambolle2011first}, condition \eqref{c: ConditionL D1} is automatically satisfied, while for the other methods we need to check it explicitly, which is computationally costly. However, (PPD) is the worst in terms of SSIM, PSNR, and MSE. We verify that all the other algorithms have a superior performance in terms of reconstruction, with a small advantage for the Landweber operators with fixed and adaptive step-sizes, which reduce the MSE by $94\%$ with respect to the noisy image. In addition, compared to (PD), (PDL) and (PDAL) require fewer iterations and less time to satisfy the early stopping criterion. We believe that this is due to the fact that the extra Landweber operator improves the feasibility of the primal iterates. A visual assessment of the denoised and deblurred images is shown in Figure \ref{fig: Comparision_TV}, which highlights the regularization properties achieved by the addition of the Landweber operator and confirms the previous conclusions. \begin{table}[ht]\centering \resizebox{0.75\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|l|} \hline & Iterations & Time [s] & SSIM & PSNR & MSE \\ \hline Noisy image & - & - & 0.4468 & 21.4801 & 0.0071 \\ \hline PD & 54 & 8.9773 & 0.8928 & 32.3614 & 0.0006 \\ \hline \begin{tabular}[c]{@{}l@{}}PD with\\ preconditioning\end{tabular} & 5 & 1.5515 & 0.8581 & 27.3753 & 0.0018 \\ \hline PDL & 46 & 7.1846 & 0.9066 & 34.2174 & 0.0004 \\ \hline PDAL & 31 & 5.4542 & 0.9112 & 34.3539 & 0.0004 \\ \hline \end{tabular}% } \caption{Quantitative comparison of the algorithms in terms of structural similarity (SSIM), peak signal-to-noise ratio (PSNR), mean square error (MSE), and the time and number of iterations needed to reach the early stopping.} \label{tab: Tabla 2} \end{table} \begin{figure}[ht] \centering \includegraphics[scale=0.40]{Boat_NF.jpg} \caption{ Qualitative comparison of the 4 proposed methods.} \label{fig: Comparision_TV} \end{figure} \section{Conclusion and Future Work} \label{s: Conclusion} In this paper we studied two new iterative regularization methods for solving a linearly constrained minimization problem, based on an extra activation step reusing the data constraint. The analysis was carried out in the context of convex functions and worst-case deterministic noise. We proposed five instances of our algorithm and compared their numerical performance with state-of-the-art methods, observing considerable improvements in run-time. In the future, we would like to extend Theorem~\ref{Th:PPD} to structured convex problems and more specific algorithms.
Possible extensions are: 1) the study of problems including, in the objective function, an $L$-smooth term and a composite linear term; 2) the analysis of random updates in the dual variable (see \cite{chambolle2018stochastic}) and stochastic approximations for the gradient; 3) the theoretical study of the impact of different preconditioners; 4) the improvement of the convergence and stability rates for strongly convex objective functions. \section{Acknowledgement} This project has been supported by the TraDE-OPT project, which received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 861137. L.R. acknowledges support from the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. L.R. also acknowledges the financial support of the European Research Council (grant SLING 819789), the AFOSR projects FA9550-18-1-7009, FA9550-17-1-0390 and BAA-AFRL-AFOSR-2016-0007 (European Office of Aerospace Research and Development), and the EU H2020-MSCA-RISE project NoMADS - DLV-777826. C. M. and S. V. are members of the INDAM-GNAMPA research group. This work represents only the view of the authors. The European Commission and the other organizations are not responsible for any use that may be made of the information it contains. \section{Proofs} \subsection{Equivalence between Primal-dual and Dual-primal algorithms.}\label{Proof:PD=Dp} In the following lemma we establish that, if $T=\ensuremath{\operatorname{Id}}\,$ and the initialization is the same, then there is an equivalence between the $k$-th primal variables of \ref{A: PDSP} and \ref{A: DPSP}, denoted by $\prim^{k}_{PD}$ and $\prim^{k}_{DP}$, respectively. \begin{lemma} \label{L: PD=DP} Let $(p^0_{PD},\bar{p}^{0}_{PD},\dal^0_{PD})\in \ensuremath{\mathbb{R}}^{2p}\times\ensuremath{\mathbb{R}}^{d}$ and $(\prim_{DP}^{0},\prop_{DP}^0,\bar{\prop}_{DP}^{0})\in \ensuremath{\mathbb{R}}^{p}\times\ensuremath{\mathbb{R}}^{2d}$ be the initializations of \ref{A: PDSP} and \ref{A: DPSP}, respectively, in the case when $m=1$ and $T=\ensuremath{\operatorname{Id}}\,$. Suppose that $p_{PD}^0=\bar{p}_{PD}^{0}$, $\prop_{DP}^0=\bar{\prop}_{DP}^{0}$, $\dal_{PD}^0=\prop_{DP}^{0}$, and $\prim_{PD}^{1}=\prim_{DP}^{1}$; then, for every $k\in\ensuremath{\mathbb N}$, $\prim^{k}_{PD}=\prim^{k}_{DP}$. \end{lemma} \begin{proof} Since $m=1$ and $T=\ensuremath{\operatorname{Id}}\,$ in both algorithms, for every $k\in\ensuremath{\mathbb N}$ we have $\prim_{PD}^{k}=p_{PD}^{k}$ and $\dal_{DP}^{k}=\prop_{DP}^{k}$. On one hand, by definition of \ref{A: PDSP}, we have that \begin{align} \dal^{k+1}_{PD}&=\dal^{1}_{PD}+\Gamma\sum_{i=1}^{k}\left(A\bar{p}^{i}_{PD}-b^{\delta}\right)\nonumber\\&=\dal^{1}_{PD}+\sum_{i=1}^{k}\Gamma A(p^{i}_{PD}-p^{i-1}_{PD})+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right)\nonumber\\&=\dal^{1}_{PD}+\Gamma A(p^{k}_{PD}-p^{0}_{PD})+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right)\nonumber\\&=\dal^{0}_{PD}+\Gamma(A\prim^{k}_{PD}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right),\label{V: Uprimal} \end{align} where the last equality follows from $p_{PD}^0=\bar{p}_{PD}^{0}$. Replacing \eqref{V: Uprimal} in the definition of $x^{k+1}_{PD}$ gives \begin{align} \prim_{PD}^{k+1}= \ensuremath{\operatorname{prox}}^{\Sigma }_{J}\left(\prim_{PD}^{k}-\Sigma A^{*}\left(\dal_{PD}^{0}+\Gamma (A\prim_{PD}^{k}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim_{PD}^{i}-b^{\delta}\right)\right)\right).
\label{e: algonestep} \end{align} On the other hand, by \ref{A: DPSP} we have that \begin{align} \dal^{k+1}_{DP}=\prop^{k+1}_{DP}=\prop^{0}_{DP}+\Gamma\sum_{i=1}^{k+1}\left(A\prim^{i}_{DP}-b^{\delta}\right),\label{V: Udual} \end{align} and \begin{align} \bar{\prop}^{k}_{DP}=\prop^{k}_{DP}+\dal^{k}_{DP}-\prop^{k-1}_{DP}=\prop^{0}_{DP}+\Gamma(A\prim^{k}_{DP}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{DP}-b^{\delta}\right).\label{V: UbarDual} \end{align} Replacing \eqref{V: UbarDual} in \ref{A: DPSP}, for every $k>1$, we can deduce that \begin{align} \prim_{DP}^{k+1}= \ensuremath{\operatorname{prox}}^{\Sigma }_{J}\left(\prim_{DP}^{k}-\Sigma A^{*}\left(\prop_{DP}^{0}+\Gamma (A\prim_{DP}^{k}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim_{DP}^{i}-b^{\delta}\right)\right)\right). \label{e: algonestep1} \end{align} Since $\dal^{0}_{PD}=\prop^{0}_{DP}$ and $\prim^{1}_{PD}=\prim^{1}_{DP}$, the result follows by induction. \end{proof} \begin{remark} An analysis similar to that in the proof of Lemma \ref{L: PD=DP} shows that \begin{align} \prim_{PD}^{k+1}= \ensuremath{\operatorname{prox}}^{\Sigma }_{J}\left(\prim^{k}_{PD}-\Sigma A^{*}\left(\dal^{0}_{PD}+\Gamma (A T_{\epsilon_{k}}\prim^{k}_{PD}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right)\right)\right), \label{e: algonestepproj} \end{align} which implies that the algorithm can be written in one step if we only care about the primal variable. \end{remark} \subsection{Proof of Theorem \ref{Th:PPD}}\label{Proof:PPD} \begin{proof} From \ref{A: PDSP}, we deduce that \begin{align} \Sigma^{-1}(p^k-\prim^{k+1})- A^*\dal^{k+1}&\in\partial J(\prim^{k+1})\nonumber\\ \Gamma^{-1}(\dal^k-\dal^{k+1}) +A\bar{p}^k &=b^{\delta}.\label{e:PMI} \end{align} Therefore, we have \begin{align} \left(\forall \prim\in\ensuremath{\mathbb{R}}^p\right)\hspace{3mm} J(\prim^{k+1})+\scal{\Sigma^{-1}(p^k-\prim^{k+1})- A^*\dal^{k+1}}{\prim-\prim^{k+1}}\leq J(\prim), \label{e:subdif P} \end{align} and \eqref{e:subdif P} yields \begin{align} 0\geq& J(\prim^{k+1})-J(\prim)+\scal{\Sigma^{-1}(p^k-\prim^{k+1})-A^*\dal^{k+1}}{\prim-\prim^{k+1}}\nonumber\\=& J(\prim^{k+1})-J(\prim)+\frac{\|p^k-\prim^{k+1}\|_{\Sigma^{-1}}^2}{2}+\frac{\|\prim^{k+1}-\prim\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\|p^k-\prim\|_{\Sigma^{-1}}^2}{2}+\scal{\prim^{k+1}-\prim}{A^*\dal^{k+1}}. \label{e: psub P} \end{align} Analogously, by \eqref{e:PMI} we get \begin{align} 0=&\scal{\Gamma^{-1}(\dal^k-\dal^{k+1})+ A\bar{p}^k-b^\delta}{\dal-\dal^{k+1}}\nonumber\\0=& \frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}+\frac{\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}^2}{2}-\frac{\|\dal^{k}-\dal\|_{\Gamma^{-1}}^2}{2}+\scal{b^{\delta}-A\bar{p}^k}{\dal^{k+1}-\dal}.\label{e: dsub P} \end{align} Recall that $z:=(\prim,\dal)\in\mathcal{Z}\subset C\times \ensuremath{\mathbb{R}}^{d}$, $z^{k}:=(\prim^{k},\dal^{k})$, and $V(z):=\frac{\|\prim\|^2_{\Sigma^{-1}}}{2}+\frac{\|\dal\|_{\Gamma^{-1}}^2}{2}$.
Summing \eqref{e: psub P} and \eqref{e: dsub P}, and by Assumption A3, we obtain \\ \begin{align} J(\prim^{k+1})-J(\prim)+\frac{\|\prim^{k+1}-p^k\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}+V(z^{k+1}-z)-V(z^{k}-z)&\nonumber\\+\scal{A(\prim^{k+1}-\prim)}{\dal^{k+1}}+\scal{b^\delta-A\bar{p}^k}{\dal^{k+1}-\dal}-\frac{e\delta^2}{2}&\leq 0\label{e: prestimate P} \end{align} Now compute \begin{align} & J(\prim^{k+1})-J(\prim)+\scal{A(\prim^{k+1}-\prim)}{\dal^{k+1}} +\scal{b^{\delta}-A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}-b}{\dal}+\scal{A\prim-b}{\dal^{k+1}}\nonumber\\&+\scal{A(\prim^{k+1}-\prim)}{\dal^{k+1}} +\scal{b^{\delta}-A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}}{\dal}+\scal{b}{\dal}+\scal{A\prim}{\dal^{k+1}}-\scal{b}{\dal^{k+1}}\nonumber\\&+\scal{A\prim^{k+1}}{\dal^{k+1}}-\scal{A\prim}{\dal^{k+1}} +\scal{b^\delta}{\dal^{k+1}-\dal}-\scal{A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal} +\scal{A\prim^{k+1}-A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ \geq&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} +\scal{A\prim^{k+1}-A\bar{p}^k}{\dal^{k+1}-\dal}. \label{e: lagrange P} \end{align} From \eqref{e: lagrange P} and \eqref{e: prestimate P} we obtain \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\frac{\|\prim^{k+1}-p^k\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\nonumber\\&+V(z^{k+1}-z)-V(z^{k}-z)-\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}-\frac{e\delta^2}{2}\nonumber\\ \leq& -\scal{A(\prim^{k+1}-\bar{p}^k)}{\dal^{k+1}-\dal}\label{e: lagrangeI.25 P} \\ = & -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}\nonumber\\ &+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}}\label{e: lagrangeI.5 P} \\ = & -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}\nonumber\\ &+\scal{\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\Sigma^{-\frac{1}{2}}(\prim^{k}-p^{k-1})}{\Gamma^{-\frac{1}{2}}(\dal^{k+1}-\dal^{k})}\nonumber\\ \leq& -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}\nonumber\\ &+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}+\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}\label{e: lagrangeIP} \end{align} Then, recalling that $\alpha=1-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2$, we have the following estimate \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\frac{\alpha}{2}\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2+V(z^{k+1}-z)-V(z^{k}-z)\nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}\nonumber\\&+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}+\frac{e\delta^2}{2} \label{e: lagrangeII P} \end{align} Summing from $1$ to $N-1$ we obtain \begin{align} 
&\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2+V(z^{N}-z)-V(z^{1}-z)-\scal{ A(\prim^{1}-p^{0})}{\dal^{1}-\dal} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}-\scal{\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\Sigma^{-\frac{1}{2}}(\prim^{N}-p^{N-1})}{\Gamma^{-\frac{1}{2}}(\dal^{N}-\dal)}+\frac{(N-1) e\delta^2}{2}\nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+\frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}+\|\Gamma^{\frac{1}{2}} A\Sigma ^{\frac{1}{2}}\|^2\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2}+\frac{(N-1) e\delta^2}{2} \label{e: lagrangeIII.5 P} \end{align} Now, by choosing $k=1$ in \eqref{e: lagrangeI.25 P} we get \begin{align} &\mathcal{L}(\prim^{1},\dal)-\mathcal{L}(\prim,\dal^{1})+\frac{\|\prim^{1}-p^0\|_{\Sigma^{-1}}^2}{2} +\frac{\alpha}{2}\|\dal^{1}-\dal^{0}\|_{\Gamma^{-1}}^2\nonumber\\&+V(z^{1}-z)-V(z^{0}-z) +\scal{A(\prim^{1}-\bar{p}^0)}{\dal^{1}-\dal}\nonumber\\ \leq&\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{1}-\dal\|_{\Gamma^{-1}}+\frac{e\delta^2}{2}. \label{e: lagrangeIII.5 P1} \end{align} Adding \eqref{e: lagrangeIII.5 P} and \eqref{e: lagrangeIII.5 P1} we obtain \begin{align} &\sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha}{2}\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2\nonumber\\&+\sum_{k=1}^{N}\frac{\alpha}{2}\|\dal^{k}-\dal^{k-1}\|_{\Gamma^{-1}}^2+\frac{\|\prim^{N}-\prim\|_{\Sigma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z)+\frac{N e\delta^2}{2} \label{e: lagrangeIV.5 P} \end{align} Next, by \eqref{e: lagrangeI.5 P} we have the following estimate \begin{align} &\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^k}+\frac{\|\dal^{k+1}-\dal^k\|_{\Gamma^{-1}}^2}{2}\nonumber\\&+\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+V(z^{k+1}-z)-V(z^{k}-z)\nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-p^{k})}{\dal^{k+1}-\dal} \nonumber\\&+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}+\frac{e\delta^2}{2} \label{e: lagrangeII.5 P} \end{align} Summing from $1$ to $N-1$ we obtain \begin{align} &\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\right)\nonumber\\&+\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+V(z^{N}-z)-V(z^{1}-z) -\scal{A(\prim^{1}-p^{0})}{\dal^{1}-\dal}\nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{N}-p^{N-1})}{\dal^{N}-\dal}+\frac{(N-1) e\delta^2}{2}\nonumber\\ =& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\Sigma^{-\frac{1}{2}}(\prim^{N}-p^{N-1})}{\Gamma^{-\frac{1}{2}}(\dal^{N}-\dal)}+\frac{(N-1) e\delta^2}{2}\nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} +\frac{ \|\Gamma^{\frac{1}{2}} A\Sigma^\frac{1}{2}\|^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2}+\frac{(N-1) e\delta^2}{2} \label{e: lagrangeIII P} \end{align} Now, since 
$\dal^{k+1}-\dal^{k}=\Gamma\left(A\bar{p}^k-b^\delta\right)$ we derive that \begin{align} &\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\right)\nonumber\\ =& \sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}}+\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\right)\nonumber\\&+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\=& \sum_{k=1}^{N-1}\left(\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}-\scal{\Gamma^{\frac{1}{2}}A(\prim^{k}-p^{k-1})}{\Gamma^{\frac{1}{2}}(A\bar{p}^k-b^{\delta})}+\frac{\|\Gamma^{\frac{1}{2}}(A\bar{p}^k-b^{\delta})\|^2}{2}\right)\nonumber\\ &+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}\right)\nonumber\\ =& \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}\right),\end{align} and since $\alpha=1-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2>0$ we obtain \begin{align} & \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}\right) \nonumber\\ \geq& \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} (Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\ &+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2\nonumber\\ \geq& \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{N}-p^{N-1})\|^2+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} A(\prim^{k+1}-p^{k})\|^2.\end{align} In turn, the convexity of $\|\cdot\|^2$ yields \begin{align} & \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{N}-p^{N-1})\|^2+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} A(\prim^{k+1}-p^{k})\|^2\nonumber\\ \geq & \frac{\alpha}{4}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} ( A\prim^{k+1}-b^{\delta})\|^2-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+ \frac{\alpha^{2}+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2\nonumber\\ \geq& \frac{\alpha}{4}\sum_{k=2}^{N}\|\Gamma^{\frac{1}{2}} ( A\prim^{k}-b^{\delta})\|^2-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+
\frac{\alpha^{2}+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2.\label{e: PX} \end{align} On the other hand, we get \begin{align} \|\Gamma^{\frac{1}{2}}(A\prim^k-b^\delta)\|^2 \geq &\frac{\|A\prim^k-b^{\delta}\|^2}{\|\Gamma^{-1}\|}\nonumber\\\geq&\frac{1}{\|\Gamma^{-1}\|}\left(\frac{\|A\prim^k-b\|^2}{2}-\|b^\delta-b\|^2\right). \label{e: Axsep P} \end{align} Combining \eqref{e: lagrangeIII.5 P1}, \eqref{e: lagrangeIII P}, \eqref{e: PX}, and \eqref{e: Axsep P} we have that \begin{align} &\sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2\nonumber\\ &+\sum_{k=1}^{N}\frac{\alpha}{8\|\Gamma^{-1}\|}\|A\prim^{k}-b\|^2+\frac{\|\prim^{N}-\prim\|^2_{\Sigma^{-1}}}{2} \nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z)+\frac{N e\delta^2}{2}+N\frac{\alpha}{4\|\Gamma^{-1}\|}\delta^2 \label{e: lagrangeIV P} \end{align} It remains to bound $\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}$. From \eqref{e: lagrangeIV.5 P} and since $z=(\prim,\dal)$ is a saddle-point of the Lagrangian we deduce that\\ \begin{align} \|\dal^{N}-\dal\|^2\leq\frac{2\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha} \sum_{k=1}^{N}\|\dal^{k}-\dal\|+\frac{2 V(z^{0}-z)}{\alpha}+\frac{ N e\delta^2}{\alpha}. \label{e: U bound P} \end{align} Applying \cite[Lemma A.1]{rasch2020inexact} to \eqref{e: U bound P} with $\lambda_{k}:=\frac{2\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha} $ and $S_{N}:= \frac{2 V(z^{0}-z)}{\alpha}+\frac{N e\delta^2}{\alpha}$ we get \begin{align} \|\dal^{N}-\dal\|\leq& \frac{N\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha}+\left(\frac{2 V(z^{0}-z)}{\alpha}+\frac{ N e\delta^2}{\alpha}+\left(\frac{N\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha}\right)^2\right)^{\frac{1}{2}}\nonumber\\\leq& \frac{2N\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha}+\left(\frac{2 V(z^{0}-z)}{\alpha}\right)^{\frac{1}{2}}+\left(\frac{ N e\delta^2}{\alpha}\right)^{\frac{1}{2}} \end{align} Inserting the previous bound in \eqref{e: lagrangeIV.5 P}, we obtain \begin{align} \sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)\leq& \frac{2(N\|\Gamma^{\frac{1}{2}}\|\delta)^{2}}{\alpha}+N\|\Gamma^{\frac{1}{2}}\|\delta\left(\frac{ V(z^{0}-z)}{\alpha}\right)^{\frac{1}{2}}\nonumber\\&+N\|\Gamma^{\frac{1}{2}}\|\delta\left(\frac{ N e\delta^2}{\alpha}\right)^{\frac{1}{2}} +V(z^{0}-z)+\frac{N e\delta^2}{2} \label{e: lagrangeV P} \end{align} Analogously, from \eqref{e: lagrangeIV P}, \begin{align} \sum_{k=1}^{N}\|A\prim^k-b\|^2\leq&\frac{16N^2\|\Gamma\|\|\Gamma^{-1}\|\delta^{2}}{\alpha^{2}}+8N\delta\|\Gamma^{-1}\|\left(\frac{2\|\Gamma\| V(z^{0}-z)}{\alpha^3}\right)^{\frac{1}{2}}+8N\delta^{2}\|\Gamma^{-1}\|\left(\frac{ \|\Gamma\|e N}{\alpha^3}\right)^{\frac{1}{2}}\nonumber\\ &+\frac{8\|\Gamma^{-1}\|V(z^{0}-z)}{\alpha}+2N\delta^{2}+\frac{4N\|\Gamma^{-1}\|e\delta^2}{\alpha} \end{align} and both results follow from Jensen's inequality.\end{proof} \subsection{Proof of Theorem \ref{Th:PDP}}\label{Proof:PDP} \begin{proof} It follows from \ref{A: DPSP} that \begin{align} \Sigma^{-1}(\prim^k-\prim^{k+1})-A^*\bar{\prop}^{k}\in\partial J(\prim^{k+1})\nonumber\\ \Gamma^{-1}(\prop^k-\dal^{k+1})+A\prim^{k+1} =b^{\delta} \label{I: DPinclsuion} \end{align} Thus, \begin{align} J(\prim^{k+1})+\scal{\Sigma^{-1}(\prim^k-\prim^{k+1})- A^*\bar{\prop}^{k}}{\prim-\prim^{k+1}}\leq J(\prim) \label{e:subdif D} \end{align} and \eqref{e:subdif D} yields
\begin{align} 0\geq& J(\prim^{k+1})-J(\prim)+\scal{\Sigma^{-1}(\prim^{k}-\prim^{k+1})-A^*\bar{\prop}^{k}}{\prim-\prim^{k+1}} \nonumber\\ =& J(\prim^{k+1})-J(\prim)+\frac{\|\prim^{k}-\prim^{k+1}\|_{\Sigma^{-1}}^2}{2}+\frac{\|\prim^{k+1}-\prim\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\|\prim^{k}-\prim\|_{\Sigma^{-1}}^2}{2}+\scal{\prim^{k+1}-\prim}{A^*\bar{\prop}^{k}} \label{e: psub D} \end{align} From \eqref{I: DPinclsuion}, it follows that \begin{align} 0=&\scal{\Gamma^{-1}(\prop^k-\dal^{k+1})+A\prim^{k+1}-b^\delta}{\dal-\dal^{k+1}}\nonumber\\ 0=&\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}+\frac{\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}^2}{2}-\frac{\|\prop^{k}-\dal\|_{\Gamma^{-1}}^2}{2}+\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}\label{e: dsub D} \end{align} Recall that $z:=(\prim,\dal)\in\mathcal{Z}\subset C\times \ensuremath{\mathbb{R}}^{d}$, $z^{k}:=(\prim^{k},\dal^{k})$, and $V(z):=\frac{\|\prim\|^2_{\Sigma^{-1}}}{2}+\frac{\|\dal\|_{\Gamma^{-1}}^2}{2}$. Summing \eqref{e: psub D} and \eqref{e: dsub D}, we obtain \begin{align} J(\prim^{k+1})-J(\prim)+\frac{\|\prim^{k+1}-\prim^{k}\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}+V(z^{k+1}-z)-V(z^{k}-z)&\nonumber\\+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}&\leq0\label{e: prestimate D} \end{align} Now compute \begin{align} & J(\prim^{k+1})-J(\prim)+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}-b}{\dal}+\scal{A\prim-b}{\dal^{k+1}}\nonumber\\&+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}}{\dal}+\scal{b}{\dal}+\scal{A\prim}{\dal^{k+1}}-\scal{b}{\dal^{k+1}}\nonumber\\&+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^\delta}{\dal^{k+1}-\dal}-\scal{A\prim^{k+1}}{\dal^{k+1}}+\scal{A\prim^{k+1}}{\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}-\dal^{k+1}}\label{e: lagrange Di1}\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}}\nonumber\\ &+\scal{A(\prim^{k+1}-\prim)}{\dal^{k}-\prop^{k-1}}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}}\nonumber\\ &+\scal{A(\prim^{k}-\prim)}{\dal^{k}-\prop^{k-1}}+\scal{A(\prim^{k+1}-\prim^{k})}{\dal^{k}-\prop^{k-1}}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}}\nonumber\\ &+\scal{A(\prim^{k}-\prim)}{\dal^{k}-\prop^{k-1}}+\scal{\Gamma^{\frac{1}{2}} A(\prim^{k+1}-\prim^{k})}{\Gamma^{-\frac{1}{2}}(\dal^{k}-\prop^{k-1})}. 
\label{e: lagrange D} \end{align} From \eqref{e: lagrange D} and \eqref{e: prestimate D} we obtain \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\frac{\|\prim^{k+1}-\prim^{k}\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}+V(z^{k+1}-z)-V(z^{k}-z) \nonumber\\ \leq&-\scal{b^{\delta}-b}{\dal^{k+1}-\dal} -\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}} +\scal{A(\prim^{k}-\prim)}{\prop^{k-1}-\dal^{k}}\nonumber\\ &-\scal{\Gamma^{\frac{1}{2}} A(\prim^{k+1}-\prim^{k})}{\Gamma^{-\frac{1}{2}}(\dal^{k}-\prop^{k-1})}\nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}} +\scal{A(\prim^{k}-\prim)}{\prop^{k-1}-\dal^{k}}\nonumber\\ &+\frac{\|\prim^{k+1}-\prim^{k}\|_{\Sigma^{-1}}^2}{2}+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{k}-\prop^{k-1}\|_{\Gamma^{-1}}^2}{2}\label{e: lagrangeI D} \end{align} Therefore, we have that \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1}) +\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{k}-\prop^{k-1}\|_{\Gamma^{-1}}^2}{2}\nonumber\\&+V(z^{k+1}-z)-V(z^{k}-z) \nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}} +\scal{A(\prim^{k}-\prim)}{\prop^{k-1}-\dal^{k}} \label{e: lagrangeII D} \end{align} Summing from $1$ to $N-1$ we obtain \begin{align} &\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1}) \right)+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2\nonumber\\&+V(z^{N}-z)+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{N}-\prop^{N-1}\|_{\Gamma^{-1}}^2}{2}\nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{N}-\prim)}{\prop^{N-1}-\dal^{N}}+\scal{A(\prim^{1}-\prim)}{\prop^{0}-\dal^{1}} +V(z^{1}-z) \nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} +\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{N}-\prop^{N-1}\|_{\Gamma^{-1}}^2}{2} +\frac{\|\prim^{N}-\prim\|_{\Sigma^{-1}}^2}{2} \nonumber\\&+\scal{A(\prim^{1}-\prim)}{\prop^{0}-\dal^{1}} +V(z^{1}-z) \label{e: lagrangeIII D} \end{align} Reordering \eqref{e: lagrangeIII D} we obtain \begin{align} &\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}+\scal{A(\prim^{1}-\prim)}{\prop^{0}-\dal^{1}} +V(z^{1}-z). \label{e: lagrangeIV D} \end{align} On the other hand, from \eqref{e: prestimate D}, \eqref{e: lagrange Di1}, and \eqref{e: Axsep D} we get \begin{align} \mathcal{L}(\prim^{1},\dal)-\mathcal{L}(\prim,\dal^{1})+\frac{\alpha}{2}\|\dal^{1}-\prop^{0}\|_{\Gamma^{-1}}^2\leq& \delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{1}-\dal\|_{\Gamma^{-1}}-\scal{A(\prim^{1}-\prim)}{\bar{\prop}^{0}-\dal^{1}}\nonumber\\ &+V(z^{0}-z)-V(z^{1}-z)\label{e: firsstiteation} \end{align} Summing \eqref{e: lagrangeIV D} and \eqref{e: firsstiteation} yields \begin{align} &\sum_{k=1}^{N}\left(\mathcal{L}(\prim^{k},\dal)-\mathcal{L}(\prim,\dal^{k})\right)+\frac{\alpha}{2}\sum_{k=1}^{N}\|\dal^{k}-\prop^{k-1}\|_{\Gamma^{-1}}^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z).
\label{e: lagrangeVIII D} \end{align} Moreover, since $\dal^{k+1}-\prop^{k}=\Gamma(A\prim^{k+1}-b^{\delta})$, we have \begin{align} \|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2=& \scal{\Gamma (A\prim^{k+1}-b^\delta)}{A\prim^{k+1}-b^\delta}\nonumber\\\geq&\frac{\|A\prim^{k+1}-b^\delta\|^2}{\|\Gamma^{-1}\|}\nonumber\\\geq&\frac{1}{\|\Gamma^{-1}\|}\left(\frac{\|A\prim^{k+1}-b\|^2}{2}-\|b^\delta-b\|^2\right) \label{e: Axsep D} \end{align} and from \eqref{e: lagrangeVIII D} and \eqref{e: Axsep D} we obtain \begin{align} &\sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha}{4\|\Gamma^{-1}\|}\sum_{k=1}^{N}\|A\prim^{k}-b\|^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z)+\frac{\alpha N\delta^{2}}{2\|\Gamma^{-1}\|} \label{e: lagrangeV D} \end{align} From \eqref{e: lagrangeVIII D} it follows that \begin{align} \|\dal^{N}-\dal\|_{\Gamma^{-1}}^2\leq2\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+2V(z^{0}-z). \label{e: U bound D} \end{align} Applying \cite[Lemma A.1]{rasch2020inexact} to \eqref{e: U bound D} with $\lambda_{k}:=2\delta\|\Gamma^{\frac{1}{2}}\|$ and $S_{N}:= 2V(z^{0}-z)$, we get \begin{align} \|\dal^{N}-\dal\|_{\Gamma^{-1}}\leq& N\|\Gamma^{\frac{1}{2}}\|\delta+\left(2 V(z^{0}-z)+\left(N\|\Gamma^{\frac{1}{2}}\|\delta\right)^2\right)^{\frac{1}{2}}\nonumber\\\leq& 2N\|\Gamma^{\frac{1}{2}}\|\delta+\left(2 V(z^{0}-z)\right)^{\frac{1}{2}}\label{e: ubound D} \end{align} Inserting the previous bound in \eqref{e: lagrangeVIII D}, we obtain \begin{align} \sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)\leq& 2\|\Gamma^{\frac{1}{2}}\|^{2} N^2\delta^{2}+N\|\Gamma^{\frac{1}{2}}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}+V(z^{0}-z) \label{e: lagrangeVI D} \end{align} and by \eqref{e: lagrangeV D} and \eqref{e: ubound D} we have \begin{align} \sum_{k=1}^{N}\|A\prim^{k}-b\|^2\leq&\frac{4\|\Gamma^{-1}\|}{\alpha}\left(2\|\Gamma^{\frac{1}{2}}\|^{2} N^2\delta^{2}+N\|\Gamma^{\frac{1}{2}}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}+V(z^{0}-z)+\frac{\alpha N\delta^{2}}{2\|\Gamma^{-1}\|}\right) \label{e: lagrangeVII D} \end{align} and both results follow from Jensen's inequality.\end{proof} \subsection{Proof of Lemma \ref{L: Series Parallel}}\label{LP: Series Parallel} \begin{proof} Let us first recall that, for every $j\in[d]$, \begin{align} P^{\delta}_{j}\colon \prim \mapsto \prim+\frac{b^{\delta}_{j}-\scal{a_{j}}{\prim}}{\|a_{j}\|^2}a_{j}^{*}. \end{align} Note that, for every $j\in[d]$, the hyperplanes determined by the $j$-th equations of $C$ and $C_{\delta}$ are parallel. Then, for every $j\in [d]$ and $\bar{\prim}\in C$, we get \begin{align} \|P^{\delta}_{j}\prim-\bar{\prim}\|^2=&\|P_{j}\prim-\bar{\prim}\|^2+2\scal{P_{j}\prim-\bar{\prim}}{P^{\delta}_{j}\prim-P_{j}\prim}\nonumber\\&+\|P_{j}\prim-P^{\delta}_{j}\prim\|^2\nonumber\\=&\|P_{j}\prim-\bar{\prim}\|^2+\|P_{j}\prim-P^{\delta}_{j}\prim\|^2. \label{E:Pitagoras_1} \end{align} Analogously, we have that \begin{align} \|\prim-\bar{\prim}\|^2=&\|\prim-P_{j}\prim\|^2+\|P_{j}\prim-\bar{\prim}\|^2.
\label{E:Pitagoras_2} \end{align} It follows from \eqref{E:Pitagoras_1} and \eqref{E:Pitagoras_2} that \begin{align} \|P^{\delta}_{j}\prim-\bar{\prim}\|^2+\|\prim-P_{j}\prim\|^2=\|\prim-\bar{\prim}\|^2+\|P^{\delta}_{j}\prim-P_{j}\prim\|^2, \end{align} hence \begin{align} \|P^{\delta}_{j}\prim-\bar{\prim}\|^2&\leq\|\prim-\bar{\prim}\|^2+\|P^{\delta}_{j}\prim-P_{j}\prim\|^2\nonumber\\ &\leq \|\prim-\bar{\prim}\|^2+\frac{(b^{\delta}_{j}-b_{j})^2}{\|a_{j}\|^2}\nonumber\\ &\leq \|\prim-\bar{\prim}\|^2+\frac{\delta^2}{\|a_{j}\|^2}\label{P: Paralellogram} \end{align} \begin{enumerate} \item Since $T=P^{\delta}_{\beta_{l}}\circ\cdots\circ P^{\delta}_{\beta_{1}}$ it is clear that $C_{\delta}\subset\ensuremath{\operatorname{Fix}} T$, and by induction we have that \begin{align} \|T\prim-\bar{\prim}\|^2&\leq\|\prim-\bar{\prim}\|^2+e_{T}\delta^{2}, \end{align} where $e_{T}=\sum_{j=1}^{l}\frac{1}{\|a_{\beta_{j}}\|^{2}}$. \item The proof follows from the convexity of $\|\cdot\|^{2}$, which yields the claim with $e_{T}=\sum_{j=1}^{l}\frac{\alpha_{j}}{\|a_{\beta_{j}}\|^{2}}$. \item Let $\bar{\prim}\in C$, by \eqref{d: averaged3}, we have \begin{align} \|T\prim-\bar{\prim}\|^2=&\|\prim-\bar{\prim}\|^2-2\alpha\scal{\prim-\bar{\prim}}{A^{*}(A\prim-b^\delta)}+\alpha^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\=&\|\prim-\bar{\prim}\|^2-2\alpha\scal{A\prim-b}{A\prim-b^\delta}+\alpha^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\ \leq&\|\prim-\bar{\prim}\|^2-2\alpha\scal{b^\delta-b}{A\prim-b^\delta}+\left(\alpha^2\|A\|^{2}-2\alpha\right)\|A\prim-b^{\delta}\|^2\label{e:Steepest descentalpha} \end{align} Now, using Young's inequality with parameter $2-\alpha\|A\|^2$, we have that \begin{align} \|T\prim-\bar{\prim}\|^2\leq&\|\prim-\bar{\prim}\|^2+\frac{\alpha}{2-\alpha\|A\|^2}\|b^{\delta}-b\|^2\nonumber\\\leq&\|\prim-\bar{\prim}\|^2+\frac{\alpha\delta^{2}}{2-\alpha\|A\|^2}. \end{align} It remains to prove that if $C_{\delta}\neq\emptyset$ then $C_{\delta}\subset \ensuremath{\operatorname{Fix}} T$, which is clear from \eqref{d: averaged3}. \item Let $\bar{\prim}\in C$ and $\prim\in\ensuremath{\mathbb{R}}^p$. If $A^{*}A\prim=A^{*}b^{\delta}$, then \eqref{A: pitagoras error 1} immediately holds. Otherwise, we have \begin{align} \|T\prim-\bar{\prim}\|^2=&\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{\prim-\bar{\prim}}{A^{*}(A\prim-b^\delta)}+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\=&\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{A\prim-b}{A\prim-b^\delta}+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\ =&\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{b^{\delta}-b}{A\prim-b^\delta}-2\beta(x)\|A\prim-b^\delta\|^2\nonumber\\&+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2\label{e:Steepest descentbeta} \end{align} Now, using Young's inequality with parameter $2-\beta(x)\frac{\|A^{*}(A\prim-b^\delta)\|^2}{\|A\prim-b^\delta\|^2}$, we have that \begin{align} \|T\prim-\bar{\prim}\|^2\leq&\|\prim-\bar{\prim}\|^2+\frac{\beta(x)}{2-\beta(x)\frac{\|A^{*}(A\prim-b^\delta)\|^2}{\|A\prim-b^\delta\|^2}}\|b^{\delta}-b\|^2\nonumber\\\leq&\|\prim-\bar{\prim}\|^2+M\delta^{2}. \end{align} Finally, it is clear from \eqref{d: averaged4} that if $C_{\delta}\neq\emptyset$ then $C_{\delta}\subset \ensuremath{\operatorname{Fix}} T$.
\end{enumerate} \end{proof} \printbibliography \end{document} \section{Introduction} Many applied problems require the estimation of a quantity of interest from noisy linear measurements, for instance compressed sensing \cite{candes2006robust,candes2006near,donoho2006compressed,rudelson2005geometric,tsaig2006extensions}, image processing \cite{rudin1992nonlinear,osher1990feature,rudin1994total,chambolle2004algorithm,chambolle1997image,osher2005iterative,xiao2010dual}, matrix completion \cite{cai2010singular,candes2010matrix,candes2009exact,molinari2021iterative}, and various problems in machine learning \cite{shalev2014understanding,moulines2011non,rosasco2014learning,duchi2009efficient,bauer2007regularization,xiao2010dual,yao2007early}. In all these problems, we are interested in finding stable solutions to an equation where the accessible data are corrupted by noise. This is classically achieved by regularization \cite{engl1996regularization}. The most common procedure in the literature is Tikhonov (or variational) regularization \cite{engl1996regularization}, which consists in minimizing the sum of an error term on the residual of the equation plus a regularizer, which is explicitly added to the objective function. The regularizer entails some a priori knowledge or some desired property of the solutions that we want to select. A trade-off parameter is then introduced to balance the fidelity term and the regularizer. In practice, this implies that the optimization problem has to be solved many times for different values of the parameter. Finally, a parameter (and the corresponding solution) is chosen according to its performance with respect to some criterion, such as the Morozov discrepancy principle \cite{engl1996regularization} or, a popular technique in machine learning, cross-validation on left-out data \cite{steinwart2008support,golub1979generalized}. \\ An efficient alternative to explicit regularization is offered by iterative regularization, also known as implicit regularization \cite{engl1996regularization,burger2007error,boct2012iterative,bachmayr2009iterative}. The chosen regularizer is minimized under the constraint given by the equation, but with the available data affected by noise. A numerical algorithm to solve the optimization problem is chosen and early stopped, since running the iterative procedure until convergence would give an undesired noisy solution. In this setting, the number of iterations plays the role of the regularization parameter. The best performing iterate, according to some a priori criterion (for instance, cross-validation), is then considered as the regularized solution. This procedure is very efficient when compared to explicit regularization, because it requires solving only one optimization problem, and not even until convergence. \ \\ In this paper we are interested in iterative regularization procedures via early stopping. First, we focus on linearly constrained minimization problems in which the regularizer is only convex, not necessarily smooth nor strongly convex. The main novelty of this work is the design and analysis of two new iterative regularization methods based on primal-dual algorithms \cite{Chambolle_Pock11,Condat13,Vu13}, which perform one minimization step on the primal variable followed by one on the dual, to jointly solve the primal and the dual minimization problems.
Primal-dual algorithms are computationally efficient, as only matrix-vector multiplications and the calculation of a proximity operator are required. In order to design our algorithms, we adapt the framework presented in \cite{briceno2021random} to the context of inverse problems. The key idea is to reuse the data constraint at every iteration of the primal-dual algorithm, by activating the redundant information available. The first method that we propose is a primal-dual algorithm (\ref{A: PDSP}) with additional activations of the linear equations. We propose different variants of this procedure, depending on the extra activation step. For instance, we are able to exploit the data constraints more than once at every iteration via gradient descent, with fixed or adaptive step size. The second method is a dual-primal algorithm (\ref{A: DPSP}) where a subset containing the dual solutions is activated at each step. This subset is not affected by the noise in the data and is usually determined by a finite number of independent constraints. This formulation may seem artificial or inefficient. However, while maintaining an easy implementation, our methods achieve better numerical performance and considerable speed-ups with respect to the vanilla primal-dual algorithm. We extend to the noisy case the techniques studied in \cite{briceno2019projected,briceno2021random} for the exact case. The assumptions on the noise are the classical ones in inverse problems, see e.g. \cite{matet2017don,calatroni2021accelerated,burger2007error,molinari2021iterative}. We generalize the results in \cite{molinari2021iterative}, by including in the primal-dual procedure a diagonal preconditioning and an extra activation step. Since we are in a non-vanishing noisy regime, it is not reasonable to expect the convergence of the iterations to the solution set of the noise-free problem; thus, we provide an early stopping criterion to recover a stable approximation of an ideal solution, in the same spirit as \cite{matet2017don,calatroni2021accelerated,burger2007error,raskutti2014early,zhang2005boosting,yao2007early,blanchard2010optimal,bartlett2007adaboost}. The early stopping rule is derived from theoretical stability bounds and feasibility gap rates for both algorithms, obtaining implicit regularization properties similar to those stated in \cite{molinari2021iterative} and \cite{matet2017don}. Theoretical results are complemented by numerical experiments for robust sparse recovery and total variation, showing that state-of-the-art performance can be achieved with considerable computational speed-ups.\\ \textbf{Related works.} In this section, we briefly discuss the literature about variational and iterative regularization techniques. Tikhonov regularization was introduced in \cite{tihonov1963solution}. See also \cite{engl1996regularization,benning2018modern} and references therein for an extensive treatment of the topic. The most famous iterative regularization method is the Landweber algorithm \cite{landweber1951iteration,engl1996regularization}, namely gradient descent on the least squares problem. Duality theory in optimization gives another interpretation, which sheds light on the regularizing properties of this procedure. Indeed, consider the problem of minimizing the squared norm under the linear constraint. Running gradient descent on its dual problem and mapping back to the primal variable, we obtain exactly the Landweber method.
This provides another explanation of why the iterates of the Landweber algorithm converge to the minimal norm solution of the linear equation. Stochastic gradient descent on the previous problem is a generalization of the Kaczmarz method \cite{lorenz2008convergence,schlor19}, which consists in applying cyclic or random projections onto single equations of the linear system. Accelerated and diagonal versions are discussed in \cite{engl1996regularization,neubauer2017nesterov} and \cite{bakushinsky2005iterative,kaltenbacher2008iterative,scherzer1998modified}, respectively. The regularization properties of other optimization algorithms for more general regularizers have also been studied. If strong convexity is assumed, mirror descent \cite{beck2003mirror,nemirovskij1983problem} can also be interpreted as gradient descent on the dual problem, and its regularization properties (and those of its accelerated variant) have been studied in \cite{matet2017don}. Diagonal approaches \cite{bahraoui1994convergence} with a regularization parameter that vanishes along the iterations have been studied in \cite{garrigos2018iterative}; see \cite{calatroni2021accelerated} for an accelerated version. Another common approach relies on the linearized Bregman iteration \cite{yin2008bregman,yin2010analysis, xiao2010dual, osher2005iterative}, which has found applications in compressed sensing \cite{cai2009linearized,osher2010fast,yin2008bregman} and image deblurring \cite{cai2009linearized}. However, this method requires solving non-trivial minimization problems at each iteration. For convex, but not strongly convex, regularizers, the regularization properties of a primal-dual algorithm have been investigated in \cite{molinari2021iterative}. \ \\ The rest of the paper is organized as follows. In Section~\ref{sec:NB} we introduce the notation and the necessary mathematical background. In Section~\ref{s: MPA} we present the main problem and propose five classes of algorithms to solve it numerically. In Section~\ref{s: MR} we derive stability and feasibility gap bounds and the related early stopping rules. In Section~\ref{s: app} we verify the performance of the algorithms on two numerical applications: the robust sparse recovery problem and image reconstruction by total variation. Finally, we provide some conclusions. \section{Notation and background} \label{sec:NB} First we recall some well-known concepts and properties used in the paper. \ \\ Let $X$, $Y$ be two finite-dimensional real vector spaces equipped with an inner product $\scal{\cdot}{\cdot}$ and the induced norm $\|\cdot\|$. We denote the set of convex, lower semicontinuous, and proper functions on $X$ by $\Gamma_{0}(X)$. The subdifferential of $F\in \Gamma_{0}(X)$ is the set-valued operator defined by \begin{align} \partial F\colon \ X\to 2^{X}, \quad x\mapsto\{u\in X\hspace{0.2cm}|\hspace{0.2cm}(\forall y\in X)\hspace{0.2cm} F(x)+\langle y-x\mid\hspace{0mm} u\rangle\leq F(y)\}. \label{d: subdifferential} \end{align} If the function $F$ is G\^ateaux differentiable at the point $x$, then $\partial F(x)=\{\nabla F(x)\}$\hspace{2mm}\cite[Proposition 17.31 (i)]{bauschke2011convex}. In general, for $F\in \Gamma_{0}(X)$, it holds that $(\partial F)^{-1}=\partial F^{*}$ \hspace{2mm}\cite[Corollary 16.30]{bauschke2011convex}, where $F^{*}\in \Gamma_{0}(X)$ is the conjugate function of $F$, defined by $F^{*}(x):=\sup _{u \in X} \ \scal{x}{u}- F(u)$.
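As an example that will be used in Section~\ref{s: app}, take $F=\|\cdot\|_{1}$ on $\ensuremath{\mathbb{R}}^p$: the supremum defining $F^{*}(u)$ equals $0$ if $\|u\|_{\infty}\leq 1$ and $+\infty$ otherwise, so that
\begin{align}
F^{*}=\iota_{B_{\infty}},\qquad B_{\infty}:=\{u\in\ensuremath{\mathbb{R}}^p \, : \, \|u\|_{\infty}\leq 1\},
\end{align}
i.e. the conjugate of the $\ell^{1}$-norm is the indicator of the unit ball of the dual norm.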
\ \\ For every self-adjoint positive definite matrix $\Sigma$, we define the proximity operator of $F$ relative to the metric induced by $\|\cdot\|_{\Sigma}^2:=\scal{\cdot}{\Sigma \cdot}$ as $\operatorname{prox}^{\Sigma}_{F}=(\ensuremath{\operatorname{Id}}\,+\Sigma\partial F)^{-1}$. If $\Sigma=\sigma\ensuremath{\operatorname{Id}}\,$ for some real number $\sigma>0$, it is customary to write $\ensuremath{\operatorname{prox}}_{\sigma F}$ rather than $\operatorname{prox}^{\Sigma}_{F}$. The projection operator onto a nonempty closed convex set $C \subseteq X$ is denoted by $P_{C}$. If we define the indicator $\iota_{C}\in\Gamma_{0}(X)$ as the function that is $0$ on $C$ and $+\infty$ otherwise, then $\ensuremath{\operatorname{prox}}_{\iota_{C}}=P_{C}$. Moreover, if $C$ is a singleton, say $C=\{b\}$, we have that $\iota^{*}_{\{b\}}(u)=\scal{u}{b}$. The relative interior of $C$ is $\ensuremath{\operatorname{ri}}(C)=\left\{ x\in C\mid \ensuremath{\mathbb{R}}_{++} (C-x)= \ensuremath{\operatorname{span}} (C-x)\right\},$ where $\ensuremath{\mathbb{R}}_{++}C=\left\{\lambda y\mid (\lambda >0)\wedge(y\in C)\right\}$ and $\ensuremath{\operatorname{span}}(C)$ is the smallest linear subspace of $X$ containing $C$. \ \\ Given $\alpha~\in~]0, 1[ $, an operator $T : \ X \rightarrow X$ is $\alpha$-averaged non-expansive iff $$(\forall x\in X )(\forall y\in X)\hspace{3mm} \|T x - Ty\|^2 \leq \|x - y\|^2 -\frac{1-\alpha}{\alpha}\|(\ensuremath{\operatorname{Id}}\,-T)x- (\ensuremath{\operatorname{Id}}\,-T)y\|^2,$$ and it is quasi-non-expansive iff $$(\forall x\in X )(\forall y\in \ensuremath{\operatorname{Fix}} T)\hspace{3mm} \|T x - y\|^2 \leq \|x - y\|^2,$$ where the set of fixed points of $T$ is defined by $\ensuremath{\operatorname{Fix}} T=\{x\in X\mid Tx=x\}$. For further results on convex analysis and operator theory, the reader is referred to \cite{bauschke2011convex}. \ \\ For a real matrix $A\in\ensuremath{\mathbb{R}}^{d\times p}$, its operator norm is denoted by $\|A\|$ and its adjoint by $A^{*}$. We define the Frobenius norm of $A$ as $\|A\|^{2}_{F}:=\sum_{i=1}^{d}\|a_{i}\|^2$, where, for every $i\in[d]:=\{1,\ldots,d\}$, $a_{i}$ denotes the $i$-th row of $A$. We also denote by $A_i$ the $i$-th column of $A$. We denote by $\ensuremath{\operatorname{ran}}(A)$ and $\ker(A)$ the range and the kernel of $A$, respectively. \section{Main problem and algorithm} \label{s: MPA} Many applied problems require the estimation of a quantity of interest $x\in\ensuremath{\mathbb{R}}^p$ based on linear measurements $b=Ax$, for some matrix $A\in\ensuremath{\mathbb{R}}^{d \times p}$. For simplicity, we carry out the analysis in this finite-dimensional setting, but note that it can be easily extended to the infinite-dimensional one. A standard approach to obtain the desired solution is to assume that it is a minimizer of the following linearly constrained optimization problem: \begin{align} \min_{x\in \ensuremath{\mathbb{R}}^p} J(x) \hspace{4mm} \text{s.t.} \hspace{4mm} Ax=b, \tag{$\mathcal{P}$} \label{P: problem} \end{align} where $J\in\Gamma_{0}(\ensuremath{\mathbb{R}}^p)$ encodes a priori information on the solution and is usually hand-crafted. Typical choices are: the squared norm \cite{engl1996regularization}; the elastic net regularization \cite{matet2017don}; the $\ell^{1}$-norm \cite{candes2006robust,candes2006near,donoho2006compressed,tsaig2006extensions}; the total variation \cite{rudin1992nonlinear,osher1990feature,rudin1994total,chambolle2004algorithm}.
Note that, in the previous examples, the first two regularizers are strongly convex, while the last two are just convex and non-smooth. \\ \\ If we use the indicator function of $\{b\}$, \eqref{P: problem} can be written equivalently as \begin{align} \min_{x\in \ensuremath{\mathbb{R}}^p} J(x)+\iota_{\{b\}}(Ax). \label{P: Pc} \end{align} We denote by $\mu$ the optimal value of $\eqref{P: problem}$ and by $\ensuremath{{\mathcal S}}$ the set of its minimizers. We assume that $\ensuremath{{\mathcal S}}\neq\emptyset$. In order to build our regularization procedure, we consider the Lagrangian functional for problem $\eqref{P: problem}$: \begin{equation} \label{e:saddle point} \mathcal{L}(\prim,\dal):=J(\prim)+\scal{\dal}{A\prim-b}. \end{equation} This approach allows us to split the contribution of the non-smooth term $J$ from that of the linear operator $A$, without requiring the computation of the projection onto the set $C:=\{x\in\ensuremath{\mathbb{R}}^{p}\mid Ax=b\}$. We define the set of saddle points of $\mathcal{L}$ as \begin{equation} \mathcal{Z}=\left\{(\prim, \dal)\in \ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d: \ \mathcal{L}(\prim,v)\leq \mathcal{L}(\prim,\dal)\leq \mathcal{L}(y,\dal) \ \ \forall(y,v)\in \ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d \right\}. \end{equation} The set $\mathcal{Z}$ is characterized by the first-order optimality conditions: \begin{align} \mathcal{Z}= \left\{(x,u)\in \ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d: 0\in\partial J(x)+A^{*}u\hspace{2mm}\text{ and }\hspace{2mm}Ax=b\right\}. \end{align} In the following, we always assume that $\mathcal{Z}\neq \emptyset.$ \ \\ \begin{remark}[Saddle points and primal-dual solutions] The set of saddle points is ensured to be nonempty when some qualification condition holds (see \cite[Proposition 6.19]{bauschke2011convex} for special cases), for instance when \begin{align} b\in \ensuremath{\operatorname{ri}}\left(A\left(\ensuremath{\operatorname{dom}} J\right)\right). \label{e: qualication conditions} \end{align} Observe that the objective function of \eqref{P: problem} is the sum of two functions in $\Gamma_{0}(\ensuremath{\mathbb{R}}^p)$, one of which is composed with a linear operator. This formulation is suitable for applying Fenchel-Rockafellar duality. Recalling that $\iota^{*}_{\{b\}}(u)=\scal{u}{b}$ \cite[Example 13.3(i)]{bauschke2011convex}, the dual problem of \eqref{P: problem} is given by \begin{align} \min_{u\in \ensuremath{\mathbb{R}}^d} J^{*}(-A^*u)+\scal{u}{b}. \label{P: Pd} \tag{$\mathcal{D}$} \end{align} We denote its optimal value by $\mu_{*}$ and by $\ensuremath{{\mathcal S}}^{*}$ its set of minimizers. Then, $\mathcal{Z}\subseteq\ensuremath{{\mathcal S}}\times \ensuremath{{\mathcal S}}^{*}$, and equality holds if \eqref{e: qualication conditions} is satisfied \cite[Proposition 19.21 (v)]{bauschke2011convex}.\ \\ In addition, condition \eqref{e: qualication conditions} implies that problem \eqref{P: Pd} has a solution. Then, under the qualification condition, since we assumed that $\ensuremath{{\mathcal S}}\neq\emptyset$, we also derive that $\mathcal{Z}\neq \emptyset$. \end{remark} \ \\ In practical situations, the exact data $b$ is unknown and only a noisy version is accessible. Given a noise level $\delta\geq0$, we consider a worst-case scenario, where the error is deterministic and the accessible data $b^\delta$ is such that \begin{equation} \|b^{\delta}-b\|\leq\delta. \end{equation} This is the classical model in inverse problems \cite{engl1996regularization,kaltenbacher2008iterative}.
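For concreteness, the following minimal Python sketch builds an instance of this data model; the dimensions, the random matrix, and the sparse ground truth are illustrative assumptions, while the normalization of the perturbation enforces $\|b^{\delta}-b\|=\delta$ exactly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, p, delta = 50, 100, 1e-2                  # illustrative sizes and noise level

A = rng.standard_normal((d, p))              # generic measurement matrix
x_star = np.zeros(p)
x_star[:5] = 1.0                             # sparse quantity of interest
b = A @ x_star                               # exact data b = A x*

e = rng.standard_normal(d)
b_delta = b + delta * e / np.linalg.norm(e)  # noisy data, ||b_delta - b|| = delta
\end{verbatim}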
The solution set of the inexact linear system $Ax=b^{\delta}$ is denoted by $C_{\delta}$. Analogously, we denote by $\ensuremath{{\mathcal S}}_\delta$ and $\ensuremath{{\mathcal S}}_\delta^*$ the sets of primal and dual solutions with noisy data. It is worth pointing out that, if $b^\delta\not\in\ensuremath{\operatorname{ran}}(A)$, then $\ensuremath{{\mathcal S}}_\delta\subseteq C_{\delta}= \emptyset$, but our analysis and bounds still hold. \subsection{Primal-Dual Splittings with a priori Information}\label{s:pd} In this section, we propose an iterative regularization procedure to solve problem \eqref{P: problem}, based on a primal-dual algorithm with preconditioning and arbitrary activations of a predefined set of operators. While the use of primal-dual algorithms \cite{chambolle2011first} as iterative regularization methods is somewhat established \cite{molinari2021iterative}, in this paper we focus on the possibility of reusing the data constraints during the iterations. This idea was originally introduced in \cite{briceno2021random}, where the authors studied the case in which the exact data is available, and consists in the activation of extra operators, which encode information about the solution set, to improve the feasibility of the updates. In our setting, we can reuse the data constraints and project, in series or in parallel, onto some of the equations given by the (noisy) linear constraint. We will show that other interesting choices are also possible, such as projections onto the set of dual constraints. \ \\ More formally, for $i\in [m]$, we consider a finite number of operators $T_i\colon \ \ensuremath{\mathbb{R}}^p\to \ensuremath{\mathbb{R}}^p$ or $T_i\colon \ \ensuremath{\mathbb{R}}^d\to \ensuremath{\mathbb{R}}^d$, such that the set of noisy primal solutions is contained in $\ensuremath{\operatorname{Fix}} T_i$ for every $i\in [m]$. We refer to this as redundant a priori information. A list of operators suitable for our setting (and with a practical implementation) can be found in Section~\ref{s: app}. \ \\ The primal-dual algorithms with reuse of constraints given in Table~\ref{t:algos} are preconditioned and deterministic versions of the algorithm proposed in \cite{briceno2021random}, applied to the case of linearly constrained minimization. \begin{table}[ht!]
\label{t:algos} \centering \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{c c} \hspace{-5mm} \begin{tabular}{|m{65mm}|} \hline Primal-Dual splitting with activations \\ \hline\vspace{2mm} \textbf{Input}: $(\bar{p}^0,p^0,\dal^0)\in\ensuremath{\mathbb{R}}^{2p}\times\ensuremath{\mathbb{R}}^{d}$.\\\vspace{-0mm} \begin{flushleft}\textbf{For} $k=1,\ldots,\text{N:}$\end{flushleft}\vspace{-4mm}\\ \vspace{-10mm} \begin{align} \begin{array}{l} \dal^{k+1}= \dal^k+\Gamma( A\bar{p}^k-b^\delta)\\ \prim^{k+1}=\ensuremath{\operatorname{prox}}^{\Sigma }_{J}(p^k-\Sigma A^*\dal^{k+1})\\ p^{k+1}=T_{\epsilon_{k+1}}\prim^{k+1}\\ \bar{p}^{k+1}= p^{k+1}+ \prim^{k+1}-p^{k}, \end{array} \label{A: PDSP}\tag{PDA}\end{align}\vspace{-4mm}\\ \vspace{0mm}\begin{flushleft}\textbf{End}\end{flushleft} \\ \hline \end{tabular} & \hspace{-4mm} \begin{tabular}{|m{65mm}|} \hline Dual-Primal splitting with activations \\ \hline\vspace{2mm} \textbf{Input}: $(\prim^{0},\bar{\prop}^{0},\dal^0)\in \ensuremath{\mathbb{R}}^{p}\times\ensuremath{\mathbb{R}}^{2d}$.\\\vspace{-0mm} \begin{flushleft}\textbf{For} $k=1,\ldots,\text{N:}$\end{flushleft}\vspace{-4mm}\\ \vspace{-10mm} \begin{align} \begin{array}{l} \prim^{k+1}=\ensuremath{\operatorname{prox}}^{\Sigma }_{J}(\prim^k-\Sigma A^*\Bar{\prop}^{k})\\ \dal^{k+1}= \prop^k+\Gamma( A\prim^{k+1}-b^\delta)\\ \prop^{k+1}=T_{\epsilon_{k+1}}\dal^{k+1}\\ \bar{\prop}^{k+1}= \prop^{k+1}+ \dal^{k+1}-\prop^{k}, \end{array} \label{A: DPSP}\tag{DPA}\end{align}\vspace{-4mm}\\ \vspace{0mm}\begin{flushleft}\textbf{End}\end{flushleft} \\ \hline \end{tabular} \\ \end{tabular}}\caption{Proposed algorithms for iterative regularization.} \label{T:Alg} \end{table} We first focus on the Primal-Dual splitting. It is composed of four different steps, to be performed in series. The first step is the update of the dual variable, in which the residuals of the linear equation $Ax=b^\delta$ are accumulated after preconditioning by the operator $\Gamma$. The second step is an implicit prox-step, with function $J$ and norm $\|\cdot\|_{\Sigma^{-1}}$, on the primal variable. The third one is the activation of the operator related to reusing the data constraint, again on the primal variable. Finally, the last step is an extrapolation on the primal variable. Notice that, if no operator is activated, it corresponds simply to $\bar{p}^{k+1}= 2 \prim^{k+1}-\prim^{k}$, which is the classical update in the primal-dual algorithm. On the other hand, the Dual-Primal splitting algorithm, apart from a permutation of the order of the steps, differs from the previous one in that the activation of the operator is performed not on the primal variable but on the dual one. Indeed, Lemma \ref{L: PD=DP} establishes that, without the activation of the operator, the primal variables generated by \ref{A: PDSP} and the ones generated by \ref{A: DPSP} coincide. \\ \begin{remark} As already mentioned, our analysis can be easily extended to infinite-dimensional problems. In particular, note that the primal-dual algorithms above can be formulated in exactly the same way for infinite-dimensional problems. The convergence guarantees of the plain methods in Hilbert and Banach spaces have been studied in \cite{Condat13,Vu13,silveti2021stochastic}. Another possible extension of the algorithm, which we do not analyse explicitly in this work, concerns the stochastic version of primal-dual; see \cite{chambolle2018stochastic,alacaoglu2019convergence,gutierrez2021convergence}.
On the other hand, note that in \eqref{A: PDSP} the redundant activation of the data constraint is arbitrary. In particular, it can be chosen in a stochastic way at every iteration. \end{remark} \ \\ In the following, we list the assumptions that we require on the parameters and the operators involved in the algorithms. \begin{assumption}\label{A: structured error1} Consider the setting of \ref{A: PDSP} or \ref{A: DPSP}: \begin{enumerate} \item[($A1$)]\label{A: structured error1a} The preconditioners $\Sigma\in\ensuremath{\mathbb{R}}^{p\times p}$ and $\Gamma\in\ensuremath{\mathbb{R}}^{d\times d}$ are two diagonal positive definite matrices such that \begin{align} 0<\alpha:=1-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2. \label{c: ConditionL D1} \end{align} \item[($A2$)]\label{A: structured error1b} For every $k\in\mathbb{N}$, $\epsilon_k\in[m]$. \end{enumerate} Consider the setting of \ref{A: PDSP}: \begin{enumerate} \item[($A3$)]\label{A: structured error2} $\left\{T_{i}\right\}_{i\in [m]}$ is a family of operators from $\ensuremath{\mathbb{R}}^{p}$ to $\ensuremath{\mathbb{R}}^{p}$ and for every $i\in [m]$: \begin{enumerate} \item $\ensuremath{\operatorname{Fix}} T_{i}\supseteq\ensuremath{{\mathcal S}}_{\delta}$; \item there exists $e_i\geq 0$ such that, for every $\prim\in\ensuremath{\mathbb{R}}^p$ and $\bar{\prim}\in \ensuremath{{\mathcal S}}$, \begin{align} \label{A: pitagoras error 1} \|T_i\prim-\bar{\prim}\|_{\Sigma^{-1}}^2\leq \|\prim-\bar{\prim}\|_{\Sigma^{-1}}^2+e_i\delta^{2}.\end{align} We denote by $e=\max_{i\in[m]} e_i$. \end{enumerate} \label{c: ConditionL D3} \end{enumerate} Now consider the setting of \ref{A: DPSP}: \begin{enumerate} \item[($A4$)\label{A: structured error3}] $\left\{T_{i}\right\}_{i\in [m]}$ is a family of operators from $\ensuremath{\mathbb{R}}^{d}$ to $\ensuremath{\mathbb{R}}^{d}$ and for every $i\in [m]$: \begin{enumerate} \item $\ensuremath{\operatorname{Fix}} T_{i}\supseteq\ensuremath{{\mathcal S}}^{*}_{\delta}\neq\emptyset$; \item for every $u\in\ensuremath{\mathbb{R}}^d$ and $\bar{u}\in \ensuremath{{\mathcal S}}^{*}_{\delta}$, \begin{align} \label{A: pitagoras error 2} \|T_iu-\bar{u}\|_{\Gamma^{-1}}^2\leq \|u-\bar{u}\|_{\Gamma^{-1}}^2.\end{align} \end{enumerate} \label{c: ConditionL D4} \end{enumerate} \end{assumption} \begin{remark}[Hypotheses on the operators] If Assumption A3-(a) holds and $\delta=0$, Assumption A3-(b) is implied by quasi-non-expansivity of $T_i$ on $\ensuremath{{\mathcal S}}$. This is a weaker condition than the one proposed in \cite{briceno2021random}, where, due to the generality of the setting, $\alpha$-averaged non-expansive operators are needed. A similar reasoning applies to Assumption A4. \end{remark} \section{Main results} \label{s: MR} In this section, we present and discuss the main results of the paper. We derive stability properties of the primal-dual and dual-primal splittings for linearly constrained optimization with a priori information. \ \\ First, we define the averaged iterates and the squared weighted norm induced by $\Sigma$ and $\Gamma$ on $\ensuremath{\mathbb{R}}^p\times\ensuremath{\mathbb{R}}^d$, namely \begin{align} \left(\Prim^n,\Dal^n\right):=\frac{\sum_{k=1}^{n}z^{k}}{n}\hspace{1mm} \text{ and }\hspace{1mm} V(z):=\frac{\|\prim\|_{\Sigma^{-1}}^2}{2}+\frac{\|\dal\|_{\Gamma^{-1}}^2}{2}, \label{D: Wnorm} \end{align} where $z^{k}:=(\prim^{k},\dal^{k})$ is the $k$-th iterate and $z:=(\prim,\dal)$ is a primal-dual variable.
We also recall the definition of the Lagrangian, $\mathcal{L}(\prim,\dal):=J(\prim)+\scal{\dal}{A\prim-b}$. The first result establishes the stability properties of algorithm \ref{A: PDSP}, both in terms of the Lagrangian gap and of the feasibility gap. We recall that here we use activation operators based on the noisy feasibility constraints in the primal space, namely the set $C_\delta$. \begin{theorem} \label{Th:PPD}Consider the setting of \ref{A: PDSP} under Assumptions A1, A2, and A3. Let $(\bar{p}^0,p^{0},\dal^{0})\in\ensuremath{\mathbb{R}}^{2p}\times\ensuremath{\mathbb{R}}^{d}$ be such that $p^0=\bar{p}^{0}$. Then, for every $z~=~(\prim,\dal)~\in~\mathcal{Z}$ and for every $N\in\ensuremath{\mathbb N}$, we have \begin{align} \mathcal{L}\left(\Prim^{N},\dal\right)- \mathcal{L}\left(\prim,\Dal^{N}\right)\leq& \frac{V(z^{0}-z)}{N}+\frac{2N\|\Gamma^{\frac{1}{2}}\|^2\delta^{2}}{\alpha}+\delta\|\Gamma^{\frac{1}{2}}\|\left(\frac{2 V(z^{0}-z)}{\alpha}\right)^{\frac{1}{2}}\nonumber\\ &+\delta\|\Gamma^{\frac{1}{2}}\|\left(\frac{ N e\delta^2}{\alpha}\right)^{\frac{1}{2}}+\frac{e\delta^2}{2} \hspace{2mm}\label{e: DG} \end{align} and \begin{align} \|A\Prim^N-b\|^2\leq&\frac{16N\|\Gamma\|\|\Gamma^{-1}\|\delta^{2}}{\alpha^{2}}+8\delta\|\Gamma^{-1}\|\left(\frac{2\|\Gamma\| V(z^{0}-z)}{\alpha^3}\right)^{\frac{1}{2}}+8\delta^{2}\|\Gamma^{-1}\|\left(\frac{ \|\Gamma\|e N}{\alpha^3}\right)^{\frac{1}{2}}\nonumber\\ &+\frac{8\|\Gamma^{-1}\|V(z^{0}-z)}{N\alpha}+2\delta^{2}+\frac{4\|\Gamma^{-1}\|e\delta^2}{\alpha}, \label{e: RN} \end{align} where we recall that the constants $\alpha$ and $e$ are defined in Assumptions A1 and A3, respectively. \end{theorem} The proof of Theorem~\ref{Th:PPD} is given in the Appendix, Section \ref{Proof:PPD}. It combines and extends the techniques developed in \cite{briceno2021random} and \cite{molinari2021iterative}, based on the firm non-expansivity of the proximal point operator and on a discrete Bihari lemma to deal with the error; see also \cite{rasch2020inexact}. In the next result, we establish upper bounds for the Lagrangian and feasibility gap analogous to those proposed in Theorem \ref{Th:PPD}, but for algorithm \ref{A: DPSP}. The main difference is that now the activation step is based on a priori information in the dual space $\ensuremath{\mathbb{R}}^d$, and not on $C_{\delta}$. This information is represented by the intersection of the fixed point sets of a finite number of operators and encodes some knowledge about the dual solutions. \begin{theorem} \label{Th:PDP} Consider the setting of \ref{A: DPSP} under Assumptions A1, A2, and A4. Let $(\bar{\prop}^0,\prop^{0},\prim^{0})\in \ensuremath{\mathbb{R}}^{2d}\times\ensuremath{\mathbb{R}}^{p}$ be such that $\prop^0=\bar{\prop}^{0}$. Then, for every $z~=~(\prim,\dal)~\in~\mathcal{Z}$ and for every $N\in \ensuremath{\mathbb N}$, we have that\\ \scalebox{0.99}{\parbox{\linewidth}{\begin{align} \label{B: Dual-lagrangian} \mathcal{L}\left(\Prim^{N},\dal\right)- \mathcal{L}\left(\prim,\Dal^{N}\right)\leq& \frac{V(z^{0}-z)}{N}+2\|\Gamma^{\frac{1}{2}}\|^{2} N\delta^{2}+\|\Gamma^{\frac{1}{2}}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}, \hspace{2mm}\end{align}}}\\ and \\\scalebox{0.99}{\parbox{\linewidth}{\begin{align} \label{B: Dual-feasibility} \|A\Prim^N-b\|^2\leq& \frac{8\|\Gamma^{\frac{1}{2}}\|^{2}\|\Gamma^{-1}\| N\delta^{2}}{\alpha}+\frac{4\|\Gamma^{\frac{1}{2}}\|\|\Gamma^{-1}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}}{\alpha}\nonumber\\&+\frac{4\|\Gamma^{-1}\|V(z^{0}-z)}{N\alpha}+2 \delta^{2}.
\end{align}}} \\ where we recall that the constant $\alpha$ is defined in Assumption A1. \end{theorem} The proof is given in the Appendix, Section \ref{Proof:PDP}.\\ \\ \ First, we comment on the chosen optimality measures. If the penalty is strongly convex, the Bregman divergence is an upper bound for the squared norm of the difference between the reconstructed and the ideal solution, while if $J$ is only convex, the Bregman divergence gives only limited information. As discussed in \cite{rasch2020inexact}, the Lagrangian gap is equivalent to the Bregman distance of the iterates to the solution and, in general, it is a very weak convergence measure. For instance, in the exact case, a vanishing Lagrangian gap does not imply that cluster points of the generated sequence are primal solutions. However, as can be derived from \cite{molinari2021iterative}, a vanishing Lagrangian gap coupled with a vanishing feasibility gap implies that every cluster point of the primal sequence is a solution of the primal problem. In both theorems, the established results ensure that the two optimality measures can be upper bounded by the sum of two terms. The first one, which can be interpreted as an optimization error, is of the order $\mathcal{O}(N^{-1})$ and goes to zero as $N$ tends to $+\infty$. Note that, in the exact case $\delta=0$, only this term is present and both the Lagrangian and the feasibility gap are indeed vanishing, guaranteeing that every cluster point of the sequence is a primal solution. The second term, which can be interpreted as a stability control, collects all the errors due to the perturbation of the exact datum and also takes into account the presence of the activation operators $T$, when the reused data constraint is noisy. It is an increasing function of the number of iterations and of the noise level $\delta$. \begin{remark} Theorems~\ref{Th:PPD} and \ref{Th:PDP} are an extension of \cite{briceno2021random}, where the authors prove that the sequence generated by the algorithms converges to an element of $\mathcal{Z}$ when $\delta=0$, but neither convergence rates nor stability bounds were given. In this work, we fill this gap for linearly constrained convex optimization problems. \ \\ Moreover, in the noise-free case, our assumptions on the additional operators $T$ are weaker than those proposed in \cite{briceno2021random}, where $\alpha$-averagedness is required. For the noisy case, without the activation operators (so with $e=0$), our bounds are of the same order as those in \cite{molinari2021iterative} in the number of iterations and the noise level. \end{remark} As mentioned above, in \eqref{e: DG} and \eqref{e: RN}, when $\delta>0$ and $N\rightarrow +\infty$, the upper bounds for the \ref{A: PDSP} iterates tend to infinity, and the iteration may not converge to the desired solution. The same comment applies to the \ref{A: DPSP} iterates, based on \eqref{B: Dual-lagrangian} and \eqref{B: Dual-feasibility}. In both cases, to obtain a minimal reconstruction error, we need to impose a trade-off between convergence and stability. The next corollary introduces an early stopping criterion, depending only on the noise level and leading to a stable reconstruction. \begin{corollary}\label{ESPDA} (Early stopping). Under the assumptions of Theorem \ref{Th:PPD} or Theorem~\ref{Th:PDP}, choose $N={c}/{\delta}$ for some $c>0$.
Then, for every $z~=~(\prim,\dal)~\in~\mathcal{Z}$, there exist constants $C_1$, $C_2$, and $C_3$ such that \begin{align} \mathcal{L}\left(\Prim^{N},\dal\right)- \mathcal{L}\left(\prim,\Dal^{N}\right)\leq& C_1\delta\nonumber\\ \|A\Prim^N-b\|^2\leq& C_2\delta+ C_3\delta^{2}. \label{e: earlystoppingp} \end{align} \end{corollary} The early stopping rule prescribed above is computationally efficient, in the sense that the number of iterations is proportional to the inverse of the noise level. In particular, if the error $\delta$ is small, then more iterations are useful, while if $\delta$ is large, it is convenient to stop earlier. So, the number of iterations plays the role of a regularization parameter. Using the early stopping strategy proposed above, we see that the error in the data transfers to the error in the solution with the same noise level, which is the best that one can expect for a general operator $A$. \begin{remark}\textbf{Comparison with Tikhonov regularization.} The reconstruction properties of our proposed algorithms are comparable to the ones obtained using Tikhonov regularization \cite{engl1996regularization}, with the same dependence on the noise level \cite{benning2011error}. We underline that in the latter work only the Bregman divergence is considered, and not the feasibility. One main difference between Tikhonov and iterative regularization techniques is the fact that the Tikhonov parameter $\lambda$ is a continuous regularization parameter, while the iteration counter is a discrete one. This may be seen as a disadvantage, but in practice it can be compensated by choosing a smaller step-size in the algorithm. On the other hand, iterative regularization is much more efficient from the computational point of view, as it requires the solution of only one optimization problem, while explicit regularization amounts to solving a family of problems indexed by the regularization parameter. Let us also note that, when $\delta$ is unknown, any principle used to determine a suitable $\lambda$ can be used to determine the stopping time. \end{remark} \section{Implementation details} \label{s: app} In this section we discuss some standard choices to construct non-expansive operators $T$ that satisfy our assumptions and encode some redundant information on the solution set. We first present examples for \ref{A: PDSP}, and then for \ref{A: DPSP}. To define the operators, we first recall the projection onto a single equation of the linear system. For every $j\in [d]$ we denote by $a_j$ the $j$-th row of $A$ and by $P_j$ the projection onto the $j$-th linear equation; namely, \begin{align} P_{j}\colon\mathbb{R}^p \to \mathbb{R}^p, \hspace{2mm} \prim \mapsto \prim+\frac{b_{j}-\scal{a_{j}}{\prim}}{\|a_{j}\|^2}a_{j}^{*}. \label{d: averaged} \end{align} Analogously, for every $j\in [d]$, we denote by $P^\delta_{j}$ the projection operator defined as above but with the noisy data $b^\delta$ in place of $b$. We now define the four families of operators proposed in this paper for \ref{A: PDSP}. \begin{definition}\label{O: Operators} The operator $T\colon \ensuremath{\mathbb{R}}^{p}\to\ensuremath{\mathbb{R}}^{p}$ is a \begin{enumerate} \item \textbf{Serial projection} if \begin{align} T=P^{\delta}_{\beta_{l}}\circ\cdots\circ P^{\delta}_{\beta_{1}}, \label{d: averaged1} \end{align} where, for every $j\in [l]$, $\beta_{j}\in [d]$.
\item \textbf{Parallel projection} if \begin{align} T=\sum\limits_{j=1}^{l}\alpha_{j} P^{\delta}_{\beta_{j}}, \label{d: averaged2} \end{align} where, for every $j\in [l]$, $\beta_{j}\in [d]$, and $\left(\alpha_{j}\right)_{j=1}^{l}$ are real numbers in $[0,1]$ such that $\sum\limits_{j=1}^{l}\alpha_{j}=1$. \item \textbf{Landweber operator} with parameter $\alpha$ if \begin{align} T\colon\mathbb{R}^p \to \mathbb{R}^p, \hspace{2mm} \prim \mapsto \prim-\alpha A^{*}(A\prim-b^\delta), \label{d: averaged3} \end{align} where $\alpha\in ]0,\frac{2}{\|A\|^2}[$. \item \textbf{Landweber operator with adaptive step} and parameter $M$ if \begin{align} T\colon\mathbb{R}^p \to \mathbb{R}^p, \hspace{2mm} \prim \mapsto \left\{ \begin{array}{ll} \prim-\beta(x) A^{*}(A\prim-b^\delta) & \text{\ \ \ if } A^{*}A\prim\neq A^{*}b^\delta \\ \prim & \text{\ \ \ otherwise,} \end{array}\right. \label{d: averaged4} \end{align} where, for $M>0$, $\beta(x)=\min\left(\frac{\|A\prim-b^\delta\|^2}{\|A^{*}(A\prim-b^\delta)\|^2},M\right)$. \end{enumerate} \end{definition} The next lemma states that the operators in Definition~\ref{O: Operators} satisfy Assumption A3. \begin{lemma} \label{L: Series Parallel} Let $T\colon\ensuremath{\mathbb{R}}^p\to\ensuremath{\mathbb{R}}^p$ be one of the operators given in Definition~\ref{O: Operators}. Then Assumption A3 holds with \begin{enumerate} \item $e_T=\sum_{j=1}^l \frac{1}{\|a_{\beta_j}\|^2}$, if $T$ is a serial projection; \item $e_T=\sum_{j=1}^l\frac{\alpha_{j}}{\|a_{\beta_j}\|^2}$, if $T$ is a parallel projection; \item $e_T=\frac{\alpha}{2-\alpha\|A\|^2}$, if $T$ is the Landweber operator with parameter $\alpha$; \item $e_T=M$, if $T$ is the Landweber operator with adaptive step and parameter $M$. \end{enumerate} \end{lemma} \begin{remark}\label{R: Parallel-Landweber} \textbf{Relationship between parallel projections and the Landweber operator}. A particular parallel projection is the one corresponding to $l=d$, $\beta_{j}=j$, and $\alpha_{j}=\frac{\|a_j\|^2}{\|A\|_{F}^2}$. Then, \eqref{d: averaged2} reduces to \begin{equation} T(x)=x-\frac{1}{\|A\|_{F}^2}A^{*}(Ax-b^\delta).\label{T:Land-Paralell} \end{equation} Observe that, since $\|A\|\leq \|A\|_{F}$, the previous is a special case of the Landweber operator with $\alpha=\frac{1}{\|A\|_{F}^2}$. \end{remark} \begin{remark}\textbf{Steepest descent}. Let $\bar{\prim}\in \ensuremath{\mathbb{R}}^p$ be such that $A\bar{\prim}=b$. Then, from \eqref{d: averaged4}, we derive (see also equation \eqref{e:Steepest descentbeta} in the Appendix) \begin{align} \|T\prim-\bar{\prim}\|^2&=\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{b^{\delta}-b}{A\prim-b^\delta}-2\beta(x)\|A\prim-b^\delta\|^2\nonumber\\&\hspace{2mm}+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2.\label{e:Steepest descentbeta0} \end{align} If $\delta=0$, then the choice of $\beta(x)$ given in \eqref{d: averaged4} minimizes the right-hand side of \eqref{e:Steepest descentbeta0}, provided that the minimizer is smaller than $M$. In this case, $\beta$ is chosen in order to maximize the contractivity with respect to a fixed point of $T$. While we cannot repeat the same procedure for $\delta > 0$, since we do not know $b$, we still keep the same choice. If $b^\delta\in \ensuremath{\operatorname{ran}}(A)$, then $\sup\limits_{x\in\ensuremath{\mathbb{R}}^p}\frac{\|A\prim-b^\delta\|^2}{\|A^{*}(A\prim-b^\delta)\|^2}<+\infty$. However, in general, if $\delta > 0$, this is not true and $M$ is needed to ensure that $\beta(x)$ is bounded.
\end{remark} \begin{remark} From a computational point of view, parallel projections and Landweber operators are more efficient than serial projections. In particular, note that the quantity $(Ax^{k}-b^\delta)$ needs to be computed anyway in the other steps of the algorithm. \end{remark} While in the primal space the reused data constraint that we want to exploit is clearly given by the linear constraint, in the dual space this is not always the case. In the following we present an example related to the $\ell^1$-norm. A similar implementation can be extended to the case of $1$-homogeneous penalty functions, for which the Fenchel conjugate is the indicator of a closed and convex subset of the dual space \cite[Proposition 14.11 (ii)]{bauschke2011convex}. \begin{example} \label{e:dpl1} Consider the noisy version of problem \eqref{P: problem} with $J(x)= \|x\|_1$. Then the dual is given by \[ \min_{u\in\ensuremath{\mathbb{R}}^d} \langle b^\delta, u\rangle \,:\, |(A^*u)_i| \leq 1, \text{ for every $i\in [p]$}. \] For every $i\in [p]$, set $D_i=\{u\in\ensuremath{\mathbb{R}}^d\,:\, |(A^*u)_i| \leq 1\}$ and denote by $T_i$ the projection onto $D_i$. Note that this is trivial to compute, since it is the projection onto the slab delimited by two parallel hyperplanes. Clearly, Assumption A4 holds. In contrast to the primal case, here we project onto exact constraints, independent of the noisy data $b^{\delta}$. \end{example} \section{Numerical results} In this section, to test the efficiency of the proposed algorithms, we perform numerical experiments in two relevant settings: regularization with the $\ell^1$-norm and total variation regularization. For the $\ell^1$-norm regularization, we compare our results with other regularization techniques. In the more complex problem of total variation, we explore the properties of different variants of our procedure. \textbf{Code statement:} All numerical examples are implemented in MATLAB\textsuperscript{\textregistered} on a laptop. In the second experiment we also use the library Numerical tours \cite{peyre2011numerical}. The corresponding code can be downloaded at \href{https://github.com/cristianvega1995/L1-TV-Experiments-of-Fast-iterative-regularization-by-reusing-data-constraints} {https://github.com/cristianvega1995/L1-TV-Experiments-of-Fast-iterative-regularization-by-reusing-data-constraints} \subsection{$\ell^1$-norm regularization} In this section, we apply the routines \ref{A: PDSP} and \ref{A: DPSP} when $J$ is equal to the $\ell^1$-norm. We compare the results given by our methods with two state-of-the-art regularization procedures: iterative regularization by vanilla primal-dual \cite{molinari2021iterative}, and Tikhonov explicit regularization, using the forward-backward algorithm \cite{Combettes_Wajs2005}. In addition, we compare with another classical optimization algorithm for the minimization of the sum of two non-differentiable functions, namely Douglas-Rachford \cite{briceno2012douglas}. In the noise-free case, this algorithm is very effective in terms of number of iterations, but at each iteration it requires the explicit projection onto the feasible set. In the noisy case, a stability analysis of this method is not available. We use the four variants of algorithm \ref{A: PDSP} corresponding to the different choices of the operator $T$ in Definition \ref{O: Operators} and the version of \ref{A: DPSP} described in Example~\ref{e:dpl1}.
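As an illustration of how these variants can be implemented, the following Python sketch (a schematic rendering, not the released MATLAB code; all function names are ours) implements \ref{A: PDSP} for $J=\|\cdot\|_{1}$ with scalar preconditioners $\Sigma=\sigma\ensuremath{\operatorname{Id}}\,$ and $\Gamma=\gamma\ensuremath{\operatorname{Id}}\,$, together with two activation operators from Definition~\ref{O: Operators} and the dual slab projection of Example~\ref{e:dpl1}.
\begin{verbatim}
import numpy as np

def soft_threshold(x, t):
    # prox of t*||.||_1 in the Euclidean metric
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def landweber(A, b_delta, alpha):
    # Landweber operator with fixed step alpha in ]0, 2/||A||^2[
    return lambda x: x - alpha * (A.T @ (A @ x - b_delta))

def landweber_adaptive(A, b_delta, M=1e6):
    # Landweber operator with adaptive step, capped by M
    def T(x):
        r = A @ x - b_delta
        g = A.T @ r
        ng = g @ g
        if ng == 0.0:
            return x
        return x - min(r @ r / ng, M) * g
    return T

def project_slab(u, c):
    # projection onto {u : |<c, u>| <= 1}; usable as dual activation
    t = c @ u
    if t > 1.0:
        return u - (t - 1.0) * c / (c @ c)
    if t < -1.0:
        return u - (t + 1.0) * c / (c @ c)
    return u

def pda_l1(A, b_delta, T, n_iter):
    # (PDA) for J = ||.||_1 with Sigma = sigma*Id, Gamma = gamma*Id;
    # sigma*gamma*||A||^2 < 1 enforces assumption (A1)
    sigma = gamma = 0.99 / np.linalg.norm(A, 2)
    d, p = A.shape
    x = np.zeros(p); pk = np.zeros(p); p_bar = np.zeros(p); u = np.zeros(d)
    history = []
    for _ in range(n_iter):
        u = u + gamma * (A @ p_bar - b_delta)              # dual step
        x = soft_threshold(pk - sigma * (A.T @ u), sigma)  # primal prox step
        p_new = T(x)                                       # activation step
        p_bar = p_new + x - pk                             # extrapolation
        pk = p_new
        history.append(x.copy())
    return history
\end{verbatim}
For instance, \texttt{pda\_l1(A, b\_delta, landweber(A, b\_delta, 1.0 / np.linalg.norm(A, 2)**2), 50)} runs fifty iterations of the Landweber variant; the early stopped iterate is then selected from \texttt{history}.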
Unless otherwise stated, in all the experiments we use as preconditioners $\Sigma=\Gamma=\frac{0.99}{\|A\|} \ensuremath{\operatorname{Id}}\,$, which satisfy \eqref{c: ConditionL D1}. Let $d=2260$, $p=3000$, and let $A\in\ensuremath{\mathbb{R}}^{d\times p}$ be such that every entry of the matrix is an independent sample from $\mathcal{N}(0,1)$, then normalized column by column. We set $b:=Ax^{*}$, where $x^{*}\in \ensuremath{\mathbb{R}}^{p}$ is a sparse vector with approximately $300$ nonzero entries uniformly distributed in the interval $[0,1]$. It follows from \cite[Theorem 9.18]{foucart2013invitation} that $x^{*}$ is the unique minimizer of the problem with probability greater than $0.99$. Let $b^\delta$ be such that $b^\delta=b+\|b\| u$, where the entries of the vector $u$ are i.i.d. with distribution $U[-0.2,0.2]$. In this experiment, to test the reconstruction capabilities of our method, we use the exact datum $x^{*}$ to establish the best stopping time, i.e. the one minimizing $\|x^k-x^{*}\|$. The exact solution is also used in the same way for the other regularization techniques. In a real practical situation, if $\delta$ is unknown, we would need to use parameter tuning techniques in order to select the optimal stopping time, but we do not address this aspect here. We detail the used algorithms and their parameters below. \begin{itemize} \item[(Tik)] \textbf{Tikhonov regularization}: We consider a grid of penalty parameters $$G=\left\{\left(1-\frac{l-1}{5}\right)10^{1-j}\|A^{*}b^\delta\|_{\infty} : \ \ l\in[5], \ j\in [6]\right\}$$ and, for each value $\lambda\in G$, the optimization problem \begin{equation} \label{ProbTyk} \min\limits_{x\in\ensuremath{\mathbb{R}}^{p}}~\left\{\lambda\|x\|_{1}~+~\frac{1}{2}\|Ax-b^\delta\|^2\right\}. \end{equation} We solve each of the previous problems with $300$ iterations of the forward-backward algorithm, unless the stopping criterion $\|x^{k+1}-x^{k}\|\leq 10^{-3}$ is satisfied earlier. Moreover, to deal efficiently with the sequence of problems, we use warm restart \cite{becker2011nesta}: we first solve problem \eqref{ProbTyk} for the biggest value of $\lambda$ in $G$; then we initialize the algorithm for the next value of $\lambda$, in decreasing order, with the solution reached for the previous one; and so on (a schematic implementation is sketched at the end of this subsection). \item[(DR)] \textbf{Douglas-Rachford}: see \cite[Theorem 3.1]{briceno2012douglas}. \item[(PD)] \textbf{Primal-dual}: this corresponds to \ref{A: PDSP} with $m=1$ and $T_1=\ensuremath{\operatorname{Id}}\,$. \item[(PDS)] \textbf{Primal-dual with serial projections}: at every iteration, we compute a serial projection using all the equations of the noisy system, where the order of the projections is given by a random shuffle. \item[(PDP)]\textbf{Primal-dual with parallel projections}: $m=1$ and $T_1x=x-\frac{1}{\|A\|_{F}^2}A^{*}(Ax-b^{\delta})$, see Remark \ref{R: Parallel-Landweber}. \item[(PDL)]\textbf{Primal-dual Landweber}: $m=1$ and $T_1x=x-\frac{2}{\|A\|^2}A^{*}(Ax-b^{\delta})$. \item[(PDAL)] \textbf{Primal-dual Landweber with adaptive step}: $m=1$, and $T_1x~=~x~-\beta(x)A^{*}~(Ax~-~b^{\delta})$, where $\beta(x)=\min\left(\frac{\|Ax-b^\delta\|^2}{\|A^{*}(Ax-b^\delta)\|^2}, M\right)$ for $M=10^{6}$. \item[(DPS)]\textbf{Dual-primal with serial projections}: at every iteration, we compute a serial projection over every inequality of $\|A^{*}u\|_{\infty}\leq 1$, where the order is given by a random shuffle of the rows of $A^*$.
\end{itemize} \begin{table}[ht!]\begin{center} \begin{tabular}{|l|l|l|l|} \hline & Time [s] & Iterations & \begin{tabular}[c]{@{}l@{}}Reconstruction \\ error\end{tabular} \\ \hline Tik & 1.89 & 109 & 3.07 \\ \hline DR & 3.08 & 5 & 5.01 \\ \hline PD & 0.36 & 14 & 3.11 \\ \hline PDS & 1.41 & 11 & 2.58 \\ \hline PDP & 0.35 & 14 & 3.11 \\ \hline PDL & \textcolor{red}{0.28} & 12 & \textcolor{red}{2.60} \\ \hline PDAL & \textcolor{red}{0.27} & 11 & \textcolor{red}{2.56} \\ \hline DPS & 0.54 & 17 & 2.83 \\ \hline \end{tabular}\caption{Run-time and number of iterations of each method until it reaches its best reconstruction error. We compare the proposed algorithms with Tikhonov regularization (Tik), Douglas-Rachford (DR), and iterative regularization by vanilla primal-dual (PD).} \label{table:1} \end{center} \end{table} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{NF_Error.jpg} \caption{Graphical representation of early stopping. Note that the first iterates get closer to the noise-free solution, and then the iterates converge to the noisy solution.} \label{fig: Early stopping} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{Feas.jpg} \caption{Early stopping with respect to the feasibility gap. Note that the results are similar to those in the previous figure.} \label{fig: Early stoppingFeas} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.25]{Tik.jpg} \caption{Reconstruction error of the Tikhonov method for different penalty parameters.} \label{fig: Tik} \end{figure} In Table \ref{table:1}, we also report the number of iterations needed to achieve the best reconstruction error, but it is important to note that an iteration of each method has a different computational cost, so the run-time is a more appropriate comparison criterion. Douglas-Rachford with early stopping is the regularization method performing worst in this example, both in terms of time and of reconstruction error. This behavior may be explained by the fact that this algorithm converges fast to the noisy solution, from which we infer that Douglas-Rachford is not a good algorithm for iterative regularization. Moreover, since we project onto the noisy feasible set at every iteration, the resolution of a linear system is needed at every step. This also explains the cost of each iteration in terms of time. Note, in addition, that in our example $b^\delta$ is in the range of $A$, and so the noisy feasible set is nonempty. Tikhonov regularization performs similarly in terms of time, but it requires many more (cheaper) iterations. The achieved error is smaller than the one of DR, but bigger than the minimal one achieved by the other methods. Regarding our proposals, we observe that the proposed methods perform better than (PD). This supports the idea that reusing the data constraints is beneficial with respect to the vanilla primal-dual algorithm. The benefit is not evident for (PDP), which achieves the same reconstruction error as (PD), since $\|A\|^2_F$ is very large and so $T_1$ is very close to the identity. All the other methods give better results in terms of reconstruction error. (PDS) is the slowest of our methods, since it requires computing several projections at each iteration in a serial manner. We also observe that (PDL) and (PDAL) have the best performance, improving upon (PD) by 22.2\% and 25.0\% in run-time and by 16.4\% and 17.7\% in reconstruction error, respectively. Figure \ref{fig: Early stopping} empirically shows the existence of the trade-off between convergence and stability for all the algorithms, and therefore the advantage of early stopping. Similar results were obtained for the feasibility gap, see Figure \ref{fig: Early stoppingFeas}.
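The warm-restart strategy used for (Tik) can be summarized by the following Python sketch; it is a schematic rendering under the parameters described above ($300$ forward-backward iterations, tolerance $10^{-3}$), not the released code.
\begin{verbatim}
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b_delta, lam, x0, max_iter=300, tol=1e-3):
    # forward-backward iterations for  lam*||x||_1 + 0.5*||A x - b_delta||^2
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # step size in ]0, 2/||A||^2[
    x = x0
    for _ in range(max_iter):
        x_new = soft_threshold(x - step * (A.T @ (A @ x - b_delta)), step * lam)
        if np.linalg.norm(x_new - x) <= tol:    # early exit criterion
            return x_new
        x = x_new
    return x

def tikhonov_warm_restart(A, b_delta, grid):
    # solve the problems for decreasing lambda, warm starting each run
    x = np.zeros(A.shape[1])
    solutions = []
    for lam in sorted(grid, reverse=True):
        x = forward_backward(A, b_delta, lam, x)
        solutions.append((lam, x.copy()))
    return solutions
\end{verbatim}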
\subsection{Total variation} In this section, we perform several numerical experiments using the proposed algorithms for image denoising and deblurring. As in the classical image denoising method introduced by Rudin, Osher, and Fatemi in \cite{rudin1992nonlinear}, we rely on the total variation regularizer. See also \cite{rudin1992nonlinear,osher1990feature,rudin1994total,chambolle2004algorithm,chambolle1997image,osher2005iterative,xiao2010dual}. We compare (PD) with the (PDL) and (PDAL) algorithms, which were the algorithms performing best in the previous application. In this section, we use two different preconditioners, which have been proved to be very efficient in practice \cite{pock2011diagonal}. Let $x^{*} \in \mathbb{R}^{N^2}$ represent an image with $N\times N$ pixels in $[0,1]$. We want to recover $x^{*}$ from a blurry and noisy measurement $y$, i.e. from \begin{align} y=Kx^{*}+e, \end{align} where $K$ is a linear bounded blurring operator and $e$ is a random noise vector. A standard approach is to assume that the original image is well approximated by the solution of the following constrained minimization problem: \begin{align} \label{D:ROF} \tag{TV} \min\limits_{u\in \ensuremath{\mathbb{R}}^{N\times N}}&\|Du\|_{1,2}\nonumber\\\text{s.t.}&\hspace{2mm}Ku=y\nonumber. \end{align} In the previous formulation, \begin{align} \|\cdot\|_{1,2}\colon (\ensuremath{\mathbb{R}}^{2})^{N^2}\rightarrow \ensuremath{\mathbb{R}}\colon p\mapsto \sum_{i=1}^{N}\sum_{j=1}^{N}\|p_{ij}\|, \end{align} and $D\colon \ensuremath{\mathbb{R}}^{N^2}\rightarrow (\ensuremath{\mathbb{R}}^{2})^{N^2}$ is the discrete gradient operator for images, which is defined as \begin{align} \left(D u\right)_{ij}=&\left((D_{x}u)_{ij},(D_{y}u)_{ij}\right) \end{align} with \begin{align} \left(D_y u\right)_{ij}= &\left\{ \begin{array}{cc} u_{i+1,j}-u_{i,j} & \text{if } 1\leq i\leq N-1 \\ 0 & \text{if } i=N \end{array}\right.\nonumber\\ \left(D_x u\right)_{ij}=&\left\{ \begin{array}{cc} u_{i,j+1}-u_{i,j} & \text{if } 1\leq j\leq N-1 \\ 0 & \text{if } j=N. \end{array}\right.\nonumber \end{align} In order to avoid the computation of the proximity operator of $\| D \cdot\|_{1,2} $, we introduce an auxiliary variable $v=Du \in Y:=\ensuremath{\mathbb{R}}^{2N^2}$. Since the value of each pixel must belong to $[0,1]$, we add the constraint $u\in X:=[0,1]^{N^2}$. In this way, \eqref{D:ROF} becomes \begin{align} \label{D:ROFL1} \tag{TV$'$} \min\limits_{(u,v)\in X\times Y}&\|v\|_{1,2}\nonumber\\\text{s.t.}&\hspace{2mm}Ku=y\nonumber\\ &\hspace{2mm}Du=v\nonumber. \end{align} \subsubsection{Formulation and Algorithms} Problem~\eqref{D:ROFL1} is a special instance of \eqref{P: problem}, with \begin{align} \left\{\begin{array}{l} J\colon \ensuremath{\mathbb{R}}^{N^2}\times \ensuremath{\mathbb{R}}^{2N^2}\mapsto \ensuremath{\mathbb{R}}\cup\{+\infty\}\colon x:=(u,v)\mapsto \|v\|_{1,2}+\iota_{X}(u), \\ \\ A=\left[\begin{array}{cc} K & 0 \\ D & -\ensuremath{\operatorname{Id}}\, \end{array}\right],\hspace{2mm} b^{\delta}=\left[\begin{array}{c} y \\ 0 \end{array}\right], \text{ and } p=d=3N^2. \end{array} \right. \label{S: TV} \end{align} Clearly, $A$ is a linear bounded nonzero operator, and $J\in\Gamma_0(\ensuremath{\mathbb{R}}^{N^2}\times \ensuremath{\mathbb{R}}^{2N^2})$.
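A minimal Python sketch of the discrete gradient $D$ and of (minus) its adjoint, the discrete divergence recalled in the remark below, is the following; the function names are ours, and the adjoint identity $\scal{Du}{p}=\scal{u}{D^{*}p}=-\scal{u}{\operatorname{div}p}$ is checked numerically on random data.
\begin{verbatim}
import numpy as np

def grad(u):
    # discrete gradient (Du)_{ij} with zero differences on the last row/column
    dy = np.zeros_like(u); dx = np.zeros_like(u)
    dy[:-1, :] = u[1:, :] - u[:-1, :]
    dx[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([dx, dy])

def div(p):
    # discrete divergence, equal to -D^* for the gradient above
    px, py = p[0], p[1]
    out = np.zeros_like(px)
    out[:, 0] += px[:, 0]
    out[:, 1:-1] += px[:, 1:-1] - px[:, :-2]
    out[:, -1] += -px[:, -2]
    out[0, :] += py[0, :]
    out[1:-1, :] += py[1:-1, :] - py[:-2, :]
    out[-1, :] += -py[-2, :]
    return out

# adjoint check: <grad(u), p> == <u, -div(p)>
rng = np.random.default_rng(0)
u = rng.standard_normal((8, 8)); p = rng.standard_normal((2, 8, 8))
assert np.isclose(np.sum(grad(u) * p), -np.sum(u * div(p)))
\end{verbatim}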
\begin{table}[ht] \centering \begin{tabular}{|m{120mm}|} \hline Primal-Dual for total variation \\ \hline\vspace{2mm} \textbf{Input}: $(p^0,p^{-1},\prim^{0},v^0)\in\ensuremath{\mathbb{R}}^{3N^2}\times\ensuremath{\mathbb{R}}^{N^2}$ and $(q^0,q^{-1},z^{0},w^0)\in\ensuremath{\mathbb{R}}^{6N^2}\times\ensuremath{\mathbb{R}}^{2N^2}$.\\\vspace{2mm} \begin{flushleft}\textbf{For} $k=1,\ldots,\text{N:}$\end{flushleft}\vspace{-4mm}\\ \vspace{-10mm} \begin{align} \begin{array}{l} v^{k+1}= v^k+\Gamma( K(p^{k}+ \prim^{k}-p^{k-1})-y)\\ w^{k+1}= w^k-\Gamma(q^{k}+ z^{k}-q^{k-1})+\Gamma D(p^{k}+ \prim^{k}-p^{k-1})\\ \prim^{k+1}=P_{X}(p^k-\Sigma K^*v^{k+1}-\Sigma D^{*}w^{k+1})\\ z^{k+1}=\ensuremath{\operatorname{prox}}_{\Sigma \|\cdot\|_{1,2}}(q^k+\Sigma w^{k+1})\\ p^{k+1}=\prim^{k}-\alpha(\prim^{k})\left (K^{*}(K\prim^{k}-y)+D^{*}(D\prim^{k}-z^{k})\right)\\q^{k+1}=z^{k}+\alpha(\prim^{k})\left(D\prim^{k}-z^{k}\right) \end{array} \label{A: PDA-TV}\end{align}\vspace{-4mm}\\ \vspace{0mm}\begin{flushleft}\textbf{End}\end{flushleft} \\ \hline \end{tabular} \caption{General form of the algorithms.} \label{tab: Tabla 1} \end{table} We compare the algorithms listed below. Note that all the proposed algorithms are different instances of the general routine described in Table~\ref{tab: Tabla 1}, and each one of them corresponds to a different choice of $\alpha(x^k)$: \begin{enumerate} \item PD, the vanilla primal-dual algorithm, corresponding to $\alpha(x^k)=0$; \item PPD, the preconditioned primal-dual algorithm, obtained with $\alpha(x^k)=0$ and $\Sigma$ and $\Gamma$ as in \cite[Lemma 2]{pock2011diagonal}; \item PDL, corresponding to $\alpha(x^k)=1/\|A\|^2$; \item PDAL, corresponding to $\alpha(x^k)=\beta(x^k)$ as in \eqref{d: averaged4}. \end{enumerate} Initializing with $p^0=\bar{p}^{0}=x^{0}$ and $q^0=\bar{q}^{0}=z^{0}$, we recover the results of Theorem \ref{Th:PPD} and Corollary \ref{ESPDA}. \begin{remark} In order to implement the algorithm in \ref{A: PDA-TV}, we first need to compute the following operators. \begin{enumerate} \item It follows from \cite[Proposition 24.11]{bauschke2011convex} and \cite[Example 24.20]{bauschke2011convex} that $$\ensuremath{\operatorname{prox}}^{\Sigma}_{\|\cdot\|_{1,2}}(v)=\left(\ensuremath{\operatorname{prox}}^{\Sigma_{i}}_{\|\cdot\|}(v_{i})\right)_{i=1}^{N^2}=\left(\left(1-\frac{\Sigma_{i}}{\max\{\Sigma_{i},\|v_{i}\|\}}\right)v_{i}\right)_{i=1}^{N^2},$$ where $v_i\in\ensuremath{\mathbb{R}}^{2}$. Analogously, the projection onto $X$ can be computed as $$P_{X}(u)=\left(P_{[0,1]}(u_i)\right)_{i=1}^{N^2},$$ where $P_{[0,1]}(u_i)=\min\{1,\max\{u_i,0\} \}.$ \item It follows from \cite{chambolle2004algorithm} that \begin{align} \hspace{-6mm}-D^{*}p= \operatorname{div}p=\left\{ \begin{array}{ll} (p_1)_{i,j}-(p_1)_{i-1,j} & \text{if } 1< i<N \\ (p_1)_{i,j} & \text{if } i=1\\ -(p_1)_{i-1,j} & \text{if } i=N \end{array}\right.\hspace{-2mm}+\left\{ \begin{array}{ll} (p_2)_{i,j}-(p_2)_{i,j-1} & \text{if } 1< j<N \\ (p_2)_{i,j} & \text{if } j=1\\ -(p_2)_{i,j-1} & \text{if } j=N. \end{array}\right.\nonumber \end{align} \end{enumerate} \end{remark} \subsubsection{Numerical results} Set $N=256$ and let $x^{*}$ be the image \textquotedblleft boat\textquotedblright\ from the library Numerical tours \cite{peyre2011numerical}. We suppose that $K$ is an operator assigning to every pixel the average of the pixels in a neighborhood of radius 8 and that $e\thicksim U[-0.025,0.025]^{N^2}$. We use the original image as the exact solution.
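The measurement model just described can be rendered, for instance, as in the following Python sketch; the $17\times 17$ square averaging window is one possible reading of ``neighborhood of radius 8'', and the random image is a stand-in for the boat test image.
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
N = 256
x_star = rng.random((N, N))     # placeholder for the exact image, values in [0,1]

def K(u):
    # blurring operator: mean over a square neighborhood of radius 8
    return uniform_filter(u, size=17)

e = rng.uniform(-0.025, 0.025, size=(N, N))  # noise e ~ U[-0.025, 0.025]^{N^2}
y = K(x_star) + e                            # blurry and noisy measurement
\end{verbatim}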
For denoising and deblurring, we early stop the procedure at the iteration minimizing the mean square error (MSE), namely $\|x^k-x^*\|^2/N^2$, and we measure the time and the number of iterations needed to reach it. Another option for early stopping could be to consider the image with maximal structural similarity (SSIM). Numerically, in our experiments, this gives the same results. Additionally, we use the peak signal-to-noise ratio (PSNR) to compare the images. Note that the primal-dual algorithm with preconditioning is the method that needs the least time and the fewest iterations among all the procedures. Moreover, due to \cite[Lemma 2]{pock2011diagonal}, condition \eqref{c: ConditionL D1} is automatically satisfied, while for the other methods we need to check it explicitly, which is computationally costly. However, (PPD) is the worst in terms of SSIM, PSNR, and MSE. We verify that all the other algorithms have a superior performance in terms of reconstruction, with a small advantage for the Landweber activations with fixed and adaptive step-sizes, which reduce the MSE by $94\%$ with respect to the noisy image. In addition, compared to (PD), (PDL) and (PDAL) require fewer iterations and less time to satisfy the early stopping criterion. We believe that this is due to the fact that the extra Landweber operator improves the feasibility of the primal iterates. A visual assessment of the denoised and deblurred images is shown in Figure \ref{fig: Comparision_TV}, which highlights the regularization properties achieved by the addition of the Landweber operator and confirms the previous conclusions. \begin{table}[ht]\centering \resizebox{0.75\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|l|} \hline & Iterations & Time [s] & SSIM & PSNR & MSE \\ \hline Noisy image & - & - & 0.4468 & 21.4801 & 0.0071 \\ \hline PD & 54 & 8.9773 & 0.8928 & 32.3614 & 0.0006 \\ \hline \begin{tabular}[c]{@{}l@{}}PD with\\ preconditioning\end{tabular} & 5 & 1.5515 & 0.8581 & 27.3753 & 0.0018 \\ \hline PDL & 46 & 7.1846 & 0.9066 & 34.2174 & 0.0004 \\ \hline PDAL & 31 & 5.4542 & 0.9112 & 34.3539 & 0.0004 \\ \hline \end{tabular}% } \caption{Quantitative comparison of the algorithms in terms of structural similarity (SSIM), peak signal-to-noise ratio (PSNR), mean square error (MSE), and the time and number of iterations needed to reach the early stopping criterion.} \label{tab: Tabla 2} \end{table} \begin{figure}[ht] \centering \includegraphics[scale=0.40]{Boat_NF.jpg} \caption{Qualitative comparison of the four proposed methods.} \label{fig: Comparision_TV} \end{figure} \section{Conclusion and Future Work} \label{s: Conclusion} In this paper we studied two new iterative regularization methods for solving a linearly constrained minimization problem, based on an extra activation step reusing the data constraint. The analysis was carried out in the context of convex functions and worst-case deterministic noise. We proposed five instances of our algorithm, compared their numerical performance with state-of-the-art methods, and observed considerable improvements in run-time. In the future, we would like to extend Theorem~\ref{Th:PPD} to structured convex problems and more specific algorithms.
Possible extensions are: 1) the study of problems including, in the objective function, an $L$-smooth term and a composite linear term; 2) the analysis of random updates in the dual variable (see \cite{chambolle2018stochastic}) and of stochastic approximations for the gradient; 3) the theoretical study of the impact of different preconditioners; 4) the improvement of the convergence and stability rates for strongly convex objective functions. \section{Acknowledgement} This project has been supported by the TraDE-OPT project which received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 861137. L.R. acknowledges support from the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. L.R. also acknowledges the financial support of the European Research Council (grant SLING 819789), the AFOSR projects FA9550-18-1-7009, FA9550-17-1-0390 and BAA-AFRL-AFOSR-2016-0007 (European Office of Aerospace Research and Development), and the EU H2020-MSCA-RISE project NoMADS - DLV-777826. C. M. and S. V. are members of the INDAM-GNAMPA research group. This work represents only the view of the authors. The European Commission and the other organizations are not responsible for any use that may be made of the information it contains. \section{Proofs} \subsection{Equivalence between the Primal-dual and Dual-primal algorithms}\label{Proof:PD=Dp} In the following lemma we establish that, if $T=\ensuremath{\operatorname{Id}}\,$ and the initialization is the same, then the $k$-th primal variables of \ref{A: PDSP} and \ref{A: DPSP}, denoted by $\prim^{k}_{PD}$ and $\prim^{k}_{DP}$, respectively, coincide. \begin{lemma} \label{L: PD=DP} Let $(p^0_{PD},\bar{p}^{0}_{PD},\dal^0_{PD})\in \ensuremath{\mathbb{R}}^{2p}\times\ensuremath{\mathbb{R}}^{d}$ and $(\prim_{DP}^{0},\prop_{DP}^0,\bar{\prop}_{DP}^{0})\in \ensuremath{\mathbb{R}}^{p}\times\ensuremath{\mathbb{R}}^{2d}$ be the initializations of \ref{A: PDSP} and \ref{A: DPSP}, respectively, in the case when $m=1$ and $T=\ensuremath{\operatorname{Id}}\,$. Suppose that $p_{PD}^0=\bar{p}_{PD}^{0}$, $\prop_{DP}^0=\bar{\prop}_{DP}^{0}$, $\dal_{PD}^0=\prop_{DP}^{0}$, and $\prim_{PD}^{1}=\prim_{DP}^{1}$. Then, for every $k\in\ensuremath{\mathbb N}$, $\prim^{k}_{PD}=\prim^{k}_{DP}$. \end{lemma} \begin{proof} Since $m=1$ and $T=\ensuremath{\operatorname{Id}}\,$ in both algorithms, for every $k\in\ensuremath{\mathbb N}$ we have $\prim_{PD}^{k}=p_{PD}^{k}$ and $\dal_{DP}^{k}=\prop_{DP}^{k}$. On the one hand, by the definition of \ref{A: PDSP}, we have that \begin{align} \dal^{k+1}_{PD}&=\dal^{1}_{PD}+\Gamma\sum_{i=1}^{k}\left(A\bar{p}^{i}_{PD}-b^{\delta}\right)\nonumber\\&=\dal^{1}_{PD}+\sum_{i=1}^{k}\Gamma A(p^{i}_{PD}-p^{i-1}_{PD})+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right)\nonumber\\&=\dal^{1}_{PD}+\Gamma A(p^{k}_{PD}-p^{0}_{PD})+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right)\nonumber\\&=\dal^{0}_{PD}+\Gamma(A\prim^{k}_{PD}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right),\label{V: Uprimal} \end{align} where the last equality follows from $p_{PD}^0=\bar{p}_{PD}^{0}$. Replacing \eqref{V: Uprimal} in the definition of $\prim^{k+1}_{PD}$, we get \begin{align} \prim_{PD}^{k+1}= \ensuremath{\operatorname{prox}}^{\Sigma }_{J}\left(\prim_{PD}^{k}-\Sigma A^{*}\left(\dal_{PD}^{0}+\Gamma (A\prim_{PD}^{k}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim_{PD}^{i}-b^{\delta}\right)\right)\right).
\label{e: algonestep} \end{align} On the other hand, by \ref{A: DPSP} we have that \begin{align} \dal^{k+1}_{DP}=\prop^{k+1}_{DP}=\prop^{0}_{DP}+\Gamma\sum_{i=1}^{k+1}\left(A\prim^{i}_{DP}-b^{\delta}\right),\label{V: Udual} \end{align} and \begin{align} \bar{\prop}^{k}_{DP}=\prop^{k}_{DP}+\dal^{k}_{DP}-\prop^{k-1}_{DP}=\prop^{0}_{DP}+\Gamma(A\prim^{k}_{DP}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{DP}-b^{\delta}\right).\label{V: UbarDual} \end{align} Replacing \eqref{V: UbarDual} in \ref{A: DPSP}, for every $k>1$, we can deduce that \begin{align} \prim_{DP}^{k+1}= \ensuremath{\operatorname{prox}}^{\Sigma }_{J}\left(\prim_{DP}^{k}-\Sigma A^{*}\left(\prop_{DP}^{0}+\Gamma (A\prim_{DP}^{k}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim_{DP}^{i}-b^{\delta}\right)\right)\right). \label{e: algonestep1} \end{align} Since $\dal^{0}_{PD}=\prop^{0}_{DP}$ and $\prim^{1}_{PD}=\prim^{1}_{DP}$, the result follows by induction. \end{proof} \begin{remark} An analysis similar to that in the proof of Lemma \ref{L: PD=DP} shows that \begin{align} \prim_{PD}^{k+1}= \ensuremath{\operatorname{prox}}^{\Sigma }_{J}\left(\prim^{k}_{PD}-\Sigma A^{*}\left(\dal^{0}_{PD}+\Gamma (A T_{\epsilon_{k}}\prim^{k}_{PD}-b^\delta)+\Gamma\sum_{i=1}^{k}\left(A\prim^{i}_{PD}-b^{\delta}\right)\right)\right), \label{e: algonestepproj} \end{align} which implies that the algorithm can be written in one step if we only care about the primal variable. \end{remark} \subsection{Proof of Theorem \ref{Th:PPD}}\label{Proof:PPD} \begin{proof} From \ref{A: PDSP}, we deduce that \begin{align} \Sigma^{-1}(p^k-\prim^{k+1})- A^*\dal^{k+1}&\in\partial J(\prim^{k+1})\nonumber\\ \Gamma^{-1}(\dal^k-\dal^{k+1}) +A\bar{p}^k &=b^{\delta}.\label{e:PMI} \end{align} Therefore, we have \begin{align} \left(\forall \prim\in\ensuremath{\mathbb{R}}^p\right)\hspace{3mm} J(\prim^{k+1})+\scal{\Sigma^{-1}(p^k-\prim^{k+1})- A^*\dal^{k+1}}{\prim-\prim^{k+1}}\leq J(\prim), \label{e:subdif P} \end{align} and \eqref{e:subdif P} yields \begin{align} 0\geq& J(\prim^{k+1})-J(\prim)+\scal{\Sigma^{-1}(p^k-\prim^{k+1})-A^*\dal^{k+1}}{\prim-\prim^{k+1}}\nonumber\\=& J(\prim^{k+1})-J(\prim)+\frac{\|p^k-\prim^{k+1}\|_{\Sigma^{-1}}^2}{2}+\frac{\|\prim^{k+1}-\prim\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\|p^k-\prim\|_{\Sigma^{-1}}^2}{2}+\scal{\prim^{k+1}-\prim}{A^*\dal^{k+1}} \label{e: psub P} \end{align} Analogously, by \eqref{e:PMI} we get \begin{align} 0=&\scal{\Gamma^{-1}(\dal^k-\dal^{k+1})+ A\bar{p}^k-b^\delta}{\dal-\dal^{k+1}}\nonumber\\0=& \frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}+\frac{\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}^2}{2}-\frac{\|\dal^{k}-\dal\|_{\Gamma^{-1}}^2}{2}+\scal{b^{\delta}-A\bar{p}^k}{\dal^{k+1}-\dal}\label{e: dsub P} \end{align} Recall that $z:=(\prim,\dal)\in\mathcal{Z}\subset C\times \ensuremath{\mathbb{R}}^{d}$, $z^{k}:=(\prim^{k},\dal^{k})$, and $V(z):=\frac{\|\prim\|^2_{\Sigma^{-1}}}{2}+\frac{\|\dal\|_{\Gamma^{-1}}^2}{2}$.
Summing \eqref{e: psub P} and \eqref{e: dsub P}, and by Assumption A3, we obtain \begin{align} J(\prim^{k+1})-J(\prim)+\frac{\|\prim^{k+1}-p^k\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}+V(z^{k+1}-z)-V(z^{k}-z)&\nonumber\\+\scal{A(\prim^{k+1}-\prim)}{\dal^{k+1}}+\scal{b^\delta-A\bar{p}^k}{\dal^{k+1}-\dal}-\frac{e\delta^2}{2}&\leq 0\label{e: prestimate P} \end{align} Now compute \begin{align} & J(\prim^{k+1})-J(\prim)+\scal{A(\prim^{k+1}-\prim)}{\dal^{k+1}} +\scal{b^{\delta}-A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}-b}{\dal}+\scal{A\prim-b}{\dal^{k+1}}\nonumber\\&+\scal{A(\prim^{k+1}-\prim)}{\dal^{k+1}} +\scal{b^{\delta}-A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}}{\dal}+\scal{b}{\dal}+\scal{A\prim}{\dal^{k+1}}-\scal{b}{\dal^{k+1}}\nonumber\\&+\scal{A\prim^{k+1}}{\dal^{k+1}}-\scal{A\prim}{\dal^{k+1}} +\scal{b^\delta}{\dal^{k+1}-\dal}-\scal{A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal} +\scal{A\prim^{k+1}-A\bar{p}^k}{\dal^{k+1}-\dal}\nonumber\\ \geq&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} +\scal{A\prim^{k+1}-A\bar{p}^k}{\dal^{k+1}-\dal}. \label{e: lagrange P} \end{align} From \eqref{e: lagrange P} and \eqref{e: prestimate P} we obtain \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\frac{\|\prim^{k+1}-p^k\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\nonumber\\&+V(z^{k+1}-z)-V(z^{k}-z)-\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}-\frac{e\delta^2}{2}\nonumber\\ \leq& -\scal{A(\prim^{k+1}-\bar{p}^k)}{\dal^{k+1}-\dal}\label{e: lagrangeI.25 P} \\ = & -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}\nonumber\\ &+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}}\label{e: lagrangeI.5 P} \\ = & -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}\nonumber\\ &+\scal{\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\Sigma^{-\frac{1}{2}}(\prim^{k}-p^{k-1})}{\Gamma^{-\frac{1}{2}}(\dal^{k+1}-\dal^{k})}\nonumber\\ \leq& -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}\nonumber\\ &+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}+\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}\label{e: lagrangeIP} \end{align} Then, recalling that $\alpha=1-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2$, we have the following estimate \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\frac{\alpha}{2}\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2+V(z^{k+1}-z)-V(z^{k}-z)\nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-p^k)}{\dal^{k+1}-\dal}\nonumber\\&+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}+\frac{e\delta^2}{2} \label{e: lagrangeII P} \end{align} Summing from $1$ to $N-1$ we obtain \begin{align}
&\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2+V(z^{N}-z)-V(z^{1}-z)-\scal{ A(\prim^{1}-p^{0})}{\dal^{1}-\dal} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}-\scal{\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\Sigma^{-\frac{1}{2}}(\prim^{N}-p^{N-1})}{\Gamma^{-\frac{1}{2}}(\dal^{N}-\dal)}+\frac{(N-1) e\delta^2}{2}\nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+\frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}+\|\Gamma^{\frac{1}{2}} A\Sigma ^{\frac{1}{2}}\|^2\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2}+\frac{(N-1) e\delta^2}{2} \label{e: lagrangeIII.5 P} \end{align} Now, by choosing $k=1$ in \eqref{e: lagrangeI.25 P} we get \begin{align} &\mathcal{L}(\prim^{1},\dal)-\mathcal{L}(\prim,\dal^{1})+\frac{\|\prim^{1}-p^0\|_{\Sigma^{-1}}^2}{2} +\frac{\alpha}{2}\|\dal^{1}-\dal^{0}\|_{\Gamma^{-1}}^2\nonumber\\&+V(z^{1}-z)-V(z^{0}-z) +\scal{A(\prim^{1}-\bar{p}^0)}{\dal^{1}-\dal}\nonumber\\ \leq&\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{1}-\dal\|_{\Gamma^{-1}}+\frac{e\delta^2}{2}. \label{e: lagrangeIII.5 P1} \end{align} Adding \eqref{e: lagrangeIII.5 P} and \eqref{e: lagrangeIII.5 P1} we obtain \begin{align} &\sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha}{2}\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2\nonumber\\&+\sum_{k=1}^{N}\frac{\alpha}{2}\|\dal^{k}-\dal^{k-1}\|_{\Gamma^{-1}}^2+\frac{\|\prim^{N}-\prim\|_{\Sigma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z)+\frac{N e\delta^2}{2} \label{e: lagrangeIV.5 P} \end{align} Next, by \eqref{e: lagrangeI.5 P} we have the following estimate \begin{align} &\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^k}+\frac{\|\dal^{k+1}-\dal^k\|_{\Gamma^{-1}}^2}{2}\nonumber\\&+\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+V(z^{k+1}-z)-V(z^{k}-z)\nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-p^{k})}{\dal^{k+1}-\dal} \nonumber\\&+\scal{A(\prim^{k}-p^{k-1})}{\dal^{k}-\dal}+\frac{e\delta^2}{2} \label{e: lagrangeII.5 P} \end{align} Summing from $1$ to $N-1$ we obtain \begin{align} &\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\right)\nonumber\\&+\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+V(z^{N}-z)-V(z^{1}-z) -\scal{A(\prim^{1}-p^{0})}{\dal^{1}-\dal}\nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{N}-p^{N-1})}{\dal^{N}-\dal}+\frac{(N-1) e\delta^2}{2}\nonumber\\ =& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\Sigma^{-\frac{1}{2}}(\prim^{N}-p^{N-1})}{\Gamma^{-\frac{1}{2}}(\dal^{N}-\dal)}+\frac{(N-1) e\delta^2}{2}\nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} +\frac{ \|\Gamma^{\frac{1}{2}} A\Sigma^\frac{1}{2}\|^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2}+\frac{(N-1) e\delta^2}{2} \label{e: lagrangeIII P} \end{align} Now, since 
$\dal^{k+1}-\dal^{k}=\Gamma\left(A\bar{p}^k-b^\delta\right)$ we derive that \begin{align} &\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k+1}-p^{k}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}} +\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\right)\nonumber\\ =& \sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\scal{A(\prim^{k}-p^{k-1})}{\dal^{k+1}-\dal^{k}}+\frac{\|\dal^{k+1}-\dal^{k}\|_{\Gamma^{-1}}^2}{2}\right)\nonumber\\&+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\=& \sum_{k=1}^{N-1}\left(\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}-\scal{\Gamma^{\frac{1}{2}}A(\prim^{k}-p^{k-1})}{\Gamma^{\frac{1}{2}}(A\bar{p}^k-b^{\delta})}+\frac{\|\Gamma^{\frac{1}{2}}(A\bar{p}^k-b^{\delta})\|^2}{2}\right)\nonumber\\ &+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}\right)\nonumber\\ =& \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}\right),\end{align} and since $\alpha=1-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2>0$ we obtain \begin{align} & \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&+\sum_{k=1}^{N-1}\left(\frac{\|\prim^{k}-p^{k-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2}{2}\right) \nonumber\\ \geq& \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} (Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\ &+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} A(\prim^{k}-p^{k-1})\|^2\nonumber\\ \geq& \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{N}-p^{N-1})\|^2+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} A(\prim^{k+1}-p^{k})\|^2.\end{align} In turn, the convexity of $\|\cdot\|^2$ yields \begin{align} & \sum_{k=1}^{N-1}\frac{\|\Gamma^{\frac{1}{2}} ( Ap^{k}-b^{\delta})\|^2}{2}+ \frac{\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2}{2}-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{N}-p^{N-1})\|^2+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} A(\prim^{k+1}-p^{k})\|^2\nonumber\\ \geq & \frac{\alpha}{4}\sum_{k=1}^{N-1}\|\Gamma^{\frac{1}{2}} ( A\prim^{k+1}-b^{\delta})\|^2-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+ \frac{\alpha^{2}+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2\nonumber\\ \geq& \frac{\alpha}{4}\sum_{k=2}^{N}\|\Gamma^{\frac{1}{2}} ( A\prim^{k}-b^{\delta})\|^2-\frac{\|\prim^{1}-p^{0}\|_{\Sigma^{-1}}^2}{2}+\frac{\alpha}{2}\|\Gamma^{\frac{1}{2}} A(\prim^{1}-p^{0})\|^2\nonumber\\ &+
\frac{\alpha^{2}+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2.\label{e: PX} \end{align} On the other hand, we get \begin{align} \|\Gamma^{\frac{1}{2}}(A\prim^k-b^\delta)\|^2 \geq &\frac{\|A\prim^k-b^{\delta}\|^2}{\|\Gamma^{-1}\|}\nonumber\\\geq&\frac{1}{\|\Gamma^{-1}\|}\left(\frac{\|A\prim^k-b\|^2}{2}-\|b^\delta-b\|^2\right). \label{e: Axsep P} \end{align} Combining \eqref{e: lagrangeIII.5 P1}, \eqref{e: lagrangeIII P}, \eqref{e: PX}, and \eqref{e: Axsep P} we have that \begin{align} &\sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha^2}{2}\|\prim^{N}-p^{N-1}\|_{\Sigma^{-1}}^2\nonumber\\ &+\sum_{k=1}^{N}\frac{\alpha}{8\|\Gamma^{-1}\|}\|A\prim^{k}-b\|^2+\frac{\|\prim^{N}-\prim\|^2_{\Sigma^{-1}}}{2} \nonumber\\ \leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z)+\frac{N e\delta^2}{2}+N\frac{\alpha}{4\|\Gamma^{-1}\|}\delta^2 \label{e: lagrangeIV P} \end{align} It remains to bound $\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}$. From \eqref{e: lagrangeIV.5 P} and since $(x,u)$ is a saddle-point of the Lagrangian we deduce that \begin{align} \|\dal^{N}-\dal\|_{\Gamma^{-1}}^2\leq\frac{2\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha} \sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+\frac{2 V(z^{0}-z)}{\alpha}+\frac{ N e\delta^2}{\alpha}. \label{e: U bound P} \end{align} Applying \cite[Lemma A.1]{rasch2020inexact} to Equation \eqref{e: U bound P} with $\lambda_{k}:=\frac{2\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha} $ and $S_{N}:= \frac{2 V(z^{0}-z)}{\alpha}+\frac{N e\delta^2}{\alpha}$ we get \begin{align} \|\dal^{N}-\dal\|_{\Gamma^{-1}}\leq& \frac{N\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha}+\left(\frac{2 V(z^{0}-z)}{\alpha}+\frac{ N e\delta^2}{\alpha}+\left(\frac{N\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha}\right)^2\right)^{\frac{1}{2}}\nonumber\\\leq& \frac{2N\|\Gamma^{\frac{1}{2}}\|\delta}{\alpha}+\left(\frac{2 V(z^{0}-z)}{\alpha}\right)^{\frac{1}{2}}+\left(\frac{ N e\delta^2}{\alpha}\right)^{\frac{1}{2}} \end{align} Inserting the previous bound in \eqref{e: lagrangeIV.5 P}, we obtain \begin{align} \sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)\leq& \frac{2(N\|\Gamma^{\frac{1}{2}}\|\delta)^{2}}{\alpha}+N\|\Gamma^{\frac{1}{2}}\|\delta\left(\frac{ V(z^{0}-z)}{\alpha}\right)^{\frac{1}{2}}\nonumber\\&+N\|\Gamma^{\frac{1}{2}}\|\delta\left(\frac{ N e\delta^2}{\alpha}\right)^{\frac{1}{2}} +V(z^{0}-z)+\frac{N e\delta^2}{2} \label{e: lagrangeV P} \end{align} Analogously, from \eqref{e: lagrangeIV P}, \begin{align} \sum_{k=1}^{N}\|A\prim^k-b\|^2\leq&\frac{16N^2\|\Gamma\|\|\Gamma^{-1}\|\delta^{2}}{\alpha^{2}}+8N\delta\|\Gamma^{-1}\|\left(\frac{2\|\Gamma\| V(z^{0}-z)}{\alpha^3}\right)^{\frac{1}{2}}+8N\delta^{2}\|\Gamma^{-1}\|\left(\frac{ \|\Gamma\|e N}{\alpha^3}\right)^{\frac{1}{2}}\nonumber\\ &+\frac{8\|\Gamma^{-1}\|V(z^{0}-z)}{\alpha}+2N\delta^{2}+\frac{4N\|\Gamma^{-1}\|e\delta^2}{\alpha} \end{align} and both results then follow from Jensen's inequality.\end{proof} \subsection{Proof of Theorem \ref{Th:PDP}}\label{Proof:PDP} \begin{proof} It follows from \ref{A: DPSP} that \begin{align} \Sigma^{-1}(\prim^k-\prim^{k+1})-A^*\bar{\prop}^{k}\in\partial J(\prim^{k+1})\nonumber\\ \Gamma^{-1}(\prop^k-\dal^{k+1})+A\prim^{k+1} =b^{\delta}. \label{I: DPinclsuion} \end{align} Thus, \begin{align} J(\prim^{k+1})+\scal{\Sigma^{-1}(\prim^k-\prim^{k+1})- A^*\bar{\prop}^{k}}{\prim-\prim^{k+1}}\leq J(\prim), \label{e:subdif D} \end{align} and \eqref{e:subdif D} yields
\begin{align} 0\geq& J(\prim^{k+1})-J(\prim)+\scal{\Sigma^{-1}(\prim^{k}-\prim^{k+1})-A^*\bar{\prop}^{k}}{\prim-\prim^{k+1}} \nonumber\\ =& J(\prim^{k+1})-J(\prim)+\frac{\|\prim^{k}-\prim^{k+1}\|_{\Sigma^{-1}}^2}{2}+\frac{\|\prim^{k+1}-\prim\|_{\Sigma^{-1}}^2}{2}\nonumber\\&-\frac{\|\prim^{k}-\prim\|_{\Sigma^{-1}}^2}{2}+\scal{\prim^{k+1}-\prim}{A^*\bar{\prop}^{k}} \label{e: psub D} \end{align} From \eqref{I: DPinclsuion}, it follows that \begin{align} 0=&\scal{\Gamma^{-1}(\prop^k-\dal^{k+1})+A\prim^{k+1}-b^\delta}{\dal-\dal^{k+1}}\nonumber\\ 0=&\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}+\frac{\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}^2}{2}-\frac{\|\prop^{k}-\dal\|_{\Gamma^{-1}}^2}{2}+\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}\label{e: dsub D} \end{align} Recall that $z:=(\prim,\dal)\in\mathcal{Z}\subset C\times \ensuremath{\mathbb{R}}^{d}$, $z^{k}:=(\prim^{k},\dal^{k})$, and $V(z):=\frac{\|\prim\|^2_{\Sigma^{-1}}}{2}+\frac{\|\dal\|_{\Gamma^{-1}}^2}{2}$. Summing \eqref{e: psub D} and \eqref{e: dsub D}, we obtain \begin{align} J(\prim^{k+1})-J(\prim)+\frac{\|\prim^{k+1}-\prim^{k}\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}+V(z^{k+1}-z)-V(z^{k}-z)&\nonumber\\+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}&\leq0\label{e: prestimate D} \end{align} Now compute \begin{align} & J(\prim^{k+1})-J(\prim)+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}-b}{\dal}+\scal{A\prim-b}{\dal^{k+1}}\nonumber\\&+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^{\delta}-A\prim^{k+1}}{\dal^{k+1}-\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})-\scal{A\prim^{k+1}}{\dal}+\scal{b}{\dal}+\scal{A\prim}{\dal^{k+1}}-\scal{b}{\dal^{k+1}}\nonumber\\&+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}} +\scal{b^\delta}{\dal^{k+1}-\dal}-\scal{A\prim^{k+1}}{\dal^{k+1}}+\scal{A\prim^{k+1}}{\dal}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\bar{\prop}^{k}-\dal^{k+1}}\label{e: lagrange Di1}\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}}\nonumber\\ &+\scal{A(\prim^{k+1}-\prim)}{\dal^{k}-\prop^{k-1}}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}}\nonumber\\ &+\scal{A(\prim^{k}-\prim)}{\dal^{k}-\prop^{k-1}}+\scal{A(\prim^{k+1}-\prim^{k})}{\dal^{k}-\prop^{k-1}}\nonumber\\ =&\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\scal{b^{\delta}-b}{\dal^{k+1}-\dal}+\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}}\nonumber\\ &+\scal{A(\prim^{k}-\prim)}{\dal^{k}-\prop^{k-1}}+\scal{\Gamma^{\frac{1}{2}} A(\prim^{k+1}-\prim^{k})}{\Gamma^{-\frac{1}{2}}(\dal^{k}-\prop^{k-1})}. 
\label{e: lagrange D} \end{align} From \eqref{e: lagrange D} and \eqref{e: prestimate D} we obtain \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})+\frac{\|\prim^{k+1}-\prim^{k}\|_{\Sigma^{-1}}^2}{2} +\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}+V(z^{k+1}-z)-V(z^{k}-z) \nonumber\\ \leq&-\scal{b^{\delta}-b}{\dal^{k+1}-\dal} -\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}} +\scal{A(\prim^{k}-\prim)}{\prop^{k-1}-\dal^{k}}\nonumber\\ &-\scal{\Gamma^{\frac{1}{2}} A(\prim^{k+1}-\prim^{k})}{\Gamma^{-\frac{1}{2}}(\dal^{k}-\prop^{k-1})}\nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}} +\scal{A(\prim^{k}-\prim)}{\prop^{k-1}-\dal^{k}}\nonumber\\ &+\frac{\|\prim^{k+1}-\prim^{k}\|_{\Sigma^{-1}}^2}{2}+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{k}-\prop^{k-1}\|_{\Gamma^{-1}}^2}{2}\label{e: lagrangeI D} \end{align} Therefore, we have that \begin{align} &\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1}) +\frac{\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2}{2}-\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{k}-\prop^{k-1}\|_{\Gamma^{-1}}^2}{2}\nonumber\\&+V(z^{k+1}-z)-V(z^{k}-z) \nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{k+1}-\prim)}{\prop^{k}-\dal^{k+1}} +\scal{A(\prim^{k}-\prim)}{\prop^{k-1}-\dal^{k}} \label{e: lagrangeII D} \end{align} Summing from $1$ to $N-1$ we obtain \begin{align} &\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1}) \right)+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2\nonumber\\&+V(z^{N}-z)+\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{N}-\prop^{N-1}\|_{\Gamma^{-1}}^2}{2}\nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} -\scal{A(\prim^{N}-\prim)}{\prop^{N-1}-\dal^{N}}+\scal{A(\prim^{1}-\prim)}{\prop^{0}-\dal^{1}} +V(z^{1}-z) \nonumber\\\leq&\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}} +\|\Gamma^{\frac{1}{2}} A\Sigma^{\frac{1}{2}}\|^2\frac{\|\dal^{N}-\prop^{N-1}\|_{\Gamma^{-1}}^2}{2} +\frac{\|\prim^{N}-\prim\|_{\Sigma^{-1}}^2}{2} \nonumber\\&+\scal{A(\prim^{1}-\prim)}{\prop^{0}-\dal^{1}} +V(z^{1}-z) \label{e: lagrangeIII D} \end{align} Reordering \eqref{e: lagrangeIII D} we obtain \begin{align} &\sum_{k=1}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha}{2}\sum_{k=1}^{N-1}\|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N-1}\|\dal^{k+1}-\dal\|_{\Gamma^{-1}}+\scal{A(\prim^{1}-\prim)}{\prop^{0}-\dal^{1}} +V(z^{1}-z). \label{e: lagrangeIV D} \end{align} On the other hand, from \eqref{e: prestimate D}, \eqref{e: lagrange Di1}, and \eqref{e: Axsep D} we get \begin{align} \mathcal{L}(\prim^{1},\dal)-\mathcal{L}(\prim,\dal^{1})+\frac{\alpha}{2}\|\dal^{1}-\prop^{0}\|_{\Gamma^{-1}}^2\leq& \delta\|\Gamma^{\frac{1}{2}}\|\|\dal^{1}-\dal\|_{\Gamma^{-1}}-\scal{A(\prim^{1}-\prim)}{\bar{\prop}^{0}-\dal^{1}}\nonumber\\ &+V(z^{0}-z)-V(z^{1}-z)\label{e: firsstiteation} \end{align} Summing \eqref{e: lagrangeIV D} and \eqref{e: firsstiteation} yields \begin{align} &\sum_{k=1}^{N}\left(\mathcal{L}(\prim^{k},\dal)-\mathcal{L}(\prim,\dal^{k})\right)+\frac{\alpha}{2}\sum_{k=1}^{N}\|\dal^{k}-\prop^{k-1}\|_{\Gamma^{-1}}^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z).
\label{e: lagrangeVIII D} \end{align} Moreover, since $\dal^{k+1}-\prop^{k}=\Gamma(A\prim^{k+1}-b^{\delta})$, we have \begin{align} \|\dal^{k+1}-\prop^{k}\|_{\Gamma^{-1}}^2=& \scal{\Gamma (A\prim^{k+1}-b^\delta)}{A\prim^{k+1}-b^\delta}\nonumber\\\geq&\frac{\|A\prim^{k+1}-b^\delta\|^2}{\|\Gamma^{-1}\|}\nonumber\\\geq&\frac{1}{\|\Gamma^{-1}\|}\left(\frac{\|A\prim^{k+1}-b\|^2}{2}-\|b^\delta-b\|^2\right) \label{e: Axsep D} \end{align} and from \eqref{e: lagrangeVIII D} and \eqref{e: Axsep D} we obtain \begin{align} &\sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)+\frac{\alpha}{4\|\Gamma^{-1}\|}\sum_{k=1}^{N}\|A\prim^{k}-b\|^2+\frac{\|\dal^{N}-\dal\|_{\Gamma^{-1}}^2}{2} \nonumber\\\leq& \delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+V(z^{0}-z)+\frac{\alpha N\delta^{2}}{2\|\Gamma^{-1}\|} \label{e: lagrangeV D} \end{align} From \eqref{e: lagrangeVIII D} it follows that \begin{align} \|\dal^{N}-\dal\|_{\Gamma^{-1}}^2\leq2\delta\|\Gamma^{\frac{1}{2}}\|\sum_{k=1}^{N}\|\dal^{k}-\dal\|_{\Gamma^{-1}}+2V(z^{0}-z), \label{e: U bound D} \end{align} Applying \cite[Lemma A.1]{rasch2020inexact} to \eqref{e: U bound D} with $\lambda_{k}:=2\delta\|\Gamma^{\frac{1}{2}}\|$ and $S_{N}:= 2V(z^{0}-z)$, we get \begin{align} \|\dal^{N}-\dal\|_{\Gamma^{-1}}\leq& N\|\Gamma^{\frac{1}{2}}\|\delta+\left(2 V(z^{0}-z)+\left(N\|\Gamma^{\frac{1}{2}}\|\delta\right)^2\right)^{\frac{1}{2}}\nonumber\\\leq& 2N\|\Gamma^{\frac{1}{2}}\|\delta+\left(2 V(z^{0}-z)\right)^{\frac{1}{2}}\label{e: ubound D} \end{align} Inserting the previous bound in \eqref{e: lagrangeVIII D}, we obtain \begin{align} \sum_{k=0}^{N-1}\left(\mathcal{L}(\prim^{k+1},\dal)-\mathcal{L}(\prim,\dal^{k+1})\right)\leq& 2\|\Gamma^{\frac{1}{2}}\|^{2} N^2\delta^{2}+N\|\Gamma^{\frac{1}{2}}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}+V(z^{0}-z) \label{e: lagrangeVI D} \end{align} and by \eqref{e: lagrangeV D} and \eqref{e: ubound D} we have \begin{align} \sum_{k=1}^{N}\|A\prim^{k}-b\|^2\leq&\frac{4\|\Gamma^{-1}\|}{\alpha} \left(2\|\Gamma^{\frac{1}{2}}\|^{2} N^2\delta^{2}+N\|\Gamma^{\frac{1}{2}}\|\delta\left(2V(z^{0}-z)\right)^{\frac{1}{2}}+V(z^{0}-z)+\frac{\alpha N\delta^{2}}{2\|\Gamma^{-1}\|}\right) \label{e: lagrangeVII D} \end{align} and both results follow from Jensen's inequality.\end{proof} \subsection{Proof of Lemma \ref{L: Series Parallel}}\label{LP: Series Parallel} \begin{proof} Let us first recall that the projection onto the $j$-th hyperplane of $C_{\delta}$ is given by \begin{align} P^{\delta}_{j}\colon\ \prim \mapsto \prim+\frac{b^{\delta}_{j}-\scal{a_{j}}{\prim}}{\|a_{j}\|^2}a_{j}^{*}. \end{align} Note that the $j$-th equations of $C$ and $C_{\delta}$ define parallel hyperplanes. Then, for every $j\in [d]$ and $\bar{\prim}\in C$, we get \begin{align} \|P^{\delta}_{j}\prim-\bar{\prim}\|^2=&\|P_{j}\prim-\bar{\prim}\|^2+2\scal{P_{j}\prim-\bar{\prim}}{P^{\delta}_{j}\prim-P_{j}\prim}\nonumber\\&+\|P_{j}\prim-P^{\delta}_{j}\prim\|^2\nonumber\\=&\|P_{j}\prim-\bar{\prim}\|^2+\|P_{j}\prim-P^{\delta}_{j}\prim\|^2, \label{E:Pitagoras_1} \end{align} analogously, we have that \begin{align} \|\prim-\bar{\prim}\|^2=&\|\prim-P_{j}\prim\|^2+\|P_{j}\prim-\bar{\prim}\|^2.
\label{E:Pitagoras_2} \end{align} It follows from \eqref{E:Pitagoras_1} and \eqref{E:Pitagoras_2} that \begin{align} \|P^{\delta}_{j}\prim-\bar{\prim}\|^2+\|\prim-P_{j}\prim\|^2=\|\prim-\bar{\prim}\|^2+\|P^{\delta}_{j}\prim-P_{j}\prim\|^2, \end{align} hence \begin{align} \|P^{\delta}_{j}\prim-\bar{\prim}\|^2&\leq\|\prim-\bar{\prim}\|^2+\|P^{\delta}_{j}\prim-P_{j}\prim\|^2\nonumber\\ &\leq \|\prim-\bar{\prim}\|^2+\frac{(b^{\delta}_{j}-b_{j})^2}{\|a_{j}\|^2}\nonumber\\ &\leq \|\prim-\bar{\prim}\|^2+\frac{\delta^2}{\|a_{j}\|^2}\label{P: Paralellogram} \end{align} \begin{enumerate} \item Since $T=P^{\delta}_{\beta_{l}}\circ\cdots\circ P^{\delta}_{\beta_{1}}$, it is clear that $C_{\delta}\subset\ensuremath{\operatorname{Fix}} T$, and applying \eqref{P: Paralellogram} $l$ times we get, by induction, \begin{align} \|T\prim-\bar{\prim}\|^2&\leq\|\prim-\bar{\prim}\|^2+e\delta^{2}, \end{align} where $e=\frac{l}{\min\limits_{i=1,\dots,d}\|a_i\|^2}$. \item The proof follows from the convexity of $\|\cdot\|^{2}$, and the bound is obtained with $e=\frac{1}{\min\limits_{i=1,\dots,d}\|a_i\|^2}$. \item Let $\bar{\prim}\in C$. By \eqref{d: averaged3}, we have \begin{align} \|T\prim-\bar{\prim}\|^2=&\|\prim-\bar{\prim}\|^2-2\alpha\scal{\prim-\bar{\prim}}{A^{*}(A\prim-b^\delta)}+\alpha^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\=&\|\prim-\bar{\prim}\|^2-2\alpha\scal{A\prim-b}{A\prim-b^\delta}+\alpha^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\ \leq&\|\prim-\bar{\prim}\|^2-2\alpha\scal{b^\delta-b}{A\prim-b^\delta}+\left(\alpha^2\|A\|^{2}-2\alpha\right)\|A\prim-b^{\delta}\|^2\label{e:Steepest descentalpha} \end{align} Now, using Young's inequality with parameter $2-\alpha\|A\|^2$, we have that \begin{align} \|T\prim-\bar{\prim}\|^2\leq&\|\prim-\bar{\prim}\|^2+\frac{\alpha}{2-\alpha\|A\|^2}\|b^{\delta}-b\|^2\nonumber\\\leq&\|\prim-\bar{\prim}\|^2+\frac{\alpha\delta^{2}}{2-\alpha\|A\|^2}. \end{align} It remains to prove that if $C_{\delta}\neq\emptyset$ then $C_{\delta}\subset \ensuremath{\operatorname{Fix}} T$, which is clear from \eqref{d: averaged3}. \item Let $\bar{\prim}\in C$ and $\prim\in\ensuremath{\mathbb{R}}^p$. If $A^{*}A\prim=A^{*}b^{\delta}$, then \eqref{A: pitagoras error 1} immediately holds. Otherwise, we have \begin{align} \|T\prim-\bar{\prim}\|^2=&\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{\prim-\bar{\prim}}{A^{*}(A\prim-b^\delta)}+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\=&\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{A\prim-b}{A\prim-b^\delta}+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2\nonumber\\ =&\|\prim-\bar{\prim}\|^2-2\beta(x)\scal{b^{\delta}-b}{A\prim-b^\delta}-2\beta(x)\|A\prim-b^\delta\|^2\nonumber\\&+\beta(x)^2\|A^{*}(A\prim-b^\delta)\|^2\label{e:Steepest descentbeta} \end{align} Now, using Young's inequality with parameter $2-\beta(x)\frac{\|A^{*}(A\prim-b^\delta)\|^2}{\|A\prim-b^\delta\|^2}$, we have that \begin{align} \|T\prim-\bar{\prim}\|^2\leq&\|\prim-\bar{\prim}\|^2+\frac{\beta(x)}{2-\beta(x)\frac{\|A^{*}(A\prim-b^\delta)\|^2}{\|A\prim-b^\delta\|^2}}\|b^{\delta}-b\|^2\nonumber\\\leq&\|\prim-\bar{\prim}\|^2+M\delta^{2}. \end{align} Finally, it is clear from \eqref{d: averaged4} that if $C_{\delta}\neq\emptyset$ then $C_{\delta}\subset \ensuremath{\operatorname{Fix}} T$. \end{enumerate} \end{proof} \printbibliography \end{document}
2,869,038,154,190
arxiv
\section{Introduction} In 1967, Morikazu Toda introduced a one-dimensional lattice mechanical system with exponential interactions nowadays called the Toda lattice \cite{Toda67a}. Though designed to have a periodic solution written in terms of elliptic functions \cite{Toda67b}, this nonlinear lattice was soon shown to have a solution with colliding solitons. This suggested remarkable similarity with the KdV equation, hence integrability. Integrability of the Toda lattice was established by the middle of the 1970s after the construction of exact $N$-soliton solutions \cite{Hirota73}, first integrals in involution \cite{Henon74,Flaschka74a}, Lax pairs for the inverse scattering method \cite{Flaschka74b,Manakov74} and finite-band integration of the periodic problem \cite{KvM75,DT76}. These results were extended to a system of two-dimensional relativistic fields with exponential interactions among the components of the fields. By the end of the 1970s, this 2D Toda field equation was proved to be integrable by Lie group theory \cite{LS79}, the inverse scattering method \cite{Mikhailov79,Mikhailov81} and the bilinearization method \cite{Hirota81}. In the beginning of the 1980s, a fully 3D discretization was proposed in a bilinear form along with $N$-soliton solutions \cite{Hirota81}. A description of more general solutions of this discrete system was soon presented in the language of a 2D complex free fermion system \cite{Miwa82}. The Toda hierarchies \cite{UT84} were developed as a Toda version of the KP hierarchy \cite{SS83,SW85} and its various relatives \cite{JM83}. One of their prototypes is an unpublished result of van Moerbeke that is quoted in the work of Adler \cite{Adler79}. This result explains how to construct an integrable hierarchy of Lax equations for a difference operator $\mathfrak{L}$. In particular, the integrable hierarchy for the Jacobi operator \begin{equation*} \mathfrak{L} = e^{\partial_s} + b + ce^{-\partial_s}, \end{equation*} referred to as the {\it 1D Toda hierarchy\/}, contains the equation of motion of the Toda lattice as the lowest member of the Lax equations with time variables $\boldsymbol{t} = (t_1,t_2,\ldots)$. In view of the construction of the KP hierarchy with a pseudo-differential Lax operator, it is natural to extend this construction to a ``pseudo-difference operator'' of the form \begin{equation*} L = e^{\partial_s} + u_0 + u_1e^{-\partial_s} + \cdots. \end{equation*} This extension, however, is not enough to accommodate the 2D Toda field equation. To this end, another Lax operator of the form \begin{equation*} \bar{L}^{-1} = \bar{u}_0e^{-\partial_s} + \bar{u}_1 + \bar{u}_2e^{\partial_s} + \cdots \end{equation*} has to be introduced along with another set $\bar{\boldsymbol{t}} = (\bar{t}_1,\bar{t}_2,\ldots)$ of time variables. The {\it 2D Toda hierarchy} consists of Lax equations for these two Lax operators $L,\bar{L}$ with respect to the two sets $\boldsymbol{t},\bar{\boldsymbol{t}}$ of time variables. The whole system of these Lax equations turns out to be equivalent to a system of Zakharov-Shabat equations for difference operators. The 2D Toda field equation is contained therein as the lowest member. The 2D Toda hierarchy can be reformulated as a system of bilinear equations of the Hirota form for a single tau function $\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ (in which the lattice coordinate $s$ is treated on an equal footing with the other independent variables). These bilinear equations can be cast into (and derived from) a generating functional form.
One can thereby deduce \cite{UT84} that $\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})$, up to a sign factor, coincides with the tau function of the two-component KP hierarchy with charge $(s,-s)$ \cite{DJKM81}. This leads to a fermionic formula of $\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})$. Actually, $\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ has another fermionic formula \cite{Takebe91a,Takebe91b,AZ12} that is directly related to a matrix factorization problem for solving the 2D Toda hierarchy in the Lax formalism \cite{Takasaki84}. This fermionic formula is a very powerful tool for studying various special solutions of the 2D Toda hierarchy including those that we consider in this paper. Many applications of the 2D Toda hierarchy and its relatives have been found in mathematics and mathematical physics. In the 1990s, the 1D and 2D Toda hierarchies were applied to 2D gravity \cite{GMMMO91,Martinec91,AGGJ91,KMMM92} and $c = 1$ string theory \cite{DMP93,HOP94,EK94,Takasaki95,NTT95,Takasaki96} as well as mathematical aspects of random matrices and orthogonal polynomials \cite{AvM95,KMZ96,AvM99}. This is also the place where the Ablowitz-Ladik hierarchy \cite{AL75} (aka the relativistic Toda hierarchy \cite{Ruijsenaars90}) plays a role. These studies also revealed new features of the 2D Toda hierarchy itself such as the Orlov-Schulman operators, additional symmetries and dispersionless analogues \cite{TT93,TT95}. Research on the dispersionless 2D Toda hierarchy revived later on when a relation to interface dynamics and complex analysis was pointed out \cite{WZ00,MWWZ00}. Sources of new research were discovered in the early 2000s in enumerative geometry of $\mathbb{C}\mathbb{P}^1$ and $\mathbb{C}^2$ \cite{Pandharipande02,Okounkov00,Getzler01,OP02a,OP02b,DZ04,LQW03,QW04,Milanov05} and 4D $\mathcal{N}=2$ supersymmetric gauge theories \cite{LMN03,Nekrasov02,NO03,MN06}. For example, a generating function of the double Hurwitz numbers of $\mathbb{C}\mathbb{P}^1$ was shown to be a tau function of the 2D Toda hierarchy \cite{Okounkov00}. This tau function falls into a class of special tau functions called ``hypergeometric tau functions'' that was introduced around 2000 in a quite different context \cite{OS00,OS01a,OS01b}. Intersection numbers of the Hilbert scheme of points on $\mathbb{C}^2$, too, give a hypergeometric tau function \cite{LQW03,QW04}. On the other hand, Gromov-Witten invariants of $\mathbb{C}\mathbb{P}^1$ yield a different kind of tau function \cite{OP02a,OP02b}. This paper reviews our work in the last ten years on integrable structures of the melting crystal models \cite{NT07,NT08,NT11,Takasaki13,Takasaki14}. We focus on two typical cases among these models of statistical mechanics. The first case is a statistical model of random 3D Young diagrams \cite{ORV03} (hence referred to as a ``crystal model''). Its partition function may also be thought of as the simplest instanton partition function of 5D supersymmetric gauge theories \cite{MNTT04}. The second case is a slight modification of the first case, and is related to enumerative geometry \cite{BP08} and topological string theory \cite{CGMPS06} of a Calabi-Yau threefold called the ``resolved conifold''. Our work has proved that the 1D Toda hierarchy and the Ablowitz-Ladik hierarchy underlie these two melting crystal models. Let us mention that such a relation between the resolved conifold and the Ablowitz-Ladik hierarchy was pointed out first by Brini \cite{Brini10}.
It is remarkable that these two integrable hierarchies, both of which are reductions of the 2D Toda hierarchy \cite{UT84,KMZ96,BCR11}, are also known to be the integrable structures of two typical random matrix models, namely the Hermitian and unitary random matrix models \cite{GMMMO91,Martinec91,AGGJ91,KMMM92,AvM95,KMZ96,AvM99}. Technical clues of our work are the quantum torus algebra in the fermionic formalism, special algebraic relations in this algebra referred to as ``shift symmetries'', and the matrix factorization problem in the Lax formalism. This paper is organized as follows. Section 2 is a review of the 2D Toda hierarchy formulated in the Lax and bilinear forms. Fundamental building blocks of the 2D Toda hierarchy such as the Lax operators, the dressing operators, the wave functions and the tau function are introduced along with various equations. The 1D Toda and Ablowitz-Ladik hierarchies are shown to be reductions of the 2D Toda hierarchy. The matrix factorization problem is also commented on. Section 3 is a review of the fermionic formalism of the 2D Toda hierarchy. The fermionic formula of the tau function and its relation to the matrix factorization problem are explained. Relevant combinatorial notions such as partitions, Young diagrams and the Schur functions are also introduced here. The fermionic formula is illustrated for hypergeometric tau functions, in particular, the generating function of the double Hurwitz numbers. Sections 4 and 5 are devoted to the melting crystal models. In Section 4, the two melting crystal models are introduced. The partition functions are defined as sums of the Boltzmann weights over the set of all partitions. Fermionic expressions of these partition functions are also derived. In Section 5, integrable structures of the two melting crystal models are identified. The quantum torus algebra and its shift symmetries are reviewed. With the aid of these algebraic tools, the partition functions are converted to tau functions of the 2D Toda hierarchy. The first model thus turns out to be related to the 1D Toda hierarchy. The second model is further examined in the Lax formalism, and shown to be related to the Ablowitz-Ladik hierarchy. Section 6 concludes this review. \section{2D Toda hierarchy} \subsection{Difference operators and infinite matrices} The Lax formalism of the 2D Toda hierarchy is formulated by difference operators in the lattice coordinate $s$ \cite{UT84}. These operators are linear combinations of the shift operators $e^{n\partial_s}$ (symbolically expressed as the exponential of $\partial_s = \partial/\partial s$) that act on functions of $s$ as $e^{n\partial_s}f(s) = f(s+n)$. A genuine difference operator is a finite linear combination \begin{equation*} A = \sum_{n=M}^N a_n(s)e^{n\partial_s} \quad \mbox{(operator of $[M,N]$ type)} \end{equation*} of the shift operators. To formulate the 2D Toda hierarchy, we further use semi-infinite linear combinations of the form \begin{equation*} A = \sum_{n=-\infty}^N a_n(s)e^{n\partial_s} \quad \mbox{(operator of $(-\infty,N]$ type)} \end{equation*} and \begin{equation*} A = \sum_{n=M}^\infty a_n(s)e^{n\partial_s} \quad \mbox{(operator of $[M,\infty)$ type)}. \end{equation*} These ``pseudo-difference operators'' are analogues of pseudo-differential operators in the Lax formalism of the KP hierarchy \cite{SS83,SW85}. Let $(\quad)_{\ge 0}$ and $(\quad)_{<0}$ denote the projections \begin{equation*} (A)_{\ge 0} = \sum_{n\ge 0}a_n(s)e^{n\partial_s}, \quad (A)_{<0} = \sum_{n<0}a_n(s)e^{n\partial_s}.
\end{equation*} to the $[0,\infty)$ and $(-\infty,-1]$ parts. These difference operators are also represented by $\mathbb{Z}\times\mathbb{Z}$ matrices. The shift operators $e^{n\partial_s}$ correspond to the shift matrices \begin{equation*} \Lambda^n = (\delta_{i,j-n})_{i,j\in\mathbb{Z}}. \end{equation*} The multiplication operators $a(s)$ are represented by the diagonal matrices \begin{equation*} \operatorname{diag}(a(s)) = (a(i)\delta_{ij})_{i,j\in\mathbb{Z}}. \end{equation*} Thus a general difference operator of the form \begin{equation*} A = A(s,e^{\partial_s}) = \sum_{n\in\mathbb{Z}} a_n(s)e^{n\partial_s} \end{equation*} is represented by the infinite matrix \begin{equation*} A(\Delta,\Lambda) = \sum_{n\in\mathbb{Z}}\operatorname{diag}(a_n(s))\Lambda^n = \sum_{n\in\mathbb{Z}} (a_n(i)\delta_{i,j-n})_{i,j\in\mathbb{Z}}. \end{equation*} The shift operator $e^{\partial_s}$ and the multiplication operator $s$ satisfy the twisted canonical commutation relation \begin{equation} [e^{\partial_s},s] = e^{\partial_s}. \label{tCCR} \end{equation} This commutation relation can be translated to the language of matrices as \begin{equation} [\Lambda,\Delta] = \Lambda, \end{equation} where $\Delta$ denotes the diagonal matrix \begin{equation*} \Delta = \operatorname{diag}(s) = (i\delta_{ij})_{i,j\in\mathbb{Z}} \end{equation*} that represents the multiplication operator $s$. \subsection{Lax and Zakharov-Shabat equations} The Lax formalism of the 2D Toda hierarchy uses two Lax operators $L,\bar{L}$ \footnote{In the earliest work \cite{UT84}, these Lax operators were denoted by $L,M$. These notations have been changed to $L,\bar{L}$ so as to use $M$ for the Orlov-Schulman operators. Also note that the bar $\;\bar{}\;$ of $\bar{L}$, $\bar{t}_k$, $\bar{u}_n$, etc. does not mean complex conjugation. } of type $(-\infty,1]$ and $[1,\infty)$. From the point of view of symmetry, it is better to consider $L$ and $\bar{L}^{-1}$ rather than $L$ and $\bar{L}$. These operators admit freedom of gauge transformations $L \to e^{-f}\cdot L\cdot e^f$, $\bar{L} \to e^{-f}\cdot\bar{L}\cdot e^f$. We mostly use the gauge in which the leading coefficient of $L$ is equal to $1$: \begin{equation*} \begin{aligned} L &= e^{\partial_s} + \sum_{n=1}^\infty u_ne^{(1-n)\partial_s},\\ \bar{L}^{-1} &= \bar{u}_0e^{-\partial_s} + \sum_{n=1}^\infty \bar{u}_n e^{(n-1)\partial_s}. \end{aligned} \end{equation*} The coefficients $u_n$ and $\bar{u}_n$ are functions $u_n(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ and $\bar{u}_n(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ of $s$ and the time variables $\boldsymbol{t},\bar{\boldsymbol{t}}$. To simplify notations, however, we shall frequently suppress $\boldsymbol{t}$ and $\bar{\boldsymbol{t}}$ as $u_n = u_n(s)$ and $\bar{u}_n = \bar{u}_n(s)$. $L$ and $\bar{L}$ satisfy the Lax equations \begin{equation} \begin{gathered} \frac{\partial L}{\partial t_n} = [B_n,L], \quad \frac{\partial L}{\partial\bar{t}_n} = [\bar{B}_n,L], \\ \frac{\partial\bar{L}}{\partial t_n} = [B_n,\bar{L}],\quad \frac{\partial\bar{L}}{\partial\bar{t}_n} = [\bar{B}_n,\bar{L}], \end{gathered} \label{Laxeq} \end{equation} where $B_n$ and $\bar{B}_n$ are defined as \begin{equation*} B_n = (L^n)_{\ge 0}, \quad \bar{B}_n = (\bar{L}^{-n})_{<0}.
\end{equation*} $B_n$ and $\bar{B}_n$, in turn, satisfy the Zakharov-Shabat equations \begin{equation} \begin{gathered} \frac{\partial B_n}{\partial t_m} - \frac{\partial B_m}{\partial t_n} + [B_m,B_n] = 0, \\ \frac{\partial\bar{B}_n}{\partial\bar{t}_m} - \frac{\partial\bar{B}_m}{\partial\bar{t}_n} + [\bar{B}_m,\bar{B}_n] = 0, \\ \frac{\partial\bar{B}_n}{\partial t_m} - \frac{\partial B_m}{\partial\bar{t}_n} + [B_m,\bar{B}_n] = 0. \end{gathered} \label{ZSeq} \end{equation} Actually, the Lax equations and the Zakharov-Shabat equations are equivalent \cite{UT84}. Since \begin{equation*} B_1 = e^{\partial_s} + u_1,\quad \bar{B}_1 = \bar{u}_0e^{-\partial_s}, \end{equation*} the lowest ($m = n = 1$) member of the third set of the Zakharov-Shabat equations reduces to the equations \begin{equation*} \begin{gathered} \frac{\partial u_1(s)}{\partial\bar{t}_1} + \bar{u}_0(s+1) - \bar{u}_0(s) = 0,\\ - \frac{\partial\bar{u}_0(s)}{\partial t_1} + \bar{u}_0(s)(u_1(s) - u_1(s-1)) = 0. \end{gathered} \end{equation*} Upon parametrizing $u_1$ and $\bar{u}_0$ with a new dependent variable $\phi(s) = \phi(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ as \begin{equation*} u_1(s) = \frac{\partial\phi(s)}{\partial t_1},\quad \bar{u}_0(s) = e^{\phi(s) - \phi(s-1)}, \end{equation*} these equations yield the 2D Toda field equation \begin{equation} \frac{\partial^2\phi(s)}{\partial t_1\partial\bar{t}_1} + e^{\phi(s+1) - \phi(s)} - e^{\phi(s) - \phi(s-1)} = 0. \label{2DTodaeq} \end{equation} \subsection{Dressing operators and wave functions} The Lax operators $L,\bar{L}$ can be converted to the undressed form $e^{\partial_s}$ as \begin{equation} L = We^{\partial_s}W^{-1},\quad \bar{L} = \bar{W}e^{\partial_s}\bar{W}^{-1} \label{LLbar-WWbar} \end{equation} by dressing operators of the form \begin{equation*} W = 1 + \sum_{n=1}^\infty w_ne^{-n\partial_s},\quad \bar{W} = \sum_{n=0}^\infty\bar{w}_ne^{n\partial_s},\quad \bar{w}_0 \not= 0. \end{equation*} One can further choose $W,\bar{W}$ to satisfy the Sato equations \begin{equation} \begin{gathered} \frac{\partial W}{\partial t_k} = B_kW - We^{k\partial_s},\quad \frac{\partial W}{\partial\bar{t}_k} = \bar{B}_kW,\\ \frac{\partial\bar{W}}{\partial t_k} = B_k\bar{W},\quad \frac{\partial\bar{W}}{\partial\bar{t}_k} = \bar{B}_k\bar{W} - \bar{W}e^{-k\partial_s}. \end{gathered} \label{Satoeq} \end{equation} Upon substituting the expression \begin{equation*} B_k = \left(We^{k\partial_s}W^{-1}\right)_{\ge 0},\quad \bar{B}_k = \left(\bar{W}e^{-k\partial_s}\bar{W}^{-1}\right)_{<0} \end{equation*} for $B_k$'s and $\bar{B}_k$'s, the Sato equations (\ref{Satoeq}) turn into the closed system of evolution equations \begin{equation} \begin{gathered} \frac{\partial W}{\partial t_k} = - \left(We^{k\partial_s}W^{-1}\right)_{<0}W,\quad \frac{\partial W}{\partial\bar{t}_k} = \left(\bar{W}e^{-k\partial_s}\bar{W}^{-1}\right)_{<0}W,\\ \frac{\partial\bar{W}}{\partial t_k} = \left(We^{k\partial_s}W^{-1}\right)_{\geq 0}\bar{W},\quad \frac{\partial\bar{W}}{\partial\bar{t}_k} = - \left(\bar{W}e^{-k\partial_s}\bar{W}^{-1}\right)_{\geq 0}\bar{W} \end{gathered} \label{Satoeq2} \end{equation} for $W$ and $\bar{W}$. These equations may be thought of as yet another formulation of the 2D Toda hierarchy, from which the Lax equations (\ref{Laxeq}) can be recovered through the relation (\ref{LLbar-WWbar}).
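As an elementary illustration of the dressing relation (\ref{LLbar-WWbar}), let us expand $We^{\partial_s}W^{-1}$ in powers of $e^{\partial_s}$ and compare coefficients. The terms of order zero give
\begin{equation*}
  u_1(s) = w_1(s) - w_1(s+1),
\end{equation*}
and the lower coefficients $u_2,u_3,\ldots$ are likewise difference polynomials in the $w_n$'s. In this sense the closed system (\ref{Satoeq2}) for $W,\bar{W}$ carries the same information as the Lax equations (\ref{Laxeq}) for $L,\bar{L}$.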
The dressing operators can be used to define the wave functions \begin{equation} \Psi = \left(1 + \sum_{k=1}^\infty w_kz^{-k}\right)z^se^{\xi(\boldsymbol{t},z)},\quad \bar{\Psi} = \left(\sum_{k=0}^\infty\bar{w}_kz^k\right)z^se^{\xi(\bar{\boldsymbol{t}},z^{-1})}, \end{equation} where \begin{equation*} \xi(\boldsymbol{t},z) = \sum_{k=1}^\infty t_kz^k,\quad \xi(\bar{\boldsymbol{t}},z^{-1}) = \sum_{k=1}^\infty\bar{t}_kz^{-k}. \end{equation*} The wave functions satisfy the auxiliary linear equations \begin{equation} L\Psi = z\Psi,\quad \bar{L}\bar{\Psi} = z\bar{\Psi} \label{LLbar-Lineq} \end{equation} and \begin{equation} \begin{gathered} \frac{\partial\Psi}{\partial t_k} = B_k\Psi,\quad \frac{\partial\Psi}{\partial\bar{t}_k} = \bar{B}_k\Psi,\\ \frac{\partial\bar{\Psi}}{\partial t_k} = B_k\bar{\Psi},\quad \frac{\partial\bar{\Psi}}{\partial\bar{t}_k} = \bar{B}_k\bar{\Psi}. \end{gathered} \end{equation} \subsection{Tau functions and bilinear equations} The tau function $\tau = \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ of the 2D Toda hierarchy is related to the wave functions as \footnote{These relations differ from those commonly used in the literature. We have replaced $\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ therein by $\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})$ so as to be consistent with the convention of our fermionic formalism.} \begin{equation} \begin{gathered} \Psi(s,\boldsymbol{t},\bar{\boldsymbol{t}},z) = \frac{\tau(s-1,\boldsymbol{t}-[z^{-1}],\bar{\boldsymbol{t}})}{\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})} z^se^{\xi(\boldsymbol{t},z)},\\ \bar{\Psi}(s-1,\boldsymbol{t},\bar{\boldsymbol{t}},z) = \frac{\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}-[z])}{\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})} z^se^{\xi(\bar{\boldsymbol{t}},z^{-1})}, \end{gathered} \label{Psi-tau-rel} \end{equation} where \begin{equation*} [z] = \left(z,z^2/2,\cdots,z^k/k,\cdots\right). \end{equation*} Given the pair $\Psi,\bar{\Psi}$ of wave functions, one can define the tau function as a kind of potential that satisfies these relations. The tau function satisfies an infinite number of Hirota equations. The first three members of these Hirota equations read \begin{equation} \begin{gathered} D_1\bar{D}_1\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})\cdot\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) + 2\tau(s+1,\boldsymbol{t},\bar{\boldsymbol{t}})\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}}) = 0, \\ (D_2 + D_1^2) \tau(s+1,\boldsymbol{t},\bar{\boldsymbol{t}})\cdot\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = 0, \\ (\bar{D}_2 + \bar{D}_1^2) \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})\cdot\tau(s+1,\boldsymbol{t},\bar{\boldsymbol{t}}) = 0, \end{gathered} \label{Toda-Hirotaeq} \end{equation} where we have used Hirota's notation \begin{multline*} P(D_1,D_2,\ldots,\bar{D}_1,\bar{D}_2,\ldots) f(\boldsymbol{t},\bar{\boldsymbol{t}})\cdot g(\boldsymbol{t},\bar{\boldsymbol{t}})\\ = \left. P(\partial'_1 - \partial_1,\, \partial'_2 - \partial_2,\,\ldots,\, \bar{\partial}'_1 - \bar{\partial}_1,\bar{\partial}'_2 - \bar{\partial}_2,\,\ldots) f(\boldsymbol{t}',\bar{\boldsymbol{t}}')g(\boldsymbol{t},\bar{\boldsymbol{t}})\right|_{\boldsymbol{t}'=\boldsymbol{t},\,\bar{\boldsymbol{t}}'=\bar{\boldsymbol{t}}}, \end{multline*} where $\partial_k,\partial'_k,\bar{\partial}_k,\bar{\partial}'_k$ denote the derivatives $\partial_k = \partial/\partial t_k$, $\partial'_k = \partial/\partial t'_k$, $\bar{\partial}_k = \partial/\partial\bar{t}_k$, $\bar{\partial}'_k = \partial/\partial\bar{t}'_k$.
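As an illustration of this notation, the lowest bilinear operator expands as
\begin{equation*}
  D_1\bar{D}_1\tau\cdot\tau
  = 2\left(\tau\frac{\partial^2\tau}{\partial t_1\partial\bar{t}_1}
    - \frac{\partial\tau}{\partial t_1}\frac{\partial\tau}{\partial\bar{t}_1}\right)
  = 2\tau^2\frac{\partial^2\log\tau}{\partial t_1\partial\bar{t}_1},
\end{equation*}
so that dividing the first equation of (\ref{Toda-Hirotaeq}) by $2\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})^2$ turns it into
\begin{equation*}
  \frac{\partial^2\log\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})}{\partial t_1\partial\bar{t}_1}
  = - \frac{\tau(s+1,\boldsymbol{t},\bar{\boldsymbol{t}})\,\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})}
           {\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})^2}.
\end{equation*}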
In particular, the first equation of (\ref{Toda-Hirotaeq}) amounts to the 2D Toda field equation (\ref{2DTodaeq}) upon setting $\phi(s) = \log\left(\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})/\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})\right)$. The infinite system of Hirota equations can be encoded into (and decoded from) the single bilinear equation \begin{multline} \oint\frac{dz}{2\pi i}z^{s'-s}e^{\xi(\boldsymbol{t}'-\boldsymbol{t},z)} \tau(s',\boldsymbol{t}'-[z^{-1}],\bar{\boldsymbol{t}}') \tau(s,\boldsymbol{t}+[z^{-1}],\bar{\boldsymbol{t}}) \\ = \oint\frac{dz}{2\pi i}z^{s'-s}e^{\xi(\bar{\boldsymbol{t}}'-\bar{\boldsymbol{t}},z^{-1})} \tau(s'+1,\boldsymbol{t}',\bar{\boldsymbol{t}}'-[z]) \tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}}+[z]), \label{Toda-bilin-tau} \end{multline} where the symbol $\oint$ means extracting the ``residue'' of a (formal) Laurent series: \begin{equation*} \oint\sum_{n=-\infty}^\infty\frac{dz}{2\pi i}a_nz^n = a_{-1}. \end{equation*} Analytically, this symbol on the left side of the equation is understood to be the contour integral along a sufficiently large circle $|z| = R$, and that of the right side is the contour integral along a sufficiently small circle $|z| = R^{-1}$. Various bilinear equations for the tau function can be derived from (\ref{Toda-bilin-tau}) by specialization of $\boldsymbol{t}',\bar{\boldsymbol{t}}'$ and $s'$. The Hirota equations (\ref{Toda-Hirotaeq}) are obtained by Taylor expansion of (\ref{Toda-bilin-tau}) at $\boldsymbol{t}' = \boldsymbol{t}$ and $\bar{\boldsymbol{t}}' = \bar{\boldsymbol{t}}$ to low orders upon letting $s' = s, s\pm 1$. A more systematic derivation of Hirota equations uses the polynomials $S_n(\boldsymbol{t})$, $n = 0,1,\ldots$, defined by the generating function \begin{equation} \sum_{n=0}^\infty S_n(\boldsymbol{t})z^n = \exp\left(\sum_{k=1}^\infty t_kz^k\right). \label{S_n} \end{equation} These polynomials are building blocks of the Schur functions as well (we refer to Macdonald's book \cite{Mac-book} for the notions of the Schur functions, partitions and Young diagrams). Thus a complete set of Hirota equations can be obtained in the generating functional form \begin{multline} \sum_{n=0}^\infty S_n(-2\boldsymbol{a})S_{n+s'-s+1}(\tilde{D}_{\boldsymbol{t}}) e^{\langle\boldsymbol{a},D_{\boldsymbol{t}}\rangle + \langle\bar{\boldsymbol{a}},D_{\bar{\boldsymbol{t}}}\rangle} \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})\cdot\tau(s',\boldsymbol{t},\bar{\boldsymbol{t}}) \\ = \sum_{n=0}^\infty S_n(-2\bar{\boldsymbol{a}}) S_{n-s'+s-1}(\tilde{D}_{\bar{\boldsymbol{t}}}) e^{\langle\boldsymbol{a},D_{\boldsymbol{t}}\rangle + \langle\bar{\boldsymbol{a}},D_{\bar{\boldsymbol{t}}}\rangle} \tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})\cdot\tau(s'+1,\boldsymbol{t},\bar{\boldsymbol{t}}), \end{multline} where $\boldsymbol{a} = (a_1,a_2,\ldots)$ and $\bar{\boldsymbol{a}} = (\bar{a}_1,\bar{a}_2,\ldots)$ are auxiliary variables, $\langle\boldsymbol{a},D_{\boldsymbol{t}}\rangle$ and $\langle\bar{\boldsymbol{a}},D_{\bar{\boldsymbol{t}}}\rangle$ are the linear combinations \begin{equation*} \langle\boldsymbol{a},D_{\boldsymbol{t}}\rangle = \sum_{k=1}^\infty a_kD_k,\quad \langle\bar{\boldsymbol{a}},D_{\bar{\boldsymbol{t}}}\rangle = \sum_{k=1}^\infty \bar{a}_k\bar{D}_k \end{equation*} of $D_k$'s and $\bar{D}_k$'s, and $S_n(\tilde{D}_{\boldsymbol{t}})$ and $S_n(\tilde{D}_{\bar{\boldsymbol{t}}})$ are defined by substituting for the variables $\boldsymbol{t}$ of $S_n(\boldsymbol{t})$ the Hirota bilinear operators \begin{equation*} \tilde{D}_{\boldsymbol{t}} = (D_1,D_2/2,\ldots,D_k/k,\ldots), \quad \tilde{D}_{\bar{\boldsymbol{t}}} = (\bar{D}_1,\bar{D}_2/2,\ldots,\bar{D}_k/k,\ldots).
\end{equation*} \subsection{Orlov-Schulman operators} Following the idea of Orlov and Schulman \cite{OS86}, one can introduce a Toda version of the Orlov-Schulman operator of the KP hierarchy. Actually, we need two Orlov-Schulman operators of the form \begin{equation*} \begin{gathered} M = \sum_{k=1}^\infty kt_kL^k + s + \sum_{n=1}^\infty v_n L^{-n},\\ \bar{M} = - \sum_{k=1}^\infty k\bar{t}_k \bar{L}^{-k} + s + \sum_{n=1}^\infty \bar{v}_n \bar{L}^n, \end{gathered} \end{equation*} where $v_n$ and $\bar{v}_n$ are new dependent variables. These operators are defined in terms of the dressing operators as \begin{equation} \begin{gathered} M = W\left(s + \sum_{k=1}^\infty kt_ke^{k\partial_s}\right)W^{-1},\\ \bar{M} = \bar{W}\left(s - \sum_{k=1}^\infty k\bar{t}_ke^{-k\partial_s}\right)\bar{W}^{-1}, \end{gathered} \end{equation} and satisfy the Lax equations \begin{equation} \begin{gathered} \frac{\partial M}{\partial t_n} = [B_n,M], \quad \frac{\partial M}{\partial\bar{t}_n} = [\bar{B}_n,M], \\ \frac{\partial\bar{M}}{\partial t_n} = [B_n,\bar{M}],\quad \frac{\partial\bar{M}}{\partial\bar{t}_n} = [\bar{B}_n,\bar{M}] \end{gathered} \end{equation} of the same form as the Lax equations (\ref{Laxeq}) for $L,\bar{L}$. Moreover, the twisted canonical commutation relations \begin{equation} [L,M] = L, \quad [\bar{L},\bar{M}] = \bar{L} \end{equation} are satisfied as a result of the commutation relation (\ref{tCCR}) of $e^{\partial_s}$ and $s$. These equations form an extended Lax formalism of the 2D Toda hierarchy. One can thereby formulate additional symmetries of $W_{1+\infty}$ type \cite{TT93,TT95}. These additional symmetries play a central role in the so called ``string equations'' for various special solutions \cite{DMP93,HOP94,EK94,Takasaki95,NTT95,Takasaki96,Takasaki12}. Moreover, general solutions of the 2D Toda hierarchy, too, can be captured by the generalization \begin{equation} L = f(\bar{L},\bar{M}),\quad M = g(\bar{L},\bar{M}) \label{gstreq} \end{equation} of those string equations \cite{TT95,TT12}. \subsection{Two reductions of 2D Toda hierarchy} \subsubsection{1D Toda hierarchy} The 1D Toda hierarchy is a reduction of the 2D Toda hierarchy in which all dynamical variables depend on the time variables $\boldsymbol{t},\bar{\boldsymbol{t}}$ through the difference $\boldsymbol{t} - \bar{\boldsymbol{t}}$. In the Lax formalism, the 1D reduction can be achieved by imposing the condition \footnote{In the earliest work \cite{UT84}, a condition of the form $L + L^{-1} = \bar{L} + \bar{L}^{-1}$ is proposed for the 1D reduction. This condition is related to the structure of soliton solutions of the Toda lattice \cite{Hirota73,Flaschka74b}.} \begin{equation} L = \bar{L}^{-1}. \label{1D-LLbar} \end{equation} Both sides of this equation become a difference operator of the form \begin{equation} \mathfrak{L} = e^{\partial_s} + b + ce^{-\partial_s}, \quad b = u_1,\quad c = \bar{u}_0, \label{1D-redL} \end{equation} which satisfies the Lax equations \begin{equation*} \frac{\partial\mathfrak{L}}{\partial t_k} = [B_k,\mathfrak{L}], \quad \frac{\partial\mathfrak{L}}{\partial\bar{t}_k} = [\bar{B}_k,\mathfrak{L}]. 
\end{equation*} Since (\ref{1D-LLbar}) implies that $B_k$, $\bar{B}_k$ and $\mathfrak{L}$ are linearly related as \begin{equation} B_k + \bar{B}_k = \mathfrak{L}^k, \end{equation} the time evolutions with respect to $\boldsymbol{t}$ and $\bar{\boldsymbol{t}}$ are also linearly related as \begin{equation*} \frac{\partial\mathfrak{L}}{\partial t_k} + \frac{\partial\mathfrak{L}}{\partial\bar{t}_k} = [B_k,\mathfrak{L}] + [\bar{B}_k,\mathfrak{L}] = 0. \end{equation*} Thus the reduced system has just one set of independent Lax equations \begin{equation*} \frac{\partial\mathfrak{L}}{\partial t_k} = [B_k,\mathfrak{L}], \quad B_k = (\mathfrak{L}^k)_{\ge 0}. \end{equation*} \subsubsection{Ablowitz-Ladik hierarchy} The reduction to the Ablowitz-Ladik hierarchy is a kind of ``rational reduction'' \cite{BCR11}. This is achieved by assuming that $L$ and $\bar{L}^{-1}$ are quotients \begin{equation} L = BC^{-1},\quad \bar{L}^{-1} = CB^{-1} \label{AL-LLbar} \end{equation} of two difference operators of the form \begin{equation*} B = e^{\partial_s} - b,\quad C = 1 - ce^{-\partial_s}. \end{equation*} $B^{-1}$ and $C^{-1}$ are understood to be difference operators of type $[0,\infty)$ and $(-\infty,0]$. More explicitly, \begin{equation*} \begin{gathered} B^{-1} = - \sum_{k=0}^\infty (b^{-1}e^{\partial_s})^k b^{-1} = - b(s)^{-1} - \sum_{k=1}^\infty b(s)^{-1}\cdots b(s+k)^{-1}e^{k\partial_s},\\ C^{-1} = 1 + \sum_{k=1}^\infty (ce^{-\partial_s})^k = 1 + \sum_{k=1}^\infty c(s)c(s-1)\cdots c(s-k+1)e^{-k\partial_s}, \end{gathered} \end{equation*} where $b(s)$ and $c(s)$ are abbreviations of $c(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ and $c(s,\boldsymbol{t},\bar{\boldsymbol{t}})$. Under this interpretation, $CB^{-1}$ is not the inverse of $BC^{-1}$. Thus trivial situation where $L = \bar{L} = e^{\partial_s}$ can be avoided. The Lax equations (\ref{Laxeq}) of the 2D Toda hierarchy can be reduced to (and derived from) the equations \begin{equation} \begin{gathered} \frac{\partial B}{\partial t_k} = \left((BC^{-1})^k\right)_{\ge 0}B - B\left((C^{-1}B)^k\right)_{\ge 0},\\ \frac{\partial C}{\partial t_k} = \left((BC^{-1})^k\right)_{\ge 0}C - C\left((C^{-1}B)^k\right)_{\ge 0},\\ \frac{\partial B}{\partial\bar{t}_k} = \left((CB^{-1})^k\right)_{<0}B - B\left((B^{-1}C)^k\right)_{<0},\\ \frac{\partial C}{\partial\bar{t}_k} = \left((CB^{-1})^k\right)_{<0}C - C\left((B^{-1}C)^k\right)_{<0}. \end{gathered} \label{BCeq} \end{equation} Note that this is a closed system of evolution equations for $B$ and $C$. This implies that the reduced form (\ref{AL-LLbar}) of $L$ and $\bar{L}^{-1}$ is preserved by the time evolutions of the 2D Toda hierarchy. The reduction condition to the Ablowitz-Ladik hierarchy can be reformulated in the alternative form \begin{equation} L = \tilde{C}^{-1}\tilde{B},\quad \bar{L}^{-1} = \tilde{B}^{-1}\tilde{C}, \label{AL-LLbar2} \end{equation} where $\tilde{B}$ and $\tilde{C}$ are difference operators of the form \begin{equation*} \tilde{B} = e^{\partial_s} - \tilde{b},\quad \tilde{C} = 1 - \tilde{c}e^{-\partial_s}. \end{equation*} Just like $B^{-1}$ and $C^{-1}$ in (\ref{AL-LLbar}), $\tilde{B}^{-1}$ and $\tilde{C}^{-1}$ are understood to be difference operators of type $[0,\infty)$ and $(-\infty,0]$. 
The Lax equations (\ref{Laxeq}) of the 2D Toda hierarchy can be reduced to the equations \begin{equation} \begin{gathered} \frac{\partial\tilde{B}}{\partial t_k} = \left((\tilde{B}\tilde{C}^{-1})^k\right)_{\ge 0}\tilde{B} - \tilde{B}\left(\tilde{C}^{-1}\tilde{B})^k\right)_{\ge 0},\\ \frac{\partial\tilde{C}}{\partial t_k} = \left((\tilde{B}\tilde{C}^{-1})^k\right)_{\ge 0}\tilde{C} - \tilde{C}\left((\tilde{C}^{-1}\tilde{B})^k\right)_{\ge 0},\\ \frac{\partial\tilde{B}}{\partial\bar{t}_k} = \left((\tilde{C}\tilde{B}^{-1})^k\right)_{<0}\tilde{B} - \tilde{B}\left((\tilde{B}^{-1}\tilde{C})^k\right)_{<0},\\ \frac{\partial\tilde{C}}{\partial\bar{t}_k} = \left((\tilde{C}\tilde{B}^{-1})^k\right)_{<0}\tilde{C} - \tilde{C}\left((\tilde{B}^{-1}\tilde{C})^k\right)_{<0} \end{gathered} \end{equation} for these operators as well. The second reduction condition (\ref{AL-LLbar2}) is directly related to an auxiliary linear problem of the relativistic Toda hierarchy \cite{Ruijsenaars90}. If the Lax operators are factorized in that form, the linear equations (\ref{LLbar-Lineq}) for the wave functions can be converted to the ``generalized eigenvalue problem'' \begin{equation} \tilde{B}\Psi = z\tilde{C}\Psi, \quad \tilde{B}\bar{\Psi} = z\tilde{C}\bar{\Psi}. \end{equation} A generalized eigenvalue problem of this form is used in Bruschi and Ragnisco's scalar-valued Lax formalism \cite{BR89} of the relativistic Toda lattice. Moreover, as pointed out by Kharchev et al. \cite{KMZ96}, this generalized eigenvalue problem can be derived from the traditional $2\times 2$ matrix-valued Lax formalism \cite{AL75} of the Ablowitz-Ladik hierarchy. \subsection{Matrix factorization problem} General solutions of the 2D Toda hierarchy can be captured by a factorization problem \cite{Takasaki84} of the form \begin{equation} \exp\left(\sum_{k=1}^\infty t_k\Lambda^k\right)U \exp\left(- \sum_{k=1}^\infty\bar{t}_k\Lambda^{-k}\right) = W^{-1}\bar{W}, \label{factor} \end{equation} where $U$ is a given (invertible) constant $\mathbb{Z}\times\mathbb{Z}$ matrix. The problem is to find two $\mathbb{Z}\times\mathbb{Z}$ matrices $W = W(\boldsymbol{t},\bar{\boldsymbol{t}})$ and $\bar{W} = \bar{W}(\boldsymbol{t},\bar{\boldsymbol{t}})$ that are triangular matrices of the form \begin{equation*} W = 1 + \sum_{n=1}^\infty \operatorname{diag}(w_n(s))\Lambda^{-n},\quad \bar{W} = \sum_{n=0}^\infty\operatorname{diag}(\bar{w}_n(s))\Lambda^n,\quad \bar{w}_0 \not= 0. \end{equation*} Note that $W$ and $\bar{W}$ amount to the dressing operators of the last section by the correspondence \begin{equation*} A(s,e^{\partial_s}) = \sum_{n\in\mathbb{Z}}a_n(s)e^{n\partial_s} \;\longleftrightarrow\; A(\Delta,\Lambda) = \sum_{n\in\mathbb{Z}}\operatorname{diag}(a_n(s))_{s\in\mathbb{Z}}\Lambda^n \end{equation*} of difference operators and $\mathbb{Z}\times\mathbb{Z}$ matrices. Since $W$ and $\bar{W}$ are lower and upper triangular matrices, the factorization problem (\ref{factor}) is an infinite dimensional analogue of the Gauss decomposition for finite matrices. If $W$ and $\bar{W}$ satisfy the factorization problem (\ref{factor}), one can readily derive the equations \begin{equation} \begin{gathered} \frac{\partial W}{\partial t_k}W^{-1} + W\Lambda^kW^{-1} = \frac{\partial\bar{W}}{\partial t_k}\bar{W}^{-1},\\ \frac{\partial W}{\partial\bar{t}_k}W^{-1} = \frac{\partial\bar{W}}{\partial\bar{t}_k}\bar{W}^{-1} + \bar{W}\Lambda^{-k}\bar{W}^{-1}. 
\end{gathered} \end{equation} Splitting these equations to the $(\quad)_{\ge 0}$ and $(\quad)_{<0}$ parts, one can see that these equations are equivalent to the Sato equations (\ref{Satoeq2}). Thus the factorization problem yields a solution of the 2D Toda hierarchy. In analogy with the procedure of the Gauss decomposition for finite matrices, one can express the matrix elements of $W$ and $\bar{W}$ as quotients of semi-infinite minors of \begin{equation} U(\boldsymbol{t},\bar{\boldsymbol{t}}) = \left(U_{ij}(\boldsymbol{t},\bar{\boldsymbol{t}})\right)_{i,j\in\mathbb{Z}} = \exp\left(\sum_{k=1}^\infty t_k\Lambda^k\right)U \exp\left(- \sum_{k=1}^\infty\bar{t}_k\Lambda^{-k}\right). \label{U(t,tbar)} \end{equation} The common denominator of these quotients is a principal minor of $U(\boldsymbol{t},\bar{\boldsymbol{t}})$, and can be identified with the tau function: \footnote{This is a place where the aforementioned modification of the definition of $\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ affects the outcome. In the earlier literature, the right hand side of this formula is the minor for $i,j < s$ rather than $i,j \leq s$.} \begin{equation} \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \det(U_{ij}(\boldsymbol{t},\bar{\boldsymbol{t}}))_{i,j\leq s}. \label{tau=det} \end{equation} The determinant expression of the matrix elements of $W$ and $\bar{W}$ reproduces the generating functional expression \begin{equation*} \begin{gathered} 1 + \sum_{n=1}^\infty w_nz^{-n} = \frac{\tau(s-1,\boldsymbol{t}-[z^{-1}],\boldsymbol{t})}{\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})},\\ \sum_{n=0}^\infty\bar{w}_nz^n = \frac{\tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}-[z])}{\tau(s-1,\boldsymbol{t},\bar{\boldsymbol{t}})} \end{gathered} \end{equation*} of $w_n$'s and $\bar{w}_n$'s, which implies the relation (\ref{Psi-tau-rel}). These formal computations can be justified rigorously \cite{Takasaki84} in the case where $U$ is given by the quotient \begin{equation} U = W_0^{-1}\bar{W}_0 \end{equation} of two triangular matrices of the same form as $W$ and $\bar{W}$. In this case, $W_0$ and $\bar{W}_0$ can be identified with the initial values of $W$ and $\bar{W}$: \begin{equation} W_0 = W|_{\boldsymbol{t}=\bar{\boldsymbol{t}}=\boldsymbol{0}}, \quad \bar{W}_0 = \bar{W}|_{\boldsymbol{t}=\bar{\boldsymbol{t}}=\boldsymbol{0}}. \end{equation} In other words, the factorization problem (\ref{factor}) in this setup solves the initial value problem of the Sato equations (\ref{Satoeq2}). The determinant formula (\ref{tau=det}) has many implications. First, this is an analogue of the determinant formula of the tau functions of the KP hierarchy. Since $U$ is an element of the ``group'' $\mathrm{GL}(\infty)$ \footnote{This notation is used here in a loose sense and not intended to denote a true group.}, it is $\mathrm{GL}(\infty)$ itself that plays the role of the infinite-dimensional Grassmann manifold in the case of the KP hierarchy. More precisely, the true phase space lies in the product of two flag manifolds in which $W$ and $\bar{W}$ live. Second, the generating matrix $U$ is related to the generalized string equations (\ref{gstreq}). These equations are a consequence of the algebraic relations \begin{equation} \Lambda U = Uf(\Delta,\Lambda),\quad \Delta U = Ug(\Delta,\Lambda) \end{equation} satisfied by $\Lambda$, $\Delta$ and $U$. Third, the determinant formula (\ref{tau=det}) can be translated to the language of a 2D complex free fermion system. Let us turn to this fermionic formalism of the 2D Toda hierarchy. 
\section{Fermionic formalism} \subsection{Complex free fermion system} Let \begin{equation*} \psi(z) = \sum_{n\in\mathbb{Z}} \psi_nz^{-n-1}, \quad \psi^*(z) = \sum_{n\in\mathbb{Z}} \psi^*_nz^{-n} \end{equation*} denote the conjugate pair of 2D complex free fermion fields. For convenience, we use integers rather than half-integers for the labels of Fourier modes $\psi_n,\psi^*_n$. The Fourier modes satisfy the anti-commutation relations \begin{equation*} \psi_m\psi^*_n + \psi^*_n\psi_m = \delta_{m+n,0}, \quad \psi_m\psi_n + \psi_n\psi_m = 0,\quad \psi^*_m\psi^*_n + \psi^*_n\psi^*_m = 0. \end{equation*} $\psi_i$'s and $\psi^*_i$'s are understood to be linear operators on the fermionic Fock spaces. They act on the Fock space $\mathcal{H}$ from the left side and on its dual space $\mathcal{H}^*$ from the right side. These Fock spaces are decomposed to charge-$s$ sectors $\mathcal{H}_s,\mathcal{H}^*_s$, $s \in \mathbb{Z}$. Let $\langle s|$ and $|s\rangle$ denote the ground states in $\mathcal{H}_s$ and $\mathcal{H}^*_s$: \footnote{The shift of $s$ in (\ref{Psi-tau-rel}) and (\ref{tau=det}) is related to this definition of the ground states.} \begin{equation*} \langle s| = \langle-\infty|\cdots\psi^*_{s-1}\psi^*_{s},\quad |s\rangle = \psi_{-s}\psi_{-s+1}\cdots|-\infty\rangle. \end{equation*} Excited states are labelled by partitions $\lambda = (\lambda_1,\lambda_2,\cdots,\lambda_n,0,0,\cdots)$, $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$, of arbitrary length as \begin{equation*} \begin{gathered} \langle\lambda,s| = \langle s|\psi_{-s}\cdots\psi_{n-1-s} \psi^*_{\lambda_n-n+1+s}\cdots\psi^*_{\lambda_1+s},\\ |\lambda,s\rangle = \psi_{-\lambda_1-s}\cdots\psi_{-\lambda_n+n-1-s} \psi^*_{-n+1+s}\cdots\psi^*_{s}|s\rangle. \end{gathered} \end{equation*} $\langle s|$ and $|s\rangle$ are identified with $\langle\emptyset,s|$ and $|\emptyset,s\rangle$. $|\lambda,s\rangle$ and $\langle\lambda,s|$ represent a state in which the semi-infinite subset $\{\lambda_i-i+1+s\}_{i=1}^\infty$ of the set $\mathbb{Z}$ of all energy levels are occupied by particles. These vectors form dual bases of $\mathcal{H}_s$ and $\mathcal{H}^*_s$: \begin{equation} \langle\lambda,r|\mu,s\rangle = \delta_{\lambda\mu}\delta_{rs}. \end{equation} The normal ordered fermion bilinears \begin{equation*} {:}\psi_{-i}\psi^*_j{:} = \psi_{-i}\psi^*_j - \langle 0|\psi_{-i}\psi^*_j|0\rangle, \quad i,j \in \mathbb{Z}, \end{equation*} where \begin{equation*} \langle 0|\psi_{-i}\psi^*_j|0\rangle = \begin{cases} 1 & \text{if $i = j \leq 0$},\\ 0 & \text{otherwise}, \end{cases} \end{equation*} span the one-dimensional central extension $\widehat{\mathrm{gl}}(\infty)$ of the Lie algebra $\mathrm{gl}(\infty)$ of $\mathbb{Z}\times\mathbb{Z}$ matrices \cite{Kac-book,MJD-book}. $\mathrm{gl}(\infty)$ consists of infinite matrices $A = (a_{ij})_{i,j\in\mathbb{Z}}$ that correspond to difference operators of finite type (i.e., of $[M,N]$-type for a pair of integers $M,N$ that can depend on $A$). 
For such a matrix $A \in \mathrm{gl}(\infty)$, the fermion bilinear \begin{equation*} \widehat{A} = \sum_{i,j\in\mathbb{Z}}a_{ij}{:}\psi_{-i}\psi^*_j{:} \end{equation*} becomes a well-defined linear operator on the Fock space, and preserves the charge in the sense that \begin{equation} \langle\lambda,r|\widehat{A}|\mu,s\rangle = 0 \quad\text{if $r \not= s$.} \end{equation} The elements of $\widehat{\mathrm{gl}}(\infty)$ satisfy the commutation relation \begin{equation} [\widehat{A},\widehat{B}] = \widehat{[A,B]} + \gamma(A,B) \end{equation} with the $c$-number cocycle \begin{equation} \gamma(A,B) = \sum_{i>0,j\leq 0}(a_{ij}b_{ji} - b_{ij}a_{ji}). \end{equation} \subsection{Vertex operators and Schur functions} We here introduce the special fermion bilinears \begin{equation*} J_m = \widehat{\Lambda^m} = \sum_{n\in\mathbb{Z}}{:}\psi_{m-n}\psi^*_n{:}, \quad m \in \mathbb{Z}, \end{equation*} which satisfy the commutation relations \begin{equation} [J_m, J_n] = m\delta_{m+n} \label{[J,J]} \end{equation} of the Heisenberg algebra. These operators are used to construct vertex operators. The matrix elements of such a vertex operator with respect to the vectors $\langle\lambda,s|$ and $|\mu,s\rangle$ are related to the Schur and skew Schur functions. Actually, there are two different types of vertex operators that correspond to different formulations of these functions. Vertex operators of the first type are given by the product \begin{equation*} \Gamma_{\pm}(\boldsymbol{x}) = \prod_{i\ge 1}\Gamma_{\pm}(x_i),\quad \boldsymbol{x} = (x_1,x_2,\ldots), \end{equation*} of the elementary vertex operators \begin{equation*} \Gamma_{\pm}(x) = \exp\left(\sum_{k=1}^\infty\frac{x^k}{k}J_{\pm k}\right). \end{equation*} The matrix elements of these operators are the skew Schur functions $s_{\lambda/\mu}(\boldsymbol{x})$ in the sense of symmetric functions of $\boldsymbol{x}$ \cite{ORV03}: \begin{equation} \langle\lambda,s|\Gamma_{-}(\boldsymbol{x})|\mu,s\rangle = \langle\mu,s|\Gamma_{+}(\boldsymbol{x})|\lambda,s\rangle = s_{\lambda/\mu}(\boldsymbol{x}). \end{equation} In particular, if $\mu = \emptyset$, the matrix elements become the Schur functions $s_\lambda(\boldsymbol{x})$: \begin{equation} \langle\lambda,s|\Gamma_{-}(\boldsymbol{x})|s\rangle = \langle s|\Gamma_{+}(\boldsymbol{x})|\lambda,s\rangle = s_\lambda(\boldsymbol{x}). \end{equation} Vertex operators of the second type are defined as \begin{equation*} \gamma_{\pm}(\boldsymbol{t}) = \exp\left(\sum_{k=1}^\infty t_kJ_{\pm k}\right). \end{equation*} It is these operators $\gamma_{\pm}(\boldsymbol{t})$ that are commonly used in the fermionic formula of tau functions of the KP and 2D Toda hierarchies \cite{MJD-book,AZ12}. The matrix elements of $\gamma_{\pm}(\boldsymbol{t})$ are the skew Schur functions $S_{\lambda/\mu}(\boldsymbol{t})$ of the $\boldsymbol{t}$-variables: \begin{equation} \langle\lambda,s|\gamma_{-}(\boldsymbol{t})|\mu,s\rangle = \langle\mu,s|\gamma_{+}(\boldsymbol{t})|\lambda,s\rangle = S_{\lambda/\mu}(\boldsymbol{t}). \end{equation} These functions are defined by the determinant formula \begin{equation} S_{\lambda/\mu}(\boldsymbol{t}) = \det(S_{\lambda_i-\mu_j-i+j}(\boldsymbol{t}))_{i,j=1}^n \label{skewSchur=det} \end{equation} for partitions of the form $\lambda = (\lambda_1,\ldots,\lambda_n,0,0,\ldots)$, $\mu = (\mu_1,\ldots,\mu_n,0,0,\ldots)$. $S_n(\boldsymbol{t})$'s are the polynomials defined by the generating function (\ref{S_n}). 
$\gamma_{\pm}(\boldsymbol{t})$ can be converted to $\Gamma_{\pm}(\boldsymbol{x})$ by substituting \begin{equation} t_k = \frac{1}{k}\sum_{i\geq 1}x_i^k. \end{equation} By the same transformation of variables, the polynomials $S_n(\boldsymbol{t})$ in $\boldsymbol{t}$ turn into the homogeneous symmetric function $h_n(\boldsymbol{x})$ of $\boldsymbol{x}$. The determinant formula (\ref{skewSchur=det}) of $S_{\lambda/\mu}(\boldsymbol{t})$ thereby reproduces the Jacobi-Trudi formula of $s_{\lambda/\mu}(\boldsymbol{x})$. \subsection{Fermionic formula of tau functions} In terms of the foregoing vertex operators, the fermionic formula of Toda tau functions \cite{Takebe91a,Takebe91b,AZ12} reads \footnote{A prototype of this formula can be found in the work of Jimbo and Miwa \cite{JM83}.} \begin{equation} \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \langle s|\gamma_{+}(\boldsymbol{t})g\gamma_{-}(-\bar{\boldsymbol{t}})|s\rangle, \label{tau=<..>} \end{equation} where $g$ is an element of the ``group'' $\widehat{\mathrm{GL}}(\infty)$ \footnote{This notation, too, is used here in a loose sense just like $\mathrm{GL}(\infty)$.} of Clifford operators (typically, the exponential $e^{\hat{A}}$ of a fermion bilinear $\hat{A}$) \cite{Kac-book,MJD-book}. Such a Clifford operator induces a linear transformation on the linear span of $\psi_i$'s and $\psi^*_i$'s by the adjoint action: \begin{equation} g\psi_jg^{-1} = \sum_{i\in\mathbb{Z}}\psi_iU_{ij},\quad g\psi^*_jg^{-1} = \sum_{i\in\mathbb{Z}}\psi^*_i\tilde{U}_{ij}. \end{equation} The coefficients $U_{ij}$ and $\tilde{U}_{ij}$ satisfy the orthogonality condition \begin{equation} \sum_{k\in\mathbb{Z}}U_{ik}\tilde{U}_{jk} = \sum_{k\in\mathbb{Z}}U_{ki}\tilde{U}_{kj} = \delta_{ij}. \end{equation} The fermionic formula (\ref{tau=<..>}) corresponds to the determinant formula (\ref{tau=det}) of the factorization problem (\ref{factor}) for the matrix $U = (U_{ij})_{i,j\in\mathbb{Z}}$. An immediate consequence of (\ref{tau=<..>}) is the Schur function expansion \begin{equation} \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \sum_{\lambda,\mu\in\mathcal{P}}\langle\lambda,s|g|\mu,s\rangle S_\lambda(\boldsymbol{t})S_\mu(-\bar{\boldsymbol{t}}), \end{equation} where $\mathcal{P}$ denotes the set of all partitions. This expansion is obtained by inserting the partition of unity \begin{equation} 1 = \sum_{\lambda\in\mathcal{P},s\in\mathbb{Z}}|\lambda,s\rangle\langle\lambda,s| \label{unity} \end{equation} to the two places among $\gamma_{+}(\boldsymbol{t})$, $g$ and $\gamma_{-}(-\bar{\boldsymbol{t}})$. This amounts to applying the Cauchy-Binet formula to the determinant formula (\ref{tau=det}). The three factors $\langle\lambda,s|g|\mu,s\rangle$, $S_\lambda(\boldsymbol{t})$, $S_\mu(-\bar{\boldsymbol{t}})$ may be thought of as minors of the three matrices on the right side of (\ref{U(t,tbar)}). As regards $S_\lambda(\boldsymbol{t})$ and $S_\mu(-\bar{\boldsymbol{t}})$, this is indeed a consequence of the special case \begin{equation} S_\lambda(\boldsymbol{t}) = \det(S_{\lambda_i-i+j}(\boldsymbol{t}))_{i,j=1}^n \end{equation} of the determinant formula (\ref{skewSchur=det}). \subsection{Hypergeometric tau functions} Let us illustrate the fermionic formula (\ref{tau=<..>}) in the case of hypergeometric tau functions \cite{OS00,OS01a,OS01b}. This is the case where the generating operator $g$ of the tau function (\ref{tau=<..>}) corresponds to a diagonal matrix in $\mathrm{GL}(\infty)$. Let $U = (e^{T_i}\delta_{ij})_{i,j\in\mathbb{Z}}$ be such a diagonal matrix. 
The associated generating operator can be expressed as \begin{equation} g = \exp\left(\sum_{n\in\mathbb{Z}}T_n{:}\psi_{-n}\psi^*_n{:}\right). \label{HG-g} \end{equation} This operator, too, is diagonal with respect to the basis $\{|\lambda,s\rangle\}_{\lambda \in \mathcal{P},s \in \mathbb{Z}}$, of the Fock space. Thus the tau function becomes a single sum over all partitions: \begin{equation} \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \sum_{\lambda\in\mathcal{P}}\langle\lambda,s|g|\lambda,s\rangle S_\lambda(\boldsymbol{t})S_\lambda(-\bar{\boldsymbol{t}}). \label{HG-tau} \end{equation} The diagonal elements of $g$ takes the so called ``contents product'' form: \begin{equation} \langle\lambda,s|g|\lambda,s\rangle = \langle s|g|s\rangle\prod_{(i,j)\in\lambda}r_{j-i+1+s}, \end{equation} where $(i,j)\in\lambda$ means that $(i,j)$ runs over the cells of the Young diagram of shape $\lambda$, and $r_n$'s are defined as \begin{equation*} r_n = e^{T_n - T_{n-1}} \end{equation*} The $\lambda$-independent factor $\langle s|g|s\rangle$ can be expressed as \begin{equation} \langle s|g|s\rangle = \frac{\prod_{i=1}^\infty e^{T_{-i+1+s}}}{\prod_{i=1}^\infty e^{T_{-i+1}}} = \begin{cases} \prod_{i=1}^s e^{T_{-i+1+s}} & \text{if $s > 0$},\\ 1 & \text{if $s = 0$},\\ \prod_{i=1}^{-s}e^{-T_{-i+1}} & \text{if $s < 0$}. \end{cases} \label{<s|g|s>} \end{equation} These tau functions are called ``hypergeometric'' after the work of Orlov and Scherbin \cite{OS00,OS01a,OS01b}, because their work aimed at applications to multivariate hypergeometric functions. Actually, specialization of the parameters $\{T_n\}_{n\in\mathbb{Z}}$ yields a variety of examples other than hypergeometric functions. Earliest examples of these tau functions can be found in the studies of random matrix models \cite{KMMM93,Orlov02a,Orlov02b,OS05} and $c = 1$ string theory \cite{DMP93,NTT95,Takasaki96}. Another source of examples is enumerative geometry of $\mathbb{C}\mathbb{P}^1$ and $\mathbb{C}^2$ \cite{Okounkov00,LQW03,QW04}. Recent researches of this class of tau functions are focussed on the double Hurwitz numbers \cite{Takasaki12,Alexandrov11,AMMN12,HO15} and their variants \cite{GPH14a,GPH14b,Harnad1410,Harnad1504}. Let us briefly recall the tau function of the double Hurwitz numbers \cite{Okounkov00}. The generating operator takes such a form as \begin{equation} g = Q^{L_0}e^{\beta K/2}, \end{equation} where $Q$ and $\beta$ are constants, and $L_0$ and $K$ are the special fermion bilinears \begin{equation*} \begin{gathered} L_0 = \widehat{\Delta} = \sum_{n\in\mathbb{Z}}n{:}\psi_{-n}\psi^*_n{:},\\ K = \widehat{(\Delta-1/2)^2} = \sum_{n\in\mathbb{Z}}(n - 1/2)^2{:}\psi_{-n}\psi^*_n{:}. \end{gathered} \end{equation*} The diagonal matrix elements of these fermion bilinears can be computed as follows: \begin{equation} \begin{gathered} \langle\lambda,s|L_0|\lambda,s\rangle =|\lambda| + \frac{s(s+1)}{2},\\ \langle\lambda,s|K|\lambda,s\rangle = \kappa(\lambda) + 2s|\lambda| + \frac{4s^3-s}{12}, \end{gathered} \label{<L0>,<K>} \end{equation} where \begin{equation*} \begin{gathered} |\lambda| = \sum_{i=1}^\infty\lambda_i,\quad \kappa(\lambda) = \sum_{i=1}^\infty\lambda_i(\lambda_i - 2i + 1). 
\end{gathered} \end{equation*} This implies that \begin{equation*} \begin{gathered} \langle\lambda,s|Q^{L_0}|\lambda,s\rangle = Q^{|\lambda| + s(s+1)/2},\\ \langle\lambda,s|e^{\beta K/2}|\lambda,s\rangle = e^{\beta(\kappa(\lambda)/2 + s|\lambda| + (4s^3-s)/24)}, \end{gathered} \end{equation*} Consequently, the tau function has a Schur function expansion of the form \begin{equation} \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \sum_{\lambda\in\mathcal{P}}Q^{|\lambda|+s(s+1)/2} e^{\beta(\kappa(\lambda)/2 + s|\lambda| + (4s^3-s)/24)} S_\lambda(\boldsymbol{t})S_\lambda(-\bar{\boldsymbol{t}}). \label{2Hurwitz-tau} \end{equation} Its specialization \begin{equation} \tau(0,\boldsymbol{t},\bar{\boldsymbol{t}}) = \sum_{\lambda\in\mathcal{P}}Q^{|\lambda|}e^{\beta\kappa(\lambda)/2} S_\lambda(\boldsymbol{t})S_\lambda(-\bar{\boldsymbol{t}}) \end{equation} to $s = 0$ is a genuine generating function of the double Hurwitz numbers. Further specialization to $\bar{\boldsymbol{t}} = (-1,0,0,\ldots)$ becomes a generating function of the single Hurwitz numbers. The special value of the second Schur function at this point can be computed by the combinatorial formula \cite{Mac-book} \begin{equation} S_\lambda(1,0,0,\ldots) = \frac{\dim\lambda}{|\lambda|!} = \prod_{(i,j)\in\lambda}h(i,j)^{-1}, \label{hook-formula} \end{equation} where $h(i,j)$ is the hook length of the cell $(i,j)$ in the Young diagram of shape $\lambda$, and $\dim\lambda$ is the number of standard tableau therein (i.e., the dimension of the associated irreducible representation of the symmetric group $S_N$, $N = |\lambda|$). The doubly specialized tau function \begin{equation} \tau(0,\boldsymbol{t},-1,0,0,\ldots) = \sum_{\lambda\in\mathcal{P}}\frac{\dim\lambda}{|\lambda|!} Q^{|\lambda|}e^{\beta\kappa(\lambda)/2}S_\lambda(\boldsymbol{t}) \end{equation} reproduces a generating function of the single Hurwitz numbers. Note that this is a tau function of the KP hierarchy with the fermionic expression \begin{equation} \tau(0,\boldsymbol{t},-1,0,0,\ldots) = \langle 0|\gamma_{+}(\boldsymbol{t})Q^{L_0}e^{\beta K/2}e^{J_{-1}}|0\rangle. \end{equation} \section{Melting crystal models} \subsection{Statistical model of random 3D Young diagrams} The simplest melting crystal model \cite{ORV03} has a single parameter $q$ in the range $0 < q < 1$ (or just a formal variable). The partition function is the sum \begin{equation} Z = \sum_{\pi\in\mathcal{P}\calP} q^{|\pi|} \label{Z=PPsum} \end{equation} of the Boltzmann weight $q^{|\pi|}$ over the set $\mathcal{P}\calP$ of all plane partitions. The plane partition \begin{equation*} \pi = (\pi_{ij})_{i,j=1}^\infty = \begin{pmatrix} \pi_{11} & \pi_{12} & \cdots \\ \pi_{21} & \pi_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}, \quad \pi_{i+1,j} \leq \pi_{ij} \geq \pi_{i,j+1}, \end{equation*} represent a 3D Young diagram in the first octant of the $xyz$-space. $\pi_{ij}$ is the height of the stacks of unit cubes on the unit square $[i-1,i]\times[j-1,j]$ of the $xy$-plane. $|\pi|$ denotes the volume of the 3D Young diagram, i.e., \begin{equation*} |\pi| = \sum_{i,j=1}^\infty \pi_{ij}. \end{equation*} By the method of diagonal slicing \cite{ORV03}, the sum (\ref{Z=PPsum}) over the set of plane partitions can be converted to the sum \begin{equation} Z = \sum_{\lambda\in\mathcal{P}}s_\lambda(q^{-\rho})^2 \label{Z=Psum} \end{equation} over the set of ordinary partitions. 
The building block $s_\lambda(q^{-\rho})$ of the Boltzmann weight is the special value of the infinite-variate Schur function $s_\lambda(\boldsymbol{x})$ at \begin{equation*} \boldsymbol{x} = q^{-\rho} = (q^{1/2},q^{3/2},\ldots,q^{i-1/2},\ldots). \end{equation*} This is a kind of ``principal specialization'' of $s_\lambda(\boldsymbol{x})$ \cite{Mac-book}, and can be computed by the hook-length formula \begin{equation} s_\lambda(q^{-\rho}) = \frac{q^{-\kappa(\lambda)/4}} {\prod_{(i,j)\in\lambda}(q^{-h(i,j)/2} - q^{h(i,j)/2})}. \label{q-hook-formula} \end{equation} Note that this formula is a $q$-analogue of (\ref{hook-formula}) for $S_\lambda(1,0,0,\ldots)$. Let us introduce another parameter $Q$, a discrete variable $s \in \mathbb{Z}$ and an infinite number of continuous variables $\boldsymbol{t} = (t_1,t_2,\ldots)$, and deform (\ref{Z=Psum}) as \begin{equation} Z(s,\boldsymbol{t}) = \sum_{\lambda\in\mathcal{P}}s_\lambda(q^{-\rho})^2 Q^{|\lambda|+s(s+1)/2}e^{\phi(\lambda,s,\boldsymbol{t})}. \label{Z(s)=Psum} \end{equation} $Q^{|\lambda|+s(s+1)/2}$ is the same factor as inserted in the tau function (\ref{2Hurwitz-tau}) of the double Hurwitz numbers. $\phi(\lambda,s,\boldsymbol{t})$ is a linear combination \begin{equation*} \phi(\lambda,s,\boldsymbol{t}) = \sum_{k=1}^\infty t_k\phi_k(\lambda,s) \end{equation*} of the external potentials \begin{equation} \phi_k(\lambda,s) = \sum_{i=1}^\infty\left(q^{k(\lambda_i-i+1+s)} - q^{k(-i+1+s)}\right) + \frac{1 - q^{ks}}{1 - q^k}q^s, \label{phi_k} \end{equation} and $t_k$'s play the role of coupling constants of these potentials. Note that the sum on the right hand side of (\ref{phi_k}) is a finite sum, because only a finite number of $\lambda_i$'s are non-zero. (\ref{Z(s)=Psum}) is related to 5D $\mathcal{N} = 1$ supersymmetric $U(1)$ Yang-Mills theory \cite{MNTT04}. The external potentials represent the contribution of Wilson loops along the fifth dimension therein \cite{NT07}. Let us mention that these external potentials are obtained from the apparently divergent (as far as $|q| < 1$) expression \begin{equation} \phi_k(\lambda,s) = \sum_{i=1}^\infty q^{k(\lambda_i-i+1+s)} - \sum_{i=1}^\infty q^{k(-i+1)} \label{phi_k2} \end{equation} by recombination of terms as \begin{equation*} \phi_k(\lambda,s) = \sum_{i=1}^\infty\left(q^{k(\lambda_i-i+1+s)} - q^{k(-i+1+s)}\right) + \sum_{i=1}^\infty q^{k(-i+1+s)} - \sum_{i=1}^\infty q^{k(-i+1)}. \end{equation*} The difference of the last two sums, too, thereby becomes a finite sum: \begin{equation*} \sum_{i=1}^\infty q^{k(-i+1+s)} - \sum_{i=1}^\infty q^{k(-i+1)} \\ = \left\{\begin{matrix} \sum_{i=1}^s q^{k(-i+1+s)} & \text{if $s > 0$}\\ 0 & \text{if $s = 0$}\\ - \sum_{i=1}^s q^{k(-i+1)} & \text{if $s < 0$} \end{matrix}\right\} = \frac{1 - q^{ks}}{1 - q^k}q^s. \end{equation*} A similar prescription is used in the computation (\ref{<s|g|s>}) of the factor $\langle s|g|s\rangle$ in hypergeometric tau functions. These computations are related to normal ordering of fermion bilinears. It is this deformed partition function $Z(s,\boldsymbol{t})$ that is shown to be related to the 1D Toda hierarchy. To this end, we use a fermionic expression of $Z(s,\boldsymbol{t})$. Before showing this expression, let us present another melting crystal model. 
\subsection{Modified melting crystal model} The second model is obtained by replacing the main part of the Boltzmann weight as \begin{equation*} s_\lambda(q^{-\rho})^2 \;\longrightarrow\; s_\lambda(q^{-\rho})s_{\tp{\lambda}}(q^{-\rho}), \end{equation*} where $\tp{\lambda}$ denotes the conjugate (or transposed) partition of $\lambda$. Namely, in place of (\ref{Z=Psum}) or its $Q$-deformed version \begin{equation} Z = \sum_{\lambda\in\mathcal{P}}s_\lambda(q^{-\rho})^2Q^{|\lambda|}, \label{Z(Q)=Psum} \end{equation} we here consider the modified partition function \begin{equation} Z' = \sum_{\lambda\in\mathcal{P}}s_\lambda(q^{-\rho})s_{\tp{\lambda}}(q^{-\rho})Q^{|\lambda|} \end{equation} and its deformations by external potentials. In view of the relation \begin{equation*} s_{\tp{\lambda}}(q^{-\rho}) = q^{\kappa(\lambda)/2}s_\lambda(q^{-\rho}) \end{equation*} that can be derived from (\ref{q-hook-formula}), one can rewrite $Z'$ as \begin{equation} Z' = \sum_{\lambda\in\mathcal{P}}s_\lambda(q^{-\rho})^2 q^{\kappa(\lambda)/2}Q^{|\lambda|}. \label{Z'(Q)=Psum} \end{equation} These partition functions originate in Gromov-Witten/topological string theory of special local Calabi-Yau threefolds called ``local $\mathbb{C}\mathbb{P}^1$ geometry'' \cite{BP08,CGMPS06}. In particular, $Z'$ is related to the ``resolved conifold'', for which Brini pointed out a relation to the Ablowitz-Ladik hierarchy \cite{Brini10}. Let us mention that one can use the homogeneity \begin{equation*} s_\lambda(Qx_1,Qx_2,\ldots) = Q^{|\lambda|}s_\lambda(x_1,x_2,\ldots) \end{equation*} and the Cauchy identities \begin{equation*} \begin{gathered} \sum_{\lambda\in\mathcal{P}}s_\lambda(x_1,x_2,\ldots)s_\lambda(y_1,y_2,\ldots) = \prod_{i,j\geq 1}(1 - x_iy_j)^{-1},\\ \sum_{\lambda\in\mathcal{P}}s_\lambda(x_1,x_2,\ldots)s_{\tp{\lambda}}(y_1,y_2,\ldots) = \prod_{i,j\geq 1}(1 + x_iy_j) \end{gathered} \end{equation*} of the Schur functions to convert these partition functions to an infinite product form: \begin{equation} \begin{gathered} Z = \prod_{i,j=1}^\infty (1 - Qq^{i+j-1})^{-1} = \prod_{n=1}^\infty (1 - Qq^n)^{-n}, \\ Z' = \prod_{i,j=1}^\infty (1 + Qq^{i+j-1}) = \prod_{n=1}^\infty (1 + Qq^n)^{-n}. \end{gathered} \end{equation} These functions are referred to as the ``MacMahon function'' in the literature of combinatorics and mathematical physics. We deform $Z'$ by two sets of external potentials $\phi_{\pm k}(\lambda,s)$, $k = 1,2,\ldots$, with coupling constants $\boldsymbol{t} = (t_1,t_2,\ldots)$ and $\bar{\boldsymbol{t}} = (\bar{t}_1,\bar{t}_2,\ldots)$ as \begin{equation} \begin{gathered} Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \sum_{\lambda\in\mathcal{P}}s_\lambda(q^{-\rho})s_{\tp{\lambda}}(q^{-\rho}) Q^{|\lambda|+s(s+1)/2}e^{\phi(\lambda,s,\boldsymbol{t},\bar{\boldsymbol{t}})},\\ \phi(\lambda,s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \sum_{k=1}^\infty t_k\phi_k(\lambda,s) + \sum_{k=1}^\infty\bar{t}_k\phi_{-k}(\lambda,s). \end{gathered} \label{Z'(s)=Psum} \end{equation} $\phi_{-k}(\lambda,s)$'s are defined by the same formula as (\ref{phi_k}) with $k$ replaced by $-k$. As it turns out, $\boldsymbol{t}$ and $\bar{\boldsymbol{t}}$ correspond to the two sets of time variables of the 2D Toda hierarchy. \subsection{Fermionic expression of partition functions} To translate $Z(s,\boldsymbol{t})$ and $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ to the language of the complex free fermion system, we need some more operators on the Fock space. 
Let us introduce the new fermion bilinears \begin{equation} H_k = \widehat{q^{k\Delta}} = \sum_{n\in\mathbb{Z}}q^{kn}{:}\psi_{-n}\psi^*_n{:}, \quad k \in \mathbb{Z}. \label{H_k} \end{equation} These operators are diagonal with respect to the basis $\{|\lambda,s\rangle\}_{\lambda\in\mathcal{P},s\in\mathbb{Z}}$, and the matrix elements are nothing but the external potentials $\phi_k(\lambda,s)$: \begin{equation} \langle\lambda,s|H_k|\lambda,s\rangle = \phi_k(\lambda,s). \end{equation} This explains the origin of the formal expression (\ref{phi_k2}) and its interpretation (\ref{phi_k}). The exponential factors in (\ref{Z(s)=Psum}) and (\ref{Z'(s)=Psum}) can be thereby expressed as \begin{equation*} e^{\phi(\lambda,s,\boldsymbol{t})} = \langle\lambda,s|e^{H(\boldsymbol{t})}|\lambda,s\rangle,\quad e^{\phi(\lambda,s,\boldsymbol{t},\bar{\boldsymbol{t}})} = \langle\lambda,s|e^{H(\boldsymbol{t},\bar{\boldsymbol{t}})}|\lambda,s\rangle, \end{equation*} where \begin{equation*} H(\boldsymbol{t}) = \sum_{k=1}^\infty t_kH_k, \quad H(\boldsymbol{t},\bar{\boldsymbol{t}}) = \sum_{k=1}^\infty t_kH_k + \sum_{k=1}^\infty\bar{t}_kH_{-k}. \end{equation*} The other building blocks of $Z(s,\boldsymbol{t})$ are similar to those of the tau function (\ref{2Hurwitz-tau}) of the double Hurwitz numbers: \begin{equation*} \begin{gathered} s_\lambda(q^{-\rho}) = \langle s|\Gamma_{+}(q^{-\rho})|\lambda,s\rangle = \langle\lambda,s|\Gamma_{-}(q^{-\rho})|s\rangle,\\ Q^{|\lambda|+s(s+1)/2} = \langle\lambda,s|Q^{L_0}|\lambda,s\rangle. \end{gathered} \end{equation*} These building bocks are glued together by the partition of unity (\ref{unity}) to construct the following fermionic formula of $Z(s,\boldsymbol{t})$: \begin{equation} Z(s,\boldsymbol{t}) = \langle s|\Gamma_{+}(q^{-\rho})Q^{L_0} e^{H(\boldsymbol{t})}\Gamma_{-}(q^{-\rho})|s\rangle \label{Z(s)=<..>} \end{equation} To derive a similar fermionic formula of $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$, we use the following variants of $\Gamma_{\pm}(\boldsymbol{x})$ \cite{YB08}: \begin{equation*} \begin{gathered} \Gamma'_{\pm}(\boldsymbol{x}) = \prod_{i\ge 1}\Gamma'_{\pm}(x_i), \quad \boldsymbol{x} = (x_1,x_2,\ldots), \\ \Gamma'_{\pm}(z) = \exp\left(- \sum_{k=1}^\infty\frac{(-z)^k}{k}J_{\pm k}\right). \end{gathered} \end{equation*} The matrix elements of these modified vertex operators, too, are related to the skew Schur functions except that they are labelled by conjugate partitions: \begin{equation} \langle\lambda,s|\Gamma'_{-}(\boldsymbol{x})|\mu,s\rangle = \langle\mu,s|\Gamma'_{+}(\boldsymbol{x})|\lambda,s\rangle = s_{\tp{\lambda}/\tp{\mu}}(\boldsymbol{x}) \end{equation} Thus the following fermionic formula of $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ can be obtained in the same way as the case of $Z(s,\boldsymbol{t})$: \begin{equation} Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \langle s|\Gamma_{+}(q^{-\rho})Q^{L_0} e^{H(\boldsymbol{t},\bar{\boldsymbol{t}})}\Gamma'_{-}(q^{-\rho})|s\rangle. \label{Z'(s)=<..>} \end{equation} These fermionic formulae resemble the fermionic expression of the stationary Gromov-Witten invariants of $\mathbb{C}\mathbb{P}^1$ \cite{OP02a,OP02b} and the instanton partition functions of 4D $\mathcal{N}=2$ supersymmetric gauge theories \cite{LMN03,Nekrasov02,NO03,MN06}. We use these formulae to show that $Z(s,\boldsymbol{t})$ and $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ are related to tau functions of the 2D Toda hierarchy. 
\section{Integrable structures of melting crystal models} \subsection{Quantum torus algebra and shift symmetries} Although the fermionic formulae (\ref{Z(s)=<..>}), (\ref{Z'(s)=<..>}) of the partition functions of the melting crystal mode resemble the fermionic formula (\ref{tau=<..>}) of Toda tau functions, they have manifestly different structures. In particular, it is $H_k$'s rather than $J_k$'s that generate deformations of the partition functions. We use special algebraic relations connecting $H_k$'s and $J_k$'s to convert the partition functions to Toda tau functions. These algebraic relations, referred to as ``shift symmetries'', are formulated in the language of a subalgebra in $\widehat{\mathrm{gl}}(\infty)$. This subalgebra is spanned by the fermion bilinears \begin{equation*} V^{(k)}_m = q^{-km/2}\widehat{\Lambda^mq^{k\Delta}} = q^{-km/2}\sum_{n\in\mathbb{Z}}q^{kn}{:}\psi_{m-n}\psi^*_{n}{:}, \quad k,m \in \mathbb{Z}. \end{equation*} This is substantially the same fermionic realization of the quantum torus algebra that are used in the work of Okounkov and Pandharipande on $\mathbb{C}\mathbb{P}^1$ Gromov-Witten theory \cite{OP02a,OP02b}. $V^{(k)}_m$'s satisfy the commutation relations \begin{equation} [V^{(k)}_m, V^{(l)}_n] = (q^{(lm-kn)/2} - q^{(kn-lm)/2}) \left(V^{(k+l)}_{m+n} - \frac{q^{k+l}}{1-q^{k+l}}\delta_{m+n,0}\right) \end{equation} for $k$ and $l$ with $k + l \not= 0$ and \begin{equation} [V^{(k)}_m,V^{(-k)}_n] = (q^{-k(m+n)}-q^{k(m+n)})V^{(0)}_{m+n} + m\delta_{m+n,0}. \end{equation} $H_k$'s and $J_k$'s are particular elements among $V^{(k)}_m$'s: \begin{equation*} H_k = V^{(k)}_0, \quad J_k = V^{(0)}_k. \end{equation*} We have the following three types of shift symmetries \cite{NT07,NT08,Takasaki13}: \begin{itemize} \item[(i)]For $k > 0$ and $m \in \mathbb{Z}$, \begin{multline} \Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}) \left(V^{(k)}_m - \frac{q^k}{1-q^k}\delta_{m,0}\right) \\ = (-1)^k\left(V^{(k)}_{m+k} - \frac{q^k}{1-q^k}\delta_{m+k,0}\right) \Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}). \label{SSi} \end{multline} \item[(ii)] For $k > 0$ and $m \in \mathbb{Z}$, \begin{multline} \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho}) \left(V^{(-k)}_m + \frac{1}{1-q^k}\delta_{m,0}\right) \\ = \left(V^{(-k)}_{m+k} + \frac{1}{1-q^k}\delta_{m+k,0}\right) \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho}). \label{SSii} \end{multline} \item[(iii)] For $k,m \in \mathbb{Z}$, \begin{equation} V^{(k)}_mq^{K/2} = q^{-m/2}q^{K/2}V^{(k+m)}_m. \label{SSiii} \end{equation} \end{itemize} Note that the indices of $V^{(k)}_m$'s are literally shifted after exchanging the order of operator product with $\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})$, $\Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})$ and $q^{K/2}$. In the earlier work \cite{NT07,NT08,Takasaki13}, we used the slightly different fermion bilinear \begin{equation*} W_0 = \sum_{n\in\mathbb{Z}} n{:}\psi_{-n}\psi^*_n{:} \end{equation*} and the algebraic relation \begin{equation*} V^{(k)}_mq^{W_0/2} = q^{W_0/2}V^{(k+m)}_m \end{equation*} in place of $K$ and (\ref{SSiii}). This difference does not affect the essential part of the whole story. \subsection{$Z(s,\boldsymbol{t})$ as tau function} Let us explain how to convert the partition function $Z(s,\boldsymbol{t})$ of the first melting crystal model to a tau function of the 1D Toda hierarchy with the aid of the foregoing shift symmetries \cite{NT07,NT08}. 
The first step is to insert apparently redundant operators among $\langle s|$, $|s\rangle$ and the operator product in between: \begin{multline} Z(s,\boldsymbol{t}) = q^{-(4s^3-s)/12} \langle s|q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})e^{H(\boldsymbol{t})}\\ \quad\mbox{}\times Q^{L_0}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})q^{K/2}|s\rangle. \label{Z(s)-step1} \end{multline} This is based on the identities \begin{equation*} \begin{gathered} \langle s|q^{K/2} = q^{(4s^3-s)/24}\langle s|,\quad \langle s|\Gamma_{-}(q^{-\rho}) = \langle s|,\\ q^{K/2}|s\rangle = q^{(4s^3-s)/24}|s\rangle,\quad \Gamma_{+}(q^{-\rho})|s\rangle = |s\rangle \end{gathered} \end{equation*} that can be derived from (\ref{<L0>,<K>}) and the fact that $\langle s|J_{-k} = 0$ and $J_k|s\rangle = 0$ for $k > 0$. Also note that the order of $Q^{L_0}$ and $e^{H(\boldsymbol{t})}$, which are commutative, is reversed. The second step is to apply the shift symmetries. The first set (\ref{SSi}) of shift symmetries, specialized to $m = 0$ and $k > 0$, yields the identity \begin{equation*} \Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}) \left(H_k - \frac{q^k}{1-q^k}\right) = (-1)^kV^{(k)}_k\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}) \end{equation*} that connects $V^{(k)}_0 = H_k$ and $V^{(k)}_k$. The third set (\ref{SSiii}) of shift symmetries imply the relation \begin{equation*} V^{(k)}_k = q^{k/2}q^{-K/2}J_kq^{K/2} \end{equation*} between $V^{(k)}_k$ and $V^{(0)}_k = J_k$. Thus $H_k - q^k/(1-q^k)$ and $J_k$ turn out to satisfy the intertwining relation \begin{equation*} q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}) \left(H_k - \frac{q^k}{1-q^k}\right) = (-q^{1/2})^kJ_kq^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}). \end{equation*} This relation can be exponentiated as \begin{multline*} q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}) \exp\left(\sum_{k=1}^\infty t_k(H_k - \frac{q^k}{1-q^k})\right) \\ = \exp\left(\sum_{k=1}^\infty (-q^{1/2})^kt_kJ_k\right) q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}). \end{multline*} We can thus rewrite the first half of the operator product in (\ref{Z(s)-step1}) as \begin{multline} q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})e^{H(\boldsymbol{t})} = \exp\left(\sum_{k=1}^\infty \frac{q^kt_k}{1-q^k}\right)\\ \mbox{}\times \exp\left(\sum_{k=1}^\infty (-q^{1/2})^kt_kJ_k\right) q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}). \label{Z(s)-step2} \end{multline} Plugging (\ref{Z(s)-step2}) into (\ref{Z(s)-step1}), we obtain the following expression of $Z(s,\boldsymbol{t})$: \begin{multline} Z(s,\boldsymbol{t}) = q^{-(4s^3-s)/12} \exp\left(\sum_{k=1}^\infty \frac{q^kt_k}{1-q^k}\right)\\ \quad \mbox{} \times \langle s|\exp\left(\sum_{k=1}^\infty (-q^{1/2})^kt_kJ_k\right) g|s\rangle, \label{Z(s)=tau1} \end{multline} where \begin{equation} g = q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}) Q^{L_0}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})q^{K/2}. \label{mcm-g} \end{equation} Let us note that this expression is slightly different from the one presented in the previous papers \cite{NT07,NT08}, because we use $K$ in place of $W_0$ in (\ref{Z(s)-step1}). In much the same way, moving $e^{H(\boldsymbol{t})}$ to the right of $\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})q^{K/2}$ in (\ref{Z(s)-step1}), we can derive another expression of $Z(s,\boldsymbol{t})$: \begin{multline} Z(s,\boldsymbol{t}) = q^{-(4s^3-s)/12} \exp\left(\sum_{k=1}^\infty \frac{q^kt_k}{1-q^k}\right)\\ \quad \mbox{} \times \langle s|g \exp\left(\sum_{k=1}^\infty (-q^{1/2})^kt_kJ_{-k}\right)|s\rangle. 
\label{Z(s)=tau2} \end{multline} Actually, as one can show with the aid of the shift symmetries, the operator (\ref{mcm-g}) connects $J_k$'s and $J_{-k}$'s as \begin{equation} J_k g = g J_{-k}, \quad k = 1,2,\ldots. \label{mcm-g-symmetry} \end{equation} This explains why $Z(s,\boldsymbol{t})$ has the two apparently different expressions (\ref{Z(s)=tau1}) and (\ref{Z(s)=tau2}). Apart from the prefactors and the rescaling $t_k \to (-q^{1/2})^kt_k$ of the time variables, the essential part of the right side of (\ref{Z(s)=tau1}) and (\ref{Z(s)=tau2}) is the function \begin{equation} \tau(s,\boldsymbol{t}) = \langle s|\gamma_{+}(\boldsymbol{t})g|s\rangle = \langle s|g\gamma_{-}(\boldsymbol{t})|s\rangle. \label{1Dtau=<..>} \end{equation} By the symmetry (\ref{mcm-g-symmetry}) of $g$, the associated 2D Toda tau function reduces to this function: \begin{equation} \tau(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \langle s|\gamma_{+}(\boldsymbol{t})g\gamma_{-}(-\bar{\boldsymbol{t}})|s\rangle = \tau(s,\boldsymbol{t} - \bar{\boldsymbol{t}}). \label{2D->1Dtau} \end{equation} This means that $\tau(s,\boldsymbol{t})$ is a tau function of the 1D Toda hierarchy. \begin{remark} The exponential functions in (\ref{Z(s)=tau1}) and (\ref{Z(s)=tau2}) can be absorbed by redefinition of the tau function replacing \begin{equation*} g \to \tilde{g} = \exp\left(\sum_{k=1}^\infty\frac{(-q^{1/2})^k}{k(1-q^k)}J_{-k}\right) g\exp\left(\sum_{k=1}^\infty\frac{(-q^{1/2})^k}{k(1-q^k)}J_k\right). \end{equation*} This is a consequence of the identities \begin{multline*} \exp\left(\sum_{k=1}^\infty \frac{q^kt_k}{1-q^k}\right) \exp\left(\sum_{k=1}^\infty(-q^{1/2})^kt_kJ_k\right)\\ = \exp\left(\sum_{k=1}^\infty(-q^{1/2})^kt_kJ_k\right) \exp\left(\sum_{k=1}^\infty\frac{(-q^{1/2})^k}{k(1-q^k)}J_{-k}\right), \end{multline*} \begin{multline*} \exp\left(\sum_{k=1}^\infty \frac{q^kt_k}{1-q^k}\right) \exp\left(\sum_{k=1}^\infty(-q^{1/2})^kt_kJ_{-k}\right)\\ = \exp\left(\sum_{k=1}^\infty\frac{(-q^{1/2})^k}{k(1-q^k)}J_k\right) \exp\left(\sum_{k=1}^\infty(-q^{1/2})^kt_kJ_{-k}\right) \end{multline*} that can be deduced from the commutation relations (\ref{[J,J]}) of $J_{\pm k}$'s. Note that the new generating operator $\tilde{g}$, too, satisfies the 1D reduction condition \begin{equation*} J_k\tilde{g} = \tilde{g}J_{-k}, \quad k = 1,2,\ldots. \end{equation*} It is also remarkable that the two operators in the transformation $g \to \tilde{g}$ are related to the vertex operators: \begin{equation*} \exp\left(\sum_{k=1}^\infty\frac{(-q^{1/2})^k}{k(1-q^k)}J_{\pm k}\right) = \Gamma'_{\pm}(q^{-\rho})^{-1}. \end{equation*} \end{remark} \begin{remark} There is an another way to avoid the exponential factors in (\ref{Z(s)=tau1}) and (\ref{Z(s)=tau2}). These factors disappear if the external potentials $\phi_k(\lambda)$ are modified as \begin{equation} \phi_k(\lambda,s) = \sum_{i=1}^\infty\left(q^{k(\lambda_i-i+1+s)} - q^{k(-i+1+s)}\right) - \frac{q^{ks}}{1 - q^k}q^s, \label{phi_k-mod} \end{equation} namely, if the constant term $q^k/(1 - q^k)$ is subtracted from $\phi_k(\lambda)$. This amounts to modifying the definition (\ref{H_k}) of $H_k$ as \begin{equation} H_k = \widehat{q^{k\Delta}} - \frac{q^k}{1 - q^k}. \label{H_k-mod} \end{equation} The foregoing computations with the aid of the shift symmetries, too, can be slightly simplified by this redefinition of $H_k$'s. Note that the prefactor $q^{-(4s^3-s)/12}$ cannot be removed by this modification. 
\end{remark} \subsection{$Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ as tau function} The partition function $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ of the second melting crystal model can be treated in a parallel manner. Let us show an outline of the computations \cite{Takasaki13}. The first step is to rewrite the fermionic expression (\ref{Z'(s)=<..>}) as follows: \begin{multline} Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \langle s|q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})e^{H(\boldsymbol{t})}\\ \mbox{}\times Q^{L_0}e^{\bar{H}(\bar{\boldsymbol{t}})} \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})q^{-K/2}|s\rangle, \label{Z'(s)-step1} \end{multline} where \begin{equation*} \bar{H}(\bar{\boldsymbol{t}}) = \sum_{k=1}^\infty\bar{t}_kJ_{-k}. \end{equation*} Note that we have split $e^{H(\boldsymbol{t},\bar{\boldsymbol{t}})}$ into $e^{H(\boldsymbol{t})}$ and $e^{\bar{H}(\bar{\boldsymbol{t}})}$, and inserted $\Gamma'_{+}(q^{-\rho})q^{-K/2}$ in place of $\Gamma_{+}(q^{-\rho})q^{K/2}$ to the right end of the operator product. The second step is to transfer $e^{H(\boldsymbol{t})}$ and $e^{\bar{H}(\bar{\boldsymbol{t}})}$ to the left and right ends, respectively, with the aid of the shift symmetries. Computations for $e^{H(\boldsymbol{t})}$ are exactly the same as the case of $Z(s,\boldsymbol{t})$. To transfer $e^{\bar{H}(\bar{\boldsymbol{t}})}$, we combine the shift symmetries of the second type (\ref{SSii}) and the third type (\ref{SSiii}). This yields the relation \begin{equation*} \left(H_{-k} + \frac{1}{1-q^k}\right) \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})q^{-K/2} = \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})q^{-K/2}J_{-k} \end{equation*} connecting $H_{-k} + 1/(1-q^k)$ and $J_{-k}$. Exponentiating this relation, we obtain the following counterpart of (\ref{Z'(s)-step2}): \begin{multline} e^{\bar{H}{(\bar{\boldsymbol{t}})}} \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})q^{-K/2} = \exp\left(- \sum_{k=1}^\infty\frac{\bar{t}_k}{1-q^k}\right)\\ \mbox{}\times \Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})q^{-K/2} \exp\left(\sum_{k=1}^\infty q^{-k/2}\bar{t}_kJ_{-k}\right). \label{Z'(s)-step2} \end{multline} Plugging (\ref{Z(s)-step2}) and (\ref{Z'(s)-step2}) into (\ref{Z'(s)-step1}), we can rewrite $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ as \begin{multline} Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}}) = \exp\left(\sum_{k=1}^\infty\frac{q^kt_k-\bar{t}_k}{1-q^k}\right)\\ \mbox{}\times \langle s|\exp\left(\sum_{k=1}^\infty (-q^{1/2})^kt_kJ_k\right) g\exp\left(\sum_{k=1}^\infty q^{-k/2}\bar{t}_kJ_{-k}\right)|s\rangle, \label{Z'(s)=tau} \end{multline} where \begin{equation} g = q^{K/2}\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho}) Q^{L_0}\Gamma'_{-}(q^{-\rho})\Gamma'_{+}(q^{-\rho})q^{-K/2}. \label{mcm'-g} \end{equation} Thus, apart from the exponential prefactor and the rescaling $t_k \to (-q^{1/2})^kt_k$, $\bar{t}_k \to q^{-k/2}\bar{t}_k$ of the time variables, $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$ is a tau function of the 2D Toda hierarchy generated by the operator (\ref{mcm'-g}). One can find no symmetry like (\ref{mcm-g-symmetry}) for the generating operator (\ref{mcm'-g}). The associated tau function is a genuine 2D Toda tau function. Actually, this special solution of the 2D Toda hierarchy falls into the Ablowitz-Ladik hierarchy \cite{Takasaki13}. 
\begin{remark} The exponential prefactor in (\ref{Z'(s)=tau}) can be absorbed by replacing \begin{equation*} g \to \tilde{g} = \exp\left(\sum_{k=1}^\infty\frac{(-q^{1/2})^k}{k(1-q^k)}J_{-k}\right) g\exp\left(- \sum_{k=1}^\infty\frac{q^{k/2}}{k(1-q^k)}J_k\right). \end{equation*} Alternatively, one can remove this prefactor by subtracting the constant terms $q^{\pm k}/(1 - q^{\pm k})$ from the external potentials $\phi_{\pm k}(\lambda)$ as shown in (\ref{phi_k-mod}). The operators $H_{\pm k}$ are accordingly modified as shown in (\ref{H_k-mod}). \end{remark} \subsection{Shift symmetries in matrix formalism} We here turn to a digression on the quantum torus algebra and the shift symmetries. This is not just a digression, but closely related to the subsequent consideration in the perspective of the Lax formalism. The foregoing quantum torus Lie algebra and shift symmetries can be translated to the language of infinite matrices by the correspondence $A \leftrightarrow \widehat{A}$ between $\mathbb{Z}\times\mathbb{Z}$ matrices and fermion bilinears. This matrix formalism enables us to use the associative product of matrices as well. In particular, the matrix representation $\boldsymbol{V}^{(k)}_m$ of $V^{(k)}_m$ are expressed in term of $\Lambda$ and $\Delta$ as \begin{equation} \boldsymbol{V}^{(k)}_m = q^{-km/2}\Lambda^m q^{k\Delta}. \label{V-matrix} \end{equation} Moreover, the commutation relations \begin{equation} [\boldsymbol{V}^{(k)}_m, \boldsymbol{V}^{(l)}_n] = (q^{(lm-kn)/2} - q^{(kn-lm)/2})\boldsymbol{V}^{(k+l)}_{m+n} \end{equation} of the centerless quantum torus Lie algebra can be derived from the so called quantum torus relation \begin{equation} \Lambda q^\Delta = q q^\Delta\Lambda \label{q-torus} \end{equation} satisfied by $\Lambda$ and $q^\Delta$, which generate an associative quantum torus algebra. Moreover, the vertex operators $\Gamma_{\pm}(q^{-\rho})$ and $\Gamma'_{\pm}(q^{-\rho})$ reveals a hidden link with the notion of quantum dilogarithmic functions \cite{FV93,FK93} through the matrix representation. Such a Clifford operator, too, have the associated matrix representation through the exponentiation $e^A \leftrightarrow e^{\hat{A}}$ of the Lie algebraic correspondence $A \leftrightarrow \hat{A}$. The fundamental vertex operators $\Gamma_{\pm}(x)$ and $\Gamma'_{\pm}(x)$ thereby correspond to the following matrices: \begin{equation} \begin{gathered} \boldsymbol{\Gamma}_{\pm}(x) = \exp\left(\sum_{k=1}^\infty\frac{x^k}{k}\Lambda^{\pm k}\right) = (1 - x\Lambda^{\pm 1})^{-1}, \\ \boldsymbol{\Gamma}'_{\pm}(x) = \exp\left(- \sum_{k=1}^\infty\frac{(-x)^k}{k}\Lambda^{\pm}\right) = (1 + x\Lambda^{\pm 1}). \end{gathered} \end{equation} Consequently, the matrix representation of $\Gamma_{\pm}(q^{-\rho})$ and $\Gamma'_{\pm}(q^{-\rho})$ become an infinite product of these matrices specialized to $x = q^{i-1/2}$: \begin{equation} \boldsymbol{\Gamma}_{\pm}(q^{-\rho}) = \prod_{i=1}^\infty (1 - q^{i-1/2}\Lambda^{\pm 1})^{-1},\quad \boldsymbol{\Gamma}'_{\pm}(q^{-\rho}) = \prod_{i=1}^\infty (1 + q^{i-1/2}\Lambda^{\pm 1}). \label{GG'-matrix} \end{equation} These infinite products may be thought of as matrix-valued quantum dilogarithmic functions in the sense of Faddeev et al. 
We thus find the following matrix analogues of the shift symmetries: \begin{itemize} \item[(i)]For $k > 0$ and $m \in \mathbb{Z}$, \begin{equation} \boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{+}(q^{-\rho})\boldsymbol{V}^{(k)}_m = (-1)^k\boldsymbol{V}^{(k)}_{m+k}\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{+}(q^{-\rho}). \end{equation} \item[(ii)] For $k > 0$ and $m \in \mathbb{Z}$, \begin{equation} \boldsymbol{\Gamma}'_{-}(q^{-\rho})\boldsymbol{\Gamma}'_{+}(q^{-\rho})\boldsymbol{V}^{(-k)}_m = \boldsymbol{V}^{(-k)}_{m+k}\boldsymbol{\Gamma}'_{-}(q^{-\rho})\boldsymbol{\Gamma}'_{+}(q^{-\rho}). \end{equation} \item[(iii)] For $k,m \in \mathbb{Z}$, \begin{equation} \boldsymbol{V}^{(k)}_mq^{(\Delta-1/2)^2/2} = q^{-m/2}q^{(\Delta-1/2)^2/2}\boldsymbol{V}^{(k+m)}_m. \label{SSiii-matrix} \end{equation} \end{itemize} These matrix analogues of the shift symmetries can be derived from the matrix representation (\ref{V-matrix}), (\ref{GG'-matrix}) of $V^{(k)}_m$'s and the vertex operators by straightforward computations using the quantum torus relation (\ref{q-torus}) \cite{Takasaki13}. \subsection{Perspectives in Lax formalism} Let us return to the melting crystal models, and consider the associated special solutions of the 2D Toda hierarchy in the Lax formalism. The goal is to show that the Lax operators $L,\bar{L}$ satisfy the reduction conditions (\ref{1D-LLbar}) and (\ref{AL-LLbar2}) to the 1D Toda and Ablowitz-Ladik hierarchies \cite{Takasaki13}. The reasoning can be outlined as follows. \begin{itemize} \item[1.] It is enough to show that the initial values of the Lax operators at $\boldsymbol{t} = \bar{\boldsymbol{t}} = \boldsymbol{0}$ satisfy the reduction condition (\ref{1D-LLbar}) and (\ref{AL-LLbar2}), because these factorized forms are preserved by the time evolutions of the 2D Toda hierarchy. \item[2.] One can explicitly solve the factorization problem (\ref{factor}) for these cases at the initial time. The initial values of the dressing operators are written in terms of the matrix representation (\ref{GG'-matrix}) of the vertex operators and some other simple matrices. \item[3.] The initial values of the Lax operators can be computed with the aid of these matrices, and turn out to take the forms as shown in (\ref{1D-LLbar}) and (\ref{AL-LLbar2}). \end{itemize} \subsubsection{First melting crystal model} The generating operator (\ref{mcm-g}) in this case corresponds to a matrix of the form \begin{equation} U = q^{(\Delta-1/2)^2/2}\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{+}(q^{-\rho}) Q^\Delta\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{+}(q^{-\rho})q^{(\Delta-1/2)^2/2}. \label{mcm-U} \end{equation} One can use the identities \begin{equation} Q^\Delta\Lambda^n Q^{-\Delta} = Q^{-n}\Lambda^n,\quad Q^{-\Delta}\Lambda^n Q^\Delta = Q^n\Lambda^n \label{Q^D-scaling} \end{equation} to rewrite the triple product in the middle as \begin{equation*} U = q^{(\Delta-1/2)^2/2}\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{-}(Qq^{-\rho}) Q^\Delta\boldsymbol{\Gamma}_{+}(Qq^{-\rho})\boldsymbol{\Gamma}_{+}(q^{-\rho})q^{(\Delta-1/2)^2/2}. 
\end{equation*} This matrix is already factorized into a product of lower and upper triangular matrices as \begin{equation*} U = W_0^{-1}\bar{W}_0, \end{equation*} where \begin{equation} \begin{gathered} W_0 = q^{(\Delta-1/2)^2/2}\boldsymbol{\Gamma}_{-}(Qq^{-\rho})^{-1} \boldsymbol{\Gamma}_{-}(q^{-\rho})^{-1}q^{-(\Delta-1/2)^2/2},\\ \bar{W}_0 = q^{(\Delta-1/2)^2/2}Q^\Delta\boldsymbol{\Gamma}_{+}(Qq^{-\rho}) \boldsymbol{\Gamma}_{+}(q^{-\rho})q^{(\Delta-1/2)^2/2}. \end{gathered} \label{mcm-WWbar0} \end{equation} This means that $W_0$ and $\bar{W}_0$ are the initial values $W|_{\boldsymbol{t}=\bar{\boldsymbol{t}}=\boldsymbol{0}}$, $\bar{W}|_{\boldsymbol{t}=\bar{\boldsymbol{t}}=\boldsymbol{0}}$ of the dressing operators determined by the generating matrix (\ref{mcm-U}). One can compute the initial values \begin{equation*} L_0 = L|_{\boldsymbol{t}=\bar{\boldsymbol{t}}=\boldsymbol{0}} = W_0\Lambda W_0^{-1}, \quad \bar{L}_0^{-1} = \bar{L}|_{\boldsymbol{t}=\bar{\boldsymbol{t}}=\boldsymbol{0}}^{-1} = \bar{W}_0\Lambda^{-1}\bar{W}_0^{-1} \end{equation*} of the Lax operators from these explicit forms of $W_0$ and $\bar{W}_0$ as follows. The first step for computing $L_0$ is to use the identity \begin{equation*} q^{-(\Delta-1/2)^2/2}\Lambda q^{(\Delta-1/2)^2/2} = q^\Delta\Lambda \end{equation*} that is a consequence of (\ref{SSiii-matrix}). By this identity and the expression (\ref{mcm-WWbar0}) of $W_0$, one can rewrite $L_0$ as \begin{equation*} L_0 = q^{(\Delta-1/2)^2/2}\boldsymbol{\Gamma}_{-}(Qq^{-\rho})^{-1}\boldsymbol{\Gamma}_{-}(q^{-\rho})^{-1} q^\Delta\Lambda\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{-}(Qq^{-\rho}) q^{-(\Delta-1/2)^2/2}. \end{equation*} Since $\boldsymbol{\Gamma}_{-}(q^{-\rho})$ and $\boldsymbol{\Gamma}_{-}(Qq^{-\rho})$ are matrices of the form \begin{equation*} \boldsymbol{\Gamma}_{-}(q^{-\rho}) = \prod_{i=1}^\infty (1 - q^{i-1/2}\Lambda^{-1})^{-1},\quad \boldsymbol{\Gamma}_{-}(Qq^{-\rho}) = \prod_{i=1}^\infty (1 - Qq^{i-1/2}\Lambda^{-1})^{-1}, \end{equation*} the matrix $\Lambda$ in front of these two matrices can be moved to the right side as \begin{equation*} \Lambda\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{-}(Qq^{-\rho}) = \boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{-}(Qq^{-\rho})\Lambda. \end{equation*} One can further use the identity \begin{equation*} q^\Delta \Lambda^{-1}q^{-\Delta} = q\Lambda^{-1} \end{equation*} to transfer the remaining $q^\Delta$ to the right as \begin{equation*} \begin{aligned} q^\Delta\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{-}(Qq^{-\rho}) &= q^\Delta\prod_{i=1}^\infty(1 - q^{i-1/2}\Lambda^{-1})^{-1} \prod_{i=1}^\infty(1 - Qq^{i-1/2}\Lambda^{-1})^{-1} \\ &= \prod_{i=1}^\infty(1 - q^{i+1/2}\Lambda^{-1})^{-1} \prod_{i=1}^\infty(1 - Qq^{i+1/2}\Lambda^{-1})^{-1}\cdot q^\Delta \\ &= \boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{-}(Qq^{-\rho}) (1 - Qq^{1/2}\Lambda^{-1})(1 - q^{1/2}\Lambda^{-1})q^\Delta. \end{aligned} \end{equation*} The outcome reads \begin{equation*} L_0 = q^{(\Delta-1/2)^2/2}(1 - Qq^{1/2}\Lambda^{-1})(1 - q^{1/2}\Lambda^{-1}) q^\Delta\Lambda q^{-(\Delta-1/2)^2/2}.
\end{equation*} Lastly, by the identities \begin{equation*} \begin{gathered} q^{(\Delta-1/2)^2/2}\Lambda q^{-(\Delta-1/2)^2/2} = q^{-\Delta}\Lambda,\\ q^{(\Delta-1/2)^2/2}\Lambda^{-1}q^{-(\Delta-1/2)^2/2} = \Lambda^{-1}q^\Delta = q^{-1}q^\Delta\Lambda^{-1} \end{gathered} \end{equation*} one can rewrite the last expression of $L_0$ as \begin{equation} \begin{aligned} L_0 &= (1 - Qq^{-1/2}q^\Delta\Lambda^{-1}) (1 - q^{-1/2}q^\Delta\Lambda^{-1})\Lambda\\ &= \Lambda - (Q+1)q^{-1/2}q^\Delta + Qq^{-2}q^{2\Delta}\Lambda^{-1}. \end{aligned} \label{mcm-L0} \end{equation} One can compute $\bar{L}_0^{-1}$ in much the same way, and confirm that it coincides with the expression (\ref{mcm-L0}) of $L_0$. This implies that the reduction condition (\ref{1D-LLbar}) to the 1D Toda hierarchy is indeed satisfied. \subsubsection{Second melting crystal model} The generating operator (\ref{mcm'-g}) in this case corresponds to the matrix \begin{equation} U = q^{(\Delta-1/2)^2/2}\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}_{+}(q^{-\rho}) Q^\Delta\boldsymbol{\Gamma}'_{-}(q^{-\rho})\boldsymbol{\Gamma}'_{+}(q^{-\rho})q^{-(\Delta-1/2)^2/2}. \label{mcm'-U} \end{equation} This matrix can be factorized as \begin{equation*} U = W_0^{-1}\bar{W}_0 \end{equation*} with \begin{equation} \begin{gathered} W_0 = q^{(\Delta-1/2)^2/2}\boldsymbol{\Gamma}'_{-}(Qq^{-\rho})^{-1} \boldsymbol{\Gamma}_{-}(q^{-\rho})^{-1}q^{-(\Delta-1/2)^2/2},\\ \bar{W}_0 = q^{(\Delta-1/2)^2/2}Q^\Delta\boldsymbol{\Gamma}_{+}(Qq^{-\rho}) \boldsymbol{\Gamma}'_{+}(q^{-\rho})q^{-(\Delta-1/2)^2/2}. \end{gathered} \label{mcm'-WWbar0} \end{equation} One can compute $L_0$ in much the same way as the previous case, starting from the expression \begin{equation*} L_0 = q^{(\Delta-1/2)^2/2}\boldsymbol{\Gamma}'_{-}(Qq^{-\rho})^{-1}\boldsymbol{\Gamma}_{-}(q^{-\rho})^{-1} q^\Delta\Lambda\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}'_{-}(Qq^{-\rho}) q^{-(\Delta-1/2)^2/2}. \end{equation*} This expression contains \begin{equation*} \boldsymbol{\Gamma}'_{-}(Qq^{-\rho}) = \prod_{i=1}^\infty (1 + Qq^{i-1/2}\Lambda^{-1}) \end{equation*} in place of $\boldsymbol{\Gamma}_{-}(Qq^{-\rho})$. Consequently, the foregoing transfer procedure of $q^\Delta$ is modified as \begin{equation*} \begin{aligned} q^\Delta\boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}'_{-}(Qq^{-\rho}) &= q^\Delta\prod_{i=1}^\infty(1 - q^{i-1/2}\Lambda^{-1})^{-1} \prod_{i=1}^\infty(1 + Qq^{i-1/2}\Lambda^{-1}) \\ &= \prod_{i=1}^\infty(1 - q^{i+1/2}\Lambda^{-1})^{-1} \prod_{i=1}^\infty(1 + Qq^{i+1/2}\Lambda^{-1})\cdot q^\Delta \\ &= \boldsymbol{\Gamma}_{-}(q^{-\rho})\boldsymbol{\Gamma}'_{-}(Qq^{-\rho}) (1 + Qq^{1/2}\Lambda^{-1})^{-1}(1 - q^{1/2}\Lambda^{-1})q^\Delta. \end{aligned} \end{equation*} The final expression of $L_0$ takes the quotient form \begin{equation} L_0 = (1 + Qq^{-1/2}q^\Delta\Lambda^{-1})^{-1} (1 - q^{-1/2}q^\Delta\Lambda^{-1})\Lambda. \label{mcm'-L0} \end{equation} One can compute $\bar{L}_0^{-1}$ in much the same (but slightly more complicated) way. The expression (\ref{mcm'-WWbar0}) and the identity \begin{equation*} q^{-(\Delta-1/2)^2/2}\Lambda^{-1}q^{(\Delta-1/2)^2/2} = \Lambda^{-1}q^{-\Delta} \end{equation*} imply that $\bar{L}_0^{-1}$ can be expressed as \begin{multline*} \bar{L}_0^{-1} = q^{(\Delta-1/2)^2/2}Q^\Delta\boldsymbol{\Gamma}_{+}(Qq^{-\rho})\boldsymbol{\Gamma}'_{+}(q^{-\rho})\\ \mbox{}\times \Lambda^{-1}q^{-\Delta}\boldsymbol{\Gamma}'_{+}(q^{-\rho})^{-1}\boldsymbol{\Gamma}_{+}(Qq^{-\rho})^{-1} Q^{-\Delta}q^{-(\Delta-1/2)^2/2}.
\end{multline*} The outcome of somewhat lengthy computations reads \begin{equation} \bar{L}_0^{-1} = (1 - q^{1/2}q^{-\Delta}\Lambda)^{-1} (1 + Q^{-1}q^{1/2}q^{-\Delta}\Lambda)Q\Lambda^{-1}. \label{mcm'-Lbar0} \end{equation} It is easy to see that (\ref{mcm'-L0}) and (\ref{mcm'-Lbar0}) can be rewritten as \begin{equation*} L_0 = \tilde{C}_0^{-1}\tilde{B}_0, \quad \bar{L}_0^{-1} = - \tilde{B}_0^{-1}\tilde{C}_0, \end{equation*} where \begin{equation} \tilde{B}_0 = \Lambda - q^{-1/2}q^\Delta,\quad \tilde{C}_0 = 1 + Qq^{-1/2}q^\Delta\Lambda^{-1}. \end{equation} This coincides with the reduced form of (\ref{AL-LLbar2}) except for the negative sign in the expression of $\bar{L}_0^{-1}$. The negative sign is harmless, because it can be absorbed by the time reversal $\bar{\boldsymbol{t}} \to -\bar{\boldsymbol{t}}$. Actually, one can express $L_0$ and $\bar{L}_0$ in the form of (\ref{AL-LLbar}) as well (again with an extra negative sign) \cite{Takasaki13}. In any case, the reduction condition to the Ablowitz-Ladik hierarchy is satisfied in this case. \section{Conclusion} It is remarkable that the two melting crystal models repeat the same pattern of integrable structures as the Hermitian and unitary matrix models. A major difference is the fact that the partition functions of the matrix models are $s \times s$ determinants (hence the lattice coordinate $s$ therein takes values in the positive integers), whereas the partition functions of the melting crystal models have no such expression as determinants of finite size. The discrete variable $s$ of the melting crystal models enters the Boltzmann weights as a parameter. This is a main reason why we need an entirely different method to identify the underlying integrable structures. On the other hand, the undeformed partition functions (\ref{Z(Q)=Psum}) and (\ref{Z'(Q)=Psum}) of the two melting crystal models differ by just the single factor $q^{\kappa(\lambda)/2}$. It is somewhat surprising that this tiny modification leads to a drastic change in the underlying integrable structure. Of course this is rather natural from a geometric point of view, because the associated Calabi-Yau threefolds are different. The shift symmetries of the quantum torus algebra lie at the heart of our method. These algebraic relations are used to transform the ``diagonal'' Hamiltonians $H_k = V^{(k)}_0$ to the ``non-diagonal'' generators $J_m$ of time evolutions of the 2D Toda hierarchy. Let us mention two other approaches to this kind of unconventional time evolution (see also Section 3.5 of the review of Alexandrov and Zabrodin \cite{AZ12}). The first one is Orlov's approach \cite{Orlov03} to a class of KP tau functions obtained from the hypergeometric functions (\ref{HG-tau}) by specializing the second set $\bar{\boldsymbol{t}}$ of time variables to a particular point. The special value of $S_\lambda(-\bar{\boldsymbol{t}})$ at that point $\bar{\boldsymbol{t}} = - \boldsymbol{a}$ becomes a determinant of the Cauchy type. The Schur function expansion of $\tau(s,\boldsymbol{t},-\boldsymbol{a})$ can thereby be reorganized into an ``$\infty$-soliton solution'' of the KP hierarchy in which the parameters $\boldsymbol{T} = (T_1,T_2,\ldots)$ of the generating operator (\ref{HG-g}) play the role of time variables. The second approach was developed by Bettelheim et al. \cite{BAW06} in their study of a complex fermion system on the real line.
Time evolutions of this system are generated by diagonal Hamiltonians similar to our $H_k$'s, except that the coefficients $q^{kn}$ of ${:}\psi^*_{-n}\psi_n{:}$ are replaced by $n^k$. Bettelheim et al. considered an analogue of KP and Toda tau functions in which the $J_k$'s and the ground states $\langle s|$ and $|s\rangle$ are replaced by the $H_k$'s and what they call ``boundary states'', $\langle B_s|$ and $|B_s\rangle$. These boundary states are generated from the vacuum states $\langle 0|$ and $|0\rangle$ by ``boundary operators'' $B_s$. The modified ``tau functions'' are shown to satisfy the bilinear equations of the KP and Toda hierarchies. Unfortunately, it is difficult to compare the results of Bettelheim et al. with ours literally, because the setup of the fermion system is different. Our complex free fermions live on a circle $|z| = R$ of the $z$-plane rather than the real axis. Nevertheless, it is obvious that the boundary operators $B_s$ play the same role as $\Gamma_{-}(q^{-\rho})\Gamma_{+}(q^{-\rho})$ in our approach. We believe that the shift symmetries will be useful beyond the scope of the melting crystal models. The results reviewed in this paper should be just a small piece of the possible applications. In fact, we recently applied the shift symmetries to computations of topological string theory in a special case \cite{NT15}. We are currently trying to find how the algebraic relations (\ref{SSi}) and (\ref{SSii}) are altered outside the range $k > 0$. Hopefully, the shift symmetries thus extended will become a new computational tool for various purposes. It is also true that the shift symmetries are a very special property of the vertex operators $\Gamma_{\pm}(q^{-\rho})$ and $\Gamma'_{\pm}(q^{-\rho})$. Until now, we have been unable to find a similar tool for the 4D version \cite{LMN03,Nekrasov02,NO03,MN06} of $Z(s,\boldsymbol{t})$ and $Z'(s,\boldsymbol{t},\bar{\boldsymbol{t}})$. The aforementioned boundary operators of Bettelheim et al. might be a clue to this problem. It seems more likely that another clue is hidden in the fermionic formalism of $\mathbb{C}\mathbb{P}^1$ Gromov-Witten theory developed by Okounkov and Pandharipande \cite{OP02a,OP02b}. \subsection*{Acknowledgements} I would like to thank Takashi Takebe and Toshio Nakatsu for longstanding collaboration. I am also grateful to Mark Adler, Pierre van Moerbeke, John Harnad and Sasha Orlov for their constant interest in the Toda hierarchy. Last but not least, I am indebted to Kimio Ueno for support in the earliest stage of the studies on the Toda hierarchies. This work is partly supported by JSPS Kakenhi Grants No. 25400111 and No. 15K04912.
\section{Introduction} \vspace{-5pt} Understanding conversational context and dynamics from an egocentric perspective is vital for creating realistic and useful augmented reality (AR) experiences. These attributes characterize the interactions of multiple speakers in a given scene with the AR device wearer (i.e., {\it ego}). Such a device may, for example, consist of glasses with outward-looking cameras and microphones, so that audio-visual data is captured from the wearer's point of view. Modeling these attributes involves not only detecting and tracking people within a scene, but also localizing the voice activity within a conversation. In this work, we focus on the task of active speaker localization (ASL) with the goal of detecting the spatio-temporal locations of all active speakers both within and outside the camera's field of view (FOV). Closely related to the problem of active speaker detection (ASD), ASL involves estimating the relative direction of arrival of speech from an egocentric perspective. In this paper, active speakers typically correspond to the people who are speaking and `driving' the conversations. The elements of our proposed egocentric ASL problem are illustrated in Fig.~\ref{fig:teaser}. \begin{figure}[tb] \centering \includegraphics[width=0.325\linewidth]{figures/my_images/25/image1/352.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/5/image1/384.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/5/image1/826.jpg}% \linebreak \includegraphics[width=0.325\linewidth]{figures/my_images/25/image2/352.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/5/image2/384.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/5/image2/826.jpg}% \vspace{0.2em} \includegraphics[width=0.325\linewidth]{figures/my_images/35/image1/31.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/35/image1/157.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/35/image1/1165.jpg}% \linebreak \includegraphics[width=0.325\linewidth]{figures/my_images/35/image2/31.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/35/image2/157.jpg}\hspace{\fill} \includegraphics[width=0.325\linewidth]{figures/my_images/35/image2/1165.jpg}% \vspace{-5pt} \caption{Our novel multi-channel audio-visual deep network localizes active speakers from any direction on the sphere. In this illustration, predicted active speaker probability heat maps are shown in the red channel of the images (rows 1,3) alongside the full 360$^{\circ}$ voice map (rows 2,4), where the camera's limited field of view is indicated by the central blue rectangle. This 360$\times$180 voice map (rows 2,4) is a cylindrical 2D projection of the sphere where each pixel corresponds to a direction in the device wearer's local 3D coordinate system. The ground truth is shown as the purple bar under the talking head's lower edge and the blue dots in the 360$^{\circ}$ map. The yellow bar at the upper edge of a head box shows our prediction that the person is talking. Our method also predicts whether the device wearer speaks. } \vspace{-10pt} \label{fig:teaser} \end{figure} A good ASL system needs to account for the changing orientations of speakers from the egocentric point of view and be robust to speakers moving in and out of the visual field of view.
In particular, natural conversations entail significant overlap between different speakers' voice activity and involve one or more speakers interrupting each other --- a classical attribute of conversational ecology called turn-taking. Such a system should also ideally be agnostic to the number of microphone channels, thereby allowing for generalization to different AR devices with varying numbers of audio and/or visual channels. Note that the device wearer may also be an active speaker during the conversation; their voice is naturally amplified due to their closeness to the device microphones. An ASL system must account for this {\it false} amplification, which may otherwise mask competing active speakers in the scene. In this work, we propose a real-time audio-visual ASL system that addresses these aspects to effectively localize active speakers, potentially outside of the visual FOV, by leveraging audio recorded from a device-mounted microphone array. We propose a new end-to-end deep neural network trained to tackle this problem. Our network is partitioned into two branches: an audio network and an audio-visual network. The audio network builds useful representations for constructing a low-resolution sound source localization map with a full 360$^{\circ}$ FOV by utilizing spatio-temporal correlations across different channels. The audio-visual network then combines the extracted audio features with the corresponding video frames, resulting in a higher resolution activity map for the camera's FOV. Visual cues such as the person's mouth movement, facial expressions, and body pose are extracted here and combined with audio features for computing a joint representation. The final 360$^{\circ}$ active speaker map is a combination of the low-resolution audio-only map and the high-resolution audio-visual map. In addition, the device wearer voice detector shares features with the audio network, and our model estimates the relative 3D orientations of the speakers in the scene from the egocentric perspective. The proposed network is also aimed at real-time applications in the immersion-driven domain of AR, enabling systems for the spatialization and localization of audio-visual activity in a world-locked frame of reference. Lastly, the lack of reliable multi-channel conversational datasets is another limiting factor for building in-the-wild ASL systems. To that end, we build and evaluate our approach using a very recent egocentric conversations dataset called EasyCom \cite{easycom}. Our contributions are: \begin{enumerate} \item We tackle the new problem of ASL using multi-channel audio and video from an egocentric perspective. In this new problem, we localize all the active speakers in the scene including the device wearer. \item We propose a real-time, low-latency egocentric audio-visual active speaker localization system with a 360$^{\circ}$ field of view. Our novel deep multi-channel audio-visual network learns from different audio features and can accommodate different numbers of audio channels without structural changes. \item We evaluate our method on the EasyCom dataset and demonstrate significantly improved results in comparison to previous audio-visual ASL and ASD approaches. \end{enumerate} \subsection{Related Work} \label{sec:related} Single- and multi-channel sound source detection and localization problems have classically been studied by the speech and audio signal processing communities \cite{audioprocess, doa, soundloc1}.
Most of these works are based on source separation and voice activity detection, and they mainly assume that there is one speaker in the audio stream who dominates the others (i.e., a high signal-to-noise ratio). The primary characteristic of these methods is to build auto-correlation and cross-correlation functions across different channels to account for timing and level differences caused by microphone placement. However, these approaches are sensitive to room acoustics and noisy backgrounds and may be unreliable when multiple sources are present. More recently, machine learning has been used for direction of arrival estimation with some success \cite{deep-sound-loc1, deep-sound-loc2, deepdoasurvey}. Although these methods improve upon the traditional approaches, the lack of visual information limits the efficacy of these systems in real-world settings. Furthermore, most multi-channel approaches assume fixed, stationary microphone arrays, which may lead to poor performance with moving arrays in egocentric settings. The computer vision community has seen a surge in audio-visual learning research, in particular due to datasets like the AVA Speech and Activity corpus \cite{avadataset}, Voxconverse \cite{voxconverse}, and Voxceleb \cite{voxceleb2}. These approaches are driven by building correspondences between audio and visual modalities, thereby resulting in robust joint representations that improve upon their audio-only or image-only counterparts. For action and activity recognition, several studies have shown evidence that audio disambiguates certain visually ambiguous cues \cite{kazakos2019epic,audiovisual-slowfast}. Audio-visual models have been explored for speech recognition \cite{avspeech}, sound source detection \cite{avloc1, avloc2, avloc3}, multiple source separation \cite{avsep2, avsep4, avsep5, avsep6}, localization of sounds in a 2D image \cite{360sound, avsep1}, 3D scene navigation guided by audio \cite{avnavi}, and others. The bulk of audio-visual learning models follow a simple recipe: audio inputs are often converted to spectrogram images, which are then jointly processed with video frames. In addition to traditional network architectures, transformer networks have also been proposed for single-channel active speaker detection \cite{talknet}. More recently, turn-taking has also been studied as a means to improve detection performance \cite{iccv21b}. A related problem is that of speech separation, which singles out a speaker's voice by using both audio and cropped facial images \cite{avsep2, avsep5, avsep6}. The voice energy of the enhanced speech can then be used to detect active speakers. Although extensively studied, single-channel speaker detection from an egocentric perspective is still a challenging problem. This is mainly because of substantial device motion, occlusions, reduced visibility of speakers' faces, and noise induced by overlapping and interrupting speakers. Most current methods also induce significant latency in detection, which makes them ineffective for enabling real-time AR experiences. Single-channel audio-visual localization in exocentric settings has received much attention lately \cite{av1, avloc1, avloc2, avloc3, iccv21a}. These methods either utilize audio-visual joint embeddings similar to those in active speaker detection, or they train audio-visual joint classification modules as the backbone for modality fusion.
In addition, due to the lack of multiple channels, localization is restricted to the image frame in a manner similar to traditional visual object localization. The most recent related work is from \cite{wangaaai}, where the authors propose an audio-visual model that can process binaural (two-channel) audio for sound source localization. However, the system cannot be extended to multi-channel settings, and is restricted to localizing targets within the visual field of view. \section{Egocentric Active Speaker Localization} \label{sec:framework} \paragraph{Problem Setup:} Given multi-channel audio-visual data captured using AR glasses with a microphone array and an RGB camera, we define the egocentric ASL problem as the detection and spatio-temporal localization of all the active speakers in the scene, including the voice activity of the device wearer. Let $\mathbf{A}_i$ ($i=1,\ldots,N$) denote the audio signals captured via an $N$-channel microphone array and $\bf{I}$ denote the video from the RGB camera. The audio signals are normalized to the range [-1,1] based on the maximum bit length of audio samples. At each time instant $t$, given a segment of audio ${\mathbf{A}^t_i}$ and the corresponding video frame $\mathbf{I}^t$, we estimate two outputs: a heat map $\mathbf{V}^t_{\alpha,\beta}$ of activity in the scene and the device wearer activity $\bf{W}$. $\mathbf{V}^t_{\alpha,\beta}$ is a 2D matrix where each element gives the probability of a sound source being present at the relative angles $(\alpha,\beta)$ at the time instant $t$, where $\alpha \in [-180,180]$ and $\beta \in [-90,90]$ correspond to the azimuth (horizontal) and elevation (vertical) angles, respectively. Although we focus on human voice in this work, the proposed framework is applicable to any sounds of interest. \subsection{Overview} \label{sec:overview} \begin{figure*}[tb] \centering \includegraphics[width=0.85\linewidth]{figures/system2c.pdf} \vspace{-5pt} \caption{Egocentric multi-channel audio-visual localization. Our end-to-end deep network predicts a 360$^{\circ}$ voice activity map and the wearer's voice activity at the same time.} \vspace{-5pt} \label{fig:system} \end{figure*} Fig.~\ref{fig:system} illustrates the proposed egocentric ASL framework. Our method is an end-to-end deep learning model which takes the raw audio and video as input and estimates the active speaker activity heat map ($\bf{V}$) and the wearer's voice activity ($\bf{W}$) directly. The framework has two networks: an audio network cascade ($\mathcal{A}$) and an audio-visual network cascade ($\mathcal{AV}$). $\mathcal{A}$ converts the raw multi-channel audio into a compact 2D representation aligned to each video frame, from which a convolutional neural network extracts relevant features to estimate the direction of arrival of the sources in the scene. $\mathcal{AV}$ then utilizes the outputs from $\mathcal{A}$ and incorporates visual information using another network. The resulting outputs from both $\mathcal{A}$ and $\mathcal{AV}$ are then combined to compute the scene and wearer activities ($\bf{V}$ and $\bf{W}$). \subsection{Audio Network} \label{sec:audio-only} \vspace{-5pt} \subsubsection{Audio Representation} \vspace{-5pt} In this paper, we consider three audio representations and design our deep network so that it can take these different representations together with video as input in the same fashion. Our experiments show these audio representations are stronger than the raw audio.
These different audio representations have different properties that are suitable for different use cases. Our first audio representation is adapted from the complex spectrogram representation \cite{wangaaai}. For audio with a sampling rate of $48kHz$ and a video frame rate of $20Hz$, we compute the short-time Fourier transform (STFT) and extract 100 discrete Fourier transforms (DFTs) of length 200 to align with each video frame. The real and imaginary parts of the DFTs from all the channels are stacked together along the depth axis to form the multi-channel 2D tensor. \begin{figure}[tb] \centering \includegraphics[height=1.3cm]{figures/fimage/images1/58.jpg}% \includegraphics[height=1.3cm]{figures/features_crop/58.png}\hspace{0.02em} \includegraphics[height=1.3cm]{figures/fimage/images1/590.jpg}% \includegraphics[height=1.3cm]{figures/features_crop/590.png}% \vspace{0.15em} \includegraphics[height=1.3cm]{figures/fimage/images1/815.jpg}% \includegraphics[height=1.3cm]{figures/features_crop/815.png}\hspace{0.02em} \includegraphics[height=1.3cm]{figures/features_crop2/199.jpg}% \includegraphics[height=1.3cm]{figures/features_crop2/199.png}% \vspace{-5pt} \caption{Odd columns: video frames overlaid with voice activity labels. Even columns: vertical stack of the audio cross correlation and energy feature maps.} \vspace{-20pt} \label{fig:feature} \end{figure} Apart from the complex spectrogram, we further propose a 2D audio representation that captures the cross correlation between all pairs of the audio channels. Unlike spectrograms, this representation is mostly speaker invariant. In more detail, assuming the audio sample $n$ matches the time stamp of the video frame at time $t$, the cross correlation $C_{p,q}(n,m)$ between channels $p$ and $q$ is \vspace{-3pt} \small \[ C_{p,q}(n, m) = \frac{\sum_{k=0}^K A_p(n - k)\, A_q(n - k + m)}{\sqrt{\sum_{k=0}^K A_p(n - k)^2}\sqrt{\sum_{k=0}^K A_q(n - k + m)^2}}, \vspace{-3pt} \] \normalsize where $m=[-L,L]$, and $K$ and $L$ are two parameters. In our experiments, audio signals have sampling rate $48kHz$, $K=1200$ and $L=50$. In a discrete format, $C_{p,q}(n,m)$ is a vector of length $2L+1$ at each time $n$ that characterizes not only the time shifts between different audio channels due to the different paths of sound transmission, but also other fine-grained couplings between different audio channels. Using this $C$, we construct a 2D audio representation at each time $n$, which is a stack of all the vectors $C_{p,q}(n,m)$ for each $(p,q)$ pair. The short-time energy of audio is a feature that is invariant to sound sources and easy to compute. Therefore, we also include a separate measure of the energies from each audio channel, \small $E_p(n) = (\sum_{k=0}^K A_p(n - k)^2)^{0.5}$. \normalsize Using $E$, we stack $\bf{e}_p(n)$ for each $p$, where $\bf{e}_p(n)$ is a vector that duplicates $E_p(n)$ $2L+1$ times, to form a 2D energy map. These features can also be combined to form richer representations. Fig.~\ref{fig:feature} illustrates how the combined cross correlation and energy feature corresponds to the audio events in videos. The cross-correlation, energy, and combined 2D features are further resized; in this paper, the width and height are resized to 128.
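To make the feature construction concrete, the following Python sketch computes the cross-correlation and energy maps as defined above. The stabilizing constant in the denominator is our own choice, boundary handling and the final resize to $128\times128$ are omitted, and valid indices ($n-K+m \geq 0$) are assumed.
\begin{verbatim}
import numpy as np

def xcorr_map(A, n, K=1200, L=50):
    # A: (N, T) multi-channel audio normalized to [-1, 1].
    # Returns one (2L+1)-long row per ordered channel pair (p, q),
    # following the definition of C_{p,q}(n, m) above.
    N = A.shape[0]
    rows = []
    for p in range(N):
        xp = A[p, n - K:n + 1]                   # A_p(n - k), k = K, ..., 0
        norm_p = np.sqrt(np.sum(xp ** 2)) + 1e-12
        for q in range(N):
            row = []
            for m in range(-L, L + 1):
                xq = A[q, n - K + m:n + m + 1]   # A_q(n - k + m), aligned to xp
                norm_q = np.sqrt(np.sum(xq ** 2)) + 1e-12
                row.append(np.dot(xp, xq) / (norm_p * norm_q))
            rows.append(row)
    return np.asarray(rows)                      # shape (N * N, 2L + 1)

def energy_map(A, n, K=1200, L=50):
    # Short-time energy E_p(n), duplicated 2L+1 times per channel.
    E = np.sqrt(np.sum(A[:, n - K:n + 1] ** 2, axis=1))
    return np.repeat(E[:, None], 2 * L + 1, axis=1)   # shape (N, 2L + 1)
\end{verbatim}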
\vspace{-10pt} \subsubsection{Audio Activity Network} \vspace{-5pt} The audio activity network predicts a rough 360$^{\circ}$ audio activity map and the voice activity of the device wearer. Its structure is shown in Fig.~\ref{fig:audio-doa}. The feature extraction network is adapted from the first several layers of a ResNet18 network whose weights are pre-trained on ImageNet. The first convolutional layer is modified to match the channel number of the different audio representations. The feature extraction network maps the 2D audio representation to a compact feature, which quantifies the spatial and voice characteristics of the audio signals in the scene. The extracted features are flattened and passed to two fully connected layers, whose outputs are reshaped to two $90\times45$ maps. The two maps are stacked and resized to a $180\times90$ one-hot representation, half the size of the full 360$^{\circ}$ audio activity map. This network thus predicts the voice activity probability from each direction on the sphere with an angular resolution of 2$^{\circ}$. One key design choice here is to generate the one-hot representation of the heat map and train using a cross-entropy loss. This gives more stable results than directly regressing a single heat map of the audio activity using L1 or L2 losses. The audio activity map is also used to simultaneously estimate the wearer's voice activity. Due to the spatial position of the wearer's mouth relative to the microphones and the loudness of the wearer's voice, the 2D feature representation learned by the audio localization network also provides useful information for detecting whether the device wearer is speaking. To accomplish this, the audio feature extraction is shared with the 360$^{\circ}$ audio map prediction, and wearer voice activity detection is performed by a separate head that consists of two fully-connected layers trained to predict the speaking probability with a cross-entropy loss. \begin{figure}[tb] \centering \includegraphics[width=0.8 \linewidth]{figures/audio_doad.pdf} \vspace{-5pt} \caption{The audio activity network.} \vspace{-10pt} \label{fig:audio-doa} \end{figure}
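For concreteness, the audio activity network can be sketched in PyTorch roughly as follows. Only the input and output shapes are fixed by the description above; the truncation point of the ResNet18 backbone and the width of the wearer head are our assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class AudioActivityNet(nn.Module):
    # Sketch of the audio activity network in Fig. 5 (assumed details flagged).
    def __init__(self, audio_channels):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        # match the first conv to the number of stacked audio feature channels
        backbone.conv1 = nn.Conv2d(audio_channels, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        # "first several layers" of ResNet18: up to layer2 (assumption)
        self.features = nn.Sequential(*list(backbone.children())[:6])
        feat_dim = 128 * 16 * 16              # for 128 x 128 input features
        self.fc0 = nn.Linear(feat_dim, 90 * 45)   # "inactive" logits
        self.fc1 = nn.Linear(feat_dim, 90 * 45)   # "active" logits
        self.wearer = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                    nn.Linear(256, 2))   # wearer voice head

    def forward(self, x):                 # x: (B, audio_channels, 128, 128)
        f = torch.flatten(self.features(x), 1)
        m = torch.stack([self.fc0(f), self.fc1(f)], 1).view(-1, 2, 45, 90)
        m = F.interpolate(m, size=(90, 180), mode="bilinear",
                          align_corners=False)   # 2-degree angular resolution
        return m, self.wearer(f)          # both trained with cross-entropy
\end{verbatim}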
\subsection{Audio-Visual Network} \label{sec:audio-visual} With only multi-channel audio available for speaker localization, the spatial resolution is low. This is due to the inherent physics of sound propagation and the limitations of compact microphone arrays. We therefore also take advantage of video frames to further improve the estimation result. Images not only increase spatial resolution, but also provide extra clues related to voice activity, such as mouth movement, facial expression, and hand gestures. \begin{figure}[tb] \centering \includegraphics[width=0.4 \textwidth]{figures/audio_video_doa3.pdf} \vspace{-5pt} \caption{Audio-visual network. The blocks $B(p)$ and $C(p,q)$ are defined in Fig.~\ref{fig:blocks}. For 2D convolution layers, the parameters are the input channel number, output channel number, convolution kernel size, stride and padding. For the maxpool layer, the parameters are the pooling kernel size, stride and padding. } \vspace{-10pt} \label{fig:audio-video-doa} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.5 \textwidth]{figures/B_C_block.pdf} \vspace{-20pt} \caption{Residual blocks in the audio-visual network.} \vspace{-20pt} \label{fig:blocks} \end{figure} In this paper, we propose an approach to fusing audio and visual information that differs from previous audio-visual methods: we directly stack the video frames with the estimated voice activity map from the audio network. Since the rough 360$^{\circ}$ voice map from the audio network is defined on the unit sphere and its grids are horizontal and vertical angles, we need a procedure to align the audio map to the corresponding video frames. Even though we could map each grid cell in the voice map to the image, we find that a simpler cropping and scaling method is sufficient due to the low resolution of the audio map. More specifically, we crop the region from the audio map within the horizontal and vertical angles corresponding to the four corners of the image. The scaling procedure then upsamples the region so that the audio map in the FOV is aligned with the input video. These operations are integrated in the audio-visual network. As shown in Fig.~\ref{fig:audio-video-doa}, the fused audio map and the corresponding color video frame form a tensor with a depth of 4, which is sent to a fully-convolutional network to estimate the refined voice activity map in the camera's field of view. In this paper, the video resolution is $640\times360$. With such a design, if the faces are visible, the audio-visual network is able to take advantage of image features such as the appearance of the mouth and facial expressions to localize audio activity. Due to its wide effective receptive field, the proposed network can also learn to extract other visual features such as body pose. Unlike previous methods, our proposed method can still function when the faces are not visible, because the audio activity map gives the locations of the potential speakers in the scene. We combine the rough 360$^{\circ}$ heat map and the more detailed heat map in the FOV. In this paper, we simply pad the refined heat map with zeros outside the camera's FOV and add it to the 360$^{\circ}$ heat map to generate the final estimation. \subsection{Model Training} \label{sec:training} We train the network in two stages. In the first stage, we train the audio-only and audio-visual networks together without the wearer's voice activity classification network. In the second stage, we fix the audio feature layers' weights and train the fully connected network to predict the wearer's voice activity. The 360$^{\circ}$ voice map and the voice map in the FOV are represented differently in the ground truth. The 360$^{\circ}$ voice map is a 180$\times$90 2D map. If there is a speaker located at $(\alpha, \beta)$, the ground truth voice map has a solid disk with radius 5 centered at that point. Such labeling is uniform for regions inside and outside of the field of view. In contrast, the voice map in the FOV has the same size as the video frames, and an active speaker in the field of view is labeled as a solid rectangle that covers the speaker's head. Therefore, inside the FOV the detection also carries a size attribute, which is related to the depth of the target. The training losses are defined as follows. The first stage loss function is \[ \mathcal{L}_a = \mathcal{H}(y_{a}, \hat{y}_{360}) + \mathcal{H}(y_{av}, \hat{y}_{fov}), \] and the second stage loss function is: \[ \mathcal{L}_b = \mathcal{H}(y_{w}, \hat{y}_{w}), \] where $\mathcal{H}$ is the mean cross entropy, $y_{a}$ and $y_{av}$ are the one-hot output representations of the audio-only and audio-visual networks, $\hat{y}_{360}$ and $\hat{y}_{fov}$ are their corresponding ground truth audio maps, $y_w$ is the wearer speech activity prediction, and $\hat{y}_{w}$ is its ground truth label. The training procedure generally converges quickly, within 5 epochs.
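Putting the pieces of this section together, the crop-scale-fuse-combine inference path can be sketched as below. The FOV rectangle inside the spherical map and the common resolution at which the two heat maps are summed are our assumptions (in practice they follow from the camera intrinsics); the sketch is one consistent way to realize the combination described above, not the authors' exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def asl_inference(audio_net, av_net, feats, frame, fov):
    # feats: 2D audio features; frame: (B, 3, 360, 640) RGB video frame;
    # fov = (top, left, h, w): camera FOV inside the 180 x 90 spherical map.
    sphere, wearer = audio_net(feats)            # (B, 2, 90, 180), (B, 2)
    sphere = sphere.softmax(1)[:, 1:]            # P(active), (B, 1, 90, 180)
    t, l, h, w = fov
    crop = sphere[:, :, t:t + h, l:l + w]        # crop the FOV region ...
    crop = F.interpolate(crop, size=frame.shape[-2:], mode="bilinear",
                         align_corners=False)    # ... and scale to the video
    fov_map = av_net(torch.cat([frame, crop], 1))   # depth-4 fused input
    # zero-pad the refined map outside the FOV and add it to the 360 map
    # (here by resampling it back onto the spherical grid)
    full = sphere.clone()
    full[:, :, t:t + h, l:l + w] += F.interpolate(
        fov_map, size=(h, w), mode="bilinear", align_corners=False)
    return full, fov_map, wearer
\end{verbatim}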
\section{Experiment Results} \vspace{-5pt} In this section, we evaluate the proposed method on real videos and compare it with different audio-visual approaches for active speaker localization and wearer voice activity detection. Since we consider a novel egocentric problem setting, there are no previous audio-visual methods that are directly applicable. For comparison, we instead adapt approaches to similar problems so that they accept our multi-channel audio and video inputs. We also experiment with variations of the proposed method to justify our design choices. \subsection{Evaluation Dataset} \vspace{-5pt} We evaluate our method using the EasyCom \cite{easycom} dataset. EasyCom is a multi-channel audio-visual dataset that includes around 6 hours of egocentric videos of conversations within a simulated noisy environment. The dataset is recorded using a microphone array and an RGB camera mounted on a pair of glasses. EasyCom is a challenging dataset with significant background noise, fast head motion, and motion blur. Participants may sit or walk around in the scene, and their faces and mouths are not always visible due to occlusions. There are six microphones used for recording: four fixed to the glasses and two placed in the ears of the participants. In this paper, we use the RGB egocentric video together with the multi-channel audio from the four fixed microphones in our experiments. The dataset has 12 video sessions, each of which is about half an hour long. There may be 4, 5, or 6 participants, including the camera wearer, in each recording session. We use sessions 1--3 for testing and the remaining 9 sessions for training. For a fair comparison, we report the best numbers for all competing models trained until convergence after a sufficiently large number of epochs. \subsection{Methods in Evaluation} \vspace{-5pt} We compare the proposed method in different variations against other active speaker detection and localization methods. The methods in the evaluation include: \begin{itemize} \vspace{-8pt} \item Our method and variations (\texttt{Ours AV([cor] + [eng] + [spec] + [box])}): Variations include different combinations of feature representations (cor: cross correlation, eng: energy, spec: spectrogram, and box: head bounding boxes). In the variation that uses head bounding boxes, we set the background color outside of the detected head regions to black. We also evaluate the audio-only and video-only versions of our method, in which the video or audio branches are removed from our full model. \vspace{-8pt} \item \texttt{DOA+headbox}: A state-of-the-art signal processing method \cite{doa} for extracting spherical direction-of-arrival (DOA) energy maps from the 4 microphones on the glasses, combined with head detection bounding boxes for active speaker detection. This DOA estimation method was designed to achieve more robust results in highly reverberant settings compared to previous signal processing audio localization methods. To detect active speakers in the field of view, we pool regions of the DOA map corresponding to directions within the detected head bounding boxes. If the DOA map accurately estimates sound arrival directions, then the head bounding boxes corresponding to active speakers will include higher energy values. \vspace{-8pt} \item \texttt{DOA+image}: A deep neural network trained to localize active speakers using both traditional signal processing DOA maps \cite{doa} and video frames as inputs. The network is fully convolutional and has the same structure as the audio-visual network in our method. \vspace{-8pt} \item \texttt{AV-rawaudio}: A deep neural network trained using multi-channel raw audio and video as the input.
Aside from extracting audio features with 1D convolution layers, the overall network architecture is the same as our approach. \vspace{-8pt} \item Mouth region classifier (\texttt{MRC}): A visual-only method for classifying active speech from cropped images of mouth regions extracted from a 68-point facial key point detector. Such a scheme has been commonly used in active speaker detection. A ResNet18 network is trained to classify the cropped mouth images. We test two cases: \texttt{MRC(AVA)} trained using the AVA active speaker detection dataset \cite{avadataset}, and \texttt{MRC(EasyCom)} only trained on EasyCom. \vspace{-8pt} \item \texttt{TalkNet} \cite{talknet}: A transformer-based single-channel audio-visual active speaker detection method that gave state-of-the-art results in the AVA active speaker detection challenge. We use the method in two modes: \texttt{TalkNet(AVA)} trained on the AVA dataset and \texttt{TalkNet(EasyCom)} trained on EasyCom. \vspace{-8pt} \item \texttt{BinauralAVLocation} \cite{wangaaai}: A two-channel audio-visual method for sound source localization. Since this method cannot be easily extended to settings with more than two asymmetric microphones, we use only the audio channels from the two frontal microphones in our comparisons. \end{itemize} \subsection{Within-View Active Speaker Localization} \begin{figure*}[tb] \centering \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image1/301.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image1/474.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image1/600.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image1/830.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image1/1114.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image1/1144.jpg}% \linebreak \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image2/301.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image2/474.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image2/600.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image2/830.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image2/1114.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/my/image2/1144.jpg}% \linebreak \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image1/301.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image1/474.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image1/600.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image1/830.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image1/1114.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image1/1144.jpg}% \linebreak \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image2/301.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image2/474.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image2/600.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image2/830.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image2/1114.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_raw/image2/1144.jpg}% \linebreak \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_image/image1/301.jpg}% 
\includegraphics[width=0.166\linewidth]{figures/comp_images/doa_image/image1/474.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_image/image1/600.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_image/image1/830.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_image/image1/1114.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/doa_image/image1/1144.jpg}% \linebreak \includegraphics[width=0.166\linewidth]{figures/comp_images/mrc/image1/301.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/mrc/image1/474.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/mrc/image1/600.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/mrc/image1/830.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/mrc/image1/1114.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/mrc/image1/1144.jpg}% \linebreak \includegraphics[width=0.166\linewidth]{figures/comp_images/talknet/301.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/talknet/474.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/talknet/600.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/talknet/830.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/talknet/1114.jpg}% \includegraphics[width=0.166\linewidth]{figures/comp_images/talknet/1144.jpg}% \vspace{-5pt} \caption{ Qualitative comparison results. The purple bar indicates when a person is predicted to be talking, while the yellow bar is the corresponding ground truth. Rows 2, 4: the predicted 360$^{\circ}$ voice map compared against the ground truth in the blue channel. Rows 1, 2: the result of \texttt{Ours AV(cor)}. Rows 3, 4: \texttt{DOA+headbox}, Row 5: \texttt{DOA+image}, Row 6: \texttt{MRC(EasyCom)}, Row 7: \texttt{TalkNet(EasyCom)}. In Row 7, green boxes indicate active speech while red boxes are inactive. } \vspace{-10pt} \label{fig:comparison} \end{figure*} We first evaluate the mean average precision (mAP) of active speaker localization detections within the camera's field of view. We compare against multi-channel as well as one- and two-channel audio-visual methods and a visual-only method. The mAP is computed based on the scores within the ground truth head bounding boxes in each video frame. For our methods and the competing methods \texttt{DOA+headbox}, \texttt{DOA+image}, \texttt{AV-rawaudio}, and \texttt{BinauralAVLocation}, we extract the voice heat map's maximum value in each ground truth head bounding box and use it as the detection score. The \texttt{MRC} and \texttt{TalkNet} methods use the classification probability of the corresponding head box as the detection score. Both \texttt{MRC} and \texttt{TalkNet} use the ground truth head bounding boxes for testing. As shown in Table~\ref{tab:comp1}, our methods give much higher mAP than all of the competing methods. Fig.~\ref{fig:comparison} shows qualitative comparison results. Due to the difficulty of learning useful features from raw audio, \texttt{AV-rawaudio} gives inferior results in comparison to the spectrogram and cross-correlation audio features. Background noise also causes traditional audio-only signal processing approaches to give blurry DOA maps and inaccurate target localization results. The \texttt{DOA+image} deep learning method that combines this DOA map with video frames improves performance, but still gives lower mAP than our proposed method.
This emphasizes the benefit of learning spatial audio-visual representations end-to-end. Our method also gives much higher mAP than the video-only \texttt{MRC} and the single-channel audio-visual active speaker detection method \texttt{TalkNet}, whether they are trained on the AVA dataset \cite{avadataset} or on the EasyCom dataset. Our method also greatly outperforms \texttt{BinauralAVLocation} in both the 4-channel and 2-channel audio settings. \setlength{\tabcolsep}{1pt} \begin{table}[tb] \small \centering \begin{tabular}{ c | c } \hline & ASL mAP \\ \hline Ours AV(cor) & 84.14 \\ Ours AV(cor+eng) & 83.32 \\ Ours AV(cor+box) & 86.25 \\ Ours AV(cor+eng+box) & 86.32 \\ Ours AV(spec) & 85.49 \\ Ours AV(eng) & 62.68 \\ Ours AV(cor)-2ch & 80.00 \\ Ours AV(spec)-2ch & 83.30 \\ \hline AV-rawaudio & 72.32 \\ DOA+headbox & 52.62 \\ DOA+image & 54.27 \\ MRC(AVA) & 46.60 \\ MRC(EasyCom) & 64.24 \\ TalkNet(AVA) & 69.13 \\ TalkNet(EasyCom) & 44.24 \\ BinauralAVLoc & 60.75 \\ \hline \end{tabular} \caption{Comparison of mAPs in the visual field of view. Most of these tests use 4-channel audio, except for \texttt{Ours AV(cor)-2ch}, \texttt{Ours AV(spec)-2ch}, and \texttt{BinauralAVLoc}, which use 2-channel audio, \texttt{TalkNet}, which uses single-channel audio, and the video-only \texttt{MRC}. } \label{tab:comp1} \vspace{-10pt} \end{table} For the different variations of the proposed method, as shown in Table~\ref{tab:comp1}, the energy feature is significantly worse than the other two features, while the spectrogram feature gives slightly better mAP. The cross correlation and energy features are still attractive due to their speaker-invariant properties, and they thus have the potential to generalize better in real applications and preserve privacy. The cross correlation feature is also invariant to the microphone gain settings; this makes it useful when the gains need to change dynamically for the best signal-to-noise ratio. We also compare our audio-only and video-only variations with the full audio-visual model. In comparison to our full audio-visual method \texttt{Ours AV(cor+eng+box)} with an mAP of 86.32\%, the video-only variation gave a much lower mAP of 58.44\% and the audio-only version also gave a lower mAP of 78.08\%. The results of \texttt{Ours AV(cor+box)} and \texttt{Ours AV(cor+eng+box)} also show that our proposed method can generalize to different environments by removing background visual information outside of head detections, which can potentially improve the result. Even with only two audio channels, our network still gave strong results that outperformed the \texttt{BinauralAVLoc} network architecture designed to leverage the symmetry of binaural audio. \subsection{Spherical Active Speaker Localization} One unique property of our proposed method is that it gives a full 360$^{\circ}$ spherical speaker localization result. Since there are no head bounding boxes outside of the field of view, we use the angular error to measure the localization quality. The metric is defined as follows: We first extract the detected target locations in the predicted voice heat map using non-maximum suppression. Every peak in the heat map with a value greater than a threshold is a potential target. In the experiments, we set the threshold to 0. The positions of the peaks in the heat map indicate the direction angles. We compute the minimum distances from the detected points to the ground truth points in the voice heat map; the mean and standard deviation of these distances are denoted Mean E1 and Std1.
The corresponding metrics from the ground truth point set to the detected point set are Mean E2 and Std2. We use the distance metric in both directions in order to take both missed detections and false alarms into account. Not all of the competing methods can give full 360$^{\circ}$ spherical localization results. In this experiment, we compare our method with the methods that use traditional DOA maps and with the audio-visual variation that uses raw audio input. As shown in Table~\ref{tab:vloc360}, our method gives the lowest angular errors. \setlength{\tabcolsep}{1pt} \begin{table}[tbh] \small \centering \begin{tabular}{ c | c | c | c | c } \hline & Mean E1 & Std1 & Mean E2 & Std2 \\ \hline Ours AV(cor) & 16.77 & 12.63 & 6.56 & 8.77 \\ Ours AV(spec) & 8.81 & 9.63 & 6.21 & 6.89 \\ DOA & 129.82 & 18.26 & 46.45 & 21.50 \\ DOA+image & 66.81 & 7.89 & 36.48 & 8.97 \\ AV-rawaudio & 40.14 & 10.55 & 140.75 & 19.58 \\ \hline \end{tabular} \caption{Comparison of full 360$^{\circ}$ spherical voice activity localization errors measured in degrees.} \label{tab:vloc360} \vspace{-10pt} \end{table} \setlength{\tabcolsep}{1pt} \begin{table}[tbh] \small \centering \begin{tabular}{ c | c } \hline & Wearer audio activity mAP \\ \hline Ours(cor) & 90.20 \\ Ours(cor+eng) & 90.13 \\ Ours(eng) & 88.89 \\ Ours(spec) & 91.69 \\ Ours(cor)-2ch & 87.66 \\ Ours(spec)-2ch & 90.14 \\ Eng(single channel) & 76.71 \\ AV-rawaudio & 87.29 \\ \hline \end{tabular} \caption{Camera wearer voice activity detection. \texttt{Eng(single channel)} is the naive approach of using short-time energy for wearer voice classification.} \label{tab:wearer} \vspace{-10pt} \end{table} \vspace{-5pt} \subsection{Wearer Speech Activity Detection} \vspace{-5pt} Another unique property of the proposed method is that it can simultaneously detect the voice activity of the person wearing the recording glasses. Our method shares the learned audio features between both tasks. During the training of the camera wearer voice network, the shared feature design freezes the feature extraction parameters while training only the last two fully connected layers. Camera wearer audio activity detection is a new task, so we construct several natural solutions for the comparison. Table~\ref{tab:wearer} summarizes the comparison results. As shown in Table~\ref{tab:wearer}, our proposed method gives better results than the competing methods. The shared feature design in fact also gives a better result than training a separate wearer voice classification model. For instance, our method using cross correlation input features gives 90.2\% mAP, but if we retrain a separate wearer classifier the mAP is 88.01\%. This is likely because training the localization task provides additional supervision that explicitly suppresses the wearer's speech. Compared to traditional signal processing approaches, our method requires more computationally expensive GPU operations. However, the proposed method is still efficient. It runs in real time at over 180 frames per second using a single GTX2080Ti GPU with about 50\% utilization. More optimization could further improve the efficiency of the network. The proposed method also has a smaller latency compared to traditional signal processing methods, which require estimating signal statistics over longer windows of time. While we only use 4 microphones in our experiments, the proposed method could easily be extended to devices with any number of microphones in any array configuration.
With a larger microphone array, the proposed method has the potential to achieve even better results. \vspace{-5pt} \section{Conclusion} \vspace{-5pt} We proposed a novel multi-channel audio-visual method that tackles the 360$^{\circ}$ spherical active speaker detection problem, localizing active speakers both within and beyond an egocentric camera's visual field of view while simultaneously predicting the wearer's voice activity. Our experiments showed that the proposed method gives superior results to competing methods and can run in real time with low latency. It can be deployed to enable many useful AR functions. \small
\section{Introduction} The inverse Gaussian distribution (IGD) \cite{tweedie1957inversegaussian,johnson1970continuous} is widely used in a variety of application areas including reliability and survival analysis \cite{whitmore1975inversegauss,chhikara1977invgausslifetime,bardsley1980inversegauss,chhikara1989inversegauss,wang2010inverse,balakrishna2014inverse}. It is more generally used for modeling non-negative positively skewed data because of its connections to exponential families and generalized linear models \cite{seshadri1993inversegauss,blough1999modeling,smyth1999adjusted,dejong2008glms}. Our aim in this article is to develop reliable software for this distribution for the R programming environment (\url{http://www.r-project.org}). Basic probability functions for the IGD have been implemented previously in James Lindsey's R package \pkg{rmutil} \cite{rmutil} and in the CRAN packages \pkg{SuppDists} \cite{SuppDists} and \pkg{STAR} \cite{STAR}. We have found however that none of these IGD functions work for all parameter values or return results to full machine accuracy. Bob Wheeler remarks in the \pkg{SuppDists} documentation that the IGD ``is an extremely difficult distribution to treat numerically''. The \pkg{rmutil} package was removed from CRAN in 1999 but is still available from Lindsey's webpage (\url{http://www.commanster.eu/rcode.html}). \pkg{SuppDists} was orphaned in 2013 but is still available from CRAN. The \pkg{SuppDists} code is mostly implemented in C while the other packages are pure R as far as the IGD functions are concerned. The probability density of the IGD has a simple closed form expression and so is easy to compute. Care is still required though to handle infinite parameter values that correspond to valid limiting cases. The cumulative distribution function (cdf) is also available in closed form via an indirect relationship with the normal distribution \cite{shuster1968inverse,chhikara1974estimation}. Considerable care is nevertheless required to compute probabilities accurately on the log-scale, because the formula involves a sum of two normal probabilities on the un-logged scale. Random variates from IGDs can be generated using a combination of chisquare and binomial random variables \cite{michael1976generating}. Most difficult is the inverse cdf or quantile function, which must be computed by some iterative numerical approximation. Two strategies have been used to compute IGD quantiles. One is to solve for the quantile using a general-purpose equation solver such as the \code{uniroot} function in R. This is the approach taken by the \code{qinvgauss} functions in the \pkg{rmutil} and \pkg{STAR} packages. This approach can usually be relied on to converge satisfactorily but is computationally slow and provides only limited precision. The other approach is to use Newton's method to solve the equation after applying an initial approximation \cite{kallioras2014percentile}. This approach was taken by one of the current authors when developing inverse Gaussian code for S-PLUS \cite{smyth1998invgauss}. It is also the approach taken by the \code{qinvGauss} function in the \pkg{SuppDists} package. This approach is fast and accurate when it works but can fail unpredictably when the Newton iteration diverges. Newton's method cannot in general be guaranteed to converge, even when the initial approximation is close to the required value, and the parameter values for which divergence occurs are hard to predict. 
We have resolved the above difficulties by developing a Newton iteration for the IGD quantiles that has guaranteed convergence. Instead of attempting to find a starting value that is close to the required solution, we use the convexity properties of the cdf to approach the required quantiles in a predictable fashion. We show that Newton's method for finding the quantiles of an IGD always converges when started from the mode of the distribution. Furthermore the convergence is monotonic, so that backtracking is eliminated. Newton's method is eventually quadratically convergent, meaning that the number of correct decimal places tends to double with each iteration \cite{press1992numericalrecipes}. Although the starting value may be far from the required solution, the rapid convergence means the starting value is quickly left behind. Convergence tends to be rapid even when the required quantile is in the extreme tails of the distribution.

The above methods have been implemented in the \code{dinvgauss}, \code{pinvgauss}, \code{qinvgauss} and \code{rinvgauss} functions of the \pkg{statmod} package \cite{statmod}. The functions give close to machine accuracy for all possible parameter values. They obey similar conventions to the probability functions provided in the \pkg{stats} package that is bundled with R. Tests show that the functions are faster, more accurate and more reliable than existing functions for the IGD. Every effort has been made to ensure that the functions return results for the widest possible range of parameter values.

\section{Density function} \label{sec:density} The inverse Gaussian distribution, denoted IG($\mu$,$\phi$), has probability density function (pdf)
\begin{equation}
d(x;\mu,\phi)=\left(2\pi\phi x^3\right)^{-1/2} \exp\left\{-\frac{(x-\mu)^2}{2\phi\mu^2 x}\right\} \label{pdf}
\end{equation}
for $x>0$, $\mu>0$ and $\phi>0$. The mean of the distribution is $\mu$ and the variance is $\phi\mu^3$. In generalized linear model theory \cite{mccullagh1989glms,smyth1999adjusted}, $\phi$ is called the \dfn{dispersion} parameter. Another popular parametrization of the IGD uses $\lambda=1/\phi$, which we call the \dfn{shape} parameter. For best accuracy, we compute $d(x;\mu,\phi)$ on the log-scale and then exponentiate if an unlogged value is required. Note that the mean $\mu$ can be viewed as a scaling parameter: if $X$ is distributed as IG($\mu$,$\phi$), then $X/\mu$ is also inverse Gaussian with mean $1$ and dispersion $\phi\mu$. The skewness of the distribution is therefore determined by $\phi\mu$, and in fact $\phi\mu$ is the squared coefficient of variation of the distribution.

\begin{figure}[t] \begin{center} \includegraphics[width=\textwidth]{fig1_igpdf} \caption{Probability density functions of inverse Gaussian distributions. The left panel shows densities for different $\lambda$ with $\mu=1$. The right panel shows densities for different $\mu$ for $\lambda=1$. The densities are unimodal with mode between 0 and $\mu$. As $\mu/\lambda$ increases the distribution becomes more right skew and the mode decreases relative to the mean. Note that $\lambda=1/\phi$.} \label{fig:pdf} \end{center} \end{figure}

The IGD is unimodal with mode at
\begin{equation}
m=\mu\left\{\left(1+\kappa^2\right)^{1/2}-\kappa\right\} \label{eq:mode}
\end{equation}
where $\kappa=3\phi\mu/2$ \cite{johnson1970continuous}. The second factor in the mode is strictly between 0 and 1, showing that the mode is strictly between 0 and $\mu$.
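Before turning to special cases, a minimal illustration (our own sketch, not the \pkg{statmod} implementation, and without the boundary-case handling described below) shows how the log-density in equation~\ref{pdf} can be transcribed directly:

\begin{example}
> # Illustrative only: direct transcription of the log-density of
> # equation (1); 'ligd' is our own hypothetical helper, not statmod code.
> ligd <- function(x, mean = 1, dispersion = 1) {
+   -(log(2 * pi * dispersion) + 3 * log(x)) / 2 -
+     (x - mean)^2 / (2 * dispersion * mean^2 * x)
+ }
> exp(ligd(c(1, 2), mean = 1.5, dispersion = 0.7))
[1] 0.440 0.162
\end{example}

These values agree with the \code{dinvgauss} output shown below, but a direct transcription like this returns \code{NaN} rather than sensible limiting values for cases such as $x=0$ or infinite parameters, which is why the special cases of Table~\ref{tab:specialcases} need explicit treatment.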
Figure~\ref{fig:pdf} shows the pdf of the IGD for various choices of $\mu$ and $\lambda$.

\begin{table} \begin{center} \begin{tabular}{lcccc} \hline Description & Parameter values & log-pdf & pdf & cdf\\ \hline Left limit & $x<0$ & $-\infty$ & 0 & 0\\ Left limit & $x=0$, $\mu>0$ and $\phi<\infty$ & $-\infty$ & 0 & 0\\ Left limit & $x<\mu$ and $\phi=0$ & $-\infty$ & 0 & 0\\ Right limit & $x=\infty$ & $-\infty$ & 0 & 1\\ Right limit & $x>\mu$ and $\phi=0$ & $-\infty$ & 0 & 1\\ Right limit & $x>0$ and $\phi=\infty$ & $-\infty$ & 0 & 1\\ Spike & $x=\mu<\infty$ and $\phi=0$ & $\infty$ & $\infty$ & 1\\ Spike & $x=0$ and $\phi=\infty$ & $\infty$ & $\infty$ & 1\\ Inverse chisquare & $\mu=\infty$ and $\phi<\infty$ & Eqn \ref{pdfinfmean} & Eqn \ref{pdfinfmean} & Uses \code{pchisq}\\ Invalid & $\mu<0$ or $\phi<0$ & \code{NA} & \code{NA} & \code{NA}\\ \hline \end{tabular} \caption{Probability density function values for special cases of the parameter values. The pdf values for infinite parameters are theoretical limit values.} \label{tab:specialcases} \end{center} \end{table}

Care needs to be taken with special cases when evaluating the pdf (Table~\ref{tab:specialcases}). When $\phi\mu$ is large, a Taylor series expansion shows that the mode becomes dependent on $\phi$ only:
\begin{equation}
m =\mu\kappa\left\{\left(1+\kappa^{-2}\right)^{1/2}-1\right\}
=\mu\kappa\left(\frac{1}{2\kappa^2}-\frac{1}{8\kappa^4}+\frac{1}{16\kappa^6}-\cdots\right)
\approx \mu\kappa\frac{1}{2\kappa^2}
=\frac{1}{3\phi}. \label{eq:modetaylor}
\end{equation}
Under the same conditions, the peak value of the density can be seen to converge to $\phi(2\pi/27)^{-1/2}\exp(-3/2)$. This shows that the distribution has a spike at 0 whenever $\phi$ is very large, regardless of $\mu$. It is also known that
\begin{equation}
\frac{(X-\mu)^2}{\phi X \mu^2} \sim \chi^2_1 \label{chisq}
\end{equation}
\cite{shuster1968inverse}. Amongst other things, this implies that $1/(X\phi) \sim \chi^2_1$ asymptotically for $\mu$ large. For infinite $\mu$, the density becomes
\begin{equation}
d(x;\infty,\phi)=\left(2\pi x^3 \phi\right)^{-1/2} \exp\left(-\frac{1}{2\phi x}\right). \label{pdfinfmean}
\end{equation}
The pdf is always \code{NA} if $x$ is \code{NA}. Missing values for $\phi$ lead to \code{NA} values for the pdf except when $x<0$ or $x=\infty$. Missing values for $\mu$ lead to \code{NA} values for the pdf except when $x<0$, $x=\infty$ or $\phi=\infty$.

Next we give some code examples. We start by loading the packages that we will compare. Note that \pkg{statmod} is loaded last and is therefore first in the search path.
\begin{example}
> library(rmutil)
> library(SuppDists)
> library(STAR)
> library(statmod)
\end{example}
The \pkg{statmod} \code{dinvgauss} function checks for out-of-range or missing values:
\begin{example}
> options(digits = 3)
> dinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = 1.5, dispersion = 0.7)
[1] 0.000 0.000 0.440 0.162 0.000    NA
\end{example}
Infinite mean corresponds to an inverse-chisquare case:
\begin{example}
> dinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = Inf, dispersion = 0.7)
[1] 0.000 0.000 0.233 0.118 0.000    NA
\end{example}
Infinite dispersion corresponds to a spike at 0 regardless of the mean:
\begin{example}
> dinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = NA, dispersion = Inf)
[1]   0 Inf   0   0   0  NA
\end{example}
Extreme $x$ values have zero density regardless of the mean or dispersion:
\begin{example}
> dinvgauss(c(-1, 0, 1, Inf), mean = NA, dispersion = NA)
[1]  0 NA NA  0
\end{example}
All the existing functions \code{rmutil::dinvgauss}, \code{SuppDists::dinvGauss} and \code{STAR::dinvgauss} return errors for the above calls; they do not tolerate \code{NA} values, or infinite parameter values, or $x$ values outside the support of the distribution.

\section{Cumulative distribution function} Let $p(q;\mu,\phi)=P(X\le q)$ be the left tail cdf, and write $\bar p(q;\mu,\phi)$ for the right tail probability $P(X> q)=1-p(q;\mu,\phi)$. The formula developed by \cite{shuster1968inverse} for the cdf is
\[
p(q;\mu,\phi)=p_{\rm norm}((q_m-1)/r)+\exp{(2/\phi_m)} p_{\rm norm}(-(q_m+1)/r)
\]
where $q_m=q/\mu$, $\phi_m=\phi\mu$, $r=(q\phi)^{1/2}$ and $p_{\rm norm}$ is the cdf of the standard normal distribution. The right tail probability can be written similarly:
\[
\bar p(q;\mu,\phi)=\bar p_{\rm norm}((q_m-1)/r)-\exp{(2/\phi_m)} p_{\rm norm}(-(q_m+1)/r)
\]
where $\bar p_{\rm norm}$ is the right tail of the standard normal. The fact that this formula is additive on the unlogged scale poses some numerical problems. The $p_{\rm norm}()$ evaluations are subject to floating point underflow, the $\exp()$ evaluation is subject to overflow, and there is the danger of subtractive cancellation when computing the right tail probability.

It is possible to derive an asymptotic expression for the right tail probability. If $q$ is very large then:
\[
\log\bar p(q;\mu,\phi) \approx \frac{1}{\phi_m} - 0.5\log\pi - \log(2\phi_m) - 1.5\log\left(\frac{q_m}{2\phi_m}+1\right) -\frac{q_m}{2\phi_m}.
\]
See the Appendix for the derivation of this approximation. This approximation is very accurate when $\phi_m^{-1/2}(q_m-1) > 10^5$, but only gives 2--3 significant figures correctly for more modest values such as $\phi_m^{-1/2}(q_m-1) = 10$.

To avoid or minimize the numerical problems described above, we convert the terms in the cdf to the log-scale and remove a common factor before combining the two terms to get $\log p$. Given a quantile value $q$, we compute the corresponding $\log p$ as follows:
\begin{align*}
a &= \log p_{\rm norm}((q_m-1)/r)\\
b &= 2/\phi_m + \log p_{\rm norm} (-(q_m+1)/r)\\
\log p &= a+{\rm log1p}(\exp(b-a))
\end{align*}
where $\log p_{\rm norm}()$ is computed by \code{pnorm} with \code{lower.tail=TRUE} and \code{log.p=TRUE}. Note also that \code{log1p()} is an R function that computes the logarithm of one plus its argument, avoiding subtractive cancellation for small arguments. The computation of the right tail probability is similar but with
\begin{align*}
a &= \log \bar p_{\rm norm}((q_m-1)/r)\\
\log\bar p &= a + {\rm log1p}(-\exp(b-a)).
\end{align*}
Because of this careful computation, the \code{statmod::pinvgauss} function is able to compute correct cdf values even in the far tails of the distribution:
\begin{example}
> options(digits = 4)
> pinvgauss(0.001, mean = 1.5, disp = 0.7)
[1] 3.368e-312
> pinvgauss(110, mean = 1.5, disp = 0.7, lower.tail = FALSE)
[1] 2.197e-18
\end{example}
None of the existing functions can distinguish such small left tail probabilities from zero:
\begin{example}
> rmutil::pinvgauss(0.001, m = 1.5, s = 0.7)
[1] 0
> SuppDists::pinvGauss(0.001, nu = 1.5, lambda = 1/0.7)
[1] 0
> STAR::pinvgauss(0.001, mu = 1.5, sigma2 = 0.7)
[1] 0
\end{example}
\code{rmutil::pinvgauss} does not compute right tail probabilities. \code{STAR::pinvgauss} does, but cannot distinguish right tail probabilities less than \code{1e-17} from zero:
\begin{example}
> STAR::pinvgauss(110, mu = 1.5, sigma2 = 0.7, lower.tail = FALSE)
[1] 0
\end{example}
\code{SuppDists::pinvGauss} returns non-zero right tail probabilities, but these are too large by a factor of more than 10:
\begin{example}
> SuppDists::pinvGauss(110, nu = 1.5, lambda = 1/0.7, lower.tail = FALSE)
[1] 2.935e-17
\end{example}
The use of log-scale computations means that \code{statmod::pinvgauss} can accurately compute log-probabilities that are too small to be represented on the unlogged scale:
\begin{example}
> pinvgauss(0.0001, mean = 1.5, disp = 0.7, log.p = TRUE)
[1] -7146.914
\end{example}
None of the other packages can compute log-probabilities less than about $-700$.

\code{pinvgauss} handles special cases similarly to \code{dinvgauss} (Table~\ref{tab:specialcases}). Again, none of the existing functions do this:
\begin{example}
> pinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = 1.5, dispersion = 0.7)
[1] 0.0000 0.0000 0.5009 0.7742 1.0000     NA
\end{example}
Infinite mean corresponds to an inverse-chisquare case:
\begin{example}
> pinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = Inf, dispersion = 0.7)
[1] 0.000 0.000 0.232 0.398 1.000    NA
\end{example}
Infinite dispersion corresponds to a spike at 0 regardless of the mean:
\begin{example}
> pinvgauss(c(-1, 0, 1, 2, Inf, NA), mean = NA, dispersion = Inf)
[1]  0  1  1  1  1 NA
\end{example}
Extreme $x$ values have cdf equal to 0 or 1 regardless of the mean or dispersion:
\begin{example}
> pinvgauss(c(-1, 0, 1, Inf), mean = NA, dispersion = NA)
[1]  0 NA NA  1
\end{example}
We can test the accuracy of the cdf functions by comparing to the cdf of the $\chi^2_1$ distribution. For any $q_1<\mu$, let $q_2>\mu$ be that value satisfying
$$z=\frac{(q_1-\mu)^2}{\phi\mu^2 q_1}=\frac{(q_2-\mu)^2}{\phi\mu^2 q_2}.$$
From equation~\ref{chisq}, we can conclude that the upper tail probability for the $\chi^2_1$ distribution at $z$ should be the sum of the IGD tail probabilities for $q_1$ and $q_2$, i.e.,
\begin{equation}
\bar p_{\mathrm{chisq}}(z)=p(q_1;\mu,\phi)+\bar p(q_2;\mu,\phi). \label{chisqcdf}
\end{equation}
The following code implements this process for an illustrative example with $\mu=1.5$, $\phi=0.7$ and $q_1=0.1$.
First we have to solve for $q_2$:
\begin{example}
> options(digits = 4)
> mu <- 1.5
> phi <- 0.7
> q1 <- 0.1
> z <- (q1 - mu)^2 / (phi * mu^2 * q1)
> polycoef <- c(mu^2, -2 * mu - phi * mu^2 * z, 1)
> q <- Re(polyroot(polycoef))
> q
[1]  0.1 22.5
\end{example}
The chisquare cdf value corresponding to the left hand side of equation~\ref{chisqcdf} is:
\begin{example}
> options(digits = 18)
> pchisq(z, df = 1, lower.tail = FALSE)
[1] 0.00041923696954098788
\end{example}
Now we compute the right hand side of equation~\ref{chisqcdf} using each of the IGD packages, starting with \pkg{statmod}:
\begin{example}
> pinvgauss(q[1], mean = mu, disp = phi) +
+   pinvgauss(q[2], mean = mu, disp = phi, lower.tail = FALSE)
[1] 0.00041923696954098701
> rmutil::pinvgauss(q[1], m = mu, s = phi) +
+   1 - rmutil::pinvgauss(q[2], m = mu, s = phi)
[1] 0.00041923696954104805
> SuppDists::pinvGauss(q[1], nu = mu, lambda = 1/phi) +
+   SuppDists::pinvGauss(q[2], nu = mu, lambda = 1/phi, lower.tail = FALSE)
[1] 0.00041923696954101699
> STAR::pinvgauss(q[1], mu = mu, sigma2 = phi) +
+   STAR::pinvgauss(q[2], mu = mu, sigma2 = phi, lower.tail = FALSE)
[1] 0.00041923696954100208
\end{example}
It can be seen that the \pkg{statmod} function is the only one to agree with \code{pchisq} to 15 significant figures, corresponding to a relative error of about $10^{-15}$. The other three packages give 12 significant figures, corresponding to relative errors of slightly over $10^{-12}$.

More extreme tail values give even more striking results. We repeat the above process now with $q_1=0.01$:
\begin{example}
> q1 <- 0.01
> z <- (q1 - mu)^2 / (phi * mu^2 * q1)
> polycoef <- c(mu^2, -2 * mu - phi * mu^2 * z, 1)
> q <- Re(polyroot(polycoef))
\end{example}
The reference chisquare cdf value is:
\begin{example}
> pchisq(z, df = 1, lower.tail = FALSE)
[1] 1.6427313604456241e-32
\end{example}
This can be compared to the corresponding values from the IGD packages:
\begin{example}
> pinvgauss(q[1], mean = mu, disp = phi) +
+   pinvgauss(q[2], mean = mu, disp = phi, lower.tail = FALSE)
[1] 1.6427313604456183e-32
> rmutil::pinvgauss(q[1], m = mu, s = phi) +
+   1 - rmutil::pinvgauss(q[2], m = mu, s = phi)
[1] 0
> SuppDists::pinvGauss(q[1], nu = mu, lambda = 1/phi) +
+   SuppDists::pinvGauss(q[2], nu = mu, lambda = 1/phi, lower.tail = FALSE)
[1] 8.2136568022278466e-33
> STAR::pinvgauss(q[1], mu = mu, sigma2 = phi) +
+   STAR::pinvgauss(q[2], mu = mu, sigma2 = phi, lower.tail = FALSE)
[1] 1.6319986233795599e-32
\end{example}
It can be seen from the above that \pkg{rmutil} and \pkg{SuppDists} do not agree with \code{pchisq} to any significant figures, meaning that the relative error is close to 100\%, while \pkg{STAR} manages 3 significant figures. \pkg{statmod} on the other hand continues to agree with \code{pchisq} to 15 significant figures.

\section{Inverting the cdf} Now consider the problem of computing the quantile function $q(p;\mu,\phi)$. The quantile function computes $q$ satisfying $P(X\le q)=p$. If $q_n$ is an initial approximation to $q$, then Newton's method is a natural choice for refining the estimate. Newton's method gives the updated estimate as
$$q_{n+1}=q_n+\frac{p-p(q_n;\mu,\phi)}{d(q_n;\mu,\phi)}.$$
For right-tail probabilities, the Newton step is almost the same:
$$q_{n+1}=q_n-\frac{p-\bar p(q_n;\mu,\phi)}{d(q_n;\mu,\phi)}$$
where now $P(X> q)=p$. Newton's method is very attractive because it is quadratically convergent if started sufficiently close to the required value.
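For concreteness, a bare-bones version of this update (our own sketch, with a fixed number of iterations and none of the safeguards developed below; \code{newton\_q} is a hypothetical helper, not part of \pkg{statmod}) can be written using the \pkg{statmod} density and cdf:
\begin{example}
> # Plain Newton iteration for the p-quantile; illustrative only.
> newton_q <- function(p, mean = 1, dispersion = 1, q0, iters = 25) {
+   q <- q0
+   for (i in 1:iters)
+     q <- q + (p - pinvgauss(q, mean = mean, dispersion = dispersion)) /
+       dinvgauss(q, mean = mean, dispersion = dispersion)
+   q
+ }
> round(newton_q(0.9, mean = 1, dispersion = 1, q0 = 1), 4)
[1] 2.143
\end{example}
Started from \code{q0 = 1}, which lies between the mode and the required quantile, this reproduces the 0.9 quantile value 2.1430 shown later in this section; as the next paragraphs explain, the choice of starting value is exactly what makes such an iteration reliable or unreliable.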
It is hard however to characterize how close the starting value needs to be to achieve convergence and in general there is no guarantee that the Newton iteration will not diverge or give impossible values such as $q<0$ or $q=\infty$. Our approach is to derive simple conditions on the starting values such that the Newton iteration always converges and does so without any backtracking. We call this behavior \dfn{monotonic convergence}.

Recall that the IGD is unimodal for all parameter values with mode $m$ given previously. It follows that the pdf $d(q;\mu,\phi)$ is increasing for all $q<m$ and decreasing for all $q>m$ and the cdf $p(q;\mu,\phi)$ is convex for $q<m$ and concave for $q>m$. In other words, the cdf has a point of inflexion at the mode of the distribution.

\begin{figure} \begin{center} \includegraphics[width=\textwidth]{fig2_igcdf} \caption{Monotonic Newton's method for quantiles of inverse Gaussian distributions. The cdf has a point of inflexion, marked by a red dot, at the mode of the distribution. Blue lines show the progress of the iteration for the 0.01 or 0.99 quantiles. Since the cdf is convex to the left of the mode and concave to the right, starting the iteration at the point of inflexion ensures convergence to the required quantiles without any backtracking.} \label{fig:newton} \end{center} \end{figure}

Suppose that the required $q$ satisfies $q \ge m$ and suppose that the working estimate satisfies $m \le q_n \le q$. It can be seen that the cdf is concave in the interval $[q_n,q]$, the Newton step will be positive and the updated estimate $q_{n+1}$ will still satisfy $m \le q_{n+1} \le q$ (Figure~\ref{fig:newton}). Suppose instead that $q<m$ and suppose that the working estimate satisfies $q \le q_n \le m$. In this case it can be seen that the cdf is convex in the interval $[q_n,q]$, the Newton step will be negative and the updated estimate $q_{n+1}$ will still satisfy $q \le q_{n+1} \le m$ (Figure~\ref{fig:newton}). It follows that Newton's method is always monotonically convergent provided that the starting value lies between the mode $m$ and the required value $q$. In fact the mode $m$ itself can be used as the starting value. Note that to compute the mode $m$ accurately without subtractive cancellation we use equation~\ref{eq:modetaylor} when $\kappa$ is large and use equation~\ref{eq:mode} otherwise.

We use $q_0=m$ as the starting value for the Newton iteration unless the left or right tail probability is very small. When the left tail probability is less than $10^{-5}$, we use instead
$$q_0=\frac{\mu}{\phi q_{\rm norm}^2}$$
where $q_{\rm norm}$ is the corresponding quantile of the standard normal distribution. When the right tail probability is less than $10^{-5}$, we use
$$q_0=q_{\rm gamma}$$
where $q_{\rm gamma}$ is the corresponding quantile of the gamma distribution with the same mean and variance as the IGD. These starting values are closer to the required $q$ than is $m$ but still lie between $m$ and the required $q$ and so are in the domain of monotonic convergence. We use the alternative starting values only for extreme tail probabilities because in other cases the computational cost of computing the starting value is greater than the saving enjoyed by reducing the number of Newton iterations that are needed.

The term $p-p(q_n;\mu,\phi)$ in the Newton step could potentially suffer loss of floating point precision by subtractive cancellation when $p$ and $p(q_n;\mu,\phi)$ are nearly equal or if $p$ is very close to 1.
To avoid this we work with $p$ on the log-scale and employ a Taylor series expansion when $p$ and $p(q_n;\mu,\phi)$ are relatively close. Let $\delta=\log p - \log p(q_n;\mu,\phi)$. When $|\delta|<10^{-5}$, we approximate
\[
p-p(q_n;\mu,\phi)\approx \delta \exp\left\{\log p + {\rm log1p}(-\delta/2)\right\}.
\]
Here $\log p(q_n;\mu,\phi)$ is computed by \code{pinvgauss} with \code{log.p=TRUE} and ${\rm log1p}(-\delta/2)$ is computed using the \code{log1p} function.

We find that the \pkg{statmod} \code{qinvgauss} function gives 16 significant figures whereas the other packages give no more than 6--8 figures of accuracy. Precision can be demonstrated by comparing the probability vector $p$ with the values obtained by passing the probabilities through \code{qinvgauss} and \code{pinvgauss}. Since \code{qinvgauss} and \code{pinvgauss} are inverse functions, the final probabilities should in principle equal the original values. Error is measured by comparing the original and processed probability vectors:
\begin{example}
> p <- c(0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.5,
+        0.9, 0.99, 0.999, 0.9999, 0.99999, 0.999999)
> p1 <- pinvgauss(qinvgauss(p, mean = 1, disp = 1), mean = 1, disp = 1)
> p2 <- rmutil::pinvgauss(rmutil::qinvgauss(p, m = 1, s = 1), m = 1, s = 1)
> p3 <- SuppDists::pinvGauss(SuppDists::qinvGauss(p, nu = 1, la = 1), nu = 1, la = 1)
> p4 <- STAR::pinvgauss(STAR::qinvgauss(p, mu = 1, sigma2 = 1), mu = 1, sigma2 = 1)
> options(digits = 4)
> summary( abs(p-p1) )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 0.00e+00 0.00e+00 0.00e+00 1.92e-17 2.20e-19 2.22e-16
> summary( abs(p-p2) )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 0.00e+00 5.10e-09 8.39e-08 3.28e-07 5.92e-07 1.18e-06
> summary( abs(p-p3) )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 1.00e-12 6.00e-12 2.77e-10 1.77e-09 2.58e-09 1.03e-08
> summary( abs(p-p4) )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 0.00e+00 0.00e+00 1.20e-08 8.95e-07 2.17e-07 6.65e-06
\end{example}
It can be seen that the error for \code{statmod::qinvgauss} is never greater than \code{2e-16}. Similar results are observed if relative error is assessed in terms of the quantile $q$ instead of the probability $p$:
\begin{example}
> q <- qinvgauss(p, mean = 1, disp = 1)
> q1 <- qinvgauss(pinvgauss(q, mean = 1, disp = 1), mean = 1, disp = 1)
> q2 <- rmutil::qinvgauss(rmutil::pinvgauss(q, m = 1, s = 1), m = 1, s = 1)
> q3 <- SuppDists::qinvGauss(SuppDists::pinvGauss(q, nu = 1, la = 1), nu = 1, la = 1)
> q4 <- STAR::qinvgauss(STAR::pinvgauss(q, mu = 1, sigma2 = 1), mu = 1, sigma2 = 1)
> summary( abs(q1-q)/q )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 0.00e+00 0.00e+00 0.00e+00 5.57e-17 0.00e+00 4.93e-16
> summary( abs(q2-q)/q )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 0.00e+00 1.70e-06 3.30e-06 8.94e-05 8.80e-05 5.98e-04
> summary( abs(q3-q)/q )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 1.09e-08 3.94e-08 4.78e-08 4.67e-08 5.67e-08 8.93e-08
> summary( abs(q4-q)/q )
     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 0.00e+00 3.00e-07 1.40e-06 9.20e-05 9.42e-05 5.46e-04
\end{example}
The relative error for \code{statmod::qinvgauss} is never worse than \code{5e-16}.

Speed was determined by generating \code{p} as a vector of a million random uniform deviates, and running the \code{qinvgauss} or \code{qinvGauss} functions on \code{p} with mean and dispersion both equal to one.
\begin{example}
> set.seed(20140526)
> u <- runif(1000)
> p <- runif(1e6)
> system.time(q1 <- qinvgauss(p, mean = 1, shape = 1))
   user  system elapsed
   4.29    0.41    4.69
> system.time(q2 <- rmutil::qinvgauss(p, m = 1, s = 1))
   user  system elapsed
 157.39    0.03  157.90
> system.time(q3 <- SuppDists::qinvGauss(p, nu = 1, lambda = 1))
   user  system elapsed
  13.59    0.00   13.68
> system.time(q4 <- STAR::qinvgauss(p, mu = 1, sigma2 = 1))
   user  system elapsed
 266.41    0.06  267.25
\end{example}
Timings shown here are for a Windows laptop with a 2.7GHz Intel i7 processor running 64-bit R-devel (built 31 January 2016). The \pkg{statmod} \code{qinvgauss} function is about 40 times faster than the \pkg{rmutil} or \pkg{STAR} functions and about 3 times faster than \pkg{SuppDists}.

Reliability is perhaps even more crucial than precision or speed. \code{SuppDists::qinvGauss} fails for some parameter values because Newton's method does not converge from the starting values provided:
\begin{example}
> options(digits = 4)
> SuppDists::qinvGauss(0.00013, nu = 1, lambda = 3)
Error in SuppDists::qinvGauss(0.00013, nu = 1, lambda = 3) :
  Iteration limit exceeded in NewtonRoot()
\end{example}
By contrast, \code{statmod::qinvgauss} runs successfully for all parameter values because divergence of the algorithm is impossible:
\begin{example}
> qinvgauss(0.00013, mean = 1, shape = 3)
[1] 0.1504
\end{example}
\code{qinvgauss} returns right tail values accurately, for example:
\begin{example}
> qinvgauss(1e-20, mean = 1.5, disp = 0.7, lower.tail = FALSE)
[1] 126.3
\end{example}
The same probability can be supplied as a left tail probability on the log-scale, with the same result:
\begin{example}
> qinvgauss(-1e-20, mean = 1.5, disp = 0.7, log.p = TRUE)
[1] 126.3
\end{example}
Note that \code{qinvgauss} returns the correct quantile in this case even though the left tail probability is not distinguishable from 1 in floating point arithmetic on the unlogged scale. By contrast, the \pkg{rmutil} and \pkg{STAR} functions do not compute right tail values and the \pkg{SuppDists} function fails to converge for small right tail probabilities:
\begin{example}
> SuppDists::qinvGauss(1e-20, nu = 1.5, lambda = 1/0.7, lower.tail = FALSE)
Error in SuppDists::qinvGauss(1e-20, nu = 1.5, lambda = 1/0.7, lower.tail = FALSE) :
  Infinite value in NewtonRoot()
\end{example}
Similarly for log-probabilities, the \pkg{rmutil} and \pkg{STAR} functions do not accept log-probabilities and the \pkg{SuppDists} function gives an error:
\begin{example}
> SuppDists::qinvGauss(-1e-20, nu = 1.5, lambda = 1/0.7, log.p = TRUE)
Error in SuppDists::qinvGauss(-1e-20, nu = 1.5, lambda = 1/0.7, log.p = TRUE) :
  Infinite value in NewtonRoot()
\end{example}
All the \pkg{statmod} IGD functions allow variability to be specified either by way of a dispersion ($\phi$) or shape ($\lambda$) parameter:
\begin{example}
> args(qinvgauss)
function (p, mean = 1, shape = NULL, dispersion = 1, lower.tail = TRUE,
    log.p = FALSE, maxit = 200L, tol = 1e-14, trace = FALSE)
\end{example}
Boundary or invalid values of \code{p} are detected:
\begin{example}
> options(digits = 4)
> qinvgauss(c(0, 0.5, 1, 2, NA))
[1] 0.0000 0.6758    Inf     NA     NA
\end{example}
as are invalid values for $\mu$ or $\phi$:
\begin{example}
> qinvgauss(0.5, mean = c(0, 1, 2))
[1]     NA 0.6758 1.0285
\end{example}
The \pkg{statmod} functions \code{dinvgauss}, \code{pinvgauss} and \code{qinvgauss} all preserve the attributes of the first input argument provided that none of the other arguments have longer length.
For example, \code{qinvgauss} will return a matrix if \code{p} is a matrix:
\begin{example}
> p <- matrix(runif(4), 2, 2)
> rownames(p) <- c("A", "B")
> colnames(p) <- c("X1", "X2")
> p
      X1     X2
A 0.6001 0.3435
B 0.4919 0.4987
> qinvgauss(p)
      X1     X2
A 0.8486 0.4759
B 0.6637 0.6739
\end{example}
Similarly the names of a vector are preserved on output:
\begin{example}
> p <- c(0.1, 0.6, 0.7, 0.9)
> names(p) <- LETTERS[1:4]
> qinvgauss(p)
     A      B      C      D
0.2376 0.8483 1.0851 2.1430
\end{example}

\section{Random deviates} The functions \code{statmod::rinvgauss}, \code{SuppDists::rinvGauss} and \code{STAR::rinvgauss} all use the same algorithm to compute random deviates from the IGD. The method is to generate chisquare random deviates corresponding to $(X-\mu)^2/(\phi X \mu^2)$, and then choose between the two possible $X$ values leading to the same chisquare value, with probabilities worked out by \cite{michael1976generating}. The \pkg{SuppDists} function is faster than the others because of the implementation in C. Nevertheless, the pure R \pkg{statmod} and \pkg{STAR} functions are acceptably fast. The \pkg{statmod} function generates a million random deviates in about a quarter of a second of elapsed time on a standard business laptop computer while \pkg{STAR} takes about half a second. The \code{rmutil::rinvgauss} function generates random deviates by running \code{qinvgauss} on random uniform deviates. This is far slower and less accurate than the other functions.

\section{Discussion} Basic probability calculations for the IGD have been available in various forms for some time but the functions described here are the first to work for all parameter values and to return close to full machine accuracy. The \pkg{statmod} functions achieve good accuracy by computing probabilities on the log-scale where possible. Care is given to handle special limiting cases, including some cases that have not been previously described. The \pkg{statmod} functions trap invalid parameter values, provide all the standard arguments for probability functions in R and preserve argument attributes on output.

A new strategy has been described to invert the cdf using a monotonically convergent Newton iteration. It may seem surprising that we recommend starting the iteration from the same value regardless of the quantile required. Intuitively, a starting value that is closer to the required quantile might have been expected to be better. However, using a closer initial approximation runs the risk of divergence, and convergence of Newton's method from the mode is so rapid that the potential advantage of a closer initial approximation is minimized.

The \pkg{statmod} \code{qinvgauss} function is 40 times faster than the quantile functions in the \pkg{rmutil} or \pkg{STAR} packages, despite returning 16 rather than 6 figures of accuracy. It is also 3 times faster than \pkg{SuppDists}, even though \code{SuppDists::qinvGauss} is written in C, uses the same basic Newton strategy and has a less stringent stopping criterion. The starting values for Newton's method used by \code{SuppDists::qinvGauss} are actually closer to the final values than those used by \code{statmod::qinvgauss}, but the latter are more carefully chosen to achieve smooth convergence without backtracking. \code{SuppDists::qinvGauss} uses the log-normal approximation of \cite{whitmore1978normalizing} to start the Newton iteration and \code{STAR::qinvgauss} uses the same approximation to set up the interval limits for \code{uniroot}.
Unfortunately the log-normal approximation has much heavier tails than the IGD, meaning that the starting values are more extreme than the required quantiles and are therefore outside the domain of monotonic convergence. As well as the efficiency gained by avoiding backtracking, monotonic convergence has the advantage that any change in sign of the Newton step is a symptom that the limits of floating point accuracy have been reached. In the \pkg{statmod} \code{qinvgauss} function, the Newton iteration is stopped if this change of sign occurs before the convergence criterion is achieved.

The current \pkg{statmod} functions could be made faster by reimplementing in C, but the pure R versions have benefits in terms of understandability and easy maintenance, and they are only slightly slower than comparable functions such as \code{qchisq} and \code{qt}. The strategy used here to compute the quantile could be used for any continuous unimodal distribution, or for any continuous distribution that can be transformed to be unimodal.
\begin{example}
> sessionInfo()
R Under development (unstable) (2016-01-31 r70055)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

locale:
[1] LC_COLLATE=English_Australia.1252  LC_CTYPE=English_Australia.1252
[3] LC_MONETARY=English_Australia.1252 LC_NUMERIC=C
[5] LC_TIME=English_Australia.1252

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] statmod_1.4.24    STAR_0.3-7        codetools_0.2-14  gss_2.1-5
[5] R2HTML_2.3.1      mgcv_1.8-11       nlme_3.1-124      survival_2.38-3
[9] SuppDists_1.1-9.2 rmutil_1.0

loaded via a namespace (and not attached):
[1] Matrix_1.2-3    splines_3.3.0   grid_3.3.0      lattice_0.20-33
\end{example}
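Returning to the random deviate algorithm described above, the following is a minimal sketch of the \cite{michael1976generating} method (our own transcription, without the argument checking or parameter recycling of the packaged functions; \code{rig} is a hypothetical name):
\begin{example}
> # Michael, Schucany & Haas method; illustrative transcription only.
> rig <- function(n, mean = 1, dispersion = 1) {
+   y <- rchisq(n, df = 1)
+   # smaller root of the chisquare relation
+   # (x - mean)^2 / (dispersion * x * mean^2) = y
+   x1 <- mean + mean^2 * dispersion * y / 2 -
+     mean / 2 * sqrt(4 * mean * dispersion * y +
+                     mean^2 * dispersion^2 * y^2)
+   # choose between the two roots (whose product is mean^2) with the
+   # probabilities worked out by Michael et al (1976)
+   ifelse(runif(n) < mean / (mean + x1), x1, mean^2 / x1)
+ }
> # sanity check: mean(rig(1e6, mean = 1.5, dispersion = 0.7))
> # should be close to the distribution mean of 1.5
\end{example}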
\section{Introduction} It is known that a solution of a linear system $y^{\prime}\left( t\right) =Ay\left( t\right) $, $t\geq0$, has the form $y\left( t\right) =e^{At}y\left( 0\right) $, where the matrix exponential $e^{At}$ is also called the fundamental matrix. However, finding a fundamental matrix is more involved for the linear delay system
\begin{align}
y^{\prime}\left( t\right) & =Ay\left( t\right) +By\left( t-h\right) ,\ \ t\geq0,\ h>0,\label{ld1}\\
y\left( t\right) & =\varphi\left( t\right) ,\ \ -h\leq t\leq0,\nonumber
\end{align}
where $A,B$ are two constant square matrices. Under the assumption that $A$ and $B$ are permutable matrices, i.e. $AB=BA$, Khusainov \& Shuklin \cite{khus1} give a representation of a solution of a linear homogeneous system with delay by introducing the concept of the delayed matrix exponential $e_{h}^{Bt}$ corresponding to the delay $h$ and the matrix $B$. They proved that the fundamental matrix of the linear delay system (\ref{ld1}) (a delayed perturbation of the matrix exponential $e^{At}$) can be given by $e^{At}e_{h}^{B_{1}\left( t-h\right) }$, where $B_{1}=e^{-Ah}B$. Notice that the fractional analogue of the same problem was considered by Li and Wang \cite{wang1} in the case $A=\Theta$. For more recent contributions on oscillating systems with pure delay, relative controllability of systems with pure delay, asymptotic stability of nonlinear multidelay differential equations, and finite time stability of differential equations, one can refer to \cite{diblik3}-\cite{pos2} and the references therein.

Motivated by Khusainov \& Shuklin \cite{khus1} and Li and Wang \cite{wang1}, we consider the representation of solutions of a fractional delay differential equation of the form
\begin{align}
\left( ^{C}D_{-h^{+}}^{\alpha}y\right) \left( t\right) & =Ay\left( t\right) +By\left( t-h\right) +f\left( t\right) ,\ \ t\in\left( 0,T\right] ,\ h>0,\label{de1}\\
y\left( t\right) & =\varphi\left( t\right) ,\ \ -h\leq t\leq0,\nonumber
\end{align}
by introducing a delayed perturbation of the Mittag-Leffler function, where $\left( ^{C}D_{-h^{+}}^{\alpha}y\right) \left( \cdot\right) $ is the Caputo derivative of order $\alpha\in\left( 0,1\right) $, $A,B\in R^{n\times n}$ are constant matrices, $\varphi:\left[ -h,0\right] \rightarrow R^{n}$ is an arbitrary Caputo differentiable vector function, $f\in C\left( \left[ -h,T\right] ,R^{n}\right) $, and $T=lh$ for a fixed natural number $l$.

To end this section, we would like to state the main contributions as follows: (i) We propose a delayed perturbation $X_{h,\alpha,\beta}^{A,B}\left( t\right) $ of Mittag-Leffler type functions, by means of the matrix equations (\ref{re1}). We show that for $B=\Theta$ the function $X_{h,\alpha,\beta}^{A,B}\left( t\right) $ coincides with the Mittag-Leffler type function of two parameters $t^{\beta-1}E_{\alpha,\beta}\left( At^{\alpha}\right) $, and for $A=\Theta$ it coincides with the delayed Mittag-Leffler type matrix function of two parameters $E_{h,\alpha,\beta}^{B}\left( t-h\right) $. (ii) We explicitly represent the solution of the fractional delay linear system (\ref{de1}) via the delayed perturbation of the Mittag-Leffler type function.
\begin{definition} \label{def:01}The Mittag-Leffler type matrix function of two parameters $\Phi_{\alpha,\beta}\left( A,z\right) :R\rightarrow R^{n\times n}$ is defined by
\[
\Phi_{\alpha,\beta}\left( A,z\right) :=z^{\beta-1}E_{\alpha,\beta}\left( Az^{\alpha}\right) :=z^{\beta-1}\sum\limits_{k=0}^{\infty}\frac{A^{k}z^{\alpha k}}{\Gamma\left( k\alpha+\beta\right) },\ \ \ \alpha,\beta>0,\ z\in R.
\]
\end{definition}

\begin{definition} \label{def:11}The delayed Mittag-Leffler type matrix function of two parameters $E_{h,\alpha,\beta}^{B}\left( t\right) :R\rightarrow R^{n\times n}$ is defined by
\begin{equation}
E_{h,\alpha,\beta}^{B}\left( t\right) :=\left\{
\begin{array}{ll}
\Theta, & -\infty<t\leq-h,\\
I\dfrac{\left( h+t\right) ^{\beta-1}}{\Gamma\left( \beta\right) }, & -h<t\leq0,\\
I\dfrac{\left( h+t\right) ^{\beta-1}}{\Gamma\left( \beta\right) }+B\dfrac{t^{\alpha+\beta-1}}{\Gamma\left( \alpha+\beta\right) }+B^{2}\dfrac{\left( t-h\right) ^{2\alpha+\beta-1}}{\Gamma\left( 2\alpha+\beta\right) }+\cdots+B^{k}\dfrac{\left( t-\left( k-1\right) h\right) ^{k\alpha+\beta-1}}{\Gamma\left( k\alpha+\beta\right) }, & \left( k-1\right) h<t\leq kh.
\end{array}
\right. \label{ml2}
\end{equation}
\end{definition}

In order to define the delayed perturbation of Mittag-Leffler type matrix functions, we introduce the following matrix equations for $Q_{k}\left( s\right) $, $k=1,2,\ldots$:
\begin{align}
Q_{k+1}\left( s\right) & =AQ_{k}\left( s\right) +BQ_{k}\left( s-h\right) ,\nonumber\\
Q_{0}\left( s\right) & =Q_{k}\left( -h\right) =\Theta,\ \ Q_{1}\left( 0\right) =I,\nonumber\\
k & =0,1,2,\ldots,\ \ s=0,h,2h,\ldots \label{re1}
\end{align}
Simple calculations show that
\[
\begin{tabular}{|l|l|l|l|l|l|l|}\hline
& $s=0$ & $s=h$ & $s=2h$ & $s=3h$ & $\cdots$ & $s=ph$\\\hline
$Q_{1}\left( s\right) $ & $I$ & $\Theta$ & $\Theta$ & $\Theta$ & $\cdots$ & $\Theta$\\\hline
$Q_{2}\left( s\right) $ & $A$ & $B$ & $\Theta$ & $\Theta$ & $\cdots$ & $\Theta$\\\hline
$Q_{3}\left( s\right) $ & $A^{2}$ & $AB+BA$ & $B^{2}$ & $\Theta$ & $\cdots$ & $\Theta$\\\hline
$Q_{4}\left( s\right) $ & $A^{3}$ & $A\left( AB+BA\right) +BA^{2}$ & $AB^{2}+B\left( AB+BA\right) $ & $B^{3}$ & $\cdots$ & $\Theta$\\\hline
$\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\\hline
$Q_{p+1}\left( s\right) $ & $A^{p}$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $B^{p}$\\\hline
\end{tabular}
\]

\begin{definition} \label{def:21}The delayed perturbation of the two parameter Mittag-Leffler type matrix function $X_{h,\alpha,\beta}^{A,B}$ generated by $A,B$ is defined by
\begin{equation}
X_{h,\alpha,\beta}^{A,B}\left( t\right) :=\left\{
\begin{array}{ll}
\Theta, & -h\leq t<0,\\
I, & t=0,\\
\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{p-1}Q_{i+1}\left( jh\right) \dfrac{\left( t-jh\right) ^{i\alpha+\beta-1}}{\Gamma\left( i\alpha+\beta\right) }, & \left( p-1\right) h<t\leq ph.
\end{array}
\right. \label{ml1}
\end{equation}
\end{definition}

\begin{lemma} Let $X_{h,\alpha,\beta}^{A,B}\left( t\right) $ be defined by (\ref{ml1}). Then the following holds true:
\begin{description}
\item[(i)] if $A=\Theta$ then $X_{h,\alpha,\beta}^{A,B}\left( t\right) =E_{h,\alpha,\beta}^{B}\left( t-h\right) ,\ \ \left( p-1\right) h\leq t-h\leq ph$,
\item[(ii)] if $B=\Theta$ then $X_{h,\alpha,\beta}^{A,B}\left( t\right) =t^{\beta-1}E_{\alpha,\beta}\left( At^{\alpha}\right) $,
\item[(iii)] if $\alpha=\beta=1$ and $AB=BA$ then $X_{h,1,1}^{A,B}\left( t\right) =e^{At}e_{h}^{B_{1}\left( t-h\right) }$ with $B_{1}=e^{-Ah}B$, for $\left( p-1\right) h<t\leq ph$.
\end{description}
\end{lemma}

\begin{proof} (i) If $A=\Theta$, then
\[
Q_{i+1}\left( jh\right) =\left\{
\begin{array}{ll}
\Theta, & i\neq j,\\
B^{i}, & i=j,
\end{array}
\right.
\]
and $X_{h,\alpha,\beta}^{A,B}\left( t\right) $ coincides with $E_{h,\alpha,\beta}^{B}\left( t-h\right) $:
\begin{align*}
X_{h,\alpha,\beta}^{A,B}\left( t\right) & =\sum\limits_{i=0}^{p}B^{i}\dfrac{\left( t-ih\right) ^{i\alpha+\beta-1}}{\Gamma\left( i\alpha+\beta\right) }=\frac{t^{\beta-1}}{\Gamma\left( \beta\right) }+B\frac{\left( t-h\right) ^{\alpha+\beta-1}}{\Gamma\left( \alpha+\beta\right) }+\cdots+B^{p}\frac{\left( t-ph\right) ^{p\alpha+\beta-1}}{\Gamma\left( p\alpha+\beta\right) }\\
& =E_{h,\alpha,\beta}^{B}\left( t-h\right) ,\ \ \left( p-1\right) h<t-h\leq ph.
\end{align*}
(ii) Trivially, from the definition of $X_{h,\alpha,\beta}^{A,B}\left( t\right) $ we have: if $B=\Theta$, then
\[
X_{h,\alpha,\beta}^{A,B}\left( t\right) =\sum\limits_{i=0}^{\infty}A^{i}\frac{t^{i\alpha+\beta-1}}{\Gamma\left( i\alpha+\beta\right) }=t^{\beta-1}E_{\alpha,\beta}\left( At^{\alpha}\right) .
\]
(iii) It can be easily shown that $Q_{i+1}\left( jh\right) =\binom{i}{j}A^{i-j}B^{j}$. So, for $\left( p-1\right) h<t\leq ph$ and $B_{1}=e^{-Ah}B$ we have
\begin{align*}
X_{h,1,1}^{A,B}\left( t\right) & =\sum\limits_{i=0}^{\infty}Q_{i+1}\left( 0\right) \frac{t^{i}}{i!}+\sum\limits_{i=1}^{\infty}Q_{i+1}\left( h\right) \frac{\left( t-h\right) ^{i}}{i!}+\cdots+\sum\limits_{i=p-1}^{\infty}Q_{i+1}\left( \left( p-1\right) h\right) \frac{\left( t-\left( p-1\right) h\right) ^{i}}{i!}\\
& =\sum\limits_{i=0}^{\infty}A^{i}\frac{t^{i}}{i!}+\sum\limits_{i=1}^{\infty}\binom{i}{1}A^{i-1}B\frac{\left( t-h\right) ^{i}}{i!}+\cdots+\sum\limits_{i=p-1}^{\infty}\binom{i}{p-1}A^{i-p+1}B^{p-1}\frac{\left( t-\left( p-1\right) h\right) ^{i}}{i!}\\
& =e^{At}+e^{A\left( t-h\right) }B\left( t-h\right) +\cdots+\sum\limits_{i=0}^{\infty}\binom{i+p-1}{p-1}A^{i}B^{p-1}\frac{\left( t-\left( p-1\right) h\right) ^{i+p-1}}{\left( i+p-1\right) !}\\
& =e^{At}+e^{A\left( t-h\right) }B\left( t-h\right) +\cdots+e^{A\left( t-\left( p-1\right) h\right) }B^{p-1}\frac{1}{\left( p-1\right) !}\left( t-\left( p-1\right) h\right) ^{p-1}\\
& =e^{At}\left( I+e^{-Ah}B\left( t-h\right) +\cdots+e^{-A\left( p-1\right) h}B^{p-1}\frac{1}{\left( p-1\right) !}\left( t-\left( p-1\right) h\right) ^{p-1}\right) =e^{At}e_{h}^{B_{1}\left( t-h\right) }.
\end{align*}
\end{proof}

It turns out that $X_{h,\alpha,\beta}^{A,B}\left( t\right) $ is a delayed perturbation of the fundamental matrix of the equation (\ref{de1}) with $f=0$.

\begin{lemma} $X_{h,\alpha,\alpha}^{A,B}:R\rightarrow R^{n\times n}$ is a solution of
\begin{equation}
^{C}D_{-h^{+}}^{\alpha}X_{h,\alpha,\alpha}^{A,B}\left( t\right) =AX_{h,\alpha,\alpha}^{A,B}\left( t\right) +BX_{h,\alpha,\alpha}^{A,B}\left( t-h\right) . \label{de4}
\end{equation}
\end{lemma}

\begin{proof} We verify that $X_{h,\alpha,\alpha}^{A,B}\left( t\right) $ satisfies the differential equation (\ref{de4}) for $t\in\left( t_{p},t_{p+1}\right] $, where $t_{p}=ph$. We adopt mathematical induction to prove our result.
(i) For $p=0$, $0<t\leq h$, we have
\begin{align*}
X_{h,\alpha,\alpha}^{A,B}\left( t\right) & =t^{\alpha-1}E_{\alpha,\alpha}\left( At^{\alpha}\right) ,\ \ X_{h,\alpha,\alpha}^{A,B}\left( t-h\right) =\Theta,\\
^{C}D_{-h^{+}}^{\alpha}X_{h,\alpha,\alpha}^{A,B}\left( t\right) & =\ ^{C}D_{-h^{+}}^{\alpha}\sum\limits_{i=1}^{\infty}A^{i}\frac{t^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }=AX_{h,\alpha,\alpha}^{A,B}\left( t\right) =AX_{h,\alpha,\alpha}^{A,B}\left( t\right) +BX_{h,\alpha,\alpha}^{A,B}\left( t-h\right) .
\end{align*}
(ii) Suppose that for $p=n$, $\left( n-1\right) h<t\leq nh$, the following relation holds:
\[
X_{h,\alpha,\alpha}^{A,B}\left( t\right) =\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n-1}Q_{i+1}\left( jh\right) \frac{\left( t-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }.
\]
Next, for $p=n+1$, $nh<t\leq\left( n+1\right) h$, by elementary computation, one obtains
\begin{align*}
^{C}D_{-h^{+}}^{\alpha}X_{h,\alpha,\alpha}^{A,B}\left( t\right) & =\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}Q_{i+1}\left( jh\right) \frac{\Gamma\left( \left( i+1\right) \alpha\right) }{\Gamma\left( i\alpha\right) }\frac{\left( t-jh\right) ^{i\alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }\\
& =\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}Q_{i+1}\left( jh\right) \frac{\left( t-jh\right) ^{i\alpha-1}}{\Gamma\left( i\alpha\right) }=\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}\left( AQ_{i}\left( jh\right) +BQ_{i}\left( jh-h\right) \right) \frac{\left( t-jh\right) ^{i\alpha-1}}{\Gamma\left( i\alpha\right) }\\
& =\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}AQ_{i+1}\left( jh\right) \frac{\left( t-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }+\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n-1}BQ_{i+1}\left( jh\right) \frac{\left( t-h-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }\\
& =AX_{h,\alpha,\alpha}^{A,B}\left( t\right) +BX_{h,\alpha,\alpha}^{A,B}\left( t-h\right) .
\end{align*}
This ends the proof.
\end{proof}

\begin{lemma} \label{lem:11}Let $\left( k-1\right) h<t\leq kh$, $-h\leq s\leq t$. We have
\[
\int_{s}^{t}\left( t-r\right) ^{-\alpha}X_{h,\alpha,\alpha}^{A,B}\left( r-s\right) dr=\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{k-1}Q_{i+1}\left( jh\right) \frac{\Gamma\left( 1-\alpha\right) }{\Gamma\left( i\alpha+1\right) }\left( t-s-jh\right) ^{i\alpha}.
\]
\end{lemma}

\begin{theorem} \label{thm:1}The solution $y(t)$ of (\ref{de1}) satisfying the zero initial condition has the form
\[
y\left( t\right) =\int_{-h}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) f\left( s\right) ds,\ \ t\geq0.
\]
\end{theorem}

\begin{proof} By the method of variation of constants, any solution $y\left( t\right) $ of the nonhomogeneous system should be sought in the form
\begin{equation}
y\left( t\right) =\int_{-h}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) c\left( s\right) ds,\ \ t\geq0, \label{rp1}
\end{equation}
where $c\left( s\right) $, $-h\leq s\leq t$, is an unknown vector function and $y(0)=0$.
Applying the Caputo fractional derivative to both sides of (\ref{rp1}), we distinguish the following cases:

(i) For $0<t\leq h$ we have
\begin{align*}
\left( ^{C}D_{-h^{+}}^{\alpha}y\right) \left( t\right) & =Ay\left( t\right) +By\left( t-h\right) +f\left( t\right) \\
& =A\int_{-h}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) c\left( s\right) ds+B\int_{-h}^{t-h}X_{h,\alpha,\alpha}^{A,B}\left( t-h-s\right) c\left( s\right) ds+f\left( t\right) \\
& =A\int_{-h}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) c\left( s\right) ds+f\left( t\right) .
\end{align*}
On the other hand, according to Lemma \ref{lem:11}, we have
\begin{align*}
\left( ^{C}D_{-h^{+}}^{\alpha}y\right) \left( t\right) & =\frac{1}{\Gamma\left( 1-\alpha\right) }\frac{d}{dt}\int_{-h}^{t}\left( t-r\right) ^{-\alpha}\left( \int_{-h}^{r}X_{h,\alpha,\alpha}^{A,B}\left( r-s\right) c\left( s\right) ds\right) dr\\
& =\frac{1}{\Gamma\left( 1-\alpha\right) }\frac{d}{dt}\int_{-h}^{t}c\left( s\right) \int_{s}^{t}\left( t-r\right) ^{-\alpha}\sum\limits_{i=0}^{\infty}A^{i}\frac{\left( r-s\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }\,dr\,ds\\
& =\frac{1}{\Gamma\left( 1-\alpha\right) }\frac{d}{dt}\int_{-h}^{t}c\left( s\right) \int_{s}^{t}\left( t-r\right) ^{-\alpha}\frac{\left( r-s\right) ^{\alpha-1}}{\Gamma\left( \alpha\right) }\,dr\,ds\\
& \quad+\frac{1}{\Gamma\left( 1-\alpha\right) }\sum\limits_{i=1}^{\infty}A^{i}\frac{d}{dt}\int_{-h}^{t}c\left( s\right) \int_{s}^{t}\left( t-r\right) ^{-\alpha}\frac{\left( r-s\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }\,dr\,ds\\
& =c\left( t\right) +\sum\limits_{i=1}^{\infty}\frac{1}{\Gamma\left( 1-\alpha\right) \Gamma\left( \left( i+1\right) \alpha\right) }A^{i}\frac{d}{dt}\int_{-h}^{t}c\left( s\right) \left( t-s\right) ^{i\alpha}B\left( \alpha\left( i+1\right) ,1-\alpha\right) ds\\
& =c\left( t\right) +\sum\limits_{i=1}^{\infty}\frac{\Gamma\left( 1-\alpha\right) \Gamma\left( \left( i+1\right) \alpha\right) }{\Gamma\left( 1-\alpha\right) \Gamma\left( \left( i+1\right) \alpha\right) \Gamma\left( 1+i\alpha\right) }A^{i}\frac{d}{dt}\int_{-h}^{t}c\left( s\right) \left( t-s\right) ^{i\alpha}ds\\
& =c\left( t\right) +\sum\limits_{i=1}^{\infty}A^{i}\frac{1}{\Gamma\left( 1+i\alpha\right) }\frac{d}{dt}\int_{-h}^{t}c\left( s\right) \left( t-s\right) ^{i\alpha}ds\\
& =c\left( t\right) +\sum\limits_{i=1}^{\infty}A^{i}\frac{\alpha i}{\Gamma\left( 1+i\alpha\right) }\int_{-h}^{t}c\left( s\right) \left( t-s\right) ^{i\alpha-1}ds=c\left( t\right) +\int_{-h}^{t}\sum\limits_{i=1}^{\infty}A^{i}\frac{\left( t-s\right) ^{i\alpha-1}}{\Gamma\left( i\alpha\right) }c\left( s\right) ds\\
& =c\left( t\right) +A\int_{-h}^{t}\sum\limits_{i=0}^{\infty}A^{i}\frac{\left( t-s\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }c\left( s\right) ds=c\left( t\right) +A\int_{-h}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) c\left( s\right) ds.
\end{align*}
Hence, we obtain $c(t)=f(t)$.
(ii) For $nh<t\leq\left( n+1\right) h$, according to (\ref{de1}), we have
\begin{align*}
\left( ^{C}D_{-h^{+}}^{\alpha}y\right) \left( t\right) & =Ay\left( t\right) +By\left( t-h\right) +f\left( t\right) \\
& =A\int_{-h}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) c\left( s\right) ds+B\int_{-h}^{t-h}X_{h,\alpha,\alpha}^{A,B}\left( t-h-s\right) c\left( s\right) ds+f\left( t\right) \\
& =A\int_{-h}^{t}\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}Q_{i+1}\left( jh\right) \frac{\left( t-s-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }c\left( s\right) ds\\
& \quad+B\int_{-h}^{t-h}\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n-1}Q_{i+1}\left( jh\right) \frac{\left( t-s-h-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }c\left( s\right) ds+f\left( t\right) \\
& =\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}AQ_{i+1}\left( jh\right) \int_{-h}^{t-jh}\frac{\left( t-s-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }c\left( s\right) ds\\
& \quad+\sum\limits_{i=0}^{\infty}\sum\limits_{j=1}^{n}BQ_{i+1}\left( jh-h\right) \int_{-h}^{t-jh}\frac{\left( t-s-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }c\left( s\right) ds+f\left( t\right) \\
& =\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}Q_{i+2}\left( jh\right) \int_{-h}^{t-jh}\frac{\left( t-s-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }c\left( s\right) ds+f\left( t\right) .
\end{align*}
On the other hand, according to Lemma \ref{lem:11}, we have
\begin{align*}
\left( ^{C}D_{-h^{+}}^{\alpha}y\right) \left( t\right) & =\frac{1}{\Gamma\left( 1-\alpha\right) }\frac{d}{dt}\int_{-h}^{t}\left( t-r\right) ^{-\alpha}\left( \int_{-h}^{r}X_{h,\alpha,\alpha}^{A,B}\left( r-s\right) c\left( s\right) ds\right) dr\\
& =\frac{1}{\Gamma\left( 1-\alpha\right) }\frac{d}{dt}\int_{-h}^{t}c\left( s\right) \int_{s}^{t}\left( t-r\right) ^{-\alpha}X_{h,\alpha,\alpha}^{A,B}\left( r-s\right) \,dr\,ds\\
& =\frac{1}{\Gamma\left( 1-\alpha\right) }\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}Q_{i+1}\left( jh\right) \frac{d}{dt}\int_{-h}^{t}c\left( s\right) \int_{s}^{t}\left( t-r\right) ^{-\alpha}\frac{\left( r-s-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }\,dr\,ds\\
& =\frac{1}{\Gamma\left( 1-\alpha\right) }\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}Q_{i+1}\left( jh\right) \frac{d}{dt}\int_{-h}^{t-jh}c\left( s\right) \left( t-s-jh\right) ^{i\alpha}\frac{\Gamma\left( 1-\alpha\right) }{\Gamma\left( i\alpha+1\right) }ds\\
& =\sum\limits_{j=0}^{n}Q_{1}\left( jh\right) \frac{d}{dt}\int_{-h}^{t-jh}c\left( s\right) ds+\sum\limits_{i=1}^{\infty}\sum\limits_{j=0}^{n}Q_{i+1}\left( jh\right) \int_{-h}^{t-jh}c\left( s\right) \left( t-s-jh\right) ^{i\alpha-1}\frac{1}{\Gamma\left( i\alpha\right) }ds\\
& =c\left( t\right) +\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{n}Q_{i+2}\left( jh\right) \int_{-h}^{t-jh}\frac{\left( t-s-jh\right) ^{\left( i+1\right) \alpha-1}}{\Gamma\left( \left( i+1\right) \alpha\right) }c\left( s\right) ds.
\end{align*}
Hence, we obtain $c(t)=f(t)$. The proof is completed.
\end{proof}

\begin{theorem} \label{thm:2}Let $p=0,1,\ldots,l$.
A solution $y\in C\left( \left( \left( p-1\right) h,ph\right] ,R^{n}\right) $ of (\ref{de1}) with $f\equiv0$ has the form
\[
y\left( t\right) =X_{h,\alpha,1}^{A,B}\left( t+h\right) \varphi\left( -h\right) +\int_{-h}^{0}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) \left( \left( ^{C}D_{-h^{+}}^{\alpha}\varphi\right) \left( s\right) -A\varphi\left( s\right) \right) ds.
\]
\end{theorem}

\begin{proof} We look for a solution of the form
\[
y\left( t\right) =X_{h,\alpha,1}^{A,B}\left( t+h\right) c+\int_{-h}^{0}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) g\left( s\right) ds,
\]
where $c$ is an unknown constant vector and $g(t)$ is an unknown continuously differentiable function. Moreover, it must satisfy the initial condition $y\left( t\right) =\varphi\left( t\right) $, $-h\leq t\leq0$, i.e.
\[
y\left( t\right) =X_{h,\alpha,1}^{A,B}\left( t+h\right) c+\int_{-h}^{0}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) g\left( s\right) ds:=\varphi\left( t\right) ,\ \ -h\leq t\leq0.
\]
Setting $t=-h$, we have
\[
X_{h,\alpha,\alpha}^{A,B}\left( -h-s\right) =\left\{
\begin{array}{ll}
\Theta, & -h<s\leq0,\\
I, & s=-h.
\end{array}
\right.
\]
Thus $c=\varphi\left( -h\right) $. Since $-h\leq t\leq0$, one obtains
\[
X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) =\left\{
\begin{array}{ll}
\Theta, & t<s\leq0,\\
\left( t-s\right) ^{\alpha-1}E_{\alpha,\alpha}\left( A\left( t-s\right) ^{\alpha}\right) , & -h\leq s\leq t,\ 0\leq t-s\leq t+h\leq h.
\end{array}
\right.
\]
Thus on the interval $-h\leq t\leq0$, one can derive that
\begin{align}
\varphi\left( t\right) & =X_{h,\alpha,1}^{A,B}\left( t+h\right) \varphi\left( -h\right) +\int_{-h}^{0}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) g\left( s\right) ds\label{q1}\\
& =X_{h,\alpha,1}^{A,B}\left( t+h\right) \varphi\left( -h\right) +\int_{-h}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) g\left( s\right) ds+\int_{t}^{0}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) g\left( s\right) ds\nonumber\\
& =E_{\alpha,1}\left( A\left( t+h\right) ^{\alpha}\right) \varphi\left( -h\right) +\int_{-h}^{t}\left( t-s\right) ^{\alpha-1}E_{\alpha,\alpha}\left( A\left( t-s\right) ^{\alpha}\right) g\left( s\right) ds.\nonumber
\end{align}
Applying the Caputo fractional derivative to (\ref{q1}), we obtain
\begin{align*}
\left( ^{C}D_{-h^{+}}^{\alpha}\varphi\right) \left( t\right) & =A\sum\limits_{k=0}^{\infty}\frac{A^{k}\left( t+h\right) ^{\alpha k}}{\Gamma\left( 1+k\alpha\right) }\varphi\left( -h\right) +\int_{-h}^{t}\sum\limits_{k=1}^{\infty}\frac{A^{k}\left( t-s\right) ^{\alpha k-1}}{\Gamma\left( k\alpha\right) }g\left( s\right) ds+g\left( t\right) \\
& =AE_{\alpha,1}\left( A\left( t+h\right) ^{\alpha}\right) \varphi\left( -h\right) +A\int_{-h}^{t}\left( t-s\right) ^{\alpha-1}E_{\alpha,\alpha}\left( A\left( t-s\right) ^{\alpha}\right) g\left( s\right) ds+g\left( t\right) \\
& =A\varphi\left( t\right) +g\left( t\right) .
\end{align*}
Therefore, $g\left( t\right) =\left( ^{C}D_{-h^{+}}^{\alpha}\varphi\right) \left( t\right) -A\varphi\left( t\right) $ and the desired result holds.
\end{proof}

Combining Theorems \ref{thm:1} and \ref{thm:2}, we have the following result.

\begin{corollary} A solution $y\in C\left( \left[ -h,T\right] \cap\left( \left( p-1\right) h,ph\right] ,R^{n}\right) $ of (\ref{de1}) has the form
\begin{align*}
y\left( t\right) & =X_{h,\alpha,1}^{A,B}\left( t+h\right) \varphi\left( -h\right) +\int_{-h}^{0}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) \left[ \left( ^{C}D_{-h^{+}}^{\alpha}\varphi\right) \left( s\right) -A\varphi\left( s\right) \right] ds\\
& \quad+\int_{0}^{t}X_{h,\alpha,\alpha}^{A,B}\left( t-s\right) f\left( s\right) ds.
\end{align*} \end{corollary} \bigskip
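As a concrete illustration of Definition \ref{def:21} (a worked example we add here, not part of the original development), consider the scalar case $n=1$ with $A=a$ and $B=b$. The table of solutions of (\ref{re1}) gives $Q_{i+1}\left( jh\right) =\binom{i}{j}a^{i-j}b^{j}$, so
\begin{align*}
X_{h,\alpha,\beta}^{a,b}\left( t\right) & =t^{\beta-1}E_{\alpha,\beta}\left( at^{\alpha}\right) , & 0<t\leq h,\\
X_{h,\alpha,\beta}^{a,b}\left( t\right) & =t^{\beta-1}E_{\alpha,\beta}\left( at^{\alpha}\right) +b\sum\limits_{i=1}^{\infty}i\,a^{i-1}\frac{\left( t-h\right) ^{i\alpha+\beta-1}}{\Gamma\left( i\alpha+\beta\right) }, & h<t\leq2h.
\end{align*}
This recovers the degenerate cases of the Lemma: for $a=0$ only the $i=1$ term of the second sum survives and we obtain $E_{h,\alpha,\beta}^{b}\left( t-h\right) =\frac{t^{\beta-1}}{\Gamma\left( \beta\right) }+b\frac{\left( t-h\right) ^{\alpha+\beta-1}}{\Gamma\left( \alpha+\beta\right) }$, while for $b=0$ the second sum vanishes and $t^{\beta-1}E_{\alpha,\beta}\left( at^{\alpha}\right) $ remains.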
\section{Superlattice Hamiltonian}\label{sec:continuum} Here we will outline the details of the Hamiltonian used to describe the graphene superlattice in the continuum limit, the electronic structure of which has been extensively studied \cite{MK14,WPMGF13}. In the particle-hole basis (which we distinguish from the eigenstates of the gapped Dirac Hamiltonian that are also commonly used in this field), the Dirac Hamiltonian for charge carriers of momentum $\vec{g}$ with Fermi velocity $v_F$ is given by
\begin{equation}
\mathcal{H}_D(\vec{g}) = \hbar v_F \vec{g}\cdot\vec{\sigma}.
\end{equation}
We decompose the momentum $\vec{g}$ within the graphene first Brillouin zone (FBZ) into a momentum $\vec{k}$ within the boundaries of the superlattice FBZ and a contribution from repeats of the superlattice FBZ, as
\begin{equation}
\vec{g}_n = \vec{k} + \vec{n}\cdot (\vec{G}_1, \vec{G}_2) \equiv \vec{k} + \vec{G}_n.
\end{equation}
Here $\vec{G}_n$ are the superlattice basis vectors of the $n = 1 \ldots 6$ nearest neighbors in reciprocal space; these generate the first harmonic functions of the superlattice \cite{WPMGF13}. These are referred to as `one star of reciprocal lattice vectors' in the main text. Each star of 6 or 12 points in the reciprocal lattice consists of $\vec{G}$ vectors equivalent by symmetry. Successive stars are further apart from the $\vec{G} = 0$ origin. In this work we consider coupling up to $n_{\text{max}} = 5$ first stars: each position in our superlattice FBZ couples to its nearest neighbors via $\vec{G}_n$, which in turn couple to their nearest neighbors for $n_{\text{max}}$ iterations. This is illustrated for $n_{\text{max}} = 2$ in Fig. \ref{fig:bz}. All calculations reported here are converged with respect to $n_{\text{max}}$ for the energy range shown; in this sense our analysis is \textit{non-}perturbative.

\begin{figure}[b] \includegraphics[width=0.3\textwidth]{BZ.pdf} \caption{The Brillouin zones and first star of reciprocal lattice vectors for the superlattice. The triangular superlattice formed by two commensurate honeycomb lattices has a hexagonal first Brillouin zone, shown here (\textit{yellow}) with the superlattice reciprocal lattice vectors (\textit{red}). The forms of the second, third and fourth Brillouin zones are shown in blue, red and green respectively. The `first star' refers to the six smallest reciprocal lattice vectors of the superlattice that couple a site to its six nearest neighbors; these neighbors themselves couple to their nearest neighbors via the first star, for $n_{\text{max}}$ iterations. In the example given the central position couples to its six neighbors, and the first star of one such neighbor is highlighted. This corresponds to a truncation of $n_{\text{max}} = 2$.}\label{fig:bz} \end{figure}

As outlined in the main text, the effect of the graphene superlattice is treated as a perturbation on the Dirac Hamiltonian, such that the continuum limit is valid. This requires that the energies involved are small compared with the $\pi$ bandwidth, which is set by the hopping matrix element $t$, so that they lie in the energy range where the Dirac cone is defined. This is the case in the superlattices studied here, where the energy scales of typical perturbations are $\sim$100 meV. The superlattice potential must also change slowly, so that corrugations at scales comparable to the lattice constant of graphene, $a$, can be neglected.
This assumption is satisfied in Moir\'e superlattices where the superlattice constant, $L$, is much larger than the lattice constant of graphene. This is the case for the superlattices studied in our paper, where $L/a \simeq 50$. The superlattice potential, neglecting intervalley scattering, consists of seven terms \cite{WPMGF13}: a constant gap ($\delta$) and six spatially modulated potentials, including two scalar potentials ($V_s$), two mass gaps ($V_{\Delta}$) and two gauge field potentials ($V_g$), each of which can be distinguished by its even ($e$) or odd ($o$) parity. Scalar and mass terms shift charge carrier energies at lattice sites dependent on their position in the superlattice; they have an equal or opposite effect on the two sublattices, respectively. In the continuum limit, gauge fields describe modifications in the hopping energies due to the changing positions of lattice sites in the graphene layer. These discrete, real space deformations are represented by a gauge field and subsequent change of phase in the continuum. The resulting deformed lattice is shown in Fig. 1 of the main text; we shall further discuss the underlying vectors that produce this deformation in Section \ref{sec:nanoribbon}. In our notation, neighboring sites $n$ and $n'$ couple via the first star of reciprocal lattice vectors by \begin{gather} \mathcal{H}_\text{pert} = \sum_{j=1}^6 V_{\vec{G}_j} \delta_{\vec{g}_n - \vec{g}_{n'},\vec{G}_j}, \\ V_{\vec{G}_j} = \big( V_s^e + i (-1)^j V_s^o\big) \mathbb{I}_2 + \big( V_{\Delta}^o + i (-1)^j V_{\Delta}^e \big) \sigma_3 + \big(i V_g^e + (-1)^j V_g^o \big) \left( \begin{matrix} 0 & -ie^{-i\phi_{\vec{G}_j}} \\ ie^{i\phi_{\vec{G}_j}} & 0 \end{matrix} \right),\label{eq:superlattice} \end{gather} where the additional phase $\phi_{\vec{G}_j} = \arg \left({G}_{j,x}+i{G}_{j,y}\right)$. This allows us to write the overall Hamiltonian as the original Dirac Hamiltonian plus a small correction due to the superlattice, \begin{align} \begin{split} \mathcal{H} & = \mathcal{H}_D(\vec{g}_n)\otimes \mathbb{I}_N + \mathcal{H}_\text{pert} \\ & = \mathcal{H}_D(\vec{k})\otimes \mathbb{I}_N + \mathcal{H}_D(\vec{G}_n)\otimes \mathbb{I}_N + \mathcal{H}_\text{pert} \\ & = \mathcal{H}_D(\vec{k})\otimes \mathbb{I}_N + \mathcal{H}_{SL}. \end{split} \end{align} Results in the main text use superlattice parameters as derived in \cite{SGSG14}: for a strained graphene superlattice with an associated vector field, we take \begin{equation}\label{eq:parameters} (V_s^e, V_s^o, V_{\Delta}^e, V_{\Delta}^o, V_g^e, V_g^o, \delta) = (21, 38, 6, 0, -42, -21, 50)\,\mathrm{meV}. \end{equation} We have verified that results and conclusions are qualitatively unchanged for any generic combination of superlattice potentials, and that flat bands can be produced using all superlattice parameters, with the exception of the gauge potentials $V_g$, which do not influence edge modes. Gauge potentials contribute an additional phase, which can be removed for any well-localized state by an appropriate change-of-gauge transformation; thus $V_g$ will not significantly alter the flat bands found in our tight-binding calculations. \section{Chern Numbers\label{sec:Chern}} To calculate the Chern numbers of each subband we use the method outlined by Fukui \emph{et al.} \cite{FHF05}. The superlattice Brillouin zone is tessellated with hexagonal plaquettes, each of which contributes to the Berry curvature.
The Berry curvature from each plaquette $p$ depends on its wavefunction $\psi_p$, in particular on the product of the wavefunction overlaps between neighboring plaquettes $\langle\psi_{p+1}|\psi_p\rangle$: the Berry curvature is determined, up to a factor of $2\pi$, by the argument of this complex product. Dividing by $2\pi$ gives the Chern number contribution from this plaquette; summing these contributions over the Brillouin zone gives the Chern number for a subband. Plaquettes at the edges of the Brillouin zone are dealt with separately, as we cannot perfectly tessellate hexagonal plaquettes within the boundaries of the hexagonal Brillouin zone. Due to the periodicity of the Brillouin zone, plaquettes at the edges contribute a fraction of their Berry curvature determined by the area of the plaquette within the Brillouin zone. Plaquettes at the sides and corners of the Brillouin zone contribute one half and one third of their Berry curvatures respectively. Numerical errors may produce Chern numbers that are only approximately integers: the typical numerical accuracy of the Chern numbers depends on $n_{\text{max}}$, but we find machine-accuracy integers for $n_{\text{max}}$ as small as $5$. Even though the answers are thus always integers, a potential issue with this technique is caused by the coarseness of the tessellation of the Brillouin zone, which can lead to the \emph{wrong} integer value! Sufficiently large plaquettes can result in a phase change over an individual plaquette larger than $2\pi$. This will produce an integer change in the Chern number when compared to the same calculation over a finer grid of hexagons, which includes smaller phase changes per plaquette. We have verified that our calculations are robust against this problem of topological charge `falling through the lattice' by taking increasingly finer plaquettes and showing that the Chern numbers remain unchanged. Example results for the conduction and first excited subbands using superlattice parameters from Eq. \ref{eq:parameters} are given in Fig.~\ref{fig:chern} for coarse and fine grids across the Brillouin zone. \begin{figure} \begin{tabular}{ccl} \Large Coarse & \Large Fine \\ \includegraphics[width=0.3\textwidth,valign=c]{chern_band1_c0_10.pdf}& \includegraphics[width=0.3\textwidth,valign=c]{chern_band1_c0_25.pdf} \\ \includegraphics[width=0.3\textwidth,valign=c]{chern_band2_c1_10.pdf}& \includegraphics[width=0.3\textwidth,valign=c]{chern_band2_c1_25.pdf} \\ \includegraphics[width=0.3\textwidth,valign=c]{chern_scale_coarse.pdf}& \includegraphics[width=0.3\textwidth,valign=c]{chern_scale_fine.pdf} \end{tabular} \caption{Berry curvatures for the conduction (\textit{left}) and first excited (\textit{right}) subbands, using both 10 and 50 plaquettes to span the width of the Brillouin zone (75 and 1875 plaquettes in total, respectively). The orange border marks the edge of the Brillouin zone, beyond which contributions are not included. The Chern numbers for the subbands are 0 and 1 for either choice of hexagonal grid.} \label{fig:chern} \end{figure} \newpage \section{Superlattice Nanoribbons}\label{sec:nanoribbon} We can also investigate superlattice effects using a tight-binding model by including the discrete analogues of the superlattice potentials from Eq.~\ref{eq:superlattice}. These are modeled by a modulation of the hopping parameters in the tight-binding model, due to the superlattice modulation of the carbon-atom positions.
The spatially modulated superlattice potentials may identically or oppositely perturb the A and B sublattices, in the case of the scalar potentials and mass gaps respectively. The gauge field potentials used in the continuum model are caused by a change in the local bond lengths in the bond direction $e_{ij}$. This induced strain can be described in the tight-binding model by altering the hopping parameter. The superlattice parameters used in the continuum model are directly related to those in the tight-binding model presented here: we assume that the continuum parameters directly map to those of our tight-binding model and can be applied without alteration, following the good agreement between the two cases found in \cite{Weckbecker16}. In this formulation the Hamiltonian can thus be expressed as \begin{equation}\label{eq:tb} \mathcal{H} = - (t+V_g t) \sum_{\langle i,j \rangle } (a_{i}^\dagger b_{j} + b_{i}^\dagger a_{j}) + V_s + V_{\Delta} + \delta, \end{equation} where $a_{i}, a_{i}^\dagger (b_{i}, b_{i}^\dagger)$ are the annihilation and creation operators for electrons on sublattice A (B) at site $\vec{r}_i$ and graphene's nearest-neighbor hopping parameter is $t=2.74$ eV \cite{KR09}. The full form of the superlattice potentials is \begin{gather} V_s = V_s^e \sum_{l=1}^3 \cos(\vec{g_l}\cdot \vec{r}_j) + V_s^o \sum_{l=1}^3 \sin(\vec{g_l}\cdot \vec{r}_j), \\ V_{\Delta} = \pm \left( V_{\Delta}^e \sum_{l=1}^3 \sin(\vec{g_l}\cdot \vec{r}_j) + V_{\Delta}^o \sum_{l=1}^3 \cos(\vec{g_l}\cdot \vec{r}_j) \right), \\ V_g = V_g^e \sum_{l=1}^3 \vec{g_l}\left( \sin(\vec{g_l}\cdot \vec{r}_i) - \sin(\vec{g_l}\cdot \vec{r}_{j})\right)e_{ij} + V_g^o \sum_{l=1}^3 \vec{g_l}\left(\cos(\vec{g_l}\cdot \vec{r}_i) - \cos(\vec{g_l}\cdot \vec{r}_{j})\right)e_{ij}, \end{gather} where $\vec{g_l}$ are the reciprocal lattice vectors of the triangular superlattice, which determine the modulation strength at each lattice position; the even and odd vector contributions for the example of a small superlattice are given in Fig. \ref{fig:vec}. $V_{\Delta}$ has the opposite sign on the two sublattices. \begin{figure} \includegraphics[trim={0 0.0355cm 0.012cm 0},clip,width=0.2\textwidth]{vec_even.pdf} \includegraphics[trim={0 0.0355cm 0.011cm 0},clip,width=0.2\textwidth]{vec_odd.pdf} \caption{The even (\textit{left}) and odd (\textit{right}) gauge potentials corresponding to a superlattice with a unit cell $L = 12a$, where $a$ is the lattice constant of graphene. Vectors indicate the strength of the gauge field at each lattice site, caused by the superlattice displacing individual sites and locally deforming bonds. In either case the corresponding potential has strength 0.5 eV, while A (B) sublattice sites are shown in red (blue).}\label{fig:vec} \end{figure} To analyze this tight-binding model we construct a semi-infinite nanoribbon with periodic boundaries in the direction of the axis parallel to the superlattice. The one-dimensional Brillouin zone is parallel to the nanoribbon direction: we solve Eq. \ref{eq:tb} for charge carriers with momentum $0 \leq k \leq 2\pi$, where $k = k_{\parallel}d$ for momentum $k_{\parallel}$ in the direction of a nanoribbon of unit cell $d$. Points $K$ and $K'$ in the continuum Brillouin zone lie at $k=0$ here. Results in the main text use nanoribbons 144 lattice sites in width with zigzag edges and with superlattice unit cells of 48 $\times$ 48 graphene unit cells, using the superlattice parameters in Eq. \ref{eq:parameters}.
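For concreteness, the on-site modulations $V_s$ and $V_{\Delta}$ defined above can be evaluated site by site as in the following minimal Python sketch (our illustration, not the code used for the paper; the function and parameter names, and the choice $|\vec{g}_l| = 4\pi/(\sqrt{3}L)$ for a triangular superlattice of period $L$, are assumptions).
\begin{verbatim}
# Minimal sketch: evaluate the scalar and mass modulations entering Eq. (tb)
# at a given site. r: (2,) site position; g: (3, 2) reciprocal vectors;
# sublattice: +1 for A, -1 for B (sign of the mass term).
import numpy as np

def onsite_potential(r, g, V_s_e, V_s_o, V_D_e, V_D_o, sublattice):
    phases = g @ r                       # g_l . r for l = 1, 2, 3
    V_s = V_s_e * np.cos(phases).sum() + V_s_o * np.sin(phases).sum()
    V_D = sublattice * (V_D_e * np.sin(phases).sum()
                        + V_D_o * np.cos(phases).sum())
    return V_s + V_D

# Example: three reciprocal vectors of a triangular superlattice of period L,
# rotated by 120 degrees with respect to each other (assumed geometry).
L = 12.0
gmag = 4 * np.pi / (np.sqrt(3) * L)
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
g = gmag * np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(onsite_potential(np.array([1.0, 0.5]), g, 0.021, 0.038, 0.006, 0.0, +1))
\end{verbatim}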
Additional band structures for stronger scalar potentials ($V_s^{e,o} = 0.1\,\mathrm{eV}$ with other parameters unchanged) are given in Fig.~\ref{fig:3x48}. The appearance of flat bands in both these and other cases indicates that edge transport is a generic feature of nanoribbons with zigzag edge configurations, irrespective of the specific parametrization of the superlattice. This has been verified for many other ``generic'' superlattice potentials, as we outline in the following section. We have repeated these calculations for nanoribbons with armchair edge configurations, demonstrating that the superlattice does not create new edge modes in this system and that the gap persists. Results for a realistically large system with a 48 $\times$ 48 supercell are given in Fig.~\ref{fig:armchair}. \begin{figure} \includegraphics[width=0.4\textwidth]{bands_sjs_12x12.pdf} \includegraphics[width=0.4\textwidth]{bands_sja_12x12.pdf} \includegraphics[width=0.4\textwidth]{bands_sjs_3x48_120.pdf} \includegraphics[width=0.4\textwidth]{bands_sja_3x48_120.pdf} \caption{Band structures for a nanoribbon 144 lattice sites in width, with superlattice unit cell sizes of 12 $\times$ 12 (\textit{top}) and 48 $\times$ 48 (\textit{bottom}) graphene unit cells. Superlattice parameters used include strong even (\textit{left}) and odd (\textit{right}) scalar potentials: here $(V_s^e,V_s^o)$ = (100, 38) meV and $(V_s^e,V_s^o)$ = (21, 100) meV in each case respectively, while the other parameters remain as in Eq. \ref{eq:parameters}.}\label{fig:3x48} \end{figure} \begin{figure} \includegraphics[width=0.4\textwidth]{armchair_bands_sj_3x48.pdf} \caption{The band structure for a graphene nanoribbon with armchair edges and superlattice parameters from Eq. \ref{eq:parameters}. The nanoribbon width and superlattice unit cell sizes are 144 and 48 $\times$ 48 graphene unit cells respectively.}\label{fig:armchair} \end{figure} \newpage \section{Density of Midgap States} Edge states in superlattices with realistically large unit cells occupy a significant fraction of the bulk gap, as we can see in Fig. \ref{fig:3x48}. We estimate the fraction of all states involved in edge transport by calculating the density of states, comparing the number of states in the gap region (here taken to be $-0.1 \leq E \leq 0.1$ eV for consistency with Fig. \ref{fig:3x48} and Fig. 3 in the main text) to those over the entire energy range $-0.5 \leq E \leq 0.5$ eV. The density of states and percentage involved in edge transport for nanoribbons with large 48 $\times$ 48 graphene unit cell supercells are given in Fig. \ref{fig:dos}. We make use of the parameters given in Eq. \ref{eq:parameters}, as well as the parameter set with larger even/odd scalar potentials $V_s^{e,o} = 100$ meV as in Fig. \ref{fig:3x48}, and find that $15\% - 28\%$ of states are involved in edge transport. Descriptions of realistic superlattices will involve a combination of all parameters in our model (Eq. \ref{eq:tb}). To demonstrate that generic superlattice perturbations will exhibit significant midgap transport, we calculate the fraction of midgap states for individual superlattice parameters $V_{s,\Delta}^{e,o}$ in the range $5 \leq V_{s,\Delta}^{e,o} \leq 20$ meV, with a gap of $100$ meV: results are given in Table \ref{tab:dos}. The exception is the vector potentials, which are not included here as they can be arbitrarily removed for edge states with an appropriate choice of gauge.
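The midgap fraction itself reduces to simple counting over the computed eigenvalue spectrum; a minimal Python sketch (our illustration, assuming a flat array of nanoribbon eigenvalues collected over the sampled momenta) follows.
\begin{verbatim}
# Minimal sketch: fraction of states in the gap window, relative to all
# states in the full energy range used in the text (energies in eV).
import numpy as np

def midgap_fraction(energies, gap=(-0.1, 0.1), full=(-0.5, 0.5)):
    e = np.asarray(energies)
    n_gap = np.sum((e >= gap[0]) & (e <= gap[1]))
    n_all = np.sum((e >= full[0]) & (e <= full[1]))
    return n_gap / n_all

# Example: a flat density of states over [-0.5, 0.5] eV gives ~0.2.
print(midgap_fraction(np.random.uniform(-0.5, 0.5, 10000)))
\end{verbatim}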
We find that each case hosts a large number of edge modes, typically 30\% of those in the given energy range. Combinations of superlattice parameters tend to produce more modes inside the gap region: a given example of a small even scalar addition to an otherwise odd-parity scalar potential increases the fraction of midgap states by $\sim10\%$, as shown in the table. From this we expect that many edge modes should be present in the previously gapped region for any general combination of superlattice parameters. \begin{figure} \includegraphics[width=0.3\textwidth]{dos_sj.pdf} \includegraphics[width=0.3\textwidth]{dos_sjs.pdf} \includegraphics[width=0.3\textwidth]{dos_sja.pdf} \caption{The density of states for nanoribbons 144 lattice sites in width and superlattice unit cells of 48 $\times$ 48 graphene cells. Superlattice parameters are those given in Eq. \ref{eq:parameters} (\textit{left}), including cases with large even (\textit{middle}) and odd (\textit{right}) scalar potentials, $V_s^{e,o} = 100$ meV. Each density of states consists of 28\%, 15\% and 28\% edge modes respectively.} \label{fig:dos} \end{figure} \begin{table} \caption{Fraction of the total density of states calculated in the range $-0.5 \leq E \leq 0.5$ eV that can be attributed to edge modes (taken to lie in the bulk gap, range $-0.1 \leq E \leq 0.1$ eV). All calculations are performed for nanoribbons 144 lattice sites in width, with superlattice unit cells of 48 $\times$ 48 graphene cells. Unless specified otherwise all superlattice parameters are set to $0$, while all cases have a bulk gap $\delta = 0.05$~eV. All superlattice parameter strengths are given in meV.}\label{tab:dos} \begin{center} \begin{tabular}{ |c| c c c| c c c| c c c| c c c| c c c| } \hline {Parameter strength} & \multicolumn{3}{c|}{$V_s^e$} & \multicolumn{3}{c|}{$V_s^o$} & \multicolumn{3}{c|}{$V_\Delta^e$} & \multicolumn{3}{c|}{$V_\Delta^o$} & {$V_s^e$} & {$V_s^o$} & {$\delta$} \\ {(meV)} & 5 & 10 & 20 & 5 & 10 & 20 & 5 & 10 & 20 & 5 & 10 & 20 & 8 & 20 & 20 \\ \hline Midgap states (\%) & 30 & 30 & 27 & 30 & 30 & 30 & 30 & 30 & 30 & 30 & 30 & 25 & \multicolumn{3}{c|}{40} \\ \hline \end{tabular} \end{center} \end{table} \section{Changing Supercell Size} In order to address the most experimentally relevant superlattice configurations of commensurate graphene and hexagonal boron nitride, we have used large superlattice unit cells including approximately 50 $\times$ 50 graphene unit cells, while the angle between their crystallographic axes, which determines the size of the Moir\'e pattern, is $\theta \sim 0^{\circ}$. By increasing $\theta$ the size of the superlattice unit cell is reduced, and we have verified that this effect does not significantly alter the conclusions drawn in the main text, as we shall outline here. We can directly investigate the effect of changing supercell size on the flat bands using the tight-binding model described in Section \ref{sec:nanoribbon}. Considering nanoribbons 144 lattice sites in width as before, we calculate the band structure for increasingly large supercells using the same superlattice potentials as the strong even scalar potential case in Fig.~\ref{fig:3x48}. Results are given in Fig. \ref{fig:scsize}, with larger supercells producing an increasing number of bands within the bulk gap, eventually filling the gap for realistically large supercells as discussed in the main text and Fig.~\ref{fig:3x48}. In addition, the gap between bulk bands (most notably at $k=\pi$) tends towards the value calculated using the continuum model.
These results are qualitatively similar regardless of our choice of superlattice parameters. We attribute this behaviour to the folding of our graphene FBZ due to the superlattice: bands across the graphene FBZ are mapped on to the smaller superlattice FBZ, so accordingly the number of bands in the superlattice FBZ will be larger, dependent on the size of the superlattice. Without superlattice perturbation parameters the flat band hosted on a zigzag edge of graphene will be mapped on to itself, producing degenerate subbands in the superlattice FBZ. Finite superlattice perturbations alter the energies of these bands, lifting the degeneracy and filling the bulk gap. Note that while a $3n \times 3n$ supercell results in $4n$ flat bands, as stated in the main text, our tight-binding model only considers a single valley so the results shown will host $2n$ flat bands. \begin{figure} \includegraphics[width=0.3\textwidth]{bands_sjs_24x6.pdf} \includegraphics[width=0.3\textwidth]{bands_sjs_8x18.pdf} \includegraphics[width=0.3\textwidth]{bands_sjs_6x24.pdf} \caption{Band structures for a nanoribbon 144 lattice sites in width, with parameters given in Eq. \ref{eq:parameters} except for $V_s^e$ = 100 meV. From left to right, supercells are 6 $\times$ 6, 18 $\times$ 18 and 24 $\times$ 24 graphene unit cells respectively; results for the 12 $\times$ 12 and 48 $\times$ 48 cases are given in Fig. \ref{fig:3x48}.}\label{fig:scsize} \end{figure} In the continuum model outlined in Section \ref{sec:continuum}, changing the supercell size changes the magnitude of the superlattice reciprocal lattice vectors $|\vec{G}_n|$; this is inversely proportional to the number of graphene unit cells included in a single supercell, larger supercells corresponding to a smaller superlattice FBZ. Using the method described in Sec.~\ref{sec:Chern} we recalculate the Chern numbers for the first four valence and conduction subbands, labeled relative to zero energy ($-4 \dots 4$ respectively), for supercells of varying size $N_{SL} \times N_{SL}$ and the superlattice parameters in Eq. \ref{eq:parameters}. As outlined in the main text, provided the Chern numbers of occupied subbands are non-trivial the system remains a valley Hall insulator. Results found using 10 plaquettes to calculate the Chern number for varying $N_{SL}$ are given in Table~\ref{tab:Chern}. \begin{table} \caption{Chern numbers for the conduction and valence bands as a function of supercell size.}\label{tab:Chern} \begin{center} \begin{tabular}{ |c| c c c c c| } \hline \diaghead{Band $N_{SL}$}{Band}{$N_{SL}$} & 20 & 30 & 40 & 50 & 60 \\ \hline $-4$ & 1 & -2 & 1 & 1 & -2 \\ $-3$ & -1 & 2 & -1 & -1 & 3 \\ $-2$ & 0 & 0 & 1 & 1 & 0 \\ $-1$ & 1 & 1 & 0 & 0 & 0 \\ $1$ & -1 & -1 & -1 & 0 & 0 \\ $2$ & 0 & 0 & 2 & 1 & 1 \\ $3$ & -3 & -2 & 0 & 0 & 0 \\ $4$ & 4 & 3 & -1 & -1 & -1 \\ \hline \end{tabular} \end{center} \end{table} \section{Current Operator} We shall look at the equilibrium expectation value of the current operator based on our knowledge of the wavefunctions of charge carriers in a nanoribbon. The wavefunctions are all of the discrete form \begin{equation} \psi_{l,k_x}(\vec r_{i,j}), \end{equation} with energy $\epsilon_l(k_x)$ for a given band $l$. Here $i,j$ label the lattice sites, and $\vec r$ is the coordinate-space position of the atom labeled by $(i,j)$ inside the nanoribbon. The quantity $k_x$ is the electron momentum parallel to the ribbon.
The current operator is proportional to the velocity operator, which in the well-known case of a 1D lattice model may be defined as \cite{KAL11} \begin{equation} j_{1D} = -i w \left(c^\dagger_{i}c_{i+1}-c^\dagger_{i+1}c_{i}\right). \end{equation} We similarly define a current operator for a 2D nanoribbon. We assume an applied voltage parallel to the nanoribbon, perturbing the system by introducing a preferential hopping direction, and ignore the small effects of the periodic modulations so that there is no transverse current flowing perpendicular to the edges. The electrons can thus only hop between nearest neighbors along a series of 1D zigzag chains. This allows us to define the current at the vertical position of each chain, which we choose as the average vertical displacement of the bonds in that chain. For a bond oriented in direction $\delta\vec{r}=\vec r_{i'j'}-\vec r_{ij}$ the current operator is therefore \begin{equation} \left(j_x\right)_{\vec{r},\vec{r}'} = j_x(\vec r, \delta\vec r)\delta_{|\delta\vec r|,1} \propto i\, \delta r_x\,\delta_{|\delta\vec r|,1}. \end{equation} In order to measure the current's vertical dependence across the width of our nanoribbon we introduce an operator that gives the average vertical displacement of each bond, $P_y = \delta_{(r_y + r_y')/2-y}\delta_{|\delta\vec r|,1}$. As we have mirror symmetry about any axis perpendicular to the ribbon direction, however, the equilibrium expectation value of the current $j_x P_y$ will be zero -- without incorporating the effect of the external voltage, electrons are equally likely to hop in both directions. At equilibrium the current contributions from $\pm k_x$ will be equal and opposite, and we must include a perturbation of the states to produce a net current. We achieve this by decomposing our net current into left- ($-x$) and right-moving ($x$) components. Results shown both here and in the main text have been calculated for the range $\pi \leq k_x \leq 2\pi$ such that dispersive bands have positive dispersion; we have verified that the corresponding current from $0 \leq k_x \leq \pi$ is both equal in magnitude and flows in the opposite direction. This corresponds to the assumption that applying a small voltage generates a shift in the chemical potential, providing an imbalance between left- and right-moving currents. We therefore calculate the expectation value of $j_x$ at a fixed energy $E$, taken at the Fermi energy: \begin{align}\label{eq:current} J_+(E,y )&=\int_\pi^{2\pi} d k_x\sum_l \delta(\epsilon_l(k_x)-E)\sum_{\vec r, \vec r'} \langle \psi_{l,k_x} | \vec r\rangle (j_x)_{\vec r, \vec r'} \delta_{( r_y+ r'_y)/2-y} \langle \vec r'|\psi_{l,k_x}\rangle \nonumber\\ &\propto \int_\pi^{2\pi} d k_x\sum_l \delta(\epsilon_l(k_x)-E) \sum_{\vec r, \vec r'} \langle \psi_{l,k_x} | \vec r\rangle i \delta r_x\,\delta_{|\delta\vec r|,1} \delta_{( r_y+ r'_y)/2-y} \langle \vec r'|\psi_{l,k_x}\rangle. \end{align} We further exploit the reflection symmetry of our nanoribbon to enforce an overall positive current contribution from each band. It is possible that the current contribution from a given wavefunction will have counter-propagating components along the width of our nanoribbon: to ensure that the current produced by each band travels in the same direction overall, we multiply each current as calculated in Eq. \ref{eq:current} by its overall sign calculated across the nanoribbon's width. Since the application of a voltage across the ribbon means that the Fermi energy varies along its length, we cannot work with a single state.
As we only have equilibrium results, the best we can do is to take an average of the current operator over a range of energies. Thus, we find the total current due to our flat bands by summing over the current contributions from all bands $l$ within a given energy range; we add all currents calculated provided $E_1 \leq \epsilon_l \leq E_2$, with the lower and upper energies $E_1$ and $E_2$ chosen such that, as far as is possible, only flat bands are included. The total current and each wavefunction's contribution for the superlattice parameters used in Fig.~\ref{fig:3x48} are given in Figs.~\ref{fig:sjscurrent} and \ref{fig:sjacurrent}: energy ranges for different superlattice cell sizes have been set such that current contributions from flat bands are not hidden by those from bulk bands. We find that our model does produce edge transport, though it may be suppressed by bulk behavior depending on the energy range used. \begin{figure} \includegraphics[width=0.32\textwidth]{current_sjs_12x12_0-1_bands2.pdf} \includegraphics[width=0.32\textwidth]{current_sjs_3x48_0-05_bands2.pdf} \includegraphics[width=0.32\textwidth]{current_sjs_overlap_thick2.pdf} \caption{Currents across a nanoribbon 144 lattice sites in width due to a superlattice with our given parameters and a larger even scalar potential, $V_s^e$ = 0.1 eV. Contributions from individual wavefunctions are given for a superlattice with a unit cell of 12 $\times$ 12 graphene unit cells, -0.1 $\leq$ E $\leq$ 0.1 eV (\textit{left}) and 48 $\times$ 48 graphene unit cells, -0.05 $\leq$ E $\leq$ 0.05 eV (\textit{center}). The total current in each case is also given (\textit{right}).}\label{fig:sjscurrent} \end{figure} \begin{figure} \includegraphics[width=0.32\textwidth,valign=t]{current_sja_12x12_0-1_bands2.pdf} \includegraphics[width=0.32\textwidth,valign=t]{current_sja_3x48_0-05_bands2.pdf} \includegraphics[width=0.34\textwidth,valign=t]{current_sja_overlap_thick_2.pdf} \caption{Currents across a nanoribbon 144 lattice sites in width due to a superlattice with our given parameters and a larger odd scalar potential, $V_s^o$ = 0.1 eV. Contributions from individual wavefunctions are given for a superlattice with a unit cell of 12 $\times$ 12 graphene unit cells, -0.1 $\leq$ E $\leq$ 0.1 eV (\textit{left}) and 48 $\times$ 48 graphene unit cells, -0.05 $\leq$ E $\leq$ 0.05 eV (\textit{center}). The total current in each case is also given (\textit{right}).}\label{fig:sjacurrent} \end{figure} Here we highlight a possible ambiguity concerning the sign of our model current operator: as can be seen in both Figs. \ref{fig:sjscurrent} and \ref{fig:sjacurrent}, enforcing a net positive current per wavefunction over the entire width of the nanoribbon may lead to counterpropagating edge modes if the bulk contribution of a mode is almost equal but opposite to the edge one. We may therefore incorrectly estimate the proportion of edge transport in our nanoribbon; to resolve this issue one should perform more detailed calculations using non-equilibrium methods to calculate the current. \end{document}
\section{Causal inference with observational studies} \label{sec::causal-pscore-central} We focus on the canonical setting of causal inference with observational studies. We assume exchangeability of the units and thus drop the index $i$ for the $i$th unit $(i=1, \ldots, n)$. Let $Z$ denote the binary treatment, $Y$ denote the outcome of interest, and $X$ denote the pretreatment covariates. Under the potential outcomes framework \citep{Neyman23, Rubin74, Rosenbaum83ps}, let $Y(1)$ and $Y(0)$ denote the hypothetical outcomes under the treatment and control, respectively. This framework allows us to define the individual causal effect $Y(1) - Y(0)$ and the average causal effect $$ \tau = E\{ Y(1) - Y(0) \} . $$ A fundamental difficulty of causal inference is that we cannot simultaneously observe both $Y(1)$ and $Y(0)$ for the same unit. The observed outcome equals $Y = ZY(1) + (1-Z)Y(0)$. Following \citet{Rosenbaum83ps}, we assume unconfoundedness and overlap throughout: \begin{equation} \label{condition::ignorability} Z \ind \{ Y (1), Y (0) \} \mid X,\quad 0< e(X) <1 , \end{equation} where $$ e(X) = P(Z=1\mid X) $$ is the propensity score (PS). Under \eqref{condition::ignorability}, the average causal effect can be identified by \begin{eqnarray} \tau &=& E\left\{ \frac{ZY}{e(X)} - \frac{(1-Z)Y}{1-e(X)} \right\} \label{eq::ipw-identification} \\ &=& E\{ \mu_1(X) - \mu_0(X) \} \label{eq::outreg-identification} \end{eqnarray} where $ \mu_z(X) = E( Y \mid Z=z,X)$ is the outcome mean conditional on covariates under treatment $z$ $(z=0,1)$. The identification formula \eqref{eq::ipw-identification} motivates the inverse PS weighting estimator \citep{rosenbaum1987model} $$ \hat{\tau}^{\ipw} = \frac{\sumn Z_iY_i/\hat{e}(X_i) }{ \sumn Z_i/\hat{e}(X_i) } - \frac{\sumn (1-Z_i) Y_i/\{ 1-\hat{e}(X_i) \} }{ \sumn (1-Z_i)/\{1-\hat{e}(X_i)\} } $$ where $\hat{e}(X_i)$ denotes the estimated PS for unit $i$. Here we use the Hajek form instead of the Horvitz--Thompson form due to its superior finite-sample properties \citep{Lunceford04}. The identification formula \eqref{eq::outreg-identification} motivates the outcome regression estimator $$ \hat{\tau}^{\reg} = n^{-1} \sumn \{ \hat \mu_1(X_i) - \hat \mu_0(X_i) \} $$ where $\hat \mu_1(X_i) $ and $ \hat \mu_0(X_i)$ denote the estimated conditional means of the outcomes under the treatment and control, respectively. The estimator $\hat{\tau}^{\ipw} $ is consistent for $\tau$ if the PS model is correct, whereas the estimator $\hat{\tau}^{\reg} $ is consistent if the outcome model is correct. Motivated by the semiparametric efficiency theory \citep{bickel1998efficient, tsiatis2006semiparametric}, the doubly robust estimator combines both models \citep{Bang05}: $$ \hat{\tau}^{\dr} = \hat{\tau}^{\reg} + n^{-1} \sumn \left\{ \frac{Z_i R_i}{\hat{e}(X_i)} - \frac{(1-Z_i) R_i}{1-\hat{e}(X_i)} \right\} $$ where $R_i = Y_i - \hat \mu_{Z_i}(X_i)$ denotes the residual from outcome modeling. The estimator $\hat{\tau}^{\dr} $ is consistent if either the PS or the outcome model is correct, justifying its name ``doubly robust.'' With parametric PS and outcome models, it is straightforward to construct estimators for the variances of these estimators based on the theory of M-estimation or the nonparametric bootstrap; see \citet{Lunceford04} and \citet{yang2020combining} for reviews.
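As a concrete illustration of the three estimators above, the following minimal Python sketch (our illustration, not the paper's replication code; the use of \texttt{scikit-learn}, whose logistic regression applies a mild ridge penalty by default as a stand-in for plain maximum likelihood, is an assumption) computes $\hat{\tau}^{\ipw}$, $\hat{\tau}^{\reg}$, and $\hat{\tau}^{\dr}$ from data arrays \texttt{X}, \texttt{Z}, \texttt{Y}.
\begin{verbatim}
# Minimal sketch: Hajek IPW, outcome-regression, and doubly robust
# estimators with a logistic PS model and linear outcome models.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def estimators(X, Z, Y):
    # Estimated PS e(X) and fitted outcome means mu_1(X), mu_0(X).
    e = LogisticRegression(max_iter=1000).fit(X, Z).predict_proba(X)[:, 1]
    mu1 = LinearRegression().fit(X[Z == 1], Y[Z == 1]).predict(X)
    mu0 = LinearRegression().fit(X[Z == 0], Y[Z == 0]).predict(X)
    w1, w0 = Z / e, (1 - Z) / (1 - e)
    tau_ipw = np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)
    tau_reg = np.mean(mu1 - mu0)
    # DR correction: mean of Z*R/e - (1-Z)*R/(1-e), with R the residual.
    tau_dr = tau_reg + np.mean(w1 * (Y - mu1) - w0 * (Y - mu0))
    return tau_ipw, tau_reg, tau_dr
\end{verbatim}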
The recent literature has also extended these estimators to allow for more flexible nonparametric or machine learning estimation of the outcome model \citep{hahn1998role}, or the PS model \citep{Hirano03}, or both \citep{chernozhukov2018double}. We focus on estimators based on parametric models but conjecture that similar results extend to estimators based on more flexible models under some regularity conditions. \section{The role of the propensity score in Bayesian causal inference} \label{sec::propensityscore-ignorable-bayes} \subsection{The propensity score is ignorable in Bayesian causal inference} \label{sec::pscore-no-role} Let $\theta_X$, $\theta_Z$, and $\theta_Y$ represent, respectively, the parameters of the models for the covariate distribution, the PS, and the outcome conditional on the treatment and covariates. Rewrite the identification formula \eqref{eq::outreg-identification} of $\tau$ as $$ \tau = \int \{ \mu_1(x; \theta_Y) - \mu_0(x; \theta_Y) \} f ( x; \theta_X) \text{d} x , $$ which depends only on the unknown parameters $\theta_X$ and $\theta_Y$. Assuming independent priors on the parameters $\theta_X$, $\theta_Z$ and $\theta_Y$, the posterior distribution based on exchangeable data $(X_i, Z_i, Y_i)_{i=1}^n$ factors into three independent components: \begin{eqnarray*} &&P( \theta_X, \theta_Z, \theta_Y\mid \cdot ) \\ &\propto & P(\theta_X) \prodn P(X_i; \theta_X) \cdot P(\theta_Z) \prod_{i=1}^n P(Z_i \mid X_i; \theta_Z) \cdot P(\theta_Y) \prod_{i=1}^n P(Y_i\mid Z_i, X_i; \theta_Y) . \end{eqnarray*} The posterior distributions of $\theta_X$ and $\theta_Y$ do not depend on the second component corresponding to the PS. Therefore, Bayesian inference of $\tau$ does not depend on the PS. \citet{saarela2016bayesian} gave a similar discussion. One might wonder whether the conclusions above will change if we use the identification formula \eqref{eq::ipw-identification} based on inverse PS weighting. We can verify that under \eqref{condition::ignorability}, the formula \eqref{eq::ipw-identification} reduces to the formula \eqref{eq::outreg-identification}. The PS again does not play any role in Bayesian causal inference. The above discussion focuses on $\tau$, the average causal effect of a super population. \citet{Rubin78} focused on the finite-sample average causal effect $$ \tau_\text{fs} = n^{-1} \sum_{i=1}^n \{ Y_i(1) - Y_i(0) \} , $$ and reduced the problem of causal inference to imputing the missing potential outcomes based on their posterior predictive distributions. He also showed that the PS can be ignored in finite-sample Bayesian causal inference. By \citet{Rubin78}, the PS is ignorable. \citet{Hill11} and \citet{Ding2018causalinference} discussed other parameters and reached the same conclusion. \subsection{Existing strategies to use the propensity score in Bayesian causal inference} The PS is central in frequentist's causal inference. Section \ref{sec::causal-pscore-central} above reviews its role in constructing the inverse PS weighting and doubly robust estimators. In contrast, Section \ref{sec::pscore-no-role} dismisses the role of the PS in Bayesian causal inference. A parallel discussion appeared in survey sampling \citep{Rubin85, pfeffermann1993role}. Nevertheless, completely ignoring the PS seems worrisome. Because the PS characterizes the treatment assignment mechanism, it is intuitive to use it in one way or another in analyzing observational data.
Below I will review some strategies to use the PS in Bayesian causal inference, with the last one being the proposal of this article. \paragraph{Use the PS in the design phase} \citet{Rubin85} provided a heuristic argument based on robustness for the importance of using the PS in Bayesian causal inference. \citet{robins1997toward} provided more theoretical discussion of this issue. \citet{Rubin07} later argued that observational studies should have two stages: the design stage and the analysis stage. Based on this view, even if the analysis stage is purely Bayesian in the sense of Section \ref{sec::pscore-no-role}, the PS plays a central role in the design stage to make the observational study as close as possible to a randomized experiment. This view highlights the role of the PS in designing observational studies but still cannot incorporate the PS in the Bayesian analysis reviewed in Section \ref{sec::propensityscore-ignorable-bayes}. \paragraph{Use dependent priors} Section \ref{sec::propensityscore-ignorable-bayes} assumes independent priors on the parameters $\theta_X$, $\theta_Z$ and $\theta_Y$. Consequently, the posterior distribution factors into three independent components, and then the PS is ignorable for inferring $\tau$. The independence of the posterior distributions, however, does not hold with dependent priors on $\theta_X$, $\theta_Z$ and $\theta_Y$. \citet{Wang12} used a dependent prior for variable selection in both the PS and outcome models. \citet{ritov2014bayesian} constructed a dependent prior that yielded desirable frequentist's properties. Their prior for the outcome model depended on the PS, and they only focused on some special cases. In general, this strategy may not be easy to implement to achieve desired frequentist's properties. \paragraph{Use the PS as a covariate in the outcome model} \citet{Zigler13}, \citet{an20104}, \citet{ZiglerDominici14}, \citet{zigler2016central}, and \citet{hahn2020bayesian} forced the PS to enter the outcome model in Bayesian computation. However, this strategy may be controversial. Arguably, the outcome model, which reflects the nature of the potential outcomes generating process, should not depend on the PS model, which reflects the treatment assignment mechanism. Overall, while this strategy can be useful to improve the robustness of causal inference, it relies on a somewhat unnatural factorization of the joint likelihood. \paragraph{Posterior predictive estimation} Based on the Bayesian posterior predictive perspective, \citet{saarela2016bayesian} proposed to use the posterior distribution of the doubly robust estimator $\hat{\tau}^{\dr}$, with the PS and outcome models drawn from their posterior distributions. \citet{antonelli2020causal} extended this idea to the setting with high-dimensional covariates. This is a powerful idea for integrating frequentist's procedures in Bayesian causal inference. \paragraph{Posterior predictive $p$-value} Closely related to \citet{saarela2016bayesian} and \citet{antonelli2020causal}, the proposal in the next section is based on the posterior predictive $p$-value (PPP) for the model of the strong null hypothesis of no causal effects for any units whatsoever. The PPP is a natural extension of the classic Fisher randomization test (FRT) developed for randomized experiments. In observational studies, the proposed PPP equals the $p$-value based on the FRT averaged over the posterior predictive distribution of the PS. We present the details below.
\section{The PPP depends on the propensity score} \label{sec::use-pscore-bayesian-ppp} \subsection{General formulation of the PPP} \label{sec::general-ppp} We first show that the PS naturally enters Bayesian causal inference if we use the PPP for the model with the strong null hypothesis \citep{Rubin80}: $$ H_{0\textsc{f}}: Y_i(1) = Y_i(0 ) = Y_i \text{ for all }i. $$ The PS plays a central role in the PPP although it is ignorable in the standard Bayesian inference reviewed in Section \ref{sec::pscore-no-role}. We will give a general formulation of the PPP for $H_{0\textsc{f}}$ below. Focus on the finite samples at hand. Under the strong null hypothesis $H_{0\textsc{f}}$, the covariates and outcomes are all fixed, and the only random component is the treatment indicators. Under \eqref{condition::ignorability}, the posterior distribution of $\theta_Z$ is $$ P(\theta_Z\mid \cdot ) \propto P(\theta_Z ) \prod_{i=1}^n P(Z_i \mid X_i, Y_i; \theta_Z) = P(\theta_Z ) \prod_{i=1}^n P(Z_i \mid X_i; \theta_Z) . $$ It only depends on the PS model and reduces to a basic problem in Bayesian modeling. For instance, if $ P(Z_i \mid X_i; \theta_Z) $ follows a logistic model, then $P(\theta_Z\mid \cdot ) $ is the corresponding posterior distribution, which can be easily obtained using the \texttt{MCMClogit} function in the \texttt{MCMCpack} package in \texttt{R} \citep{martin2011mcmcpack}. Define the statistic as $T= T(\bm Z, \bm X, \bm Y)$, where $\bm Z, \bm X, \bm Y$ are the concatenated treatments, covariates, and outcomes for all observed units. Define $$ \textup{PPP} = P^\textup{pred}\left\{ T(\bm Z^\textup{pred}, \bm X, \bm Y) \geq T(\bm Z, \bm X, \bm Y) \right\} $$ where $ P^\textup{pred}$ is the probability measure over the posterior predictive distribution of $\bm Z^\textup{pred}$ given the data: $$ P(\bm Z^\textup{pred} \mid \cdot) = \int \prod_{i=1}^n P( Z_i^\textup{pred}\mid X_i, \theta_Z) P( \theta_Z \mid \cdot) \textup{d} \theta_Z. $$ It is clear that the PS plays a central role in defining the Bayesian PPP. The PPP was proposed for general Bayesian inference \citep{Rubin84, meng1994posterior, Gelman96}. It has also been applied to many other Bayesian causal inference problems \citep[e.g.,][]{Rubin84, rubin1998more, mattei2013exploiting, espinosa2016bayesian, jiang2016principal, forastiere2018posterior, zeng2020being}. \subsection{Implementation}\label{sec::implementation-ppp} We then show how to implement the generic PPP introduced above. The first implementation follows the definition of the PPP closely: we simulate the test statistic by first drawing $\theta_Z$ from its posterior distribution and then drawing the treatment indicators conditional on $\theta_Z$. The detailed algorithm is below: \begin{enumerate} \item[A1] draw $\theta_Z^r$ based on $P(\theta_Z \mid \cdot ) $, draw $Z_i^r$ based on $P(Z_i = 1\mid X_i, \theta_Z^r )$ for all units, and compute the statistic $T^r = T(\bm Z^r, \bm X, \bm Y)$; \item[A2] repeat the above step to obtain $T^r\ (r=1, \ldots, R)$; \item[A3] calculate \begin{eqnarray}\label{eq::ppp-mc} \textup{PPP} \stackrel{\cdot}{=} R^{-1} \sum_{r=1}^R \mathcal{I}( T^r \geq T ). \end{eqnarray} \end{enumerate} Computationally, the above algorithm is straightforward. We use it to compute the PPP in simulation in Section \ref{sec::simulation-studies}; a minimal code sketch is given below. Moreover, an alternative implementation, discussed afterwards, can provide more insights into the PPP.
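The following Python sketch illustrates steps A1--A3 (our illustration, not the replication code: it assumes a flat prior on $\theta_Z$, replaces \texttt{MCMClogit} with a simple random-walk Metropolis sampler, and takes the test statistic $T$ as a user-supplied function).
\begin{verbatim}
# Minimal sketch of the Monte Carlo PPP (A1-A3) with a logistic PS model.
import numpy as np

def log_post(theta, X, Z):
    # Logistic log-likelihood; flat prior assumed, so this is the log posterior
    # up to a constant.
    logits = X @ theta
    return np.sum(Z * logits - np.logaddexp(0, logits))

def ppp(X, Z, Y, T, R=2000, burn=1000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    lp = log_post(theta, X, Z)
    T_obs, hits, n = T(Z, X, Y), 0, 0
    for r in range(burn + R):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop, X, Z)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        if r >= burn:
            # A1: draw Z^r given theta_Z^r, then compute T^r.
            Zr = rng.uniform(size=len(Z)) < 1 / (1 + np.exp(-X @ theta))
            hits += T(Zr.astype(int), X, Y) >= T_obs
            n += 1
    return hits / n                                 # A3: Eq. (ppp-mc)
\end{verbatim}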
By swapping the integral in the definition of the PPP, we can rewrite the PPP as the FRT averaged over the posterior predictive distribution of the PS. We give the details below. Equivalently, we can also obtain the PPP based on \begin{eqnarray}\label{eq::ppp-def} \textup{PPP} = \int p(\theta_Z ) P(\theta_Z \mid \cdot ) \textup{d} \theta_Z \end{eqnarray} where $p(\theta_Z) $ is the $p$-value for a fixed $\theta_Z$ defined below: \begin{enumerate} \item[B1] draw $Z_i^s(\theta_Z)$ based on $P(Z_i = 1\mid X_i; \theta_Z)$ for all units, and compute the statistic $ T^s(\theta_Z) = T(\bm Z^s(\theta_Z), \bm X, \bm Y)$; \item [B2] repeat the above step to obtain $T^s(\theta_Z)$ $(s=1, \ldots, S)$; \item[B3] calculate $$ p(\theta_Z) \stackrel{\cdot}{=} S^{-1} \sum_{s=1}^S \mathcal{I}( T^s(\theta_Z) \geq T ). $$ \end{enumerate} For a fixed $\theta_Z$, the $p$-value $p(\theta_Z)$ is justified by the standard FRT because the treatment assignment is known. The PPP equals the average of $p(\theta_Z)$ over the posterior distribution of $\theta_Z$ by \eqref{eq::ppp-def}. \citet{meng1994posterior} and \citet{Gelman96} re-formulated the PPP as \eqref{eq::ppp-def} and motivated the above procedure B1--B3. As a side comment, the FRT interpretation of the PPP is natural in causal inference with observational studies. This interpretation cannot be generalized to the survey sampling setting, although the existing literature focused more on the commonality of causal inference and survey sampling \citep[e.g.,][]{Rubin85, Bang05}. They differ in this aspect. \subsection{Choice of the test statistic: studentized estimators are superior} We now discuss the choice of the test statistic. The generic PPP introduced in Section \ref{sec::general-ppp} allows for using any test statistic. However, practitioners may find the strong null hypothesis restrictive. Moreover, frequentist's statisticians may completely dismiss the PPP due to its Bayesian nature. To address these concerns, we propose to use the studentized doubly robust statistic in the PPP, which yields an asymptotically valid $p$-value for the weak null hypothesis $$ H_{0\textsc{n}}: \tau = 0 $$ for the average causal effect. This guarantee is under the frequentist's paradigm \citep[cf.][]{robins2000asymptotic} even though the original PPP is motivated by Bayesian thinking. Even though Section \ref{sec::causal-pscore-central} has a completely different motivation from the Bayesian PPP, the estimators there can provide important insights into choosing the test statistic. If $H_{0\textsc{n}}$ is of interest, then intuitively, we can choose $T$ as the absolute value of $\hat{\tau}^{\ipw}$, $\hat{\tau}^{\reg}$ or $\hat{\tau}^{\dr}$. Previous results for the FRT \citep{chung2013exact, wu2021randomization, zhao2021covariate}, however, suggest that using them in the PPP does not ensure a correct type one error rate even in large samples. Better choices are the absolute values of the studentized estimators, that is, $\hat{\tau}^{\ipw}$, $\hat{\tau}^{\reg}$ and $\hat{\tau}^{\dr}$ divided by the corresponding consistent standard errors. Our simulation below will demonstrate the superiority of the studentized estimators, especially the one based on the doubly robust estimator. We focus on the empirical evaluation of the PPP and leave the rigorous frequentist's theory to another report. \subsection{Special case: FRT} With a randomized experiment, $P(Z_1, \ldots, Z_n \mid X_1,\ldots, X_n)$ is determined by the designer of the experiment without any unknown parameter.
In this case, $\theta_Z$ is empty and we do not need to simulate its posterior distribution. In calculating the PPP, we simply simulate the treatment indicators from $P(Z_1, \ldots, Z_n \mid X_1,\ldots, X_n)$, following the same rule as the initial randomized experiment. This is precisely the FRT, as pointed out by \citet[][Section 5.6]{Rubin84} and \citet[][Section 4]{Rubin05}. Rubin pointed out the Bayesian interpretation of the FRT and thus hinted at the idea in this article. However, he did not pursue the general form of the PPP proposed above for observational studies. \section{Simulation} \label{sec::simulation-studies} In this section, we evaluate the finite-sample performance of the PPP via simulation. The results suggest that the PPP using the studentized doubly robust estimator has the most desirable properties. For the frequentist's evaluation, we repeatedly generate the data 3000 times. In each replication, we follow the procedures A1--A3 in Section \ref{sec::implementation-ppp} to calculate the PPP. We use the function \texttt{MCMClogit} in the \texttt{MCMCpack} package to simulate the posterior distribution of the coefficients of the logistic PS model, with an improper uniform prior on $\theta_Z$, 1000 burn-in iterations, and 2000 draws of the $\theta_Z^r$'s. \subsection{Data generating process and model specification} We choose the sample size as $n=1000$. We consider two different types of data generating process (DGP). \paragraph{DGP without extreme PS} We first generate four covariates from \begin{eqnarray*} &X_{i1}=W_{i1},\quad X_{i2}=W_{i2}+0.3X_{i1},&\\ &X_{i3}=W_{i3}+0.2(X_{i1}X_{i2} - X_{i2}),\quad X_{i4}=W_{i4} +0.1(X_{i1}+X_{i3}+X_{i2}X_{i3}),& \end{eqnarray*} with $$ W_{i1} \sim \text{Bernoulli}(0.5),\quad W_{i2} \sim \text{Uniform}(0,2),\quad W_{i3} \sim \text{Exponential}(1),\quad W_{i4} \sim \chi^2(4). $$ The PS follows the logistic model $$ P(Z_i=1\mid X_i; \theta_Z) = \{ 1+\exp(-X_i^{\scriptscriptstyle\text T}\theta_Z) \}^{-1} \quad \text{ with } \theta_Z = (-1,0.5,-0.25,-0.1)^{\scriptscriptstyle \text T} . $$ The outcomes follow the linear models: $$ Y_i(0)= \ \mu_0+({X}_i - \mu)^{\scriptscriptstyle \text T} {\beta}_{1}+ \epsilon_i(0) , \quad Y_i(1)= \ \mu_1+ ({X}_i - \mu)^{\scriptscriptstyle \text T}{\beta}_{0}+ \epsilon_i(1), $$ with $\epsilon_i(0)\sim \N(0,5^2)$, $\epsilon_i(1) \sim \N(0,1^2)$, and $$ \mu_1=\mu_0=1, \quad \mu = \E (X) ,\quad \beta_1 = (0.1,-0.2,-0.2,-0.2)^{\scriptscriptstyle \text T} ,\quad \beta_0 = (-0.1,0.3,0.1,-0.2)^{\scriptscriptstyle \text T} . $$ So $\tau =\E\{ Y(1) \} - \E\{ Y(0) \} = \mu_1 - \mu_0=0$. \paragraph{DGP with extreme PS} We first generate two covariates $$ X_{i1} = \exp(W_{i1}), \quad X_{i2} = \exp(W_{i2}) \quad \text{ with } (W_{i1},W_{i2})^{\scriptscriptstyle \text T} \sim \N(0,I_2). $$ The PS model is $$ P(Z_i=1\mid X_i; \theta_Z ) = \{ 1+\exp(1-X_i^{\scriptscriptstyle\text T}\theta_Z) \}^{-1} \quad \text{ with } \theta_Z = (1,-1)^{\scriptscriptstyle \text T}. $$ The coefficients of the outcome models change to $\mu_0 = \mu_1 = -1 + 0.1\sqrt{e}$, $\beta_1=(-0.2, 0.1)^{\scriptscriptstyle\text T}$ and $\beta_0=(0.2,-0.1)^{\scriptscriptstyle\text T}$. So again $\tau = 0$. For each DGP, we consider four combinations of model specifications: \begin{enumerate} \item[(i)] Both the PS and the outcome models are correctly specified. \item[(ii)] The PS model is correctly specified but the outcome model is misspecified.
In particular, for the DGP without extreme PS, we regress $Y$ on $W_2$ and $W_3$; for the DGP with extreme PS, we regress $Y$ on $W_1$ and $W_2$. \item[(iii)] The outcome model is correctly specified but the PS model is misspecified. In particular, for the DGP without extreme PS, we regress $Z$ on $W_2$ and $W_3$; for the DGP with extreme PS, we regress $Z$ on $W_1$ and $W_2$. \item[(iv)] Both the PS and the outcome models are misspecified. \end{enumerate} \subsection{Simulation under the weak null hypothesis} We first show the problem of using the unstudentized statistics under the weak null hypothesis. The original DGP without extreme PS yields a conservative PPP. Once we flip the treatment and the control group, we can get an anti-conservative PPP, as shown in Figure \ref{weak null unstu}. \begin{figure}[t] \centering \includegraphics[width = 0.85\textwidth]{weaknull_unstudentized.pdf} \caption{PPP using the unstudentized test statistics under $H_{0\textsc{n}}$ and the DGP without extreme PS. To obtain the anti-conservative PPP, we change $Z_i$ to $1-Z_i$. The densities are truncated at $2$.} \label{weak null unstu} \end{figure} We then show the superiority of using the studentized statistics under the weak null hypothesis. For computational simplicity, we use the estimated standard errors based on the theory of M-estimation. Figure \ref{fig::studentization}(a) shows the distribution of the PPP under the DGP without extreme PS. The PPP has uniform distributions with correctly specified models. The PPP with the studentized doubly robust estimator is doubly robust since it is uniform if either the PS or the outcome model is correctly specified. It is our final recommendation. Under the DGP with extreme PS, the superiority of our recommendation becomes clearer, as shown in Figure \ref{fig::studentization}(b). \begin{figure}[t] \centering \includegraphics[width = 0.85\textwidth]{weaknull_studentized.pdf} (a) DGP without extreme PS \includegraphics[width = 0.85\textwidth]{weaknull_studentized_extremeps.pdf} (b) DGP with extreme PS \caption{PPP using studentized test statistics under $H_{0\textsc{n}}$. The densities are truncated at $2$. }\label{fig::studentization} \end{figure} \subsection{Comparison of the PPP with normal approximation} We now compare the performance of the PPP and the normal approximation based on the studentized doubly robust estimator. We use the standard errors based on both the asymptotic expansion and the bootstrap by resampling the data 2000 times. So in total, we compare four $p$-values. We first compare them under the weak null hypothesis. Under the DGP without extreme PS, they have similar performance, so we omit the results. Under the DGP with extreme PS, the bootstrap or PPP alone has superior performance compared to the normal approximation based on the asymptotic standard error; their combination does not yield further improvement. Figure \ref{fig::ppp-normal-compare}(a) shows the histograms of the four $p$-values under four scenarios. We then compare their power under an alternative hypothesis. We use the DGP without extreme PS for this case. Let $\mu_1 = 1.1$ and $\mu_0 = 1$ so that $\tau = \mu_1-\mu_0 = 0.1$. In this scenario, all $p$-values have similar power as shown in Figure \ref{fig::ppp-normal-compare}(b).
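For concreteness, the DGP without extreme PS can be generated as in the following minimal Python sketch (our illustration, not the posted \texttt{R} replication code; the sample covariate mean stands in for $\mu = \E(X)$).
\begin{verbatim}
# Minimal sketch of the "DGP without extreme PS" described above.
import numpy as np
rng = np.random.default_rng(1)

def simulate(n=1000):
    W1 = rng.binomial(1, 0.5, n); W2 = rng.uniform(0, 2, n)
    W3 = rng.exponential(1, n);   W4 = rng.chisquare(4, n)
    X1 = W1; X2 = W2 + 0.3 * X1
    X3 = W3 + 0.2 * (X1 * X2 - X2)
    X4 = W4 + 0.1 * (X1 + X3 + X2 * X3)
    X = np.column_stack([X1, X2, X3, X4])
    e = 1 / (1 + np.exp(-X @ np.array([-1, 0.5, -0.25, -0.1])))
    Z = rng.uniform(size=n) < e
    mu = X.mean(axis=0)  # sample stand-in for E(X)
    Y0 = 1 + (X - mu) @ np.array([0.1, -0.2, -0.2, -0.2]) + rng.normal(0, 5, n)
    Y1 = 1 + (X - mu) @ np.array([-0.1, 0.3, 0.1, -0.2]) + rng.normal(0, 1, n)
    return X, Z.astype(int), np.where(Z, Y1, Y0)
\end{verbatim}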
\begin{figure}[t] \centering \includegraphics[width = \textwidth]{weaknull_compare_extremeps.pdf} (a) under the DGP with extreme PS and the weak null hypothesis \includegraphics[width = \textwidth]{alternative_compare_extremeps.pdf} (b) under the DGP without extreme PS and an alternative hypothesis \caption{Comparison of the PPP with the normal approximation based on the studentized doubly robust estimator (densities truncated at 2).} \label{fig::ppp-normal-compare} \end{figure} \subsection{Replication files and data analysis} The replication files of this article can be found at Harvard Dataverse: \begin{quotation} https://doi.org/10.7910/DVN/QPOS31 \end{quotation} There we post the \texttt{R} code for the simulation studies as well as two data analysis examples. \section{Discussion} \subsection{Summary} We first reviewed the conceptual difficulty of using the PS in Bayesian causal inference in Section \ref{sec::propensityscore-ignorable-bayes}. We then built upon \citet{Rubin84} to propose a PPP in Section \ref{sec::use-pscore-bayesian-ppp}, which naturally uses the PS and extends the classic FRT by averaging over the posterior predictive distribution of the PS. Moreover, we recommend using the studentized doubly robust estimator in the PPP, which yields superior finite-sample properties even from the frequentist's perspective under the weak null hypothesis, as illustrated by the simulation studies in Section \ref{sec::simulation-studies}. \subsection{Frequentist's properties} We have used simulation in Section \ref{sec::simulation-studies} to evaluate the frequentist's properties of the PPP, which leads to the following conjecture: \noindent {\bfseries Conjecture}: Assume $\tau = 0$ and regularity conditions. The PPP with the studentized doubly robust estimator, $\text{PPP}^\text{dr}$, has the following asymptotic property: $$ \text{PPP}^\text{dr} \stackrel{\text{d}}{\longrightarrow} \text{Uniform}(0,1) ,\qquad \text{ as } n\rightarrow \infty $$ if either the PS or the outcome model is correctly specified. The conjecture is a frequentist's statement although $\text{PPP}^\text{dr}$ is motivated by a Bayesian procedure. Intuitively, it holds because the studentized doubly robust estimator is asymptotically pivotal if either the PS or the outcome model is correctly specified. It ensures that we can use $\text{PPP}^\text{dr}$ as a standard frequentist's $p$-value for testing the weak null hypothesis of $\tau=0.$ We leave the proof of the conjecture to future work. \subsection{Epilogue: Did \citet{Rosenbaum83ps} mention the Bayesian propensity score?} Yes, they did. In \citet[][Section 1.3]{Rosenbaum83ps}, they wrote: \begin{quote} To a Bayesian, estimates of these probabilities are posterior predictive probabilities of assignment to treatment $1$ for a unit with vector $x$ of covariates. \end{quote} However, they did not provide any further discussion on the role of the PS in Bayesian causal inference, perhaps due to the incoherence with \citet{Rubin78}. The existing literature has clearly documented the difficulty of using the PS in standard Bayesian causal inference. We argue that a natural approach to incorporate the PS in Bayesian causal inference is the PPP, which can be viewed as the FRT averaged over the posterior predictive distribution of the PS. \acks{Peng Ding thanks the National Science Foundation (\# 1945136) for support, Fan Li for insightful discussion, and Zhichao Jiang and Avi Feller for helpful comments. } \vskip 0.2in
\section{Introduction} Transmission electron microscopy as a tool for both material and life science has recently seen revolutionary developments, driven by new types of electron detectors, computational data analysis, automation, and sample preparation. Concomitantly, statistics from the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB) show a clear increase in the number of protein structures that are recovered through electron-based techniques. Indeed, cryo-electron microscopy (CryoEM) produces the majority of the protein structures in the 3.5-5~\si{\angstrom} resolution range that are being released nowadays. The predominant CryoEM techniques comprise single-particle analysis and tomography, the former being especially suitable for elucidating the structure of proteins and larger complexes at near-atomic resolution, whereas the latter allows imaging of larger, inhomogeneous structures, up to entire cells. However, single-particle analysis is limited in its scope to molecules of weight above \(\approx \SI{40}{kDa}\), as the signal-to-noise ratio of smaller particles in electron micrographs is not sufficient for computational alignment~\cite{Henderson1995,Glaeser2019}, and despite recent progress in CryoEM~\cite{Nakane2020,Yip2020}, X-ray crystallography is still clearly predominant for routine structure determination at the atomic resolution scale. Diffractive electron techniques such as crystallography of monolayers of proteins (2D crystallography) led to seminal results~\cite{Henderson1975,Henderson1990,Gonen2005}, but ultimately remained limited in scope as preparation of suitable two-dimensional crystals is often prohibitively difficult. On the other hand, there have been successful implementations of three-dimensional electron diffraction (3D ED/MicroED) techniques, where three-dimensional, sub-micron-sized crystals are used, in analogy to X-ray crystallography~\cite{Gemmi2019, Nannenga2019}. As the interaction of electrons with matter is stronger by up to six orders of magnitude than that of X-ray photons, sizable signals can be obtained from even tiny crystals. This, combined with the high dose efficiency of electrons, that is, a favorable ratio of elastic to inelastic events and small energy release during inelastic events, and the signal amplification afforded by diffraction-mode acquisition~\cite{Clabbers2018a}, makes 3D ED especially appealing for materials which form only small and radiation-sensitive crystals. The potential of 3D ED techniques was first realized in materials science~\cite{Kolb2007,Zhang2010}. Excellent results could be obtained for radiation-sensitive nanocrystalline materials such as zeolites~\cite{Su2014}, or covalent- and metal-organic frameworks~\cite{Zhang2013}, which often evade X-ray structure determination. Soon after, 3D ED was introduced into life science (there mostly known as MicroED)~\cite{Shi2013,Nederlof2013,Nannenga2014a}, where high-resolution structures of small proteins, peptides and pharmaceuticals can now routinely be solved~\cite{Nannenga2019}. Most of the 3D ED/MicroED work has so far been performed by rotating the crystal in the electron beam in various ways~\cite{Gemmi2019a}, in analogy to goniometer-based X-ray single-crystal diffraction.
More recently, \emph{serial} electron diffraction (SerialED) has been introduced~\cite{Smeets2018,Buecker2020}, where, in analogy to synchrotron- and free-electron laser-based techniques~\cite{Chapman2019,Gati2014,Stellato2014}, a large ensemble of nanocrystals is employed, from each of which only a single diffraction pattern is taken. While this data collection scheme has important advantages over rotation methods, it requires a different approach to data processing, specifically in the data-reduction steps of a crystallographic pipeline, from raw data to estimated Bragg reflection intensities. In this paper, we discuss our pipeline for SerialED data processing. The paper is structured as follows: In Section~\ref{sec:serialed}, we briefly recapitulate the concept of SerialED and its implementation in our laboratory, as described in~\cite{Buecker2020}. Next, in Section~\ref{sec:pipeline}, we discuss the general data processing pipeline, illustrated by examples from a typical data set. Section~\ref{sec:diffractem} introduces our program package \emph{diffractem} and outlines its usage for the pipeline described in Section~\ref{sec:pipeline}. Finally, Section~\ref{sec:discussion} reviews various specific aspects and potential issues of our approach, and future directions for development. \section{Serial Electron Diffraction: concept and data collection} \label{sec:serialed} While rotation crystallography, whether using electrons or X-rays, can yield high-quality crystallographic data from nanometric crystals, an inherent limitation is the accumulation of radiation damage during rotation data collection~\cite{Hattne2018}, prohibiting acquisition of damage-minimized data. On the other hand, damage accumulation is evaded in serial crystallography, where each crystal is exposed once, using femtosecond X-ray pulses at extreme intensities that record diffraction data before Coulomb explosion~\cite{Chapman2011}, or X-ray/electron pulses at lower intensity below a critical dose threshold, which can yield equivalent results~\cite{Mehrabi2020,Buecker2020}. To automate the collection of diffraction data from thousands of crystals randomly dispersed on an electron microscope grid, serial electron diffraction (SerialED) leverages the ability of electron microscopes to map out the crystals' locations, using conventional~\cite{Smeets2018} or scanning~\cite{Buecker2020} TEM imaging (Figure~\ref{fig:serialed}~A). Crystals are automatically identified in the map image, and the electron beam is steered sequentially to the found crystals, where diffraction patterns are taken (Figure~\ref{fig:serialed}~B). The process can then be repeated in many regions of a sample grid, each typically tens of \si{\micro \metre} across. This approach adds a high level of automation to the advantages of SerialED, requiring little specific skill on the user's part for operation. In~\cite{Buecker2020}, SerialED was furthermore combined with a dose fractionation scheme as known from single-particle electron microscopy, which makes it possible to obtain damage-minimized data as described above, without the need for prior information about the sample or exact calibrations. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{Figures/Fig1} \caption{Principle of STEM-based SerialED. (A) A low-resolution, low-dose STEM image is taken over a large region on a TEM grid. Signal is generated using the high-angle annular dark-field (HAADF) detector.
(B) After the crystals have been identified in the STEM image, the beam is sequentially steered to each automatically found crystal. A fast detector records the diffraction patterns in a synchronized way. From the diffraction data, the crystal structure is solved.} \label{fig:serialed} \end{figure} Despite these advantages, SerialED poses new challenges with regard to data analysis compared to rotation techniques, specifically pertaining to the steps of data reduction from raw diffraction patterns to merged Bragg spot intensities. In this article, we discuss the processing of SerialED data sets using \emph{CrystFEL}~\cite{White2012} and \emph{diffractem}, a new library specifically developed for SerialED. \section{Processing Method for Serial Electron Diffraction} \label{sec:pipeline} In this section, we will describe the essential steps of a SerialED data processing pipeline, leading from a set of recorded diffraction patterns to merged reflection intensities, which can then be exported to standard software for phasing and refinement, such as \emph{PHENIX}~\cite{Adams2010}, \emph{CCP4}~\cite{Winn2011}, or \emph{SHELX}~\cite{Sheldrick2010}. While a large portion of the steps required to process serial crystallography data has been addressed in established packages such as \emph{CrystFEL}~\cite{White2012}, \emph{cctbx.xfel}~\cite{Hattne2014}, and \emph{nXDS}~\cite{Kabsch2014a}, SerialED processing requires some more specific steps, which we will discuss in more detail. As the example data set from which the figures and results shown in this paper are derived, we use data taken from tetragonal hen egg-white lysozyme crystals, as published in~\cite{Buecker2020} (PDB-ID: 6S2N). A flow-chart of the process is shown in Figure~\ref{fig:flowchart}; processing steps are further illustrated for a representative diffraction pattern in Figure~\ref{fig:StepByStep}. For the more technical details of the processing pipeline, we refer to Section~\ref{sec:diffractem}, where the practical use of our processing program package \emph{diffractem} in conjunction with the serial crystallography package \emph{CrystFEL}~\cite{White2012,White2019} is discussed, and to the Jupyter notebooks supplied as supplementary material. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Figures/Fig2} \caption{Journey of SerialED data through our data reduction pipeline. Green and blue boxes represent processing steps conducted in \emph{diffractem} and \emph{CrystFEL}, respectively; section numbers in this paper relating to each step are indicated. Red and orange parallelograms represent input data and important intermediate results, respectively. The final result (reflection intensities) is then handed over to structure solution software (grey box).} \label{fig:flowchart} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Figures/Fig3} \caption{Processing steps of a single diffraction pattern. All patterns are shown on the same, logarithmic scale. (A) Initial dose-fractionation stack (first fraction is shown). (B) Aggregated pattern over a range of four dose-fractionation frames. (C) Pattern center (blue cross-hair) and Bragg peaks (red circles) have been determined. (D) Aggregated pattern after background subtraction. (E) Predicted Bragg reflections (green squares) have been computed after successful indexing.
In the integration step, those will be included as single observations.} \label{fig:StepByStep} \end{figure*} \subsection{Pre-processing} \label{sec:preprocessing} We start by applying several pre-processing steps to the diffraction patterns, that is, aggregation of dose-fractionation stacks, correction of artifacts introduced by the detector, accurate determination of the pattern center (zero-order peak) and position of Bragg peaks, and general handling of metadata. \subsubsection{Sorting and aggregation} \label{sec:aggregation} The first processing step is to reject superfluous shots (i.e., single exposures on the camera), which might be present in the dataset due to auxiliary scan points inserted during data collection~\cite{Buecker2020} to mitigate hysteresis effects during beam scanning. Next, if dose-fractionation movies have been collected where several images correspond to the same diffraction pattern from a single, still crystal (Figure~\ref{fig:StepByStep}~A), the successive frames are summed over an arbitrary number of frames, adding up to an equivalent exposure time, so as to provide a reasonable balance between the signal-to-noise ratio of low-resolution peaks and the pattern resolution (which fades at later times) for each crystal (Figure~\ref{fig:StepByStep}~B). As most of the processing steps, such as peak finding and indexing, are independent of the exact peak intensities, which are affected by radiation damage, this choice of equivalent exposure is not critical at this point, as long as the diffraction peaks are well visible. The optimal exposure time can be determined exactly during the later steps of peak integration and merging (Section~\ref{sec:fractionation}). \subsubsection{Detector artifact correction} \label{sec:detector} Any real electron detector shows a range of imperfections, three of which we account for during processing: \begin{itemize} \item Faulty pixels, which yield zero, extremely high, or excessively fluctuating values, are a primary source of errors during peak finding, indexing, and integration. In our processing pipeline, we assume the existence of an accurate dead-pixel map, i.e., an image file with defined pixel values at faulty or intact pixels, respectively, which can be obtained by recording images with even illumination. During processing of diffraction patterns, the values of these pixels are overwritten by interpolation from adjacent pixels, or (at the user's choice) flagged for exclusion from further processing steps. \item The response of a detector to a homogeneous illumination (flat-field) is typically non-uniform. If the raw data are not corrected for this effect already, this can be accounted for by a simple normalization during processing. \item For high pixel values, a detector can saturate, in ways which may differ between detector models. Integrating detectors such as CCD, indirect CMOS, or linear-mode direct detectors saturate in the total counts per pixel with a sharp cut-off, which can be treated by exclusion from further analysis steps, in a similar way to dead pixels. On the other hand, counting detectors, e.g. of counting-mode direct or hybrid-pixel type, suffer from continuously increasing coincidence-loss saturation as a function of count \emph{rate}. For the latter, a saturation model can be applied if the detector has previously been characterized. \end{itemize} Our example data set was recorded using a hybrid-pixel detector with a large number of dead pixels, which have to be taken into account, but with a fairly even flat-field.
The count rates encountered extend into the saturation regime near the center of the diffraction pattern (i.e., close to the transmitted beam), which is accounted for by a paralyzable dead-time model~\cite{Feller2015}, parametrized from independent measurements. All of those corrections are applied before any further image analysis. \subsubsection{Pattern centering and peak finding} \label{sec:centering} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{Figures/Fig4_v2} \caption{Pattern center refinement from Friedel mates. In electron diffraction patterns, even away from zone axes, a large number of Friedel mates is simultaneously visible, such as those marked by arrows. As they are symmetric about the zero-order beam, the positions of each pair can be used to refine the initial estimate $\vec{c}_\mathrm{COM}$ of the pattern center to the more accurate $\vec{c}_\mathrm{refined}$. } \label{fig:friedel} \end{figure} For successful indexing of the diffraction patterns, it is of crucial importance to know the center (zero-order beam) position of each diffraction pattern accurately. For serially collected data, where the beam moves between crystals, the pattern center tends to fluctuate between beam positions due to residual alignment issues of the microscope (beam-shift pivot). Hence, the beam center has to be found for each pattern separately, which in our pipeline is tightly coupled to the detection of Bragg peaks. To find both the pattern center and peak positions, we first determine the center of mass of the diffraction pattern, excluding all pixels whose values fall below a threshold chosen such that only a region around the center is taken into account. Next, we apply a two-dimensional least-squares fit of a Lorentzian function to a region tens of pixels in diameter around the found center-of-mass position, to obtain a more accurate estimate for the pattern center. Peaks are now detected using the \emph{peakfinder8} algorithm~\cite{Barty2014}, which inherently takes into account a radially symmetric background as typically present in electron diffraction patterns due to multiple inelastic/elastic scattering~\cite{Latychevskaia2019}. To further refine the center position of each electron diffraction pattern, we make use of the fact that, due to the flat Ewald sphere of electrons, even for patterns away from a zone axis, many Friedel-mate pairs (with Miller indices $(h,k,l)$ and $(-h,-k,-l)$, respectively) can be found in a single image, as shown in Figure~\ref{fig:friedel}. Each pair is necessarily symmetric with respect to the pattern center, which can be used to refine the center estimate further. The refinement is performed by defining a score function: \begin{align*} F(\mathbf{r}_0) = \frac{1}{2N_\mathrm{pk}}\sum_{i,j}^{N_\mathrm{pk}} \exp\left[-\frac{1}{2\sigma^2}(\mathbf{r}_i + \mathbf{r}_j - 2\mathbf{r}_0)^2\right], \end{align*} with all found peaks at pixel positions $\mathbf{r}_i=(x_i, y_i)$ of characteristic width $\sigma\approx 2$~pixels, and performing a least-squares minimization on $F^{-1}$ in order to obtain the refined pattern center at $\mathbf{r}_0=(x_0, y_0)$ with sub-pixel accuracy. We find that further refinement as performed by pattern indexing codes does not lead to any significant improvement. In Figure~\ref{fig:StepByStep}~C, a typical result of pattern centering and peak finding is shown.
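As an illustration of this scheme, the following self-contained Python sketch (using synthetic peak positions; this is not the \emph{diffractem} implementation) evaluates the score function defined above and refines a center estimate by numerical minimization of $F^{-1}$:

\begin{verbatim}
# Minimal sketch of Friedel-mate center refinement (synthetic data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_center = np.array([258.3, 255.1])

# 30 random peaks plus their (slightly noisy) Friedel mates
half = true_center + rng.uniform(-200, 200, size=(30, 2))
mates = 2 * true_center - half + rng.normal(0, 0.3, size=half.shape)
peaks = np.vstack([half, mates])

sigma = 2.0  # characteristic peak width (pixels)

def F(r0):
    # F(r0) = 1/(2 N_pk) sum_ij exp(-(r_i + r_j - 2 r0)^2 / (2 sigma^2))
    d = peaks[:, None, :] + peaks[None, :, :] - 2 * r0
    return np.exp(-(d**2).sum(-1) / (2 * sigma**2)).sum() / (2 * len(peaks))

# Center-of-mass starting estimate, deliberately offset for the demo;
# in practice, the Lorentzian-fit center serves as the starting point.
r0 = peaks.mean(axis=0) + np.array([1.5, -1.0])
res = minimize(lambda r: 1.0 / (F(r) + 1e-12), r0, method='Nelder-Mead')
print(res.x)  # refined center, close to true_center
\end{verbatim}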
\subsubsection{Ellipticity finding} \label{sec:ellipticity} A common artifact introduced by the electron optics in an electron microscope column is a slight elliptical distortion of the diffraction pattern which, even in the range of only a few percent, can severely hamper the efficiency of crystallographic algorithms. Hence, care has to be taken to account for the distorted geometry, especially during the indexing and integration steps. The ellipticity can be derived from the data itself, by computing a two-dimensional histogram of \emph{all} measured diffraction peak positions (relative to the pattern center) in polar coordinates, as shown in Figure~\ref{fig:geomrefine}. In an ideal geometry, there is no dependence of any features (virtual powder rings) on the azimuth angle. The elliptical distortion as seen in Figure~\ref{fig:geomrefine}~A can hence be corrected by iteratively modifying the peak positions according to their azimuth angle, and recomputing the histogram, until no azimuthal dependence remains, as seen in Figure~\ref{fig:geomrefine}~B. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{Figures/Fig5.pdf} \caption{Ellipticity refinement. In order to correct for the elliptical distortion of diffraction patterns introduced by electron optics, we histogramize all found diffraction peaks (from all images) in two-dimensional polar coordinates. (A) Elliptical distortion manifests itself in a modulation of the position of major features near the inverse layer spacings of the crystal. Azimuthal integration into a radial profile (white line) yields a blurred, low-contrast pattern. (B) Same, after correcting the positions of the peaks according to an elliptical model before histogramization.} \label{fig:geomrefine} \end{figure} \subsubsection{Background rejection} In contrast to X-rays, inelastically scattered electrons are not removed from the beam, but continue their trajectory toward the detector, thus appearing in the recorded data unless an energy filter is used. While the differential cross section for inelastic scattering drops off quickly at angles small compared to typical Bragg reflection angles, \emph{combined} elastic and inelastic scattering leads to a pronounced, radially symmetric background~\cite{Latychevskaia2019} in unfiltered electron diffraction patterns. As long as the peak integration algorithm, which serves to extract the summed intensity of each peak from the images, can handle this background appropriately, it in principle does not impact the obtained values, apart from a decreased signal-to-noise ratio at low resolutions. However, we find that subtraction of the radially symmetric background not only aids in visually representing and assessing the diffraction patterns, as seen in Figure~\ref{fig:StepByStep}~D, but also simplifies the peak integration process (due to the absence of a background gradient) and leads to more consistent results after merging. The tools provided by \emph{diffractem} as described in Section~\ref{sec:proc2d} allow any radially symmetric signal to be rejected, following this prescription: \begin{enumerate} \item Computation of the radial profile of the inelastic background by azimuthal averaging around the previously found pattern center, excluding a generous area around each of the found Bragg spots to avoid over-correction. \item Median filtering of the profile to reduce noise and reject residual ripple caused by weak, unidentified Bragg peaks.
\item Computation of the expected background image from the profile, by assigning pixel values based on the radius with respect to the pattern center. \item Subtraction of the computed background from the actual diffraction pattern. \end{enumerate} We find the outcome of this procedure to be satisfactory even for the dense diffraction patterns of proteins. \subsection{Indexing} After corrected diffraction patterns with annotated center and peak positions have been computed (Figure~\ref{fig:StepByStep}~C), the next step is indexing the patterns, that is, deriving the unit cell parameters, which are assumed to be narrowly distributed over all crystals, and the orientation of each individual crystal. Common processing pipelines for serial X-ray diffraction data solve the indexing problem by estimating a unit cell for each pattern separately, and if necessary, iteratively refining the obtained solutions~\cite{White2019}. However, owing to the short de Broglie wavelength of high-energy electrons, diffraction patterns are almost entirely devoid of three-dimensional information, which precludes determining the crystal unit cell from single diffraction patterns, as would be required. While approaches exist to bootstrap the cell information from all patterns taken as a whole~\cite{Jiang2009}, the cell can also be experimentally derived from ancillary rotation-based data or multi-tilt serial data (publication in preparation). We defer the discussion of cell-finding to future publications, and instead focus on the two remaining steps of indexing, namely, accurate refinement of the unit cell parameters, and determination of the orientation of each individual crystal. \subsubsection{Unit-cell refinement} \label{sec:peak_refine} If the Bravais lattice and reasonable estimates of the cell parameters are known, the latter can be refined against radial distribution functions derived from the found peaks in the \emph{entire} data set, as shown in Figure~\ref{fig:cellrefine}. To this end, we consider two types of peak information, both of which are histogramized with respect to their radial coordinate. Firstly, we simply consider the radial position of peaks with respect to the pattern center, which can be related to the Bragg angle $2\theta$ and hence the crystal's inverse layer spacings. The corresponding histogram is known as a \emph{virtual powder pattern}, as it effectively corresponds to a super-resolution measurement of a powder diffraction pattern. Secondly, we compute the distribution of all pair-wise distance vectors between peaks present in each pattern. Due to the small Bragg angles of electrons (paraxial regime), the lengths of these vectors similarly match inverse layer spacings, and their distribution can then be averaged over the entire dataset. The advantage of the second method is that the result displays pronounced peaks near the primitive-cell basis vectors, which are hardly, or (due to systematic absences) not at all, present in the virtual powder pattern. Once the distribution functions are computed, the cell parameters are refined against them by matching the predicted layer spacings to their respective peaks. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Figures/Fig6} \caption{Unit-cell refinement of the example system's tetragonal cell with $a=\SI{78.9}{\angstrom}, c=\SI{37.9}{\angstrom}$.
After peak finding, the scattering vector lengths and the pair-wise distances between the peaks (within each pattern) are histogramized over all patterns (blue and red curves, respectively), yielding peaks at the inverse layer spacings of the crystal. The unit cell can be refined accurately by fitting the computed layer spacings (grey lines) to the observed peaks in both distributions.} \label{fig:cellrefine} \end{figure*} \subsubsection{Indexing using \emph{pinkIndexer}} \label{sec:indexing} Now that the unit cell of the crystals is known, the orientation of each crystal with respect to the experiment geometry can be determined by an exhaustive search over all possible rotations, for which several implementations are available~\cite{Ginn2016, Beyerlein2017, Smeets2017, Li2019, Gevorkov2019}. We use \emph{pinkIndexer}~\cite{Gevorkov2019}, which has been tested extensively on electron data, and is directly integrated into the \emph{indexamajig} program of~CrystFEL. Before running the indexing process, it may be required to screen the indexing parameters on a small sub-set of diffraction patterns, which should be selected by the number of found peaks and visual appearance. Factors impacting the successful indexing rate are the accuracy of the unit cell parameters, proper centering of the patterns, the sampling density of rotational space, and the assumed radius of Bragg spots in reciprocal space. While the first two can be refined using the methods described above, the others have to be found heuristically for the sample under study. While the sampling density depends critically on the unit cell size, the optimal setting for the Bragg spot radius is defined by the interaction region between the electron beam and continuous crystalline blocks, which can be limited by crystal size, beam size, or crystal mosaicity and bending~\cite{Gallagher-Jones2018}, as well as the beam convergence angle. Given the typical parameters of three-dimensional electron diffraction, realistic values are below \SI{0.005}{\per\angstrom}. While too large a value tends to assign patterns to a near-zone-axis geometry with densely packed peaks, too small a value can preclude any successful indexing. Depending on the sampling density used for the orientation search, up to one minute of computation time is required for each crystal; however, it is straightforward to distribute the calculation over arbitrarily many processor cores on a cluster system, which is automated in our processing software (Section~\ref{sec:indexamajig}). \subsection{Peak integration, merging, and validation} Having determined the orientation of each crystal, we can proceed to integration and merging, that is, deriving from the manifold of indexing solutions a complete set of estimates of the Bragg spot intensities, first for individual patterns (integration), and then for the entire dataset (merging). \subsubsection{Integration of intensities from indexing results} \label{sec:integration} The unit cell vectors of each diffraction pattern as found by the indexing are used to extract the intensities of observations of Bragg reflections, which may be partial~\cite{White2014}. To accomplish this \emph{peak integration} step, we use functionality built into CrystFEL, as outlined in more detail in Section~\ref{sec:indexamajig}.
Briefly, the positions of all Bragg reflections that could reasonably be present in each diffraction pattern are computed (spot \emph{predictions}) from the crystal orientation and a refined reciprocal spot radius, as shown in Figure~\ref{fig:StepByStep}~E. Then, the pixel intensities around each prediction position are integrated, using one out of several available methods such as profile fitting~\cite{Rossmann1979} and simple summation within an appropriately chosen radius~\cite{White2013}. We usually find the simplest method, that is, summation without any additional refinement steps, to be the most effective; the background-gradient correction that is also offered is only required if the diffraction patterns are not background-subtracted. \subsubsection{Merging and validation of integrated intensities} \label{sec:merging} After the measured Bragg spot intensities from all shots are extracted and stored, they have to be merged into a full crystallographic data set. Firstly, if the crystal's space group shows an indexing ambiguity, it needs to be resolved, which can be done using a clustering algorithm~\cite{Brehm2014,Kabsch2014a,White2016} provided as a part of CrystFEL (\emph{ambigator}). Then, the many observations of each Bragg reflection are combined, in the simplest case by averaging without further weighting (``Monte-Carlo method''). Additionally, iterative global and resolution-dependent scaling can be introduced, which leads to significant improvements~\cite{White2016}. Finally, more elaborate merging models, which explicitly account for the partiality of each observation, are available; in our experience, these lead to varying, sample-dependent results (Figure~\ref{fig:merging}). A detailed discussion of partiality modeling for SerialED will be the subject of future work. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Figures/Fig7} \caption{ Merging statistics as a function of resolution (A-C) and crystal number (D-F): Half-set Pearson correlation $CC_{1/2}$, dataset completeness, and observation redundancy. In (A-C), results are shown for three different integration times in different colors, and for merging without (solid lines, circles) or with (dashed lines, triangles) the \emph{xsphere} partiality model~\cite{White2014}. In (D-F), solid and dashed lines represent overall values (entire resolution range) and those from the second-highest resolution shell as shown in (A-C), which is centered at \SI{1.85}{\angstrom}. Blue circles and orange triangles represent results from merging without and with partiality modeling, respectively; iterative scaling was enabled in both.} \label{fig:merging} \end{figure*} In order to assess the overall statistics and quality of the merging result (and hence, effectively, of the entire data reduction pipeline), it is of crucial importance to evaluate some validation metrics. While traditional merging quality indicators such as $R_\mathrm{merge}$ are inadequate for serial datasets due to their strong partiality~\cite{White2012}, the half-set Pearson correlation coefficient $CC_{1/2}$~\cite{Karplus2012}, computed between Bragg intensities merged from two half-sets of the crystals, provides a robust figure of merit for the consistency of the dataset. Furthermore, the completeness of the dataset, as well as the mean number of observations of each reflection (redundancy), are of primary concern.
In Figure~\ref{fig:merging}, these quantities are shown for our example data set, as a function of resolution shell, and of the number of merged crystals (by picking a sub-set from the data). We can observe that the correlation coefficient, which is near-unity at low resolution, drops below a threshold of 0.143 (corresponding to $CC^*=0.5$~\cite{Karplus2012}) at about 1/(\SI{1.8}{\angstrom}), which is hence a reasonable resolution cut-off for phasing and refinement steps. Another important observation is that the completeness of the dataset appears to converge to a value significantly less than 100\% when increasing the crystal number; this clearly indicates the presence of preferred crystal orientation, which cannot be significantly mitigated by increasing the number of crystals. Such preferred orientation issues can, however, be mitigated by tilting the sample stage or using specifically prepared sample grids~\cite{Wennmacher2019}. \subsubsection{Processing of dose-fractionation movies} \label{sec:fractionation} If a sufficiently fast diffraction detector is available, it is advisable to collect SerialED data in dose-fractionation mode, that is, taking a series of frames (a movie) for each crystal in rapid succession, as shown in Figure~\ref{fig:StepByStep}. This technique is commonly applied in single-particle microscopy, and while motion blur is not of concern for diffraction data, dose fractionation allows the optimal exposure time, and hence radiation dose per crystal, to be chosen \emph{a posteriori}~\cite{Buecker2020}. Assuming that the orientation of crystals does not significantly change between the movie frames, and hence the indexing solution is valid for all frames equivalently, the exact choice of considered integration time is mostly irrelevant up to the point of integration, as long as the visible Bragg peaks at low to intermediate resolution can be reliably found. It is only in the final steps that results should be derived for different integration times separately. This can be accomplished by ``broadcasting'' the positions of spot predictions to a dataset that comprises diffraction patterns with varying aggregation length as described in Section~\ref{sec:aggregation}, and re-running integration and merging on those sets. Our analysis programs provide convenient functions to automate this process and guide the user to an optimal choice of exposure time. \section{Implementation of SerialED Processing} \label{sec:diffractem} The various steps of data processing explained in the previous sections can be performed in our Python software package \emph{diffractem}, which provides the necessary functionality directly, or via tight integration with \emph{CrystFEL} through wrapper functions. Besides a few command-line tools, diffractem is intended for comfortable use within Jupyter notebooks, a common platform for scientific data analysis and data science in general. This section will introduce some key concepts of diffractem. For more in-depth examples and explanations, we refer the reader to the annotated Jupyter notebooks provided as supplementary information to this paper. \subsection{Data structures and file format} \label{sec:dataset} Diffraction images and metadata are accessed and managed via instances of diffractem's \texttt{Dataset} class. A single \texttt{Dataset} object represents arbitrarily many data files that each correspond to a SerialED acquisition run from one grid region.
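As a minimal sketch of how such a \texttt{Dataset} is typically opened in practice (the file name is a placeholder, and the import path and exact method signatures are assumptions to be checked against the diffractem documentation; the \texttt{from\_files} method is described in Section~\ref{sec:file_format}):

\begin{verbatim}
# Hedged sketch of opening a SerialED dataset (paths/signatures assumed).
from diffractem.dataset import Dataset  # import path assumed

# Load all acquisition runs listed in a .lst file (one HDF5 file per line)
ds = Dataset.from_files('lysozyme_runs.lst')

# Per-shot and per-crystal metadata are exposed as pandas DataFrames
print(ds.shots.head())     # one row per diffraction pattern
print(ds.features.head())  # one row per identified crystal
\end{verbatim}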
\subsubsection{Data handled by \emph{diffractem}} In diffractem's terminology, a \emph{shot} corresponds to a diffraction pattern recorded by the detector (equivalent to an \emph{event} in CrystFEL), whether it constitutes a hit on a crystal or not. If dose-fractionation is used, the many shots obtained from the same crystal are referred to as the \emph{frames} of that crystal. In SerialED, thousands of raw diffraction patterns can be acquired per hour. Thus, the initial raw data comprise a large number of 2D diffraction patterns (shots), which together form a 3D data cube, referred to as the \emph{image stack} in the following, with associated metadata. Such metadata can be defined per diffraction pattern (\emph{shot table}), per crystal (\emph{feature table}), or per grid region (\emph{global metadata}). The number of peaks in a diffraction pattern, the position of a crystal on the sample grid, and the camera length setting are examples of per-shot, per-feature, and global data, respectively. The metadata can be extensively changed and extended along the data processing pipeline, with the \texttt{Dataset} object ensuring consistency of image and metadata. Destructive processing steps that either change actual image data (such as background correction) or remove shots are handled by generating a new, modified \texttt{Dataset} object. Within the \texttt{Dataset} object, the shot and feature tables are accessible as \emph{pandas} DataFrame objects~\cite{Pandas} via the attributes \texttt{Dataset.shots} and \texttt{Dataset.features}. Their numbers of rows always correspond to the number of shots and crystals stored in the \texttt{Dataset} object, respectively. On the other hand, the number of columns is arbitrary, and commonly increases once new per-pattern analysis results become available. In any case, key columns such as the file name and location of diffraction data in the image stack, as well as the sample name and identification numbers of each crystal and grid region, have to be present. Global metadata (typically comprising instrument parameters such as camera length or exposure time) can be accessed or directly merged into the shot table using the \texttt{Dataset.merge\_meta} method. \subsubsection{Stacks and memory management} \label{sec:stacks} The image stack comprising the actual diffraction data is often too large to fit into the main memory of a typical mid-range workstation computer. Hence, to manage this amount of data and the ensuing parallel computations, we employ the \emph{dask} package~\cite{Dask}, which allows transparent access to larger-than-memory data arrays on disk, and the construction of lazy computation pipelines that can be executed efficiently in parallel (see the supplementary Jupyter notebooks for details). A \texttt{Dataset} object can contain an arbitrary number of N-dimensional dask arrays (which behave analogously to \emph{NumPy} arrays), referred to as \emph{stacks}, the length of the first dimension (dimension 0) of which must always equal the number of shots contained in the \texttt{Dataset} (and hence the number of rows in the shots table). Besides the actual diffraction data (constituting a three-dimensional stack), typical stacks in a \texttt{Dataset} object are the data of found diffraction peaks in each image, in \emph{CXI} format~\cite{Maia2012}. Generally, data stacks can be added or overwritten using the \texttt{Dataset.add\_stack} method and accessed via attributes of the form \texttt{Dataset.<stackname>}.
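The lazy-evaluation pattern underlying these stacks can be illustrated with plain dask arrays, independently of diffractem (shapes and values below are made up for the example):

\begin{verbatim}
# Generic dask sketch of a lazy, larger-than-memory stack pipeline.
import numpy as np
import dask.array as da

# A stack of 1000 "diffraction patterns" of 512 x 512 pixels, held in
# chunks of 50 shots, so only a few chunks are in memory at any time.
stack = da.random.random((1000, 512, 512), chunks=(50, 512, 512))

# Build the pipeline lazily; nothing is computed here.
dead = np.zeros((512, 512), dtype=bool)
dead[100, 200] = True                   # a made-up faulty pixel
cleaned = da.where(dead, 0.0, stack)    # mask dead pixels
counts = cleaned.sum(axis=(1, 2))       # per-shot total intensity

# Execution happens chunk-wise and in parallel only on compute().
print(counts.compute()[:5])
\end{verbatim}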
\subsubsection{Slicing, selecting, and aggregating data} A common task during the preprocessing of a diffraction data set is to reject shots based on criteria such as a minimum number of Bragg peaks or a maximum level of background signal. Such selections can easily be performed using the \texttt{Dataset.get\_selection} method, which allows for selections of sub-sets via query strings acting on columns of the shot list. As an example, the code line \texttt{ds\_sel = ds.get\_selection('num\_peaks >= 15')} generates a new \texttt{Dataset} object \texttt{ds\_sel}, containing only shots from \texttt{ds} where at least 15 Bragg peaks have been detected. In this step (as in all other methods of \texttt{Dataset}), it is ensured that all stacks and tables are kept consistent. The related method \texttt{Dataset.aggregate}, which accepts a similar query string, will, on top of slicing, apply different group-wise aggregation functions to the data stacks, or a subset thereof; its typical application is the summation of dose fractions of the same diffraction pattern, as described in Section~\ref{sec:fractionation}. Please see the supplementary Jupyter notebooks for more detailed explanations and examples. \subsubsection{Diffractem data files} \label{sec:file_format} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Figures/Fig8} \caption{Structure of an HDF5 file as used by \emph{diffractem}. Typically, each HDF5 file holds data from a single SerialED run on one grid region; a \texttt{Dataset} object typically manages data from many such files, automatically concatenating all information. On the left side, a tree view of the internal folder/dataset hierarchy of an HDF5 file is shown. On the right side, various types of information and (via arrows and braces) their location within the HDF5 file are shown. In compliance with the \emph{NeXus} convention, all data is stored under a global \texttt{/entry} group. Explanations of the top-level groups (A) \texttt{data}, (B) \texttt{instrument}, (C) \texttt{map}, (D) \texttt{sample}, and (E) \texttt{shots} are given in the main text. } \label{fig:file_format} \end{figure*} Diffractem stores its data in \emph{HDF5} files largely following the \texttt{NeXus} convention~\cite{Konnecke2015}, which is becoming a common standard in X-ray crystallography, and can by now be processed by most crystallography libraries. The data within the files can be accessed from all common programming languages through bindings of the HDF5 library, such as \texttt{h5py} for Python, and can directly be mapped into larger-than-memory arrays using the dask package, as described above. Each file holds data from a continuous acquisition run on a single region on the sample, corresponding to a single map image, as shown in Figure~\ref{fig:serialed}~A, on which crystals have been identified prior to diffraction data collection. A set of acquisition runs from the same sample that is to be analyzed as a whole can be defined using simple text files with one HDF5 file name per line and, by convention, a \texttt{.lst} extension. Using the \texttt{Dataset.from\_files} method, data can be loaded from a single file, a list file, or a range of files implicitly defined using wildcard characters. Both the HDF5 and list file specifications are consistent with CrystFEL. HDF5 files are internally organized into \emph{groups} and \emph{datasets}, roughly corresponding to folders and files in a file system.
Datasets can be arrays of arbitrary dimension, and have a uniquely assigned data type. Mirroring the structure of a \texttt{Dataset} object, a diffractem data file contains primarily three types of entities: \begin{itemize} \item Tabular data, such as the shot list and the feature list, are stored as groups comprising one-dimensional HDF5 datasets, each corresponding to a single table column (Figure~\ref{fig:file_format}~E~and~C, respectively). Those tables are loaded into memory as pandas DataFrames on loading the dataset, as described in Section~\ref{sec:dataset}. \item Data stacks, that is, arrays with an arbitrary number of dimensions, where the first dimension (dimension 0 in Python convention) corresponds to a given shot (Section~\ref{sec:dataset}), are stored as HDF5 datasets within the group \texttt{data} (Figure~\ref{fig:file_format}~A). All stacks are mapped into dask arrays when loading the dataset. \item Ancillary per-file instrument metadata, which can be accessed using \texttt{Dataset.merge\_meta}, is stored in hierarchical structures (Figure~\ref{fig:file_format}~B~and~D). \end{itemize} In Figure~\ref{fig:file_format}, a typical HDF5 file structure, and how it maps to the attributes of a \texttt{Dataset} object, is illustrated. \subsection{Processing functions} \label{sec:proc_functions} In this section we describe functions that act on data stored within \texttt{Dataset} objects, specifically image stacks and Bragg peak data. A commonly used ancillary tool for the functionality described in this and the next section is the \texttt{PreProcOpts} class contained in the \texttt{diffractem.pre\_proc\_opts} module. The attributes of this class hold the values of a large number of options pertaining to the entire data processing workflow, such as which steps of the pipeline should be applied by default, but also experiment parameters such as the accurate camera length and distortion. The attribute values of a \texttt{PreProcOpts} object are stored to and read from a human-readable \texttt{.yaml} file, which can be continuously adjusted while working interactively on processing a dataset, and will in its final state document the exact parameters used, ensuring full reproducibility. \subsubsection{Stack processing} \label{sec:proc2d} Diffractem's functions for processing image stacks as required for pre-processing (see Section~\ref{sec:preprocessing}) are contained in the \texttt{diffractem.proc2d} module. Examples of such functions are \texttt{correct\_dead\_pixels}, \texttt{lorentz\_fit}, or \texttt{get\_peaks}. All of those take an image stack as described above (as a NumPy array) as their first argument, with further arguments for individual options. They return either a processed version of the input stack (e.g. dead-pixel correction, background subtraction), per-shot data which can directly be merged into a \texttt{Dataset} shot list (e.g. pattern center finding, virtual detector signals), or more complex per-shot data which can be stored into stacks of a \texttt{Dataset} object (e.g. peak finding, azimuthal averaging). Two special, particularly relevant functions contained in \texttt{proc2d} are \texttt{get\_pattern\_info} and \texttt{correct\_image}, both of which represent multi-step pipelines for getting information (such as pattern center and Bragg peaks) from each shot, and for computing processed images (having undergone e.g. dead-pixel correction and background subtraction), respectively.
In contrast to the other functions, these two act on larger-than-memory image stacks stored as dask arrays (as in a \texttt{Dataset} object, see Section~\ref{sec:stacks}), and have their parameters defined via \texttt{PreProcOpts} objects. These two functions encapsulate computationally heavy, but independent (per-shot) steps of pre-processing, and are hence preferably executed in parallel. This is implemented using the \emph{dask.distributed} scheduler, which besides its ease of use provides convenient real-time progress reporting via a web interface. Please consult the supplementary Jupyter notebook \texttt{preprocessing.ipynb} for an example pre-processing workflow. \subsubsection{Peak processing} \label{sec:proc_peaks} Another set of processing functions, acting on Bragg peak positions, is contained in the \texttt{diffractem.proc\_peaks} module. This comprises functions for refinement of the zero-order peak positions (pattern center) via matching of Friedel mates (see Section~\ref{sec:centering}), for getting pair-wise distances from all observed peaks (pattern autocorrelation function), and the \texttt{Cell} class, which provides functionality for unit-cell refinement as described in Section~\ref{sec:peak_refine}. An example of the peak refinement workflow using a \texttt{Cell} object and pattern autocorrelation functions is provided in the supplementary Jupyter notebook \texttt{peak\_processing.ipynb}. \subsection{Integration with \emph{CrystFEL}} \label{sec:crystfel} For all tasks that are less specific to SerialED, but pertain to (serial) crystallography in general, diffractem provides interfaces to the CrystFEL package, in particular its central command-line tools \texttt{indexamajig} and \texttt{partialator}, as well as the validation programs for merged diffraction intensities \texttt{compare\_hkl} and \texttt{check\_hkl}. Functionality to parse and manipulate \texttt{.stream} files, CrystFEL's output format for pattern indexing and integration results, is also included. Depending on the task at hand, diffractem either calls the executables directly, or generates the required input files and a shell script containing the corresponding function calls. The functionality for integration with CrystFEL is mostly contained in the \texttt{diffractem.tools} module. While the usage of the pertinent tools is explained in detail in the supplementary Jupyter notebooks, here we only give a brief overview of the most important functionality, especially where it deviates from the standard CrystFEL workflow. \subsubsection{Indexing and integration} \label{sec:indexamajig} Indexing and integration (Sections~\ref{sec:indexing} and~\ref{sec:integration}, respectively) in CrystFEL are performed using the \texttt{indexamajig} program. As input, it requires a list of HDF5 data files (\texttt{.lst}) containing diffraction patterns and (optionally) peak positions, a geometry file (\texttt{.geom}), and a unit cell specification (\texttt{.cell} or \texttt{.pdb}). Using the \texttt{tools.make\_geometry} function, the geometry file can be automatically generated from a \texttt{PreProcOpts} object (or, respectively, the corresponding \texttt{.yaml} file), which automatically handles the elliptical distortion found as described in Section~\ref{sec:ellipticity}. Similarly, the specification of a unit cell after refinement as described in Section~\ref{sec:peak_refine} can automatically be generated using the \texttt{export} method of a \texttt{proc\_peaks.Cell} object.
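Schematically, generating these input files from a notebook might look as follows (a hedged sketch only: import paths, argument lists, and file names are assumptions rather than the definitive API):

\begin{verbatim}
# Hedged sketch of generating CrystFEL input files (signatures assumed).
from diffractem import tools                    # import paths assumed
from diffractem.pre_proc_opts import PreProcOpts

opts = PreProcOpts('preproc.yaml')              # pre-processing options

# Geometry file, including the elliptical-distortion correction
# determined during the ellipticity-refinement step.
tools.make_geometry(opts, 'detector.geom')

# 'cell' is assumed to be the proc_peaks.Cell object obtained during
# the unit-cell refinement step; write a CrystFEL cell file from it.
cell.export('refined.cell')
\end{verbatim}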
The \texttt{indexamajig} executable can be called including all pertinent options (as defined in a diffractem \texttt{PreProcOpts} object) using the \texttt{tools.call\_indexamajig} and \texttt{tools.call\_indexamajig\_slurm} functions, where the latter sets up intermediate files and a shell script for execution through a \emph{SLURM} queue submission system. Optionally, those, along with the geometry, cell, and virtual data files (see below), can be packed into a \texttt{.tar.gz} archive for convenient uploading to a computing cluster. Diffraction pattern indexing as described in Section~\ref{sec:indexing} requires the positions of found peaks in each diffraction pattern as its primary raw-data input. CrystFEL's \texttt{indexamajig} tightly couples indexing and integration of peak intensities from image data into a single, inseparable step, as described in~\cite{White2019}. While the file format described in Section~\ref{sec:file_format} is compatible with CrystFEL and could directly be used for indexing and integration in a single run, for SerialED this approach is hampered by two prohibitive shortcomings. First, the residual movement of the zero-order beam inherent to SerialED, even if known, cannot be natively accounted for by CrystFEL, precluding proper indexing of SerialED patterns from the Bragg reflections either found in the patterns or already stored in the files during preprocessing. Second, SerialED requires a computationally intensive grid-search approach to indexing. Coupling indexing and peak integration into a single step hence makes it impractical to optimize the (relatively fast) integration, and would require transfer of the full dataset (as needed for integration) if indexing is offloaded to off-site computing clusters. As shown in Figure~\ref{fig:flowchart}, diffractem circumvents these issues by not running indexing on the actual data files, but on a (single) \emph{virtual} file, which is generated using the \texttt{Dataset.write\_virtual\_file} method and does not carry actual diffraction data. The virtual file, while being a fully valid diffractem and CrystFEL HDF5 file, only contains the shot list and found Bragg peaks in CXI format, which are shifted for each pattern such that the position of the zero-order beam remains at the center of the detector. \texttt{indexamajig} can now be run on the virtual file, yielding the indexing results (that is, the reciprocal-space lattice vectors in the laboratory frame, for each crystal found in the diffraction patterns) in \texttt{.stream} format. All book-keeping to associate patterns in the virtual and actual files is transparently performed using items in the shot tables, and the \texttt{--copy-hdf5-field} option of \texttt{indexamajig}. For peak integration, we modified CrystFEL by introducing a new option which, instead of finding indexing solutions from Bragg reflections, reads reciprocal-space lattice vectors and beam-shift coordinates from a plain-text \emph{solution} file (extension \texttt{.sol}), and proceeds with the standard prediction and integration pipeline from there. To generate the solution file from the computed indexing parameters (in \texttt{.stream} format), the method \texttt{Dataset.get\_indexing\_solution} can be used, which transparently handles the case of integrating patterns that have been computed from a different range of movie frames (see Section~\ref{sec:aggregation}) than that initially used for indexing.
For the simpler case where the data to be integrated are identical to those used to generate the indexing solution, the command-line tool \texttt{stream2sol}, which is included in diffractem, can be used alternatively. Please see the supplementary notebook \texttt{indexing.ipynb} for a detailed step-by-step guide to indexing and integration. \subsubsection{Merging and validation} \label{sec:partialator} The merging of single Bragg peak observations from all recorded diffraction patterns as described in Section~\ref{sec:merging} is performed using the \texttt{partialator} command-line program contained in CrystFEL. Diffractem includes a corresponding wrapper function, \texttt{tools.call\_partialator}. It provides a convenient way to generate \texttt{partialator} calls from within Jupyter notebooks, with options to run different merging settings (e.g., with and without post-refinement or resolution cut-offs) in parallel or sequentially, optionally generating a script for submission to a \emph{SLURM} cluster queue submission system. Finally, the merged intensities contained in \texttt{.hkl} files can be analyzed from Jupyter notebooks by wrapping CrystFEL's \texttt{check\_hkl} and \texttt{compare\_hkl} command-line tools into the \texttt{tools.analyze\_hkl} function, which provides means to automatically validate the results of many different integration and merging parameters in parallel, and wraps the results in \emph{pandas} DataFrames. Please see the supplementary notebook \texttt{merging.ipynb} for an example of the merging and validation steps. \subsection{Displaying data} \label{sec:viewers} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Figures/Fig9} \caption{Screenshots of diffraction viewing tools of \emph{diffractem}. (A) \texttt{Dataset.view} running as an interactive widget inside a Jupyter notebook in a web browser. The diffraction pattern is shown on a logarithmic scale, which is particularly useful to assess the quality of pattern center and peak finding at low resolutions; the pattern center and Bragg peaks are shown as a blue cross-hair and green circles, respectively. On the left, data from the shot table for the shown pattern are displayed. The controls at the bottom allow moving between shots and setting display parameters. (B) \texttt{edview} running in internal-viewer mode. In three columns, the diffraction pattern, the map image (optionally zoomed into the shown crystal), and metadata from the shot table \emph{and} the indexing result from a \texttt{.stream} file for the shown diffraction pattern are shown, respectively. Image controls are at the bottom. (C) \texttt{edview} in external-viewer mode, in which the diffraction pattern is displayed through \emph{adxv}~\cite{Adxv}. In the pattern, found peaks (green circles) and predicted Bragg spot positions from the indexing solution (red squares) are shown. In the bottom \texttt{edview} window, the corresponding crystal is shown. The directions of the real-space lattice vectors $\vec{a}, \vec{b}, \vec{c}$ are shown in red, green, and blue, respectively. } \label{fig:viewers} \end{figure*} In order to visualize datasets being processed by diffractem, two tools with markedly different scope are provided, as shown in Figure~\ref{fig:viewers}. Firstly, the \texttt{view} method of a \texttt{Dataset} (Section~\ref{sec:dataset}) allows for quick interactive inspection of diffraction data within a Jupyter notebook, which is especially helpful for tuning of processing parameters.
Secondly, the stand-alone program \texttt{edview} provides a simple graphical interface to browse through SerialED data, including correlative display of mapping and diffraction data. In Figure~\ref{fig:viewers}, screenshots of both tools are shown. \subsubsection{Dataset.view} \label{sec:widget} An interactive viewer for diffraction data can be used directly within the Jupyter notebooks in which the data are being processed. The viewer is called by invoking \texttt{ds.view(<\ldots>)}, where \texttt{ds} is a \texttt{Dataset} object and \texttt{<\ldots>} represents additional calling arguments. The viewer shows the data stack accessible via the \texttt{Dataset.diff\_data} attribute (which points to the data stack containing diffraction data), and, if present as CXI-formatted data stacks, detected Bragg peaks. Finally, if columns \texttt{center\_x} and \texttt{center\_y} are present in the shot table, the position of the pattern center (zero-order beam) is shown as a cross-hair. Importantly, \texttt{Dataset.view} acts on diffraction data stored as \emph{dask} arrays~\cite{Dask}, which are typically not held in memory, but reside on disk, or are not even computed yet (lazy evaluation) if the \texttt{Dataset} object has not been written to disk. They are then loaded and/or computed on-the-fly for each displayed image. This makes \texttt{Dataset.view} especially suitable for interactive tuning of pre-processing parameters (such as peak-finding sensitivity thresholds) on a few selected shots, before the full computation is performed. In the supplementary Jupyter notebook \texttt{preprocessing.ipynb}, the use of \texttt{Dataset.view} is illustrated at various points. \subsubsection{edview} \label{sec:edview} The second option for displaying diffraction data is the stand-alone viewer \texttt{edview}, which is available from the command line after installation of diffractem. As input to \texttt{edview}, single HDF5 data files, list files, multiple data files (via file wildcards), or a \texttt{.stream} file can be provided. In the latter case, indexing solutions (Bragg spot predictions and real-space lattice vectors) can be displayed. \texttt{edview} shows both diffraction data and, if present, the overview maps taken in the course of a SerialED data acquisition from a grid region, including an indicator to show which crystal on the map an individual pattern belongs to. For displaying the diffraction data, either a built-in display window (via the command-line option \texttt{--internal}) or \emph{adxv}~\cite{Adxv}, which is controlled by \texttt{edview} via a local communication socket, can be used. If indexing information is present for a given shot, the projected directions of the real-space lattice vectors $a,b,c$ (with fixed length) are overlaid on the currently displayed crystal (if ``zoom'' is checked). \subsection{Simple on-line pre-processing using \emph{quick\_proc}} \label{quick_proc} While diffractem has been designed with usage from Jupyter notebooks in mind, there may be situations where it is preferable to run the pre-processing pipeline, up to the point of aggregated, corrected, and background-subtracted images, from the command line. Hence, the command-line tool \texttt{quick\_proc} is provided by diffractem, which executes those steps according to settings defined in a \texttt{.yaml} file, just as for the standard processing in notebooks (Section~\ref{sec:proc\_functions}).
Furthermore, \texttt{quick\_proc} can run in an on-line analysis mode (using the flag \texttt{--wait-for-files}), where it waits for new data files from the experiment to arrive, then executes the processing, and adds the newly processed files to a \texttt{.lst} file for use with CrystFEL or viewing using \texttt{edview}. Running \texttt{quick\_proc -h} provides a full reference of options. \section{Discussion and Outlook} \label{sec:discussion} Using the pipeline comprising \emph{CrystFEL} and \emph{diffractem} as described in this article, processing SerialED datasets of high quality becomes a straightforward exercise, and tackling more challenging cases becomes viable. Still, there is plenty of room for future work. Besides usability improvements for non-expert users, such as a graphical program interface for basic operations or functions for reasonable automatic adjustment of parameters for a given sample, there are more fundamental aspects which can profit from further development. A rather obvious starting point for future work could be the inclusion of a cell-finding algorithm similar to that presented in~\cite{Jiang2009}, or even an entirely new method for indexing that would be based on considering peak data from the entire dataset instead of acting on individual patterns, similarly to single-particle analysis~\cite{Scheres2012} or expand-maximize-compress algorithms in diffractive imaging~\cite{Loh2009}. Similarly, a more systematic study of partiality modeling for electrons is required; partiality is especially prevalent here due to the small crystal sizes (and concomitantly wide rocking curves) combined with a very monochromatic beam. Another field of study concerns the effects of dynamical diffraction arising from multiple scattering, which depend on subtle details that are often challenging to grasp, in particular for biological samples made from light elements~\cite{Subramanian2015,Latychevskaia2019,Nannenga2019,Gallagher-Jones2018}. While often considered deleterious for structure solution, careful inclusion of dynamical diffraction can lead to unique insight into molecular configurations~\cite{Palatinus2017,Brazda2019} and might even be able to solve the phasing problem for electron crystallography~\cite{Donatelli2020}. Especially regarding the latter point, SerialED can provide the unique advantage of being able to selectively solve structures from sub-sets of data containing crystals from a given size bracket only. While there is large scope for future developments, SerialED can already, in its current state of development, provide high-resolution structures of even the most demanding nano-crystalline samples~\cite{Buecker2020}. Data analysis, while not yet as established as for rotation techniques, is becoming a more and more routine task, helped by packages such as those described in this work. Meanwhile, the \emph{diffractem} package (as well as \emph{CrystFEL}, which provides much of the fundamental functionality) is under constant development, so as to keep making SerialED data processing more efficient, powerful, and user friendly; we hence suggest regularly checking the webpage at https://github.com/robertbuecker/diffractem for updates. \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} R.B. and R.J.D.M. conceived the serial electron diffraction concept. R.B. and P.H.
developed the SerialED processing pipeline. R.B. wrote the \emph{diffractem} software. P.H. wrote the extensions to \emph{CrystFEL} to adapt it to our analysis pipeline. R.B. and P.H. wrote the manuscript. \section*{Funding} This work was funded by the Max Planck Society, the Natural Sciences and Engineering Research Council of Canada (P.H., R.J.D.M.), the Fonds de recherche du Qu\'ebec (P.H.), and the BWFGB Israel-Hamburg project LOM~2018 (R.B.). \section*{Acknowledgments} We acknowledge many helpful discussions with Thomas White, as well as his support in modifying \emph{CrystFEL}. We thank Anton Barty, Valerio Mariani, and Oleksandr Yefanov for making \emph{peakfinder8} available under the terms of the GNU Lesser General Public License. \section*{Supplemental Data} A set of example Jupyter notebooks explaining the processing pipeline in detail is included with this paper in PDF format. The notebooks themselves, including all ancillary files required to reproduce our workflow, can be downloaded at https://github.com/robertbuecker/serialed-examples. \section*{Data and Code Availability Statement} The raw diffraction data and the indexed/integrated \texttt{.stream} files from which the examples in this paper have been derived are available at EMPIAR (https://empiar.org) under the accession code EMPIAR-10542. The diffractem software, along with installation instructions, is available under the terms of the GNU Lesser General Public License 2.1 or higher at https://github.com/robertbuecker/diffractem. The electron-enabled version of CrystFEL 0.9.1 is available at https://stash.desy.de/projects/MPSDED/repos/crystfel under the terms of the GNU General Public License 3.0. CrystFEL 0.10.0 will include the required features by default. \bibliographystyle{alpha}
\section{Introduction} In the Standard Model (minimally extended to include non-zero neutrino mass) the neutrino magnetic moment is non-zero, but small, and is given by~\cite{Marciano:1977wx} \begin{equation} \mu_\nu\approx 3\times 10^{-19}\left(\frac{m_\nu}{1{\rm eV}}\right)\mu_B, \label{SM} \end{equation} where $m_\nu$ is the neutrino mass and $\mu_B$ is the Bohr magneton. An experimental observation of a magnetic moment larger than that given in Eq.(\ref{SM}) would thus be a clear indication of physics beyond the minimally extended Standard Model. Current laboratory limits are determined via neutrino-electron scattering at low energies, with $\mu_\nu < 1.5 \times 10^{-10} \mu_B$~\cite{Beacom} and $\mu_\nu < 0.7 \times 10^{-10} \mu_B$~\cite{reactor} obtained from solar and reactor experiments, respectively. A stronger limit can be obtained from constraints on energy loss from stars, $\mu_\nu < 3 \times 10^{-12} \mu_B$~\cite{Raffelt}. It is possible to write down a simple relationship between the size of the neutrino mass and the neutrino magnetic moment. If a magnetic moment is generated by physics beyond the Standard Model (SM) at an energy scale $\Lambda$, as in Fig.~\ref{fig:naive}a, we can generically express its value as \begin{equation} \mu_\nu \sim \frac{eG}{\Lambda}, \end{equation} where $e$ is the electric charge and $G$ contains a combination of coupling constants and loop factors. Removing the photon from the same diagram (Fig.~\ref{fig:naive}b) gives a contribution to the neutrino mass of order \begin{equation} m_\nu \sim G \Lambda. \end{equation} We thus have the relationship \begin{eqnarray} m_\nu \,\, \sim \,\, \frac{\Lambda^2}{2 m_e} \frac{\mu_\nu}{\mu_B} \,\, \sim \,\, \frac{\mu_\nu}{ 10^{-18} \mu_B} [\Lambda({\rm TeV})]^2 \,\,\, {\rm eV}, \label{naive} \end{eqnarray} which implies that it is difficult to simultaneously reconcile a small neutrino mass with a large magnetic moment. \begin{figure}[t] \begin{center} \psfig{file=basic.eps,width=3in} \end{center} \caption{a) Generic contribution to the neutrino magnetic moment induced by physics beyond the standard model. b) Corresponding contribution to the neutrino mass. The solid and wavy lines correspond to neutrinos and photons respectively, while the shaded circle denotes physics beyond the SM.} \label{fig:naive} \end{figure} However, it is well known that the na\"ive restriction given in Eq.(\ref{naive}) can be overcome via a careful choice for the new physics. For example, we may impose a symmetry to enforce $m_\nu=0$ while allowing a non-zero value for $\mu_\nu$~\cite{Voloshin,Georgi,Grimus,mohapatra1}, or employ a spin suppression mechanism to keep $m_\nu$ small~\cite{Barr}. Note, though, that these symmetries are typically broken by Standard Model interactions. By calculating contributions to $m_\nu$ generated by SM radiative corrections involving the magnetic moment interaction, we may thus obtain general, ``naturalness'' upper limits on the size of neutrino magnetic moments. One possibility for allowing a large $\mu_\nu$ while keeping $m_\nu$ small is due to Voloshin~\cite{Voloshin}. The original version of this mechanism involved imposing an $SU(2)_\nu$ symmetry, under which the left-handed neutrino and antineutrino ($\nu$ and $\nu^c$) transform as a doublet. The Dirac mass term transforms as a triplet under this symmetry and is thus forbidden, while the magnetic moment term is allowed as it transforms as a singlet. However, the $SU(2)_\nu$ symmetry is violated by SM gauge interactions.
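It is worth pausing to note how severe the tension encoded in Eq.(\ref{naive}) is; the following evaluation is ours, but simply inserts numbers into Eq.(\ref{naive}). A magnetic moment near the current experimental limits, $\mu_\nu \sim 10^{-10}\mu_B$, generated at a scale $\Lambda = 1$ TeV, would na\"ively imply
\begin{equation*}
m_\nu \,\sim\, \frac{10^{-10}\mu_B}{10^{-18}\mu_B}\ {\rm eV} \,=\, 10^{8}\ {\rm eV},
\end{equation*}
roughly eight orders of magnitude above the eV scale. Any model with an observably large magnetic moment must therefore evade this na\"ive estimate.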
For Majorana neutrinos, the Voloshin mechanism may be implemented using flavor symmetries, such as those in Refs.~\refcite{Grimus,Georgi,mohapatra1}. These flavor symmetries are not broken by SM gauge interactions but are instead violated by SM Yukawa interactions.\footnote{We assume that the charged lepton masses are generated via the standard mechanism through Yukawa couplings to the SM Higgs boson. If the charged lepton masses are generated via a non-standard mechanism, SM Yukawa interactions do not necessarily violate flavor symmetries. However, such flavor symmetries must always be broken via some mechanism in order to obtain non-degenerate masses for the charged leptons.} Below, we estimate the contribution to $m_\nu$ generated by SM radiative corrections involving the magnetic moment term, and use it to set such ``naturalness'' upper limits on the size of neutrino magnetic moments. For Dirac neutrinos, these limits are several orders of magnitude stronger than present experimental bounds~\cite{dirac}. For Majorana neutrinos, however, the bounds are weaker~\cite{Davidson,majorana}. \section{Dirac Neutrinos} We assume that the magnetic moment is generated by physics beyond the SM at an energy scale $\Lambda$ above the electroweak scale. In order to be completely model independent, the new physics will be left unspecified and we shall work exclusively with dimension $D\geq 4$ operators involving only SM fields, obtained by integrating out the physics above the scale $\Lambda$. We thus consider an effective theory that is valid below the scale $\Lambda$, respects the $SU(2)_L\times U(1)_Y$ symmetry of the SM, and contains only SM fields charged under these gauge groups. We start by constructing the most general operators that could give rise to a magnetic moment operator, $\bar{\nu}_L \sigma^{\mu\nu} F_{\mu\nu} \nu_R$. Demanding invariance under the SM gauge group, we have the following 6D operators \begin{equation} \label{eq:ops} {\cal O}^{(6)}_B = \frac{g'}{\Lambda^2}{\bar L}{\tilde \phi} \sigma^{\mu\nu}\nu_R B_{\mu\nu}\ , \hspace{1.5cm} {\cal O}^{(6)}_W = \frac{g}{\Lambda^2} {\bar L}\tau^a {\tilde \phi} \sigma^{\mu\nu}\nu_R W_{\mu\nu}^a\ , \end{equation} where $B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu$ and $W_{\mu\nu}^a = \partial_\mu W_\nu^a - \partial_\nu W_\mu^a - g \epsilon_{abc}W_\mu^b W_\nu^c$ are the U(1)$_Y$ and SU(2)$_L$ field strength tensors, respectively, and $g'$ and $g$ are the corresponding couplings. The Higgs and left-handed lepton doublet fields are denoted $\phi$ and $L$, respectively, and $\tilde\phi = i \tau_2 \phi^*$. After spontaneous symmetry breaking, both ${\cal O}^{(6)}_B$ and ${\cal O}^{(6)}_W$ contribute to the magnetic moment. Through loop diagrams, these operators generate contributions to the neutrino mass. For example, the diagram in Fig.~\ref{4D} will generate a contribution to the neutrino mass operator, ${\cal O}^{(4)}_M={\bar L}{\tilde \phi} \nu_R$. Using dimensional analysis, we estimate~\cite{dirac} \begin{equation} m_\nu \,\, \sim \,\, \frac{\alpha}{16\pi} \frac{\Lambda^2}{m_e} \frac{\mu_\nu}{\mu_B}\,\, , \end{equation} and thus \begin{equation} \mu_\nu \lesssim 3 \times 10^{-15} \mu_B \left(\frac{m_\nu}{1\ {\rm eV}}\right) \left(\frac{1\ {\rm TeV}}{\Lambda}\right)^2 \,\,. \end{equation} If we take $\Lambda \simeq$ 1 TeV and $m_\nu \lesssim $ 0.3 eV, we obtain the limit $\mu_\nu \lesssim 10^{-15} \mu_B$, which is several orders of magnitude stronger than current experimental constraints.
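For orientation, evaluating this bound at $m_\nu = 0.3$ eV for a few values of the new physics scale (the arithmetic is ours, using only the bound displayed above) gives
\begin{equation*}
\mu_\nu \,\lesssim\, 10^{-15}\,\mu_B \,\,\,(\Lambda = 1\ {\rm TeV}), \qquad 10^{-17}\,\mu_B \,\,\,(\Lambda = 10\ {\rm TeV}), \qquad 10^{-19}\,\mu_B \,\,\,(\Lambda = 100\ {\rm TeV}).
\end{equation*}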
Given the quadratic dependence upon $\Lambda$, this constraint becomes extremely stringent for $\Lambda$ significantly above the electroweak scale. \begin{figure} \begin{center} \psfig{file=4D.eps,width=2in} \end{center} \caption{Contribution to the 4D mass operator ${\cal O}^{(4)}_M$ due to insertions of the magnetic moment operators ${\cal O}^{(6)}_{B,W}$.} \label{4D} \end{figure} \begin{figure} \begin{center} \psfig{file=6D.eps,width=2in} \end{center} \caption{Renormalization of the mass operator, ${\cal O}^{(6)}_M$, due to insertions of ${\cal O}^{(6)}_{B,W}$.} \label{6Dfigure} \end{figure} However, if $\Lambda$ is not significantly larger than the EW scale, higher dimension operators are important, and their contribution to $m_\nu$ can be calculated in a model independent way. Through renormalization, both ${\cal O}^{(6)}_B$ and ${\cal O}^{(6)}_W$ will generate a contribution to the 6D neutrino mass operator \begin{equation} {\cal O}^{(6)}_M = \frac{1}{\Lambda^2}{\bar L}{\tilde \phi}\nu_R \left(\phi^\dag\phi\right) \ , \end{equation} via the diagrams in Fig.~\ref{6Dfigure}. Solving the renormalization group equations, we find that for $\Lambda \gtrsim 1~{\rm TeV}$, \begin{equation} \label{eq:massbound} \mu_\nu \lesssim 8\times 10^{-15} \mu_B \left(\frac{m_\nu}{1\ {\rm eV}}\right) \ , \end{equation} in the absence of fine tuning~\cite{dirac}. \section{Majorana Neutrinos} We have seen above that the ``naturalness'' bounds on the magnetic moments of Dirac neutrinos are significantly stronger than present experimental limits. However, the analogous bounds for Majorana neutrinos are much weaker. The case of Majorana neutrinos is more subtle, due to the different flavor symmetries of $m_\nu$ and $\mu_\nu$. Majorana neutrinos cannot have diagonal magnetic moments, but are permitted non-zero transition moments. The transition magnetic moment $\left[\mu_\nu\right]_{\alpha\beta}$ is antisymmetric in the flavor indices $\{\alpha,\beta\}$, while the mass terms $[m_\nu]_{\alpha\beta}$ are symmetric. These different flavor symmetries play an important role in our limits, and are the origin of the difference between the magnetic moment constraints for Dirac and Majorana neutrinos. As before, we write down the most general set of operators that can give rise to neutrino magnetic moment and mass terms, while respecting the SM gauge group. In the case of Majorana neutrinos, the lowest order contribution to the neutrino mass arises from the usual five dimensional operator containing Higgs and left-handed lepton doublet fields: \begin{equation} \left[O_M^{5D}\right]_{\alpha\beta}\,=\, \left(\overline{L^c_\alpha}\epsilon \phi\right)\left(\phi^T\epsilon L_\beta\right), \label{OM5} \end{equation} where $\epsilon = - i \tau_2$, $\overline{L^c}=L^TC$, $C$ denotes charge conjugation, and $\alpha$, $\beta$ are flavor indices. The lowest order contribution to the neutrino magnetic moment arises from the following dimension seven operators, \begin{eqnarray} \left[O_B\right]_{\alpha\beta}&=& g' \left(\overline{L^c}_\alpha\epsilon \phi\right)\sigma^{\mu\nu} \left(\phi^T\epsilon L_\beta\right)B_{\mu\nu}, \label{OB}\\ \left[O_W\right]_{\alpha\beta}&=& g \left(\overline{L^c_\alpha}\epsilon \phi\right)\sigma^{\mu\nu} \left(\phi^T\epsilon \tau^a L_\beta\right)W_{\mu\nu}^a, \label{OW} \end{eqnarray} and we also define a 7D mass operator as \begin{equation} \left[O_M^{7D}\right]_{\alpha\beta} = \left(\overline{L^c_\alpha}\epsilon \phi\right) \left(\phi^T\epsilon L_\beta\right) \left(\phi^\dagger \phi \right).
\label{OM7} \end{equation} Operators $O_M^{5D}$ and $O_M^{7D}$ are flavor symmetric, while $O_B$ is antisymmetric. The operator $O_W$ is the most general 7D operator involving $W_{\mu\nu}^a$. However, as it is neither flavor symmetric nor antisymmetric, it is useful to express it in terms of operators with explicit flavor symmetry, $O_W^\pm$, which we define as \begin{eqnarray} \left[ O_W^\pm \right]_{\alpha\beta} &=& \frac{1}{2} \left\{ \left[O_W\right]_{\alpha\beta} \pm \left[O_W\right]_{\beta\alpha} \right\}. \end{eqnarray} Our effective Lagrangian is therefore \begin{eqnarray} {\cal L} &=& \frac{C_M^{5D}}{\Lambda} O_M^{5D} + \frac{C_M^{7D}}{\Lambda^3} O_M^{7D} + \frac{C_{B}}{\Lambda^3} O_{B} +\frac{ C_{W}^+}{\Lambda^3} O_{W}^+ + \frac{C_{W}^-}{\Lambda^3} O_{W}^-+\cdots \ \ . \end{eqnarray} After spontaneous symmetry breaking, the flavor antisymmetric operators $O_B$ and $O_W^-$ generate a contribution to the magnetic moment interaction $\frac{1}{2} \left[\mu_\nu\right]_{\alpha\beta}\, \overline{\nu^c}_\alpha \sigma^{\mu\nu} \nu_\beta F_{\mu\nu}$, given by \begin{equation} \frac{\left[\mu_\nu\right]_{\alpha\beta}}{\mu_B} = \frac{2m_e v^2}{\Lambda^3} \left(\left[C_B(M_W)\right]_{\alpha\beta} + \left[C_W^-(M_W)\right]_{\alpha\beta}\right), \label{munu} \end{equation} where the Higgs vacuum expectation value is $\langle \phi^T \rangle =(0, v/\sqrt{2})$. Similarly, the operators $O_M^{5D}$ and $O_M^{7D}$ generate contributions to the Majorana neutrino mass terms, $\frac{1}{2}\left[m_\nu\right]_{\alpha\beta}\overline{\nu^c}_\alpha \nu_\beta$, given by \begin{equation} \frac{1}{2}\left[ m_\nu \right]_{\alpha\beta} = \frac{v^2}{2 \Lambda} \left[C_M^{5D}(M_W)\right]_{\alpha\beta} + \frac{v^4}{4 \Lambda^3} \left[C_M^{7D}(M_W)\right]_{\alpha\beta}. \label{mnu} \end{equation} Below, we outline the radiative corrections to the neutrino mass operators ($O_M^{5D}$ and $O_M^{7D}$) generated by the magnetic moment operators $O_W^-$ and $O_B$. This allows us to determine constraints on the size of the magnetic moment in terms of the neutrino mass, using Eqs.(\ref{munu}) and (\ref{mnu}). Our results are summarized in Table~\ref{summary} below, where we have defined $R_{\alpha\beta} = m_\tau^2/| m_\alpha^2 - m_\beta^2|$, with $m_\alpha$ being the charged lepton masses. Numerically, one has $R_{\tau e} \simeq R_{\tau \mu} \simeq 1$ and $R_{\mu e} \simeq 283$. \begin{table}[htbp] \tbl{Summary of constraints on the magnitude of the magnetic moment of Majorana neutrinos.
The upper two lines correspond to a magnetic moment generated by the $O_W^-$ operator, while the lower two lines correspond to the $O_B$ operator.} {\begin{tabular}{c|c|c} \hline\hline i) 1-loop, 7D & $\mu^W_{\alpha\beta}$ & $ \leq 1 \times 10^{-10}\mu_B \left(\frac{ \left[m_\nu\right]_{\alpha\beta}}{1~{\rm eV}}\right) \ln^{-1}\frac{\Lambda^2}{M_W^2} R_{\alpha\beta}$ \\ ii) 2-loop, 5D & $\mu^W_{\alpha\beta}$ & $ \leq 1 \times 10^{-9}\mu_B \left(\frac{ \left[m_\nu\right]_{\alpha\beta}}{1~{\rm eV}}\right) \left(\frac{1~{\rm TeV}}{\Lambda}\right)^2 R_{\alpha\beta}$ \\ \hline iii) 2-loop, 7D & $\mu^B_{\alpha\beta}$ & $ \leq 1 \times 10^{-7}\mu_B \left(\frac{ \left[m_\nu\right]_{\alpha\beta}}{1~{\rm eV}}\right) \ln^{-1}\frac{\Lambda^2}{M_W^2} R_{\alpha\beta}$ \\ iv) 2-loop, 5D & $\mu^B_{\alpha\beta}$ & $\leq 4 \times 10^{-9} \mu_B \left(\frac{ \left[m_\nu\right]_{\alpha\beta}}{1~{\rm eV}}\right) \left(\frac{1~{\rm TeV}}{\Lambda}\right)^2 R_{\alpha\beta}$ \\ \hline\hline \end{tabular}} \label{summary} \end{table} \subsection{SU(2) Gauge Boson} \label{su2} \subsubsection{7D mass term --- $O_W$} As the operator $O_W^-$ is flavor antisymmetric, it must be multiplied by another flavor antisymmetric contribution in order to produce a flavor symmetric mass term. This can be accomplished through insertion of Yukawa couplings in the diagram shown in Fig.~\ref{fig:7D}~\cite{Davidson}. This diagram provides a logarithmically divergent contribution to the 7D mass term, given by~\cite{Davidson} \begin{equation} \label{eq:owminusone} \left[ C_M^{7D}(M_W) \right]_{\alpha\beta} \simeq \frac{3 g^2}{16 \pi^2} \frac{m_\alpha^2 - m_\beta^2}{v^2} \ln \frac{\Lambda^2}{M_W^2} \left[ C_W^-(\Lambda) \right]_{\alpha\beta}, \end{equation} where $m_\alpha$ are the charged lepton masses, and the exact coefficient has been computed using dimensional regularization. Using this result, together with Eqs. (\ref{munu}) and (\ref{mnu}), leads to bound (i) in Table~\ref{summary}. \subsubsection{5D mass term --- $O_W$} The neutrino magnetic moment operator $O_W^-$ will also contribute to the 5D mass operator via two-loop diagrams, as shown in Fig.~\ref{fig:2loop}~\cite{majorana}. As with the diagrams in Fig.~\ref{fig:7D}, we require two Yukawa insertions in order to obtain a flavor symmetric result. Using dimensional analysis, we estimate~\cite{majorana} \begin{equation} \left[C_M^{5D}(\Lambda) \right]_{\alpha\beta} \simeq \frac{g^2}{(16 \pi^2)^2} \frac{m_\alpha^2 - m_\beta^2}{v^2} \left[ C_W^- (\Lambda)\right]_{\alpha\beta}. \label{5d1} \end{equation} This leads to bound (ii) in Table~\ref{summary}. Compared to the 1-loop (7D) case of Eq.~(\ref{eq:owminusone}), the 2-loop (5D) mass contribution is suppressed by a factor of $1/16\pi^2$ arising from the additional loop, but enhanced by a factor of $\Lambda^2/v^2$ arising from the lower operator dimension. Thus, as we increase the new physics scale, $\Lambda$, this two-loop constraint rapidly becomes more restrictive. The ``crossover'' scale between the two effects occurs at $\sim 10$ TeV. \begin{figure} \begin{center} \psfig{file=7D.eps,width=2in} \end{center} \caption{Contribution of $O_W^-$ to the 7D neutrino mass operator.} \label{fig:7D} \end{figure} \begin{figure} \begin{center} \psfig{file=2loop.eps,width=2in} \end{center} \caption{Representative contribution of $O_W^-$ to the 5D neutrino mass operator.} \label{fig:2loop} \end{figure} \subsection{Hypercharge Gauge Boson} \label{u1b} \subsubsection{7D mass term --- $O_B$}
If we insert $O_B$ in the diagram in Fig.~\ref{fig:7D}, the contribution vanishes, due to the $SU(2)$ structure of the graph. Therefore, to obtain a non-zero contribution to $O_M^{7D}$ from $O_B$ we require the presence of some non-trivial $SU(2)$ structure. This can arise, for instance, from a virtual $W$ boson loop as in Fig.~\ref{fig:B2M_2loop}~\cite{Davidson}. This mechanism gives the leading contribution of the operator $O_B$ to the 7D mass term. The $O_B$ and $O_W$ contributions to the 7D mass term are thus related by \begin{eqnarray} \frac{(\delta m_\nu)^B}{(\delta m_\nu)^W} \,\approx\,\frac{\alpha}{4\pi} \frac{1}{\cos^2\theta_W}, \end{eqnarray} where $\theta_W$ is the weak mixing angle and the factor on the RHS is due to the additional $SU(2)_L$ boson loop. This additional loop suppression for the $O_B$ contribution results in a significantly weaker neutrino magnetic moment constraint than that obtained above for $O_W^-$. The corresponding limit is shown as bound (iii) in Table~\ref{summary}. \begin{figure}[h] \begin{center} \psfig{file=7D_B.eps,width=2in} \end{center} \caption{Representative contribution of $O_B$ to the 7D neutrino mass operator at two loop order.} \label{fig:B2M_2loop} \end{figure} \subsubsection{5D mass term --- $O_B$} However, the leading contribution of $O_B$ to the 5D mass term arises from the same 2-loop diagrams (Fig.~\ref{fig:2loop}) that we discussed in connection with the $O_W^-$ operator. Therefore, the contribution to the 5D mass term is the same as that for $O_W$, except for a factor of $(g'/g)^2 = \tan^2 \theta_W$. We thus obtain~\cite{majorana} \begin{equation} \left[ C_M^{5D}(\Lambda) \right]_{\alpha\beta} \simeq \frac{g'^2}{(16 \pi^2)^2} \frac{m_\alpha^2 - m_\beta^2}{v^2} \left[ C_B(\Lambda) \right]_{\alpha\beta}\ \ \ , \label{5db} \end{equation} corresponding to bound (iv) in Table~\ref{summary}. Importantly, this is the strongest constraint on the $O_B$ contribution to the neutrino magnetic moment for any value of $\Lambda$, and the most general of our bounds on $\mu_\nu^{\rm Majorana}$~\cite{majorana}. \subsection{Comparison with experimental limits} The best laboratory limit on $\mu_\nu$, obtained from the scattering of low-energy reactor neutrinos, is $\text{``}\mu_{e}\text{''} < 0.7 \times 10^{-10} \mu_B$~\cite{reactor}. Note that this limit applies to both $\mu_{\tau e}$ and $\mu_{\mu e}$, as the flavor of the scattered neutrino is not detected in the experiment. Taking the neutrino mass to be $m_\nu \lesssim 0.3$ eV (as implied by cosmological observations, e.g. Ref.~\refcite{WMAP3yr}), bound (iv) in Table~\ref{summary} gives \begin{eqnarray} \mu_{\tau\mu},\mu_{\tau e} & \lesssim & 10^{-9}\,\mu_B \left[ \Lambda(\text{TeV}) \right]^{-2} \nonumber \\ \mu_{\mu e} & \lesssim & 3 \times 10^{-7}\,\mu_B \left[ \Lambda(\text{TeV}) \right]^{-2}. \end{eqnarray} (The relative factor of $\sim 300$ between these two bounds is simply $R_{\mu e} \simeq 283$.) For Majorana neutrinos we thus conclude that if $\mu_{\mu e}$ is dominant over the other flavor elements, an experimental discovery near the present limits (e.g., at $\mu \sim 10^{-11}\mu_B$) would imply $\Lambda \lesssim 100$ TeV. However, this would become $\Lambda \lesssim 10$ TeV in any model in which all elements of $\mu_{\alpha\beta}$ have similar size. \section{Conclusions} We have discussed radiative corrections to the neutrino mass arising from a neutrino magnetic moment coupling.
Expressing the magnetic moment in terms of effective operators in a model independent fashion required constructing operators containing the $SU(2)_L$ and hypercharge gauge bosons, $O_W$ and $O_B$ respectively, rather than working directly with the electromagnetic gauge boson. We then calculated $\mu_\nu$ naturalness bounds arising from the leading order contributions to the neutrino mass term, for both Dirac and Majorana neutrinos. For Dirac neutrinos we found \begin{equation} \mu_\nu^{\rm Dirac} \lesssim 3 \times 10^{-15} \mu_B \left(\frac{m_\nu}{1\ {\rm eV}}\right) \left(\frac{1\ {\rm TeV}}{\Lambda}\right)^2 \,\,, \end{equation} while the most general naturalness bound on the size of the Majorana neutrino magnetic moment is \begin{equation} \mu_{\alpha\beta}^{\rm Majorana}\,\leq\,4 \times 10^{-9}\mu_B \left(\frac{\left[m_\nu\right]_{\alpha\beta}}{1~{\rm eV}}\right) \left(\frac{1\ {\rm TeV}}{\Lambda}\right)^2 \left| \frac{m_\tau^2}{m_\alpha^2 - m_\beta^2} \right|. \label{generallimit} \end{equation} These limits can only be evaded in the presence of fine tuning. The limit on the magnetic moments of Dirac neutrinos is thus considerably more stringent than that for Majorana neutrinos. This is due to the different flavor symmetries involved, since in the Majorana case we require the insertion of Yukawa couplings to convert a flavor antisymmetric (magnetic moment) operator into a flavor symmetric (mass) operator. Our results imply that an experimental discovery of a magnetic moment near the present limits would signify that (i) neutrinos are Majorana fermions, and that (ii) new lepton number violating physics responsible for the generation of $\mu_\nu$ arises at a scale $\Lambda$ which is well below the see-saw scale. \section*{Acknowledgments} This article is based upon the results of Ref.~\refcite{dirac} and Ref.~\refcite{majorana}. NFB thanks Vincenzo Cirigliano, Mikhail Gorchtein, Michael Ramsey-Musolf, Petr Vogel, Peng Wang and Mark Wise for an enjoyable and productive collaboration.
\section{Introduction}\label{sec:intro} The use of the topological B-model to explore the gauge theories living on D-branes at singularities has been extremely fruitful in recent years \cite{Wijnholt:2002qz,Herzog:2003dj,Herzog:2004qw,Aspinwall:2004vm,BridgeTStruct,Bergman:2006gv,Bergman:2005kv,Bergman:2005ba}. The fundamental feature of this technique is an equivalence of categories between the derived category of coherent sheaves on the resolved singularity and the derived category of representations of the algebra described by the quiver in the dual gauge theory. However, because of the reliance on the topological B-model, this technique can only describe features related to the complex geometry, \textit{i.e.,\ } the F-terms in the gauge theory. To go further, we must bring in the K\"ahler geometry of the resolved singularity. One important consequence of the K\"ahler geometry concerns the stability of D-branes. In particular, the geometry determines the central charge, the phase of which is related to the fractional part of the grading of the brane. Through $\pi$-stability as formulated by Douglas \cite{Douglas:2000gi} and its generalization due to Bridgeland \cite{Bridgeland:2002sc}, this grading determines the topological D-branes that remain stable when passing to the physical string. The difference in gradings between two branes also determines the masses of the string modes stretching between them. Understanding stability is important because we need to know that a D3-brane located at the tip of the cone is marginally unstable to decay to a particular set of fractional branes. The strings between these fractional branes then determine the quiver gauge theory. In particular, we need to show both that this decay exists and that there exists a point in the K\"ahler moduli space where all the gradings align. Since the computation of gradings in terms of geometry in type IIB string theory involves unknown corrections, this is usually accomplished by passing to the mirror, where the grading is related to the phase of the period of the holomorphic three-form over the relevant three-cycle. By solving the Picard-Fuchs equations, one can then determine how the gradings behave as we move around in moduli space. This has been accomplished in many examples \cite{Aspinwall:2004vm,Aspinwall:2004jr} but can be difficult in general. In this paper, I will avoid this difficulty by simply postulating the existence of the needed set of gradings. In particular, Bridgeland's notion of a stability condition on a triangulated category reproduces the features of $\pi$-stability and does not refer directly to the metric on our variety. One can then consider the space of all such stability conditions. This is \textit{not} the K\"ahler moduli space of the relevant variety; it is much too big. It may be a generalization of the K\"ahler moduli space of the Calabi-Yau (beyond the usual generalization, defined to be the complex structure moduli space of the mirror). On the other hand, it may be that Bridgeland's definition is too general, and we need further conditions to reproduce the physical moduli space. I will not address this issue here except to note (following Bridgeland) that there ought to exist a generalization of the complex structure moduli space whose tangent space is given by the full Hochschild cohomology of the variety. The possibility of such an extended moduli space is also mentioned by Witten \cite{Witten:1991zz}.
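To make the tangent space statement concrete, we recall the Hochschild--Kostant--Rosenberg (HKR) decomposition (this unpacking is ours, but the decomposition itself is standard): for a smooth variety $X$,
\begin{equation*}
{H\!H}^2(X) \,\cong\, H^0(X,\wedge^2 T_X) \oplus H^1(X,T_X) \oplus H^2(X,\mathcal{O}_X)\ ,
\end{equation*}
where the middle summand parametrizes ordinary deformations of the complex structure, while the outer summands are usually interpreted as noncommutative and gerby deformations, respectively.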
Leaving these issues behind, Bridgeland's stability conditions are defined as a structure on a triangulated category without any reference to the Abelian category from whence it came. This makes it ideal to apply to our equivalence of categories. Unfortunately, due to the noncompactness of our geometry, there are a number of technical issues relating to the existence of infinite dimensional cohomology groups. Bridgeland avoids this issue by restricting his attention to sheaves that are supported on the `base' of the resolved geometry. This is not sufficient for our application as we wish to reproduce the entire cone as a moduli space of stable representations. In this paper, I will show how to define a stability condition that reproduces a formulation of GIT-stability due to King \cite{King:1994mr} (often called $\theta$-stability in the physics literature) and which reproduces the moment map used in the symplectic reduction construction of SUSY gauge theory moduli spaces. With this in hand, we can make rigorous a number of results in the physics literature, including a demonstration that the needed D3-brane decay exists (we have, of course, postulated the needed alignment of the gradings, rendering that no longer an issue). In addition, a result of Bridgeland \cite{Bridgeland:2006ss} tells us that moving around in the space of stability conditions is often related to a procedure called tilting. This tilting is related to Seiberg duality, as was first demonstrated by Berenstein and Douglas \cite{Berenstein:2002fi} and further explored by Herzog \cite{Herzog:2004qw} and Aspinwall \cite{Aspinwall:2004vm}. As an example of this, the study of stability conditions on ALE singularities \cite{Bridgeland:2005sk} reproduces the beautiful picture of Cachazo \textit{et al.}\ \cite{Cachazo:2001sg} where Seiberg dualities arise as one approaches the walls of the Weyl chamber that is the K\"ahler moduli space. I have little to add to this story, so I will not discuss it further. Stability of D-branes on cones is also discussed from a more physical perspective in \cite{Aspinwall:2004mb}. The main mathematical result in this paper is the following theorem: \begin{ithm} Let $Y$ be a smooth variety with a locally free tilting sheaf $T$, $\pi$ be the projection from $Y$ to its affinization, and $p$ be a point in $Y$ such that $\pi^{-1}\pi(p) = p$. Then the $\mathrm{End}_Y(T)$-module corresponding to $\mathcal{O}_p$ is a simple module.\end{ithm} \noindent Recall that the affinization of a variety is the spectrum of its ring of global functions. The physical content of this result is as follows. The singularities we will deal with are collapsed del Pezzo surfaces in a Calabi-Yau. Thus, a local model for the singularity will be $K_X$, the total space of the canonical line bundle over a del Pezzo surface $X$. In string theory compactified on $\MR{3,1} \times K_X$, the D3-brane fills the $\MR{3,1}$ and is thus represented by a skyscraper sheaf on $K_X$. This theorem tells us that when the D3-brane is off the zero section of $K_X$, \textit{i.e.,\ } away from the `singularity', it is stable for any stability condition associated to a quiver representation. This includes all the stability conditions defined in this paper. This paper is organized as follows. In section two, I will give a lightning review of the equivalences of categories that are at the heart of this construction, paying attention to some technical points that often are elided.
In section three, I will discuss the stability of topological D-branes and Bridgeland's formalization thereof. In section four, I will discuss Bridgeland's results on the construction of stability conditions from t-structures. In section five, these techniques will be applied to our situation. I conclude in section six with a proof of the above theorem. \section{Equivalences of categories}\label{sec:cat} This section will be a brief overview of the geometry and the equivalence of categories that we will utilize. For a more complete presentation of my view on the subject, please see \cite{Bergman:2006gv}. The original references are given in the introduction. Let $X$ be a del Pezzo surface and let $K_X$ be the total space of the canonical line bundle over $X$. We will denote the projection $\pi : K_X \to X$ and the zero section $s : X \to K_X$. It is straightforward to see that the canonical bundle over $K_X$ is trivial. We can place a Calabi-Yau metric on $K_X$. By collapsing the zero section of $K_X$, we obtain a singular geometry that is a local model for a collapsing del Pezzo surface inside a Calabi-Yau. We will work with the resolved geometry, \textit{i.e.,\ } the line bundle $K_X$. It follows from a theorem of Bridgeland \cite{Bridgeland:2002fd} that all crepant resolutions of a singularity in a CY 3-fold have equivalent derived categories, so by working solely in that context we do not lose any generality by our choice of resolution. A D3-brane filling the transverse $\MR{3,1}$ is located at a point on the CY cone, $K_X$, and is represented by a skyscraper sheaf. We will argue later that when the D3-brane is located along the zero section of $K_X$, it will be unstable to decay to a set of ``fractional branes''. The string states between these fractional branes will give rise to the quiver gauge theory. The starting point for this paper will be that there is an equivalence between the derived category of representations of this quiver and the derived category of coherent sheaves on $K_X$. To construct such an equivalence, we begin with an exceptional collection on $X$. This is a collection of coherent sheaves $E_i$ on $X$ which generate the derived category and satisfy \begin{eqnarray} \label{exccond1} \mathrm{Ext}_X^i(E_a,E_b) &=& 0 \text{\quad for } i \neq 0\ , \\ \label{excond2} \mathrm{Hom}_X(E_a,E_b) &= &0 \text{\quad for } a > b\ , \\ \label{excond3} \mathrm{Hom}_X(E_a,E_a) &=& \mathbb{C} \ . \end{eqnarray} (For example, on $X = \mathbb{P}^2$ the line bundles $\mathcal{O}, \mathcal{O}(1), \mathcal{O}(2)$ form such a collection.) The direct sum $T = \bigoplus E_i$ is a tilting object on $X$, and it follows from a theorem of Rickard \cite{Rickard:1988mt} that there is an equivalence of categories between the bounded derived categories of coherent sheaves on $X$ and finite dimensional modules over the algebra $\mathrm{End}_X(T)$. The construction of a quiver algebra from this data was first done in \cite{BondalQuiv}. We want an equivalence of categories for $K_X$, however, and not just $X$. We can accomplish this (following Bridgeland \cite{BridgeTStruct}) by imposing the further condition that \begin{equation} \mathrm{Ext}^i_X(E_a,E_b\otimes \omega_X^p) = 0 \quad \mbox{for } i \neq 0, p \le 0\ . \end{equation} Then $\pi^*T = \bigoplus \pi^*E_i$ generates the derived category of coherent sheaves on $K_X$ and is a tilting object. Because $K_X$ is not projective, we must be careful in the precise statement of the results of Rickard's theorem.
We have an equivalence of categories between the \textit{unbounded} derived category of quasicoherent sheaves on $K_X$ and the unbounded derived category of modules of $A \buildrel\rm def\over= \mathrm{End}_{K_X}(\pi^*T)^\mathrm{op}$. (By taking the opposite algebra, we interchange left- and right-modules.) This restricts to an equivalence between the full subcategories of objects isomorphic to \textit{perfect} objects. For the case of modules over $A$, these are bounded complexes of finitely-generated projective $A$-modules. As $A$ is noetherian and of finite global dimension, $\mbox{per }A \cong \mathcal{D}^b(A\mathrm{- fgmod})$, the bounded derived category of finitely generated $A$-modules. On $K_X$, the perfect complexes are those locally isomorphic to bounded complexes of vector bundles. As $K_X$ is smooth, any coherent sheaf can be resolved\footnote{There is a subtlety here in that this is true algebraically, but not necessarily analytically. However, because this is the total space of a negative line bundle over a projective variety, the needed resolutions exist (assertion 3, p. 701 \cite{GriHar}).} in terms of vector bundles, so we have $\mbox{per }K_X \cong \mathcal{D}^b(\mathrm{Coh}(K_X))$. Thus, Rickard's theorem gives us the equivalence of categories $ \mathcal{D}^b(\mathrm{Coh}(K_X)) \cong \mathcal{D}^b(A\mathrm{- fgmod})$. The difficulty with this nonprojective case is that the algebra, $A$, is infinite dimensional. Thus, \textit{finitely generated} modules are not the same as \textit{finite dimensional} modules. We will denote the latter category $A \mathrm{- fdmod}$. The category $\mathcal{D}^b(A \mathrm{- fdmod})$ is a full subcategory of $\mathcal{D}^b(A \mathrm{- fgmod})$. It would be very interesting to understand what the corresponding full subcategory of $\mathcal{D}^b(\mbox{Coh}(K_X))$ is. It certainly contains, for example, the category $\mathcal{D}^b(\mbox{Coh}_c(K_X))$ consisting of complexes whose constituent sheaves have compact support. It is also worth considering $\mathcal{D}^b_\mathrm{fd}(A \mathrm{- fgmod})$, the full subcategory of $\mathcal{D}^b(A \mathrm{- fgmod})$ consisting of objects whose cohomology modules are finite-dimensional. Similarly, one can consider $\mathcal{D}^b_c(\mathrm{Coh(K_X)})$, the full subcategory of $\mathcal{D}^b(\mathrm{Coh(K_X)})$ consisting of objects whose cohomology sheaves have compact support. This latter category is, in fact, a Calabi-Yau category of dimension three. It is proven in \cite{Bergman:2008yi} that \begin{equation} \mathcal{D}^b_c(\mathrm{Coh(K_X)}) \cong \mathcal{D}^b_\mathrm{fd}(A \mathrm{- fgmod})\ . \end{equation} As was the case with $\mathrm{End}_X(T)$, the algebra $A$ can be interpreted as the path algebra of a quiver with relations. In this case, the quiver has loops, reflecting the infinite dimensionality of $A$. For a discussion of the construction of this quiver, please see \cite{Herzog:2004qw,Aspinwall:2004vm,Bergman:2006gv,Bergman:2005ba}. As discussed therein, we have the correspondence $\pi^*E_i \leftrightarrow P_i$ where $P_i$ is the projective representation associated to the node $i$ of the quiver. Note that the $P_i$ are infinite dimensional but finitely generated, reflecting the noncompactness of the support of the $\pi^*E_i$. We also have the representations $S_i$ defined by $S_i(j) = \MC{\delta_{ij}}$ with all arrows given by the zero map.
These are simple representations and correspond to complexes of coherent sheaves whose cohomology objects are supported on the zero section of $K_X$. Note that these are \textit{not} the only simple objects in the abelian category of modules, even if we impose the finite generation or finite dimensionality condition. This motivated Bridgeland to introduce yet another category \cite{Bridgeland:2006sc}. Let $\mathcal{D}_0^b(A\mathrm{- fgmod})$ be the smallest full subcategory of $\mathcal{D}^b(A\mathrm{- fgmod})$ containing the objects $S_i$. There is a corresponding category $\mathcal{D}_0^b(\mathrm{Coh}(K_X))$ which is a full subcategory of $\mathcal{D}^b(\mathrm{Coh}(K_X))$. We will call an $A$-module \textit{tiny} if it is an object in the smallest extension-closed subcategory of $A \mathrm{- mod}$ containing the modules $S_i$. Then $\mathcal{D}_0^b(A\mathrm{- fgmod})$ can also be characterized as the full subcategory of $\mathcal{D}^b(A\mathrm{- fgmod})$ consisting of objects whose cohomology modules are tiny. Since the simple modules are supported on the zero section, it turns out that $\mathcal{D}_0^b(\mathrm{Coh}(K_X))$ can be characterized as the full subcategory of $\mathcal{D}^b(\mathrm{Coh}(K_X))$ consisting of objects whose cohomology sheaves are supported on the zero section of $K_X$. By construction, we have the equivalence of categories $\mathcal{D}_0^b(\mathrm{Coh}(K_X)) \cong \mathcal{D}_0^b(A\mathrm{- fgmod})$. An important property of the abelian category of tiny modules is that it is of \textit{finite length} with simple objects $S_i$. This means that for any module $M$, there is a finite sequence of submodules \begin{equation} 0 = F_0 \subset F_1 \subset F_2 \subset \dots \subset F_{n-1} \subset F_n = M\ \end{equation} such that each quotient $F_i/F_{i-1}$ is isomorphic to one of the $S_j$. We will later use the existence of such a sequence to show that, when the gradings align, the D3-brane becomes marginally stable against decay to fractional branes (which we recall are represented by the $S_i$) precisely when it is located on the zero section of $K_X$. \section{Stability of topological D-branes}\label{sec:stab} BPS D-branes have associated to them a central charge. For special lagrangian A-branes this is given by the integral of the holomorphic three-form pulled back to the worldvolume of the branes. In the B-model, no exact formula is known, but the leading term is \begin{equation} \label{cencha} Z(E) = \int_M e^{B + iJ} ch(E)\sqrt{td(M)} + \dots \end{equation} where $M$ is the Calabi-Yau, $B$ is the usual B-field and $J$ is the K\"ahler form. For a CY 3-fold, there are no perturbative corrections and the nonperturbative corrections are given by a power series in $q_i = \exp{2\pi i \left(\int_{C_i} B+iJ\right)}$ with no constant term \cite{Aspinwall:2004jr}. The phase of this central charge is proportional to the grading of the D-brane. More precisely, the grading of a brane $E$ is given by \begin{equation} \label{cenpha} \xi(E) = \frac{1}{\pi}\arg Z(E)\ . \end{equation} This is incomplete, however. Douglas has argued \cite{Douglas:2000gi} that we should define a real valued grading such that its reduction modulo 2 is given by \eqref{cenpha}. Furthermore, we require that \begin{equation} \label{censhift} \xi(E[n]) = \xi(E) + n\ . \end{equation} In fact, it will be best to only call this a `grading' when the object $E$ is stable and to not assign a grading to unstable branes.
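As a consistency check (this remark is ours, though it follows immediately from the definitions): the shift functor acts on K-theory classes by $[E[1]] = -[E]$, so any central charge additive on K-theory automatically satisfies
\begin{equation*}
Z(E[1]) \,=\, -Z(E) \,=\, e^{i\pi}\,Z(E)\ ,
\end{equation*}
which is \eqref{censhift} at the level of phases: $\xi(E[1]) = \xi(E) + 1$ modulo 2. The real-valued grading then refines the phase in \eqref{cenpha} by a choice of branch.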
Recall that the spectrum of string states between two topological D-branes, $E$ and $F$, is given by the groups $\mathrm{Hom}(E,F[i])$ where $i$ is the level of the state. Douglas showed \cite{Douglas:2000gi} that these correspond to physical string states with mass \begin{equation} \label{strmass} m^2 = \frac{1}{\alpha'}\left(i-1 + \xi(E) - \xi(F)\right)\ . \end{equation} Notice the compatibility with the relation \eqref{censhift}. The starting point for Douglas's notion of stability is that a pair of D-branes will bind if there is a tachyonic string between them. Thus, let $E$ and $F$ be objects in the derived category and $f \in \mathrm{Hom}(E,F)$ a string between them. The mass of this string is given by \eqref{strmass} with $i=0$. Thus, if $\xi(E) - \xi(F) < 1$, the D-branes will form a bound state. This bound state will be isomorphic to Cone$(E\to F)$. A D-brane will be considered stable if it is not isomorphic to the cone of a non-tachyonic map between any two \textit{stable} branes. In other words, an object $E$ is stable if there are no distinguished triangles \begin{equation} \label{decaytri} \begin{split} \begindc{\commdiag}[3] \obj(20,25)[e]{$E$} \obj(10,10)[a1]{$A_1$} \obj(30,10)[a2]{$A_2$} \mor{a1}{e}{} \mor{e}{a2}{} \mor{a2}{a1}{}[\atright,\dasharrow] \enddc \end{split} \end{equation} with $A_1$ and $A_2$ stable with gradings that obey $\xi(A_1) > \xi(A_2)$. Here (and henceforth), the dashed line represents a map from $A_2$ to $A_1[1]$. Thus, it represents a state with mass given by \eqref{strmass} as $m^2 = \frac{1}{\alpha'}(\xi(A_1) - \xi(A_2)) > 0$. The existence of such a triangle means that the objects $A_1$ and $A_2$ do not bind to form $E$ and instead destabilize it. This condition is very difficult to deal with in practice. It is also deficient in that it only considers two-body decays. Bridgeland invented a formalization and generalization of this notion, which I will now describe\footnote{The remainder of this section closely follows \cite{Bridgeland:2002sc}.}. The first ingredient we will need is the notion of the central charge. Since we are working with an arbitrary triangulated category, we should not refer to things like integrals as in \eqref{cencha}. We recognize the combination of the Chern and Todd classes as giving a map from the Grothendieck K-theory of the derived category of coherent sheaves to ordinary cohomology. The equation \eqref{cencha} then gives a map to $\mathbb{C}$. Thus, we define the central charge for an arbitrary triangulated category $\mathcal{D}$ to be a map $Z : K(\mathcal{D}) \to \mathbb{C}$. The group $K(\mathcal{D})$ can be infinite dimensional, however (as in the case of a torus), so it may be worthwhile to replicate the feature of \eqref{cencha} where we first apply a Chern character. One possibility is that we define the central charge to be a map from ${H\!H}_0(\mathcal{D})$ to $\mathbb{C}$. (Bridgeland suggests using periodic cyclic cohomology.) The next ingredient we need is a notion of the semistable objects of a given grade. This gives rise to the notion of a slicing defined as follows. \begin{defn} A \textit{slicing} $\mathcal{P}$ of a triangulated category $\mathcal{D}$ consists of full additive subcategories $\mathcal{P}(\xi)$ of $\mathcal{D}$ for each $\xi \in \mathbb{R}$ satisfying the following axioms: \begin{enumerate} \item for all $\xi \in \mathbb{R}$, $\mathcal{P}(\xi+1) = \mathcal{P}(\xi)[1]$. \item if $\xi_1 > \xi_2$ and $A_j \in \mathcal{P}(\xi_j)$, then $\mathrm{Hom}(A_1,A_2) = 0$.
\item for each nonzero object $E \in \mathcal{D}$, there exists a finite sequence of real numbers $\xi_1 > \xi_2 > \dots > \xi_n$ and a collection of triangles \begin{equation} \begin{split} \begindc{\commdiag}[3] \obj(3,25)[0]{$0=$} \obj(10,25)[e0]{$E_0$} \obj(32,25)[e1]{$E_1$} \obj(54,25)[e2]{$E_2$} \obj(76,25)[ed]{$\dots$} \obj(98,25)[en1]{$E_{n-1}$} \obj(120,25)[en]{$E_n$} \obj(127,25)[e]{$=E$} \obj(21,10)[a1]{$A_1$} \obj(43,10)[a2]{$A_2$} \obj(109,10)[an]{$A_n$} \mor{a1}{e0}{}[\atright,\dasharrow] \mor{e0}{e1}{} \mor{e1}{a1}{} \mor{e1}{e2}{} \mor{e2}{a2}{} \mor{a2}{e1}{}[\atright,\dasharrow] \mor{e2}{ed}{} \mor{ed}{en1}{} \mor{en1}{en}{} \mor{en}{an}{} \mor{an}{en1}{}[\atright,\dasharrow] \enddc \end{split} \end{equation} with $A_j \in \mathcal{P}(\xi_j)$ for all $j$. \end{enumerate} \end{defn} The physical import of these axioms is as follows. Each subcategory is the category of semistable objects of a fixed grading. That the subcategories are additive encodes the idea that the direct sum of two objects with the same grading has the same grading (and is marginally stable, so we can assign a grading). The first axiom is a restatement of equation \eqref{censhift}. The second axiom is needed to ensure that the relevant objects are in fact semistable. Finally, the third axiom encodes the decay of any object $E$ into a finite set of semistable objects $A_i$. The two body decays considered above constitute the case $n=2$, and the decay chain simplifies to the triangle \eqref{decaytri}. Finally, we need a compatibility between the slicing and the central charge. \begin{defn} A stability condition on a triangulated category $\mathcal{D}$ is given by a slicing of the category and a central charge $Z : K(\mathcal{D}) \to \mathbb{C}$ such that for $E \in \mathcal{P}(\xi)$ \begin{equation} Z(E) = m(E)e^{i\pi \xi} \quad \mbox{with } m(E) \in \mathbb{R}^{>0}\ . \end{equation} \end{defn} \noindent It turns out that the categories $\mathcal{P}(\xi)$ are, in fact, abelian. As mentioned above, the semistable branes are the objects in $\mathcal{P}(\xi)$ for some $\xi$, and now we can add that the stable objects are precisely the simple semistable objects considered in their respective abelian subcategories. Bridgeland has shown that, after adding a technical condition, one can form nice moduli spaces of stability conditions. In particular, the local deformations of a stability condition are precisely the deformations of the central charge. In other words, the space of infinitesimal deformations is the dual vector space to $K(\mathcal{D})$. However, this can be infinite dimensional and is certainly not the same as $H^{1,1}(M)$. Even if we restrict to ${H\!H}_0(\mathcal{D})$ as above, the HKR theorem tells us that ${H\!H}_0(\mathcal{D}(\mathrm{Coh}(X))) = \bigoplus H^i(X,\Omega^i_X)$. Nonetheless, we will proceed assuming that the stability conditions we define are, in fact, physical. \section{Constructing stability conditions}\label{sec:con} The goal of the next two sections is to construct stability conditions related to the space $K_X$ discussed in section \ref{sec:cat}. We have a plethora of triangulated categories to choose from. Before discussing the issues with each choice, let us see how one can construct a stability condition. The first tool we will need is that of a t-structure. This is a tool for finding an abelian category inside a triangulated category. To motivate the definition, consider the case of the derived category of an abelian category.
We can see that the objects of the original category are exactly the length one complexes located at position zero. To make this more formal, we use the fact that we can associate cohomology objects to any object in a derived category. Define $\mathcal{D}^{\ge n}$ to be the full subcategory of $\mathcal{D}$ of objects, $K$, such that $H^i(K) = 0$ for $i<n$. Note that $\mathcal{D}^{\ge n} = \mathcal{D}^{\ge 0}[-n]$. One can similarly define $\mathcal{D}^{\le n}$. Then $\mathcal{D}^{\ge 0} \cap \mathcal{D}^{\le 0}$ is exactly the abelian category we began with. This can be formalized as follows \cite{Bridgeland:2002sc}. \begin{defn} A t-structure on a triangulated category $\mathcal{D}$ is a pair of strictly full subcategories $\mathcal{D}^{\le 0}$ and $\mathcal{D}^{\ge 0}$ such that \begin{enumerate} \item $\mathcal{D}^{\le 0} \subset \mathcal{D}^{\le 1}$ and $\mathcal{D}^{\ge 1} \subset \mathcal{D}^{\ge 0}$ \item $\mathrm{Hom}(X,Y) = 0$ for $X \in \mathrm{Obj}(\mathcal{D}^{\le 0})$ and $Y \in \mathrm{Obj}(\mathcal{D}^{\ge 1})$ \item For any $X\in\mathrm{Obj}(\mathcal{D})$, there exists a distinguished triangle $A \to X \to B \to A[1]$ with $A \in \mathrm{Obj}(\mathcal{D}^{\le 0})$ and $B \in \mathrm{Obj}(\mathcal{D}^{\ge 1})$ \end{enumerate} where $\mathcal{D}^{\ge n} = \mathcal{D}^{\ge 0}[-n]$ and $\mathcal{D}^{\le n} = \mathcal{D}^{\le 0}[-n]$. \end{defn} $\mathcal{D}^{\ge 0} \cap \mathcal{D}^{\le 0}$ is called the \textit{heart} (or core) of the t-structure, and it is a nontrivial theorem that it is an abelian category. It is not necessarily true, however, that the derived category of the core of a t-structure is the original triangulated category, although this is obviously true for the case where the t-structure reflects that our triangulated category is the derived category of an abelian category. Regardless, the t-structure allows us to define cohomology functors valued in the heart. A t-structure is called bounded if $\cap_{n=0}^\infty \mathcal{D}^{\ge n} = \cap_{n=0}^\infty \mathcal{D}^{\le -n} = 0$ and there are only a finite number of nonzero cohomology objects for any object in $\mathcal{D}$. Given a slicing as defined above and an interval $I \subset \mathbb{R}$, we can define $\mathcal{P}(I)$ to be the extension closed subcategory generated by the $\mathcal{P}(\xi)$ for all $\xi \in I$. Then, it is shown in \cite{Bridgeland:2002sc} that, for any $\xi$, there exists a t-structure with core $\mathcal{P}((\xi,\xi+1])$. The way we will define a stability condition on a triangulated category is to first choose a bounded t-structure and then define a notion of stability on the associated abelian category, which we will denote $\mathcal{A}$. For that, we need a central charge and an associated decomposition into semistable objects. A central charge is defined similarly to the central charge on the triangulated category: it is a function $Z : K(\mathcal{A}) \to \mathbb{C}$ such that, for all nonzero objects $E \in \mathcal{A}$, $Z(E) = r e^{i\pi\xi}$ with $r>0$ and $\xi \in (0,1]$. Given a central charge, we can define a semistable object: \begin{defn} \label{stabdef} An object $E \in \mathcal{A}$ is said to be \textbf{semistable} with respect to $Z : K(\mathcal{A}) \to \mathbb{C}$ if every nontrivial subobject $F \subset E$ satisfies $\xi(F) \le \xi(E)$.
\end{defn} Next, we introduce the decomposition into semistable objects: \begin{defn} A central charge (or stability function in Bridgeland's terminology) has the \textbf{Harder-Narasimhan property} if, for every non-zero object $E$ of $\mathcal{A}$, there exists a sequence of objects $E_i$ such that \begin{equation} 0= E_0 \subset E_1 \subset \dots \subset E_{n-1} \subset E_n = E \end{equation} where the quotients $F_i = E_i / E_{i-1}$ are semistable and \begin{equation} \xi(F_1) > \xi(F_2) > \dots > \xi(F_{n-1}) > \xi(F_n)\ . \end{equation} \end{defn} Now, Proposition 5.3 of \cite{Bridgeland:2002sc} states that a stability condition on a triangulated category $\mathcal{D}$ is equivalent to giving a bounded t-structure on $\mathcal{D}$ and a central charge obeying the Harder-Narasimhan property on its heart. The intuition for this result is as follows. The central charge on the heart of the t-structure allows us to define additive subcategories $\mathcal{P}(\xi)$ for $\xi \in (0,1]$. However, by property 1 in the definition of a slicing, this determines the $\mathcal{P}(\xi)$ for all $\xi \in \mathbb{R}$. The decompositions from the Harder-Narasimhan property then fit together to give the decompositions in the slicing. The fact that this theorem goes the other way is also important. Given a stability condition, $\mathcal{P}((0,1])$ is the heart of a t-structure, and the central charge restricted to that subcategory has the Harder-Narasimhan property. In other words, we can determine the set of semistable objects solely by examining that abelian category. Finally, we note that the Harder-Narasimhan property is not a particularly stringent condition on the central charge given the following result (Prop 2.4 of \cite{Bridgeland:2002sc}): \begin{thm} \label{hnthm} Suppose $\mathcal{A}$ is an abelian category with a central charge $Z : K(\mathcal{A}) \to \mathbb{C}$ satisfying the chain conditions \begin{enumerate} \item there are no infinite sequences of subobjects in $\mathcal{A}$ \begin{equation} \dots \subset E_{j+1} \subset E_j \subset \dots \subset E_2 \subset E_1 \end{equation} with $\xi(E_{j+1}) > \xi(E_j)$ for all $j$. \item there are no infinite sequences of quotients in $\mathcal{A}$ \begin{equation} E_1 \to E_2 \to \dots \to E_j \to E_{j+1} \to \dots \end{equation} with $\xi(E_j) > \xi(E_{j+1})$ for all $j$. \end{enumerate} Then $Z$ has the Harder-Narasimhan property. \end{thm} \section{Stability conditions associated to quivers}\label{sec:quiver} We now return to the categories introduced in section \ref{sec:cat}. Recall that these were $\mathcal{D}^b(A\mathrm{- fgmod})$, the bounded derived category of finitely generated $A$-modules, $\mathcal{D}^b(A\mathrm{- fdmod})$, the bounded derived category of finite dimensional $A$-modules, $\mathcal{D}^b_\mathrm{fd}(A\mathrm{- fgmod})$, the full subcategory of $\mathcal{D}^b(A\mathrm{- fgmod})$ whose cohomology modules are finite dimensional, and $\mathcal{D}_0^b(A\mathrm{- fgmod})$, the full subcategory of $\mathcal{D}^b(A\mathrm{- fgmod})$ whose cohomology modules are tiny and which is equivalent to the full subcategory of $\mathcal{D}^b(\mathrm{Coh}(K_X))$ whose cohomology sheaves are supported on the zero section of $K_X$. Each has a t-structure associated with being a derived category. Thus, we need a central charge with the Harder-Narasimhan property in order to place a stability condition on them. Each of these categories has advantages and disadvantages towards that end.
In particular, $\mathcal{D}^b(A\mathrm{- fgmod})$ has a particularly simple K-theory: it is a vector space generated by the representatives of the exceptional collection. Unfortunately, because many of the modules are infinite dimensional, it is by no means clear that the conditions of theorem \ref{hnthm} hold. On the other hand, the theorem is obvious for $A\mathrm{- fdmod}$ on dimensional grounds. In this case, however, the K-theory may be quite complicated. In part because of these dueling difficulties, Bridgeland chose to work with $\mathcal{D}_0^b(A\mathrm{- fgmod})$. Its heart is, by construction, a finite length category, and theorem \ref{hnthm} is again obvious. In addition, because the $S_i$ are the only simple objects, the K-theory is a vector space generated by the classes of the $S_i$. Thus, we can give a stability condition on this category by assigning a number in the upper half-plane to each simple object $S_i$. Nonetheless, from the point of view of the physics, this category is unsatisfactory: it only describes branes supported on the zero section of $K_X$. The gauge theory clearly describes more than that, however. For example, it is shown in \cite{Bergman:2005kv,Bergman:2005mz} that the moduli space of vacua of the quiver gauge theory is precisely the singular cone, and that when the FI-terms are turned on, one partially or completely desingularizes the cone. We would like to understand this result in the context of this paper. While I do not understand the K-theory of the abelian category $A\mathrm{- fdmod}$, there does exist a map $K(A\mathrm{- fdmod}) \to \mathbb{N}^\text{\#nodes}$ given by the dimension vector of the representation. This can be seen by noting that the dimension vectors in a short exact sequence obey precisely the same relation as that which defines the Grothendieck group. We can then define a central charge by $Z(S_i) = r_i e^{i\pi\xi_i}$ for $r_i > 0$ and $0 < \xi_i \le 1$. By the theorem, this central charge satisfies the Harder-Narasimhan property and thus defines a stability condition on the category $\mathcal{D}^b(A\mathrm{- fdmod})$. In addition, since the heart of the standard t-structure on $\mathcal{D}^b(A\mathrm{- fgmod})$ when restricted to $\mathcal{D}^b_\mathrm{fd}(A\mathrm{- fgmod})$ is also $A\mathrm{- fdmod}$, this also defines a stability condition on $\mathcal{D}^b_\mathrm{fd}(A\mathrm{- fgmod})$.\footnote{Note that the dimension vector can be defined on $\mathcal{D}^b_\mathrm{fd}(A\mathrm{- fgmod})$ to be the alternating sum of the dimensions of the cohomology modules, thus avoiding any issues of infinite dimensionality.} It is interesting to ask what happens as our central charges leave the upper half-plane. This is addressed by Lemma 5.5 of \cite{Bridgeland:2006ss}, where we see that it is related to tilting in the derived category. As mentioned in the introduction, Berenstein and Douglas \cite{Berenstein:2002fi} have shown that this is related to Seiberg duality, thus reproducing a standard picture in the physics literature. Another interesting question is whether this stability condition could be associated to one on the category $\mathcal{D}^b(A\mathrm{- fgmod})$. In fact, it is not difficult to see that the central charge map defined above does not factor through $K(A\mathrm{- fgmod})$. To see this, we will use the generalized Ringel resolution of \cite{Bocklandt:2006gc,Ginzburg:2006cy}, discussed further in \cite{Bergman:2006gv}.
This states that for an arbitrary representation of our CY quiver, $Q$, the following is a projective resolution: \begin{multline} 0\longrightarrow \hspace{-.4cm} \bigoplus_{i\in \text{Nodes}(Q)}\hspace{-.4cm} P_i \otimes V(i) \longrightarrow \hspace{-.2cm} \bigoplus_{a \in \mathrm{Arr}(Q)}\hspace{-.3cm} P_{s(a)} \otimes V(t(a)) \longrightarrow \hspace{-.2cm} \bigoplus_{a \in \mathrm{Arr}(Q)}\hspace{-.3cm} P_{t(a)}\otimes V(s(a)) \longrightarrow\\ \hspace{-.4cm} \bigoplus_{i\in \text{Nodes}(Q)} \hspace{-.4cm} P_{i} \otimes V(i) \longrightarrow V \longrightarrow 0\ . \end{multline} Now, let $d_i = \mathrm{dim} V(i)$. From this resolution, we have that in $K(A\mathrm{- fgmod})$, \begin{equation} \label{repcharge} \begin{split} [V] &= \sum_{i\in \text{Nodes}(Q)} d_i [P_i] - \sum_{a \in \mathrm{Arr}(Q)} d_{s(a)} [P_{t(a)}] + \sum_{a \in \mathrm{Arr}(Q)} d_{t(a)} [P_{s(a)}] - \sum_{i\in \text{Nodes}(Q)} d_i [P_i] \\ &= \sum_{a \in \mathrm{Arr}(Q)} \left(d_{t(a)} [P_{s(a)}] - d_{s(a)} [P_{t(a)}]\right)\ . \end{split} \end{equation} Applying \eqref{repcharge} to the simple representation $S_i$, we obtain: \begin{equation} \label{simpcha} [S_i] = \sum_{t(a) = i} [P_{s(a)}] - \sum_{s(a) = i} [P_{t(a)}]\ . \end{equation} Now, we can rewrite \eqref{repcharge} as: \begin{equation} \label{srepcha} \begin{split} [V] &= \sum_{a \in \mathrm{Arr}(Q)} \left(d_{t(a)} [P_{s(a)}] - d_{s(a)} [P_{t(a)}]\right) \\ &= \sum_{i\in \text{Nodes}(Q)} d_i \left(\sum_{t(a) = i} [P_{s(a)}] - \sum_{s(a) = i} [P_{t(a)}] \right)\\ &= \sum_{i\in \text{Nodes}(Q)} d_i [S_i]\ . \end{split} \end{equation} In other words, the central charge of any representation is completely determined by the central charges of the simple representations $S_i$, just as with our assignments. Now, let us choose a skyscraper sheaf. Because $K_X$ is noncompact, the K-theory class of this sheaf is trivial. We also know that it has a dimension vector given by $N_i = \text{rank}(E_i)$. Thus, we have \begin{equation} \label{simprel} \sum N_i [S_i] = [\mathcal{O}_x] = 0\ . \end{equation} As the central charges of the $[S_i]$ all have nonnegative imaginary part, we see that it is impossible to satisfy this relation. This suggests that the geometric category we should be considering ought to carry some sort of compact support condition on the sheaves. For example, the full subcategory of the derived category consisting of objects with proper support\footnote{The support of an object in the derived category is the union of the supports of its cohomology sheaves.} is a Calabi-Yau category (in the sense of Kontsevich). This is currently under investigation. Next, we would like to see how this notion of stability compares with the notion of stability in a gauge theory. In the construction of the classical moduli space of the quiver gauge theory one does a K\"ahler quotient of the configuration space by the compact gauge group. The FI-terms serve as the moment map in this construction. When they are rational numbers, it is well-known \cite{Kirwan,Luty:1995sd} that this K\"ahler quotient is equivalent to the GIT-quotient by the complexified gauge group. A stability condition in the GIT sense is given by an equivariant line bundle over the configuration space. Since the configuration space is an affine variety, this is just a series of characters for the various gauge groups, $U(N_i)$, whose complexifications are $GL(N_i,\mathbb{C})$. Thus, we have a sequence of integers $\theta_i$.
Given a set of rational $f_i$, we obtain integral $\theta_i$ by multiplying by the negative of the lcm of the denominators.\footnote{For irrational $f_i$, choose rational numbers sufficiently close so as to not affect the consequent moduli space.} These obey $\sum \theta_i N_i = 0$. We can now use the characterization of semistable and stable representations due to King \cite{King:1994mr}: \begin{defn} Let $R$ be a representation of $Q$ with dimension vector $N_i$. Let $\theta_i$ be a vector of integers such that $\sum N_i \theta_i = 0$. Then, we say that the representation is $\theta$-\textbf{semistable} if, for all proper subrepresentations $S\subset R$, $\sum \theta_i \mathrm{dim}(S(i)) \ge 0$. Furthermore, if strict inequality holds for all proper subrepresentations, we say that $R$ is $\theta$-\textbf{stable}. \end{defn} To determine stability in the Bridgeland sense, note that we are working solely with actual representations (as opposed to complexes of representations). Thus, we can restrict to the t-structure defined by the category of quiver representations. The central charge defines a notion of stability as in definition \ref{stabdef}. In particular, a representation is semi-stable if, for all subrepresentations $U\subset V$, $\xi(U) \le \xi(V)$. As above, we have $Z(S_i) = r_i e^{i\pi\xi_i}$. If we assume that the $\xi_i$ are all close to a given value $\xi_i = \xi + \epsilon_i$, we have \begin{equation} Z(U) = \sum_i \mathrm{dim}(U(i)) Z(S_i) \sim e^{i\pi\xi}\sum_i \mathrm{dim}(U(i)) r_i(1 + i \pi \epsilon_i)\ , \end{equation} giving \begin{equation} \xi(U) = \xi + \frac{1}{\pi} \tan^{-1} \left(\pi\frac{\sum \mathrm{dim}(U(i)) r_i \epsilon_i}{\sum \mathrm{dim}(U(i)) r_i}\right) \sim \xi + \frac{\sum \mathrm{dim}(U(i)) r_i \epsilon_i}{\sum \mathrm{dim}(U(i)) r_i}\ . \end{equation} Let us denote $\mathrm{dim}(U(i)) = U_i$ and similarly for $V$. The condition $\xi(U) \le \xi(V)$ then becomes: \begin{equation} \frac{\sum U_i r_i \epsilon_i}{\sum U_i r_i} - \frac{\sum V_i r_i \epsilon_i}{\sum V_i r_i} \le 0\ , \end{equation} which is equivalent to \begin{equation} \label{xiineq} \sum_{i} U_i \left(r_i \left(\epsilon_i \sum_j r_j V_j - \sum_j r_j V_j \epsilon_j\right)\right) \le 0\ . \end{equation} Now, fix a dimension vector $N_i$ (for the quiver gauge theory, we have $N_i = \text{rank}(E_i)$). Define \begin{equation} \theta_i = -r_i \left(\epsilon_i \sum r_j N_j - \sum r_j N_j \epsilon_j\right)\ , \end{equation} and \begin{equation} \label{thfdef} \theta(U) = -\sum_{i} U_i \left(r_i \left(\epsilon_i \sum_j r_j N_j - \sum_j r_j N_j \epsilon_j\right)\right)\ . \end{equation} It is straightforward to verify that $\theta(V) = \sum \theta_i N_i = 0$. In addition, $\theta(U) \ge 0$ implies \eqref{xiineq} by construction. Thus, we have proven\footnote{This result is essentially in Douglas \cite{Douglas:2000ah}.}: \begin{thm} Define a stability condition on the derived category of finite-dimensional representations of a Calabi-Yau quiver by a choice of a central charge which only depends on the dimension vector as above. Fix a dimension vector $N_i$ and define $\theta$ as in equation \eqref{thfdef}. Then, for all $\xi_i$ sufficiently close to a fixed angle $\xi$, a representation being (semi-)stable with respect to $\theta$ in the GIT sense implies that it is (semi-)stable with respect to our stability condition on the derived category. \end{thm} Finally, we need to relate this to the physics, in particular to the central charges and gauge couplings of the dual field theory.
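Before turning to the physics, it may help to make the theorem concrete in the smallest possible case; the numerical values below are ours, chosen purely for illustration. Take a quiver with two nodes, dimension vector $N=(1,1)$, $r=(1,2)$ and gradings $\xi_i=\xi+\epsilon_i$. Then $\sum_j r_j N_j = 3$ and $\sum_j r_j N_j \epsilon_j = \epsilon_1 + 2\epsilon_2$, so that \begin{equation*} \theta_1 = -1\cdot\left(3\epsilon_1 - (\epsilon_1+2\epsilon_2)\right) = -2(\epsilon_1-\epsilon_2)\ , \qquad \theta_2 = -2\cdot\left(3\epsilon_2 - (\epsilon_1+2\epsilon_2)\right) = 2(\epsilon_1-\epsilon_2)\ , \end{equation*} and indeed $\theta_1 N_1 + \theta_2 N_2 = 0$. For the subrepresentation with dimension vector $U=(1,0)$, equation \eqref{thfdef} gives $\theta(U) = \theta_1 \ge 0$ precisely when $\epsilon_1 \le \epsilon_2$, which is the slope condition $\xi(U) \le \xi(V)$, since $\xi(U) \sim \xi + \epsilon_1$ while $\xi(V) \sim \xi + (\epsilon_1 + 2\epsilon_2)/3$.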
According to Douglas, the mass of the level one string state between nodes $i$ and $j$ is given by \eqref{strmass} as $m^2 = \frac{1}{\alpha'}(\xi_i - \xi_j)$. In the gauge theory, we have the gauge couplings at each node $1/g_i^2$ and the FI-term $f_i$. Computing the D-term potential for a bifundamental, we obtain \begin{equation} m^2 = g_i^2 f_i - g_j^2 f_j\ . \end{equation} We now identify the $\theta_i$ with the FI-terms, $f_i$, of the gauge theory. In particular, let \begin{equation} \begin{split} \frac{1}{g_i^2} &= r_i \\ f_i &= \frac{r_i \epsilon_i}{\alpha'} - \frac{r_i \sum r_j N_j \epsilon_j}{\alpha'\sum r_j N_j}\ . \end{split} \end{equation} If we change our conventions to consider $f_i/r_i$ as the appropriate term, these only differ from the assignments of \cite{Berenstein:2002fi,Douglas:2000ah} by the $\alpha'$ and an overall constant. Our approach to this issue has closely followed that of Douglas and collaborators. There is another approach, however, due to Aspinwall \cite{Aspinwall:2004mb}. Define $\theta(U)$ as follows: \begin{equation} \theta(U) = - \text{Im }\frac{\sum U_i r_i e^{i\pi\xi_i}}{\sum N_i r_i e^{i\pi\xi_i}}\ . \end{equation} It is straightforward to see that this is linear in the dimension vector, obeys $\theta(V) = 0$, and that $\theta(U) \ge 0$ implies $\xi(U) < \xi(V)$. This shows that there exists a theta which reproduces Bridgeland stability for a given dimension vector. This also reduces to the above formulae in the case that the gradings almost align. However, the coefficients in this theta do not reproduce the usual expressions for the FI-terms in terms of the gradings. Regardless of which choice of theta we make, it is not the case that we can say that Bridgeland stability and GIT stability are equivalent. For one, the GIT notion of stability is restricted to a fixed dimension vector while that of Bridgeland makes no such assumption. This is probably not a serious issue, as we have seen that fixing a dimension vector can be rephrased as restricting to a subspace of the K-theory. A more serious issue is that GIT gives a \textit{moduli space} of stable objects, while no one has, to my knowledge, defined the appropriate notion of a coarse moduli space of Bridgeland stable objects in a triangulated category. To conclude this section, we will note that any skyscraper sheaf not supported on the zero section of $K_X$ is a Bridgeland stable object in any stability condition as defined in this section. This is a counterpart to the theorem of \cite{Bergman:2005kv} that $K_X$ with the zero section collapsed embeds into the gauge theory moduli space. Thus, a D3-brane anywhere outside the zero section is stable. When it is located on the zero section, however, it becomes part of Bridgeland's category $\mathcal{D}_0^b(\mathrm{Coh}(K_X))$. This means that it corresponds to a tiny representation and hence is at best semistable, as the only simple representations in this category are the $S_i$. Thus, there exists a decay chain into a collection of the $S_i$. On dimensional grounds, we know that the number of these must exactly replicate the $N_i$, the dimension vector of the original representation. Thus, we have verified the existence of the decay of the D3-brane into fractional branes precisely when it is located on the zero section of $K_X$, and that it is stable everywhere else.
In order to show stability of $\mathcal{O}_x$ off the zero section, it suffices to prove the following: \begin{thm}\label{th:sta} Let $\mathcal{O}_x$ be a skyscraper sheaf not supported on the zero section of $K_X$. Then, the representation corresponding to $\mathcal{O}_x$ under the equivalence of categories given in section \ref{sec:cat} is a simple representation. \end{thm} The proof of this theorem is given in the next section. In combination with the results of \cite{Bergman:2005mz}, it shows that the complement of the zero section embeds into the moduli space of representations (in the GIT sense) with any GIT stability condition, and that this embedding is an isomorphism on tangent spaces. In particular, this result does not depend on the exceptional collection being composed of line bundles. It would be interesting to understand what occurs at the zero section in this general context. The relation of this mathematical result to the physics deserves further explanation. What it says is that any D3-brane off the zero section of $K_X$ is stable in any stability condition defined in this section. What can vary, however, is the set of fractional branes that the D3-brane can decay into when located on the zero section. We have seen that the decay is determined by the choice of stability condition which, in turn, is determined by the choice of a category of quiver representations. In other words, the different quivers corresponding to different exceptional collections correspond to different open regions in a generalized K\"ahler moduli space of our theory. Furthermore, quivers related by mutation can arise by passing to adjacent regions in this moduli space, a procedure related to Seiberg duality \cite{Herzog:2004qw,Aspinwall:2004vm,Cachazo:2001sg}. Without further understanding the relation between Bridgeland's space of stability conditions and the physical moduli space, however, we cannot say if all these regions occur in the actual string theory. \section{The proof of the theorem} In order to prove theorem \ref{th:sta}, we first need to define the center of a category. \begin{defn} The center of a category, $Z(\mathcal{C})$, is given by the set of natural transformations from the identity to itself. \end{defn} For the categories we are using, this is the zeroth Hochschild cohomology group ${H\!H}^0(\mathcal{C})$. As discussed in \cite{Bergman:2006gv}, when $\mathcal{C} \cong \mathcal{D}^b(Coh(X))$, ${H\!H}^0(\mathcal{C}) = \Gamma(\mathcal{O}_X)$, \textit{i.e.,\ } global functions on $X$. On the other hand, when $\mathcal{C} \cong \mathcal{D}^b(A\mathrm{- fgmod})$, we have that ${H\!H}^0(\mathcal{D}^b(A \mathrm{- fgmod})) = Z(A)$, the center of the algebra $A$. An object in the center gives rise to an endomorphism of every object of our category satisfying certain consistency rules. As both $\mathcal{D}^b(A\mathrm{- fdmod})$ and $\mathcal{D}^b_\mathrm{fd}(A\mathrm{- fgmod})$ are full subcategories of $\mathcal{D}^b(A\mathrm{- fgmod})$, the action of the center $Z(A)$ of $\mathcal{D}^b(A\mathrm{- fgmod})$ descends to an action on them. This action is simple to determine. Given a representation $V$ of $A$, we have a map $r : A \to \mathrm{End}(V)$. Given an element $z \in Z(A)$, we have a map $r(z) : V \to V$. Essentially by definition, this is an endomorphism of the representation and gives rise to an endomorphism in the derived category. The action of the center on the derived category of coherent sheaves is also straightforward.
Any object in this category can be represented as a bounded complex of coherent sheaves, \textit{i.e.,\ } $\mathcal{O}_X$-modules. Thus, they are certainly acted on by global functions, and it is obvious that this gives rise to an endomorphism in the derived category. Now, recall our situation. We have an equivalence of categories $F : \mathcal{D}^b(Coh(Y)) \to \mathcal{D}^b(A \mathrm{- fgmod})$ given by a locally free tilting sheaf $T$ with $A = \mathrm{End}_Y(T)^\mathrm{op}$. In particular, $F(\mathcal{E}) = \mathbb{R}\mathrm{Hom}(T,\mathcal{E})$ which is a complex of $\mathrm{End}_Y(T)$-modules. We need the following lemmas. In what follows, ``point" will always refer to a closed point. \begin{lma} \label{lma:stalk} Let $\mathcal{A}$ be a non-zero indecomposable object in $\mathcal{D}^b(Coh(Y))$ such that the support of its cohomology sheaves is solely at the point $p$. Then $\mathcal{A}$ is isomorphic to the image of a shift of the skyscraper sheaf $\mathcal{O}_p$ in the derived category. \end{lma} \begin{proof} Given any sheaf $\mathcal{F}$ and point $p$, there is a map $\mathcal{F} \to \mathcal{F}_p / \mathfrak{m}_p \mathcal{F}$ where the latter sheaf is a skyscraper supported at the point $p$ whose fiber is the fiber of $\mathcal{F}$ at $p$. Thus, if we represent $\mathcal{A}$ as a bounded complex of sheaves, there is a chain map from $\mathcal{A}$ to a complex of skyscrapers supported at the point $p$. This obviously induces an isomorphism in cohomology, so it is an isomorphism in the derived category. Since coherent sheaves on a point are vector spaces, the assumption of indecomposability means that we must have only a one-dimensional vector space, thus proving the lemma. \end{proof} \begin{lma} \label{lma:sky} $F(\mathcal{O}_p)$ can be considered as the image in any of the derived categories of an object in $A \mathrm{- fdmod}$, and this object is given by the dual of the fiber of $T$ at the point $p$ with its action of $\mathrm{End}_Y(T)$. Furthermore, the action of the center is given by scalar multiplication by the value of the global function at the point $p$.\end{lma} \begin{proof} This is essentially obvious. Since skyscraper sheaves have no higher cohomology, $\mathbb{R}\mathrm{Hom}(T,\mathcal{O}_p)$ is an honest representation. Furthermore, the vector space $\mathrm{Hom}(T,\mathcal{O}_p)$ is precisely the dual of the fiber of $T$ at the point $p$. It is finite dimensional by the coherence of $T$. The action of the center is the action of global functions on the fiber and is precisely as given in the statement of the lemma. \end{proof} We can now prove theorem \ref{th:sta}, which we restate in a more general form. \begin{thm} Let $Y$ be a smooth variety with a locally free tilting sheaf $T$, $\pi$ be the projection from $Y$ to its affinization, and $p$ be a point in $Y$ such that $\pi^{-1}\pi(p) = p$. Then the $\mathrm{End}_Y(T)$-module corresponding to $\mathcal{O}_p$ is a simple module.\end{thm} \begin{proof} By Lemma \ref{lma:sky}, the representations corresponding to skyscraper sheaves are acted upon by the center, $Z$, by scalar multiplication. Let $M$ be a non-zero indecomposable subrepresentation of $F(\mathcal{O}_p)$. We want to show that $M$ is isomorphic to $F(\mathcal{O}_p)$. $Z$ acts on $M$ by precisely the same scalar multiplication as it acts on $F(\mathcal{O}_p)$. By the equivalence of categories, there exists an object $\widetilde{M}$ in the derived category of coherent sheaves such that $F(\widetilde{M}) \cong M$.
The center acts on the cohomology sheaves of this object by scalar multiplication. This scalar multiplication provides a character of the ring of global functions and thus a point, $q$, on the affinization of $Y$. By examining the action of the center on the stalks, we see that the support of the cohomology sheaves of $\widetilde{M}$ lies in the inverse image of $q$ in $Y$. Furthermore, by construction, the point $p$ projects to $q$ on the affinization. As we have assumed that $\pi^{-1}\pi(p)=p$, we see that the cohomology sheaves of $\widetilde{M}$ are supported at $p$. Thus, by lemma \ref{lma:stalk}, $\widetilde{M}$ is a shift of a skyscraper sheaf. Since the skyscraper corresponds to an actual representation, \textit{i.e.,\ } an object in the heart of the t-structure corresponding to the abelian category of representations, we see that shifting it would take it out of the heart. As we have assumed that $M$ is an honest representation and lies in the heart, it must be isomorphic to $F(\mathcal{O}_p)$. \end{proof} In our case, we have $Y = K_X$, and the failure of the argument for a skyscraper supported on the zero section is straightforward. The inverse image of the corresponding point on the affinization is the entire zero section of $K_X$. The simple tiny representations are push-forwards of sheaves on $X$ by the zero section and are thus possible subrepresentations. The derived category of coherent sheaves whose cohomology sheaves are supported on the zero section is precisely the category $\mathcal{D}_0^b(A\mathrm{- fgmod})$ considered by Bridgeland, whose heart is a finite length abelian category. \acknowledgments I would like to thank David Ben-Zvi, Tom Bridgeland and Jason Kumar for useful conversations and e-mails on this project. I would also like to thank the Institute for Advanced Study for their hospitality while a portion of this work was being completed. This material is based upon work supported by the National Science Foundation under Grant Nos. PHY-0505757 and PHY-0555575 and Texas A\&M University. \bibliographystyle{utphys}
\section{Introduction} Wasserstein distances are metrics between probability distributions that are inspired by the problem of optimal transportation. These distances (and the optimal transport problem) are ubiquitous in mathematics, notably in fluid mechanics, partial differential equations, optimisation, and, of course, probability theory and statistics. In addition to their theoretical importance, they have provided a successful framework for the comparison of (at times complex) objects in fields of application such as image retrieval \citep{rubner2000earth}, computer vision \citep{ni2009local}, pharmaceutical statistics \citep{munk1998nonparametric}, genomics \citep{bolstad2003comparison,evans2012phylogenetic}, economics \citep{gini1914di} and finance \citep{rachev2011probability}, to name but a few. Indeed, while their origins lie with Monge's (primarily mathematical) enquiry into how to optimally transport a pile of earth of a given volume into a pit of equal volume but potentially different shape, Kantorovich's modern reformulation, which catalysed the development of this rich theory, was inspired by the concrete problem of optimal resource allocation. Unsurprisingly, there is a vast literature on Wasserstein distances and optimal transportation, originally rooted primarily in analysis and probability, but later branching out to quantitative fields well beyond. In statistics, Wasserstein distances play a prominent role in theory and methodology, and more recently have become an object of inference in themselves. In his thousand-page book, \cite{villani2008optimal} writes that reviewing the optimal transport literature is a ``\emph{dauntingly difficult task}". And, if one focusses more narrowly on \emph{statistical} aspects of Wasserstein distances, it is still impossible to carry out a comprehensive review in the order of thirty pages. We thus restrict ourselves to a high level overview of some salient aspects and main concepts, admittedly influenced by our own perspective and interests, and apologise for the inevitable omissions. \subsection{Overview} Wasserstein distances appear in statistics in several ways. We delineate three broad categories of statistical use of these distances, according to which we will structure our review: \begin{enumerate} \item[(1)] Wasserstein distances and the associated notion of an optimal coupling are often exploited as a versatile tool in asymptotic theory, due to the topological structure they induce and their relatively easy majorisation, and Section~\ref{sec:tool} reviews some of their appealing features in that context. \item[(2)] In other cases, Wasserstein distances are employed as a methodological tool, in order to carry out statistical inference, primarily involving structural models and goodness-of-fit testing. Section~\ref{sec:inference} describes key methods and results in this vein. \item[(3)] Finally, a recent trend in functional data analysis is to consider the space of probability measures equipped with a Wasserstein distance as a sample/parameter space itself, a direction that is taken up in Section~\ref{sec:statWass}. \end{enumerate} In contexts such as (2) and (3), it is often important to carry out explicit computations related to the Wasserstein distance, and Section~\ref{sec:numerics} gives a brief overview on such numerical aspects. First, though, the next subsection reviews the basic definitions and relevant notions that we require throughout our review. 
\subsection{Basic Notions} The $p$-Wasserstein\footnote{Also known as \emph{Mallows' distance}, \emph{Earth mover's distance}, \emph{(Monge--)Kantorovich(--Rubinstein) distance} or \emph{Fr\'echet distance} (when $p=2$). The terminology \emph{Wasserstein distance} became popular, mainly in Western literature, following \cite{dobrushin1970prescribing} who studied some of its topological properties and referred to an earlier work by Wasserstein. See \citet[page 118]{villani2008optimal} and \citet[page 4]{bobkov2014one} for more details.} distance between probability measures $\mu$ and $\nu$ on $\R^d$ is defined as \begin{equation}\label{prob_definition} W_p(\mu,\nu) =\underset{\tiny \begin{array}{c}X\sim\mu \\Y\sim\nu\end{array}}{\inf}\left(\E \|X - Y\|^p\right)^{1/p}, \qquad p\ge1, \end{equation} where the infimum is taken over all pairs of $d$-dimensional random vectors $X$ and $Y$ marginally distributed as $\mu$ and $\nu$, respectively (an obviously nonempty set, since one can always construct independent random variables with prescribed marginals). For convenience, we shall use both notations $W_p(X,Y)$ and $W_p(\mu,\nu)$ interchangeably, whenever $X\sim\mu$ and $Y\sim\nu$. The distance is finite provided the $p$-th moments exist, $\E \|X\|^p+\E\|Y\|^p<\infty$, and this will be tacitly assumed in the sequel. The definition generalises to laws defined on much more general spaces: if $(\mathcal X,\rho)$ is any complete and separable metric space, $W_p$ can be defined in the same way, with $\|X-Y\|$ replaced by the metric $\rho(X,Y)$. In particular, this setup incorporates laws on infinite-dimensional function spaces such as $L^2[0,1]$. For simplicity, we restrict to the setting where $\mathcal{X}$ is a normed vector space, employing the notation $(\mathcal X,\|\cdot\|)$ henceforth. The optimisation problem defining the distance is typically referred to in the literature as \emph{optimal transport(ation)} or the \emph{Monge--Kantorovich} problem. When $X$ and $Y$ take values on the real line, their joint distribution is characterised by specifying their marginal distributions and a copula \citep{sklar1959fonctions}. Since the marginals here are fixed to be the laws of $X$ and $Y$, the problem is to find a copula that couples $X$ and $Y$ together as ``tightly" as possible in an $L_p$-sense, on average; if $p=2$ then the copula one seeks is the one that maximises the correlation (or covariance) between $X$ and $Y$, i.e., the copula inducing maximal linear dependence. The Wasserstein distances $W_p$ are proper distances in that they are nonnegative, symmetric in $X$ and $Y$, and satisfy the triangle inequality. A compactness argument shows that the infimum in their definition is indeed attained (if $\mathcal X$ is complete). The space of measures with $p$-th moments finite, the \emph{Wasserstein space} $\W_p(\mathcal X)$, when endowed with the distance $W_p$, is complete and separable if $\mathcal X$ is so. Although many other metrics can be defined on the space of probability measures \citep{rachev1991probability,gibbs2002choosing}, Wasserstein distances exhibit some particularly attractive features: \begin{itemize} \item They incorporate the geometry of the ground space $\mathcal X$: if $X$ and $Y$ are degenerate at points $x,y\in\mathcal X$, then $W_p(X,Y)$ is equal to the distance between $x$ and $y$ in $\mathcal X$. 
This property hints at why Wasserstein distances are successful in imaging problems and why they can capture the human perception of whether images are similar or not (see Section~\ref{sec:statWass}). \item Convergence of $X_n$ to $X$ in Wasserstein distance is equivalent to convergence in distribution, supplemented with $\E \|X_n\|^p\to \E\|X\|^p$. This makes $W_p$ convenient for proving central limit theorem-type results (see Section~\ref{sec:tool}). \item Since they are defined as the solution of minimisation problems, they are quite easy to bound from above: \emph{any} joint distribution with the correct marginals provides an upper bound for the Wasserstein distance (see Section~\ref{sec:tool}). Moreover, they enjoy some differentiability, allowing for application of the delta method (see Section~\ref{sec:inference}). \end{itemize} \noindent Further to the ``probabilistic" definition (Definition \ref{prob_definition}), one can consider the ``analytic" definition, which helps dissect the structure of the Monge--Kantorovich optimisation problem: \begin{equation}\label{analyst_definition} W_p(\mu,\nu) =\left ( \inf_{\gamma\in\Gamma(\mu,\nu)}\ownint{\mathcal X\times\mathcal X}{}{\|x-y\|^p}{\gamma(x,y)} \right)^{1/p}. \end{equation} Here $\Gamma(\mu,\nu)$ is the set of probability measures $\gamma$ on $\mathcal X\times\mathcal X$ satisfying $\gamma(A\times\mathcal X)=\mu(A)$ and $\gamma(\mathcal X\times B)=\nu(B)$ for all Borel subsets $A,B\subseteq\mathcal X$. Elements $\gamma\in\Gamma(\mu,\nu)$ are called \emph{couplings} of $\mu$ and $\nu$, i.e., joint distributions on $\mathcal X\times\mathcal X$ with prescribed marginals $\mu$ and $\nu$ on each ``axis", which hopefully elucidates the equivalence to Definition \ref{prob_definition}. Definition \ref{analyst_definition} has a simple intuitive interpretation in the discrete case: given a $\gamma\in \Gamma(\mu,\nu)$, and any pair of locations $(x,y)$, the value of $\gamma(x,y)$ tells us what proportion of $\mu$'s mass at $x$ ought to be transferred to $y$, in order to reconfigure $\mu$ into $\nu$. Quantifying the effort of moving a unit of mass from $x$ to $y$ by $\|x-y\|^p$ yields the interpretation of $W_p(\mu,\nu)$ as the minimal effort required to reconfigure $\mu$'s mass distribution into that of $\nu$. Definition \ref{analyst_definition} underlines that the feasible set $\Gamma$ is convex and that the objective function is (up to the power $1/p$) linear in $\gamma$. Optimal $\gamma$'s can thus be expected to be extremal, that is, relatively \emph{sparse}. Examples of such sparse couplings are \emph{deterministic} ones, i.e., couplings supported on the graph of some deterministic function $T:\mathcal X\to\mathcal X$, rather than on $\mathcal X\times \mathcal X$, so that they can be realised as \[ \gamma(A\times B)=\mu(A\cap T^{-1}(B)). \] Such a coupling reassigns \emph{all} of $\mu$'s mass at a given location to a \emph{unique} destination. When the vector $(X,Y)$ is distributed according to such a $\gamma$, its two coordinates are completely dependent: $Y=T(X)$ for the deterministic function $T:\mathcal X\to\mathcal X$. Such $T$ is called an \emph{optimal transport map} and must satisfy $\nu(B)=\mu(T^{-1}(B))$ for all $B\subseteq\mathcal X$ if $\gamma$ is to be in $\Gamma$, i.e., $T$ \emph{pushes $\mu$ forward to $\nu$} (denoted by $T\#\mu=\nu$). Figure~\ref{fig:illustrationTransport} illustrates these definitions. 
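For readers who wish to experiment numerically, the following minimal Python sketch computes the empirical 2-Wasserstein distance between two one-dimensional samples; the distributions, sample size and seed are illustrative choices of ours. On the real line, the optimal coupling of two equally sized samples simply pairs order statistics, a special case of the explicit quantile formulae given later in this section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)   # sample from mu = N(0, 1)
y = rng.normal(1.0, 2.0, n)   # sample from nu = N(1, 2^2)

# The optimal coupling pairs order statistics: the i-th smallest
# x-value is matched with the i-th smallest y-value.
xs, ys = np.sort(x), np.sort(y)
p = 2
w_p = np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)
print(w_p)  # close to sqrt(2): by the Gaussian formula given below,
            # W_2^2 = (0 - 1)^2 + (1 - 2)^2 = 2 in this example
\end{verbatim}
The sorted pairing also exhibits the empirical optimal transport map: it sends the $i$-th order statistic of the first sample to the $i$-th order statistic of the second.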
\begin{figure} \begin{center} \includegraphics[trim=0mm 123mm 0mm 0mm, clip, scale=0.8]{fig1.pdf} \includegraphics[trim=0mm 121mm 0mm 0mm, clip, scale=0.8]{fig2.pdf} \end{center} \caption{Illustration of the ``analytic" and ``probabilistic" definitions. The top row of plots shows the densities of two Gaussian probability measures $\mu$ (on the left, in blue) and $\nu$ (on the right, in red), and the optimal deterministic map $T$ (in the middle) that deforms $\mu$ into $\nu$, i.e., $T\#\mu=\nu$. The map is plotted in the form of the vector field $T(x)-x$, where each arrow indicates the source and destination of the mass being transported. Reversing the direction of the arrows would produce the inverse map, optimally deforming the measure $\nu$ to obtain $\mu$. The bottom row features two random samples $X_1,\ldots,X_N\stackrel{\mathrm{i.i.d.}}{\sim}\mu$ (on the left, in blue) and $Y_1,\ldots,Y_N\stackrel{\mathrm{i.i.d.}}{\sim}\nu$ (on the right, in red), for $N=120$. The sample $\{X_i\}_{i=1}^{N}$ was constructed by sampling $\mu$ directly. The sample $\{Y_i\}_{i=1}^{N}$ was constructed by applying the optimal map $T$ to the sample $\{X_i\}_{i=1}^{N}$, i.e. $Y_i=T(X_i)$. The plot in the middle illustrates how the sample $\{X_i\}_{i=1}^{N}$ is re-arranged in order to produce the sample $\{Y_i\}_{i=1}^{N}$, by plotting the vectors $T(X_i)-X_i$. The optimality of $T$ can be understood in terms of minimising the average squared length of these arrows. In all plots, the $x$ and $y$ axes range from $-3$ to $3$.} \label{fig:illustrationTransport} \end{figure} As it turns out, under sufficient regularity, it is precisely such deterministic couplings that are optimal. When $\mathcal X=\R^d$ is finite-dimensional and $\mu$ is absolutely continuous with respect to Lebesgue measure, the infimum (if finite) is attained (uniquely if $p>1$) by such a deterministic coupling. In this case we denote the map $T$ inducing the coupling by $\topt XY$ or $\topt\mu\nu$. In the next paragraph we briefly sketch the arguments leading to this result. As the problem is analytical in nature, characterising the solutions requires some tools from mathematical analysis. We have attempted to avoid technicalities to the extent possible, but with optimal transport ``the devil is in the details", as the problem is qualitatively different depending on whether the random variables are discrete or continuous. The less mathematically-inclined reader can skip to the paragraph containing Equation~\ref{eq:optimalMap}, simply retaining the loose statement that in the quadratic case $p=2$, optimal maps are characterised as gradients of convex functions. Our presentation mainly follows \cite{villani2003topics}; more references are given at the end of this section. \textbf{Uniqueness and characterisation.} Like any convex optimisation problem, the Monge--Kantorovich problem admits a dual, consideration of which leads to a \emph{characterisation} of optimal maps. The dual problem can be seen to be \[ \sup_{\phi,\psi}\Big\{ \E \phi(X) + \E \psi(Y)\Big\}, \qquad \textrm{subject to }\quad \phi(x) + \psi(y) \le \|x-y\|^p \] for integrable $\phi$ and $\psi$. The inequality $\E \phi(X) + \E\psi(Y)\le \E\|X-Y\|^p$ implies \emph{weak duality}, in that the above supremum is no larger than the infimum in Definition \ref{prob_definition}. But under mild conditions one has, in fact, \emph{strong duality}, and there exist a pair $(\phi,\psi)$ and a joint coupling $\gamma$ such that $\E \phi(X) + \E\psi(Y)=\E_\gamma\|X-Y\|^p$.
Furthermore, a version of \emph{complementary slackness} holds between the two optimal solutions, in such a way that one provides a lot of information on the other. This is best demonstrated in the quadratic case $p=2$, by virtue of the factorisation $\|x-y\|^2=\|x\|^2 + \|y\|^2 - 2\innprod xy$. Algebraic manipulations then allow the dual to be recast as \[ \inf_{\varphi,\Psi}\Big\{ \E \varphi(X) + \E \Psi(Y)\Big\}, \qquad \textrm{subject to }\quad \varphi(x) + \Psi(y) \ge \innprod xy. \] A simple yet consequential observation is that for a given $\varphi$, the best candidate for $\Psi$ is the \emph{Legendre transform} of $\varphi$, \[ \varphi^*(y) =\sup_{x\in\mathcal X}\{ \innprod xy - \varphi(x)\}, \] the smallest function satisfying $\varphi^*(y)+\varphi(x)\ge \innprod xy$. Iterating this idea amounts to replacing $\varphi$ by $\varphi^{**}=(\varphi^*)^*$, which is no larger than $\varphi$ yet still obeys the constraint $\varphi^{**}(x)+\varphi^*(y)\ge \innprod xy$. The choice $\Psi=\varphi^*$ makes the dual \emph{unconstrained}, and $\varphi$ is optimal if and only if $\varphi(x)+\varphi^*(y)=\innprod xy$ with probability one with respect to $X$ and $Y$. Going back to the primal problem, we see that once an optimal $\varphi$ is found, a joint distribution will be optimal if and only if it assigns unit probability to the event $\varphi(X)+\varphi^*(Y)=\innprod XY$. Furthermore, $\varphi$ itself may be assumed to be the Legendre transform of $\varphi^*$, namely $\varphi=\varphi^{**}$. At this stage one can invoke the rich theory of convex analysis. Legendre transforms are always convex, and the equality $\varphi(x)+\varphi^*(y)=\innprod xy$ holds if and only if $y$ is a \emph{subgradient} of $\varphi$ at $x$. If $\varphi$ has a unique subgradient $y$ at $x$, then $y=\nabla\varphi(x)$ is the gradient of $\varphi$ and is determined uniquely. The regularity of convex functions implies that this is the case for all $x$ up to a set of Lebesgue measure 0. Thus, if $X$ has a density, then the optimal map $T$ is characterised as the unique gradient of a convex function that pushes $X$ forward to $Y$. On the other hand, if $X$ is discrete, then it might be concentrated on the small set where $\varphi$ is not differentiable, in which case the optimal coupling will not be induced from a map. Similar arguments apply for other values of $p>1$. For a given $\phi$, the best candidate for $\psi$ is the \emph{$c$-transform}\footnote{Here the cost of transferring a unit of mass from $x$ to $y$ is $c(x,y)=\|x-y\|^p$, but these ideas are valid for more general cost functions $c$, hence the name.} of $\phi$, \[ \phi^c(y) =\inf_{x\in\mathcal X}\{\|x-y\|^p - \phi(x)\}, \] which again leads to an unconstrained dual problem $\sup_\phi\,\{ \E\phi(X)+\E\phi^c(Y)\}$. A function $\phi$ is optimal if and only if $\phi(x)+\phi^c(y)=\|x-y\|^p$ with probability one, and $\phi$ itself can be assumed a $c$-transform. In analogy with the quadratic case, the equality $\phi(x)+\phi^c(y)=\|x-y\|^p$ entails a relation between $y$ and the gradient of $\phi$, and $c$-transforms enjoy differentiability properties similar to those of convex functions. In summary, when $X$ has a density, optimal maps $\topt XY$ are precisely functions of the form \begin{equation}\label{eq:optimalMap} \topt XY(x) =\begin{cases} \nabla \varphi(x) \textrm{ for some convex }\varphi, & p=2,\\ x - \|\nabla \phi(x)\|^{1/(p-1) - 1}\nabla\phi(x) \textrm{ for some }c\textrm{-transform }\phi, & p\ne2.
\end{cases} \end{equation} This formula for general $p$ is also valid if $p=2$, with $\phi(x)=\|x\|^2/2-\varphi(x)$. Importantly, this uniqueness and characterisation result holds for two classes of spaces $\mathcal X$ extending $\R^d$: Riemannian manifolds and separable Hilbert spaces. \textbf{Regularity.} The convex gradient characterisation gives rise to a rich regularity theory in the quadratic case. When both $\mu$ and $\nu$ have densities $f$ and $g$, the convex potential $\varphi$ solves the Monge--Amp\`ere equation \[ \mathrm{det}\nabla^2\varphi(x) =\frac{f(x)}{g(\nabla\varphi(x))}. \] The regularity theory of Monge--Amp\`ere equations allows one to deduce smoothness of the optimal map $T=\nabla\varphi$. Roughly speaking, if $X$ and $Y$ have convex supports and positive, bounded densities with derivatives up to order $k\ge0$, then the optimal map $\topt\mu\nu$ has continuous derivatives up to order $k+1$. \textbf{Explicit solutions.} Apart from the characterisation of optimal maps $T$ as gradients of convex functions (when $p=2$) or in terms of $c$-transforms, typically neither $T$ nor the Wasserstein distance $W_p$ admit closed-form expressions. There are two special yet important cases where one does have explicit formulae. When $d=1$, denoting by $F_X$ and $F_X^{-1}(q)=\inf\{x:F_X(x)\ge q\}$, $q\in(0,1)$, the distribution and quantile functions of $X$, we have \[ W_p(X,Y) =\|F_X^{-1} - F_Y^{-1}\|_p =\left(\ownint 01{|F_X^{-1}(\alpha) - F_Y^{-1}(\alpha)|^p}\alpha\right)^{1/p}, \qquad \topt XY=F_Y^{-1} \circ F_X, \] where $\topt XY$ is optimal if $X$ is a continuous random variable. This allows the quantile function $F^{-1}_Y$ of any random variable $Y$ to be interpreted as the optimal map from a uniform random variable to $Y$ (also see Section \ref{sec:MoreReferences} for an interesting interpretation/extension of this fact). In the special case $p=1$ there is an alternative, often more convenient, formula: \[ W_1(X,Y) =\ownint {\R}{}{|F_X(t) - F_Y(t)|}t. \] The function $\topt XY=F_Y^{-1}\circ F_X$ is still optimal, but might not be unique. One can also bound $W_p$ in terms of the distribution functions: \[ W^p_p(X,Y) \le p2^{p-1}\ownint {\R}{}{|t|^{p-1}|F_X(t) - F_Y(t)|}t. \] The other case where closed-form formulae are available is when $X$ and $Y$ are Gaussian. If $X\sim N(m_1,\Sigma_1)$ and $Y\sim N(m_2,\Sigma_2)$, then \begin{equation} \begin{aligned}\label{eq:gaussTransport} W_2^2(X,Y) &= \|m_1 - m_2\|^2 + \mathrm{tr}[\Sigma_1 + \Sigma_2 - 2(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}],\\ \topt XY(x) &=m_2 + \Sigma_1^{-1/2}[\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2}]^{1/2}\Sigma_1^{-1/2}(x - m_1), \end{aligned} \end{equation} where $\topt XY$ is defined if $\Sigma_1$ is injective (more generally, if its kernel is included in that of $\Sigma_2$). These formulae are valid in infinite dimensions too, in which case $\topt XY$ may be unbounded, and only defined on an affine subspace of $\mathcal X$. Furthermore, this result holds in location-scale families that are not necessarily Gaussian. \subsection{Bibliographic Notes} In addition to the survey \cite{rachev1985monge}, there are a number of books dedicated to optimal transport: \cite{rachev1998mass}, \cite{villani2003topics,villani2008optimal}, \cite{ambrosio2013user}, \cite{santambrogio2015optimal}, and the forthcoming \cite{panaretos2018invitation}, leaning towards the statistical side of the subject.
For space considerations, we only give a very brief historical overview and a summary list of references. The origin of the optimal transport problem is the monograph by \cite{monge1781memoire}, in which he posed the question for the particular case $\mathcal X=\R^3$ and $p=1$; see also \cite{appell1887memoire} for an early reference. The probabilistic formulation of \cite{kantorovich1942translocation} was a major breakthrough, and one of the catalysers that led Kantorovich to develop linear programming, for which he was awarded the Nobel prize in 1975 (jointly with T.~C. Koopmans, who independently arrived at similar results after Kantorovich). Duality results have a rich history dating back at least to \cite{kantorovich1958space}. Very general results (for all Borel cost functions) in this context can be found in \cite{beiglbock2011duality}. See also \cite{kellerer1984duality}, who explores duality in a \emph{multimarginal} formulation involving more than two measures (see also Section~\ref{sec:statWass}). The one-dimensional case is intrinsically related to the Fr\'echet--Hoeffding bounds \citep{hoffding1940masstabinvariante,frechet1951tableaux}. See \cite{bass1955compatibilite} and \cite{dallaglio1956sugli} for early references, and \cite{cuesta1993optimal} for detailed discussion when $p=2$. The bound for $W_p$ in terms of distribution functions is due to \cite{ebralidze1971inequalities}, and can be found in generalised form in \citet[Section 7.4]{bobkov2014one}. There are analogous results for measures on spaces with simple structure; see \cite{delon2010fast} for the unit circle and \cite{kloeckner2015geometric} for ultrametric spaces. For the Gaussian case, see \cite{olkin1982distance} or \cite{givens1984class} in finite dimensions, and \cite{gelbrich1990formula} and \cite{cuesta1996lower} for an infinite-dimensional extension. The convex gradient characterisation in the quadratic case was discovered independently by a number of authors: \cite{knott1984optimal}, \cite{cuesta1989notes}, \cite{ruschendorf1990characterization} and \cite{brenier1991polar}. For other values of the exponent $p$ (and more general cost functions), see \cite{gangbo1996geometry}. The Riemannian version is due to \cite{mccann2001polar}, and \citet[Section~6.2.2]{ambrosio2008gradient} treat the infinite-dimensional case. The regularity result was discovered by \cite{caffarelli1992regularity}; see \cite{figalli2017monge} for an accessible exposition. There are other (e.g., Sobolev) types of regularity results, as explained in \citet[pages 332--336]{villani2008optimal} or \citet[Section 1.7.6]{santambrogio2015optimal}. \section{Optimal Transport as a Technical Tool}\label{sec:tool} This section reviews some of the features of Wasserstein metrics that make them useful as a technical tool for deriving large sample theory results in statistics. To facilitate the presentation, we first state some simple facts that play a role in the development. Let $X$ and $Y$ be random vectors taking values in $\mathcal X=\mathbb R^d$; we maintain the notation $(\mathcal X,\|\cdot\|)$ to stress that the properties are valid in infinite dimensions as well. \begin{itemize} \item For any real number $a$, $W_p(aX,aY)=|a|W_p(X,Y)$. \item For any fixed vector $x\in \mathcal X$, $W_p(X+x,Y+x)=W_p(X,Y)$. \item For any fixed $x\in\mathcal X$, we have $W_2^2(X+x,Y)=\|x+\E(X)-\E(Y)\|^2+W_2^2(X-\E(X),Y-\E(Y))$.
\item For product measures and when $p=2$, we have $W_2^2(\otimes_{i=1}^n \mu_i,\otimes_{i=1}^n\nu_i)=\sum_{i=1}^n W_2^2(\mu_i,\nu_i)$ in the analytic notation. \end{itemize} The proofs of the first three statements rely on the equivalence between the classes of the corresponding couplings. For example, $U=(X+x,Y+x)$ is a coupling of $X+x$ and $Y+x$ if and only if $U-(x,x)$ is a coupling of $(X,Y)$. For the last property, observe that the map $x\mapsto [\topt{\mu_1}{\nu_1}(x_1),\dots,\topt{\mu_n}{\nu_n}(x_n)]$ is a gradient of a convex function and pushes forward $\otimes\mu_i$ to $\otimes \nu_i$. \subsection{Deviations from Gaussianity} If $\{X_i\}_{i\geq 1}$ are independent and identically distributed random variables with mean zero and finite variance, then the central limit theorem asserts that the suitably rescaled averages $S_n= n^{1/2}\overline X_n$ converge in distribution to a normal random variable $Z$ with the same variance. Since $\E S_n^2=\E Z^2$, the convergence also holds in 2-Wasserstein distance. This property makes the 2-Wasserstein distance convenient for handling deviations from Gaussianity. The arguments generally involve the \emph{subadditivity} of the Wasserstein distance with respect to convolutions, a property that can be established using the infimum-over-couplings definition of the Wasserstein distances. For example, assuming $\E X_i=0$, \begin{equation} \label{eq:subad} W_2^2\left(\sum_{i=1}^n a_iX_i,Z\right) \le \sum_{i=1}^n a_i^2 W_2^2(X_i,Z) ,\qquad Z\sim N(0,1) ,\qquad \sum_{i=1}^n a_i^2=1. \end{equation} To see this, let $Z_i\sim N(0,1)$ be independent and consider optimal couplings on $\R^2$ such that $\E |a_iX_i - a_iZ_i|^2=W_2^2(a_iX_i,a_iZ_i)$. Take the product $\pi$ of all these couplings (a joint distribution on $\R^{2n}$). Then under $\pi$, $\sum a_iZ_i$ is standard normal and \[ W_2^2\left(\sum_{i=1}^n a_iX_i,Z\right) \le \E_\pi \left|\sum_{i=1}^n a_iX_i - \sum_{i=1}^n a_iZ_i\right|^2 =\sum_{i=1}^n \E \left|a_iX_i - a_iZ_i\right|^2 =\sum_{i=1}^n W_2^2(a_iX_i,a_iZ), \] from which Equation \ref{eq:subad} follows. \cite{mallows1972note} used this property in order to derive necessary and sufficient conditions for a triangular array to be \emph{jointly asymptotically normal}. Recall that $X_n=(X_{n1},\dots,X_{nd})$ converges in distribution to a standard multivariate $N(0,I_d)$ if and only if $a^tX_n\to Z$ for all $a\in \R^d$, $\|a\|=1$. Now let $X_{nj}$ ($j\le n<\infty$) be a triangular array. In analogy with the fixed-dimensional case, we say that $(X_{nj})$ is jointly asymptotically normal if $a_n^tX_n\to Z$ for any sequence of vectors $a_n\in \R^n$, $\|a_n\|=1$. This requires $X_{nj}$ to converge to $Z$ uniformly in $j$, i.e., $X_{nm_n}\to Z$ for any sequence of coordinates $m_n\le n$. This condition is not sufficient, however. \cite{mallows1972note} observed that metrics inducing convergence in distribution are not subadditive, and this is remedied by the Wasserstein distance. If $\E X_{nj}^2\to1$ uniformly in $j$, in addition to the uniform convergence in distribution, then $W_2^2(X_{nj},Z)\to0$ and as a consequence of Equation~\ref{eq:subad}, $W_2^2(a_n^tX_n,Z)\to0$, and the array is jointly asymptotically normal. The length of the $n$-th row of the array can be arbitrary, as long as it diverges to infinity with $n$. When the $X_i$'s in Equation~\ref{eq:subad} have the same distribution as $X$ and $a_i=1/\sqrt n$, the inequality gives a bound that is uniform in $n$. \cite{bickel1981some} use this result in their study of the asymptotics of the bootstrap; we sketch a small numerical check of Equation~\ref{eq:subad} below, before describing the bootstrap application.
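The following Python sketch (the distributions, weights and seed are illustrative choices of ours) estimates both sides of Equation~\ref{eq:subad} by Monte Carlo, coupling each sample to the standard normal through the one-dimensional quantile pairing discussed earlier; the left-hand side is visibly smaller, reflecting the central limit theorem.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def w2_sq_to_std_normal(sample):
    # Empirical squared 2-Wasserstein distance to N(0,1), pairing
    # order statistics with standard normal quantiles.
    n = len(sample)
    q = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)
    return np.mean((np.sort(sample) - q) ** 2)

n, k = 20000, 5
a = np.ones(k) / np.sqrt(k)              # weights with sum of a_i^2 = 1
X = rng.exponential(1.0, (k, n)) - 1.0   # mean-zero, unit-variance rows
lhs = w2_sq_to_std_normal(a @ X)         # W_2^2(sum_i a_i X_i, Z)
rhs = sum(a[i] ** 2 * w2_sq_to_std_normal(X[i]) for i in range(k))
print(lhs, "<=", rhs)                    # subadditivity, up to MC error
\end{verbatim}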
Returning to the bootstrap application, denote by $F_n$ the empirical distribution function corresponding to a sample $X_1,\dots,X_n$ and the sample mean by $\mu_n=\overline X$. Let $X_1^*,\dots,X_m^*$ be a bootstrapped sample from $F_n$ with sample average $\mu_m^*$. Then as $n,m\to\infty$, the conditional (upon $(X_i)$) distribution of $\sqrt m(\mu_m^* - \mu_n)$ converges to $N(0,\mathrm{var}(X_1))$, which coincides with the asymptotic distribution of $\sqrt n(\mu_n - \E X_1)$. Another additive property, shown in a similar way to Equation \ref{eq:subad}, is \[ W_p\left(\sum_{i=1}^n U_i,\sum_{i=1}^n V_i\right) \le \sum_{i=1}^n W_p(U_i,V_i), \] for independent $(U_i)$ and $(V_i)$. A particular case is that $W_p(X+Y,X)\le W_p(Y,0)=[\E\|Y\|^p]^{1/p}$, and taking $Y$ to be Gaussian with small variance allows one to approximate in $W_p$ any probability law with a smooth one to arbitrary precision. In other words, smooth measures are dense in $W_p$, just as they are dense with respect to convergence in distribution. Discrete measures are also dense; see Subsection~\ref{sec:empiricalWass}. Actually, the subadditivity properties can be used in order to prove the central limit theorem. \cite{tanaka1973inequality} does so by noticing that equality in (\ref{eq:subad}) holds only for Gaussian distributions. \cite{johnson2005central} obtain rates of convergence for the central limit theorem, and more generally, for convergence to stable laws. Berry--Esseen-type bounds for the Wasserstein distance can be found in \cite{rio2009upper}. For random elements in Banach spaces, see \cite{rachev1994rate}. \subsection{Equilibrium, Concentration, and Poisson Approximations} A different class of settings where Wasserstein distances are used is the study of convergence of Markov chains to their equilibrium distribution; this use dates back to \cite{dobrushin1970prescribing}. The idea is to show a sort of contraction property of the transition kernel with respect to the Wasserstein distance. Let $P$ denote the transition kernel. In studying convergence of the Kac random walk on the special orthogonal group $\mathrm{SO}(n)$, \cite{oliveira2009convergence} showed that \[ W_{D,2}(\mu P,\nu P) \le \xi W_{D,2}(\mu,\nu) \] for some $\xi<1$, where $D$ is a distance between matrices, leading to exponential convergence to equilibrium. A result of similar spirit is derived by \cite{eberle2014error} for the transition kernel of the Metropolis adjusted Langevin algorithm, a Markov chain Monte Carlo method. The constant $\xi$ above is related to the \emph{Wasserstein spectral gap} of the transition kernel. \cite{hairer2014spectral} explore its behaviour in infinite-dimensional state spaces, when taking finite-dimensional projections of $P$. They show that for the preconditioned Crank--Nicolson algorithm, $\xi$ remains stable, whereas for the random walk Metropolis algorithm, $\xi$ may converge to one. \cite{rudolf2018perturbation} employ Wasserstein distances to bound the difference between the behaviour of some ``nicely behaved" Markov chain and a perturbed version thereof, obtained from a modification in the transition kernel. Wasserstein distances also appear in concentration of measure, in the form of \emph{transportation inequalities} \cite[Chapter 6]{ledoux2005concentration}. A measure $\mu_0$ satisfies such an inequality if for any other measure $\nu$, \[ W_1(\mu_0,\nu) \le C\sqrt{H(\nu,\mu_0)}, \qquad H(\mu,\nu) =\ownint{}{}{\log\frac{\rm d\mu}{\rm d\nu}}\mu. \] If this holds and $\mu_0(A)\ge1/2$, then, for $X\sim\mu_0$, \[ \P(X \notin A_r) \le e^{-r^2/C'}, \qquad A_r=\{x:\|x-A\|\le r\}.
\] Furthermore, the representation of $W_1$ as the supremum over Lipschitz functions (see the next subsection) yields concentration inequalities for $f(X) - \E f(X)$ with $f$ Lipschitz. In a different context, \cite{barbour1992stein} use Wasserstein metrics to quantify the error in approximating a point process $\Xi$ by a Poisson point process $P$ with the same mean measure $\lambda$. Suppose for simplicity that the sample space is $[0,1]$ and for two (not necessarily probability) measures $\tilde\mu,\tilde\nu$ with total masses $A$ and $B$, define the probabilities $\mu=\tilde\mu/A$, $\nu=\tilde\nu/B$ and $d(\tilde\mu,\tilde\nu)=W_1(\mu,\nu)$ if $A=B$ and 1 (the maximal value) if $A\ne B$. The processes $\Xi$ and $P$ can then be viewed as random elements in the metric space $\mathcal X$ of measures with the distance $d$, and their laws can be compared using the Wasserstein distance $W_1$ built on the metric space $(\mathcal X,d)$. See also \cite{schuhmacher2009stein} for an extension where $d$ is replaced by a Wasserstein distance of different order $W_p$. \subsection{Relation to Other Metrics} We conclude this section by reviewing some useful relations between $W_p$ and other probability metrics. We firstly relate $W_p$ to $W_q$ by two simple results from \citet[Chapter 7]{villani2003topics}, and then describe bounds (mostly borrowed from \cite{gibbs2002choosing}) pertaining to $W_1$ and the Prokhorov, total variation and bounded Lipschitz distances. For notational simplicity we state the bounds in the Euclidean setting, but they hold on any complete separable metric space $(\mathcal X,\rho)$. For random variables $X$ and $Y$ on $\mathcal X$ let $\Omega$ be the union of their ranges and set \[ D =\sup_{x,y\in \Omega} \|x-y\|, \qquad d_{\min} =\inf_{x\ne y\in\Omega} \|x-y\|. \] In the analytic version $\Omega=\mathrm{supp}(\mu)\cup \mathrm{supp}(\nu)$, where $X\sim \mu$ and $Y\sim \nu$. If $X$ and $Y$ are bounded, then $D$ is finite; if $X$ and $Y$ are (finitely) discrete, then $d_{\min}>0$. \begin{itemize} \item If $p\le q$, then $W_p\le W_q$, by Jensen's inequality. \item On the other hand, $W_q^q\le W_p^pD^{q-p}$. \item Duality arguments yield the particularly useful \emph{Kantorovich--Rubinstein} \citep{kantorovich1958space} representation for $W_1$ as \[ W_1(X,Y) =\sup_{\|f\|_{\mathrm{Lip}}\le 1} |\E f(X) - \E f(Y)|, \qquad \|f\|_{\mathrm{Lip}} =\sup_{x\ne y}\frac{|f(x) - f(y)|}{\|x - y\|}, \] valid on any separable metric space \cite[Section 11.8]{dudley2002real}. \item This shows that $W_1$ is larger than the \emph{Bounded Lipschitz} (BL) metric \[ W_1(X,Y) \ge \mathrm{BL}(X,Y) =\sup_{\|f\|_\infty+\|f\|_{\mathrm{Lip}}\le 1} |\E f(X) - \E f(Y)| \] that metrises convergence in distribution \cite[Theorem 11.3.3]{dudley2002real}. \item Let $P$ denote the Prokhorov distance. Then $P^2(X,Y)\le W_1(X,Y)\le (D+1)P(X,Y)$. \item For the class of random variables supported on a fixed bounded subset $K\subseteq \mathcal X$, $\mathrm{BL}$ and $W_1$ are equivalent up to constant, and all metrics $W_p$ are topologically equivalent. \item The Wasserstein distances $W_p$ can be bounded by a version of total variation $\mathrm{TV}$ \cite[Theorem 6.15]{villani2008optimal}. A weaker but more explicit bound for $p=1$ is $W_1(X,Y)\le D\times \mathrm{TV}(X,Y)$. \item For discrete random variables, there is an opposite bound $\mathrm{TV}\le W_1/d_{\min}$. \item The total variation between convolutions with a sufficiently smooth measure is bounded above by $W_1$ \citep[Proposition~4]{mariucci2017wasserstein}.
\item The \emph{Toscani} (or \emph{Toscani--Fourier}) distance is also bounded above by $W_1$ \citep[Proposition~2]{mariucci2017wasserstein}. \end{itemize} Beyond bounded random variables, $W_p$, $W_q$, $\mathrm{BL}$ and $\mathrm{TV}$ induce different topologies, so that one cannot bound, for example, $W_1$ in terms of $\mathrm{BL}$ in the unbounded case. On a more theoretical note, we mention that the Kantorovich--Rubinstein formula yields an embedding of any Polish space $(\mathcal X,\rho)$ in the Banach space of finite signed measures on $\mathcal X$. \section{Optimal Transport as a Tool for Inference}\label{sec:inference} As a measure of distance between probability laws, the Wasserstein distance can be used for carrying out goodness-of-fit tests, and indeed this has been its main use as a tool for statistical inference. In the simplest \emph{one-sample} setup, we are given a sample $X_1,\dots,X_n$ with unknown law $\mu$ and wish to test whether $\mu$ equals some known fixed law $\mu_0$ (e.g., standard normal or uniform). The \emph{empirical measure} $\mu_n$ associated with the sample $(X_1,\dots,X_n)$ is the (random) discrete measure that assigns mass $1/n$ to each observation $X_i$. In this sense, the strong law of large numbers holds in Wasserstein space: with probability one, $W_p(\mu_n,\mu)\to0$ as $n\to\infty$ if and only if $\E \|X\|^p<\infty$. It is consequently appealing to use $W_p(\mu_n,\mu_0)$ as a test statistic. In the \emph{two-sample} setup, one independently observes a sample $Y_1,\dots,Y_m\sim \nu$ with corresponding empirical measure $\nu_m$, and $W_p(\mu_n,\nu_m)$ is a sensible test statistic for the null hypothesis $\mu=\nu$. \subsection{Univariate Measures} We shall identify measures $\mu$ on the real line ($\mathcal X=\R$) with their distribution function $F$; the \emph{empirical distribution function} corresponding to $\mu_n$ is $F_n(t)=n^{-1}\sum_{i=1}^n\mathbf{1}\{X_i\le t\}$. Thus $X_i\sim F$, $Y_j\sim G$ and we slightly abuse notation by writing $W_p(F,G)$ for $W_p(\mu,\nu)$. \cite{munk1998nonparametric} derive the asymptotic distribution of $W_2(F_n,F_0)$ (and trimmed versions thereof). The main tool for the derivation is a Brownian bridge representation for the quantile process $q_n=\sqrt n(F_n^{-1} - F^{-1})$ that holds under suitable assumptions on $F$. There are four types of limiting results, depending on the combination null/alternative and one/two-sample. Roughly speaking, the limits are of order $\sqrt n$ and normal under the alternative, and of order $n$ and not normal under the null. The two-sample asymptotics require that $m/n$ converge to a finite positive constant. In symbols: \begin{equation}\label{eq:fourlimits} \begin{aligned} \sqrt n(W_2^2(F_n,F_0) - W_2^2(F,F_0))&\to \textrm{normal} \quad (F\ne F_0),\\ nW_2^2(F_n,F_0)&\to \textrm{non-normal limit} \quad (F=F_0),\\ \sqrt {\frac {mn}{m+n}}(W_2^2(F_n,G_m) - W_2^2(F,G))&\to \textrm{normal} \quad (F\ne G),\\ \frac {mn}{m+n} W_2^2(F_n,G_m) &\to \textrm{non-normal limit} \quad (F=G). \end{aligned} \end{equation} Similar results were obtained independently in \cite{delBarrio2000contributions}, where one can also find a nice survey of other goodness-of-fit tests. If one instead wants to test whether $F$ belongs to a parametric family $\mathcal F$ of distributions, then the test statistic is the infimum of the Wasserstein distance between the empirical measure and members of $\mathcal F$. Before discussing specific parametric families, we pause for a small numerical illustration of the one-sample statistic.
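The Python sketch below (the data-generating distribution, sample size and seed are illustrative choices of ours) computes the one-sample statistic $nW_2^2(F_n,F_0)$ for $F_0$ uniform on $(0,1)$, where $W_2^2(F_n,F_0)$ is available exactly from the quantile formula given earlier, and calibrates the test by simulating the null distribution by Monte Carlo.
\begin{verbatim}
import numpy as np

def w2_sq_to_uniform(sample):
    # Exact squared 2-Wasserstein distance between the empirical
    # measure of `sample` and Uniform(0,1), integrating the squared
    # difference of quantile functions piecewise:
    # W_2^2 = sum_i int_{(i-1)/n}^{i/n} (x_(i) - u)^2 du.
    x = np.sort(sample)
    n = len(x)
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n
    return np.sum(((x - lo) ** 3 - (x - hi) ** 3) / 3)

rng = np.random.default_rng(0)
x = rng.beta(1.3, 1.0, 100)              # data, mildly non-uniform
t_obs = len(x) * w2_sq_to_uniform(x)     # statistic n * W_2^2(F_n, F_0)
null = np.array([len(x) * w2_sq_to_uniform(rng.uniform(size=len(x)))
                 for _ in range(2000)])  # Monte Carlo null distribution
print(np.mean(null >= t_obs))            # approximate p-value
\end{verbatim}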
If one instead wants to test whether $F$ belongs to a parametric family $\mathcal F$ of distributions, then the test statistic is the infimum of the Wasserstein distance between the empirical measure and members of $\mathcal F$. For example, in order to test the fit to some normal distribution, \cite{delBarrio1999tests} find the asymptotic distribution of the test statistic \[ R_n = \frac{\inf_{\mu,\sigma^2}W^2_2(F_n,N(\mu,\sigma^2))}{S_n^2}, \qquad S_n^2 = \frac1n\sum_{i=1}^n (X_i - \overline X)^2, \] an infinite sum of rescaled and centred $\chi^2$ random variables (under the null hypothesis). Using a weighted version of the Wasserstein distance, \cite{deWet2002goodness} constructs a test for location or scale families. Here the null hypothesis is that $F=F_0(\cdot - \theta)$ or $F=F_0(\cdot/\theta)$ for a known distribution $F_0$ and an unknown parameter $\theta\in \R$ (or $\theta\in(0,\infty)$). In a more general setup, \cite{freitag2005hadamard} consider the case of a ``structural relationship" between $F$ and $F_0$ in the form \[ F^{-1}(t) = \phi_1(F_0^{-1}(\phi_2(t,\theta)),\theta), \] for some (known) functions $\phi_1,\phi_2:\R\times \Theta\to\R$ and parameters $\theta\in\Theta$. This setup includes the location-scale model when $\phi_2(t,\theta)=t$ and $\phi_1(t,\theta_1,\theta_2)=(t-\theta_1)/\theta_2$, and the \emph{Lehmann alternatives} model when $\phi_2(t,\theta)=1-(1-t)^\theta$ and $\phi_1(t,\theta)=t$. Motivated by population bioequivalence problems, \cite{freitag2007nonparametric} treat the dependent two-sample case, where one observes a sample $(X_i,Y_i)_{i=1}^n$ and wishes to draw inference on the Wasserstein distance between the marginals. Some of the required regularity is apparent from the following observation. The empirical process $\sqrt n(F_n - F)$ converges to $\mathbb B\circ F$, where $\mathbb B$ is a Brownian bridge on $[0,1]$, without assumptions on $F$ (this result is known as \emph{Donsker's theorem}). But the quantile process $q_n$ involves inversion, and the limiting distribution is $\mathbb B(t)/F'(F^{-1}(t))$, which requires assumptions on $F$. See \cite{csorgo1993weighted} for a detailed study of the quantile process and asymptotics of functionals thereof. In the context of the Wasserstein distance, \cite{delBarrio2005asymptotics} study the limiting behaviour of the norm $\|q_n\|_{2,w}^2=\ownint01{q_n^2(t)w(t)}t$, for an integrable weight function $w$ on $(0,1)$. The covariance function of the process $\mathbb B/F'\circ F^{-1}$ is \[ \eta(s,t) =\frac{\min(s,t) - st}{F'(F^{-1}(t))\,F'(F^{-1}(s))}, \qquad s,t\in (0,1), \] and the limits are qualitatively different depending on whether the integrals $\ownint 01{\eta(t,t)w(t)}t$ and/or $\ownint 01{\ownint01{\eta^2(t,s)w(t)w(s)}t}s$ are finite or not. \subsection{Multivariate Measures} Results in the multivariate setup are more scarce. One apparent reason for this is that the Wasserstein space of measures with multidimensional support is no longer embeddable in the function space $L_p(0,1)$ via quantile functions, and has positive curvature (see Section~\ref{sec:statWass}). As perhaps can be expected, multivariate distributional results for the empirical $p$-Wasserstein distance are chiefly available when it admits a closed form; that is, when $p=2$ and we consider Gaussian distributions. Assume that $\mu=N(m_1,\Sigma_1)$. Given a sample $X_1,\dots,X_n$ from $\mu$, let $\widehat{\mu}_n$ be the \emph{empirical Gaussian measure} \[ \widehat\mu_n =N(\widehat m,\widehat{\Sigma}), \qquad \widehat m=\overline X=\frac 1n\sum_{i=1}^n X_i, \quad \widehat \Sigma = \frac 1{n-1} \sum_{i=1}^n (X_i - \overline X)(X_i - \overline X)^t.
\] The test statistic is now $W_2^2(\widehat{\mu}_n,\mu_0)$ for one sample and $W_2^2(\widehat{\mu}_n,\widehat{\nu}_m)$ for two samples, and the analogue of the four cases in Display~\ref{eq:fourlimits} holds true. The underlying idea is to combine the classical central limit theorem for $\widehat m$ and $\widehat\Sigma$ with a delta method, and \cite{rippl2016limit} establish the necessary differentiability of the squared Wasserstein distance in the Gaussian setup in order to apply the delta method. Importantly, Gaussianity can be replaced with any location-scatter family of $d$-dimensional distribution functions \[ \{F(x) =F_0(m+\Sigma^{1/2} x) :m\in \R^d; \Sigma\in\R^{d\times d} \textrm{ positive definite} \}, \] where $F_0$ is an arbitrary distribution function with finite nonsingular covariance matrix. For sufficiently smooth measures $\mu,\nu$ (with moment conditions), \cite{delBarrio2017central} find the normal limit of \[ \sqrt n(W_2^2(\mu_n,\nu) - \E W_2^2(\mu_n,\nu)). \] They establish stability of the convex potential with respect to perturbations of the measures and invoke the Efron--Stein inequality. Again in analogy with Display~\ref{eq:fourlimits}, the limiting distribution is degenerate at 0 if $\mu=\nu$. This central limit theorem does not, however, yield a limit for $W_2^2(\mu_n,\nu) - W_2^2(\mu,\nu)$, since the speed at which $\E W_2^2(\mu_n,\mu)$ decays to zero (and consequently that of $\E W_2^2(\mu_n,\nu) - W_2^2(\mu,\nu)$) depends on $\mu$ in a rather delicate way, and can be arbitrarily slow (see Subsection~\ref{sec:empiricalWass}). When $\mu$ and $\nu$ are finitely supported measures, they can be identified with vectors $r$ and $s$ in the unit simplex, and the empirical vector $r_n$ obeys a central limit theorem. \cite{sommerfeld2018inference} apply a delta method to obtain the limiting distributions of the Wasserstein distance. The latter is only \emph{directionally Hadamard differentiable}, leading to a non-standard delta method with nonlinear derivative. Correspondingly, the limiting distributions are not Gaussian, in general. In analogy with Display~\ref{eq:fourlimits}, they show that $n^{1/2}(W_p(r_n,s) - W_p(r,s))$ has a distributional limit under the alternative, whereas under the null, the rate is $n^{1/(2p)}$ in agreement with results in Subsection~\ref{sec:empiricalWass}. \cite{sommerfeld2018inference} highlight the implications of the non-standard delta method for the bootstrap, whose consistency requires subsampling. These results extend to \emph{countably supported} measures, where one needs to impose an extra summability condition on $r$ in order to ensure convergence of $\sqrt n(r_n - r)$ to the Gaussian limit $\mathbb G$ \citep{tameling2017empirical}. Both references also provide more explicit expressions for the limiting distribution when $\mathcal X$ has the metric structure of a tree. \cite{bigot2017central} establish similar limits for a regularised version (see Section~\ref{sec:numerics}) of the Wasserstein distance. Wasserstein distances have recently been proposed by \cite{bernton2017inference} for parameter inference in approximate Bayesian computation (also known as \emph{plug-and-play} methods). The setup is that one observes data on $\mathcal X$ and wishes to estimate the underlying distribution $\mu$ belonging to a parametrised set of distributions $\{\mu_\theta\}_{\theta\in\R^N}$. However, the densities of these measures are too complicated to evaluate/optimise a likelihood.
Instead one can only simulate from them, and retain parameters that yield synthetic data resembling the observed data. A core issue here is how to contrast the true and simulated data, and \cite{bernton2017inference} suggest using $W_p$ to carry out such comparisons. A Wasserstein metric has also been employed to compare \emph{persistence diagrams}, a fundamental tool in topological data analysis (see \cite{wasserman2018topological} for a recent review) summarising the persistent homology properties of a dataset. See, for example, \cite{mileyko2011probability}, who introduce a version of the Wasserstein distance on the space of persistence diagrams, endowing it with a metric structure that allows statistical inference. \subsection{Bounds for the Expected Empirical Wasserstein Distance}\label{sec:empiricalWass} As discussed in the previous subsections, the speed of convergence of the empirical measure $\mu_n$ to $\mu$ in Wasserstein distance $W_p$ is important for statistical inference. This topic has a history dating back to the seminal work of \cite{dudley1969speed}, and a very rich literature. For space considerations, we will focus on the average value $\E W_p(\mu_n,\mu)$, but see the bibliographical notes for concentration inequalities and almost sure results. Upper bounds for the one-sample version are also valid for the two-sample version, since, by the triangle inequality, $\E W_p(\mu_n,\nu_n)\le 2\E W_p(\mu_n,\mu)$ when $\nu_n$ is an independent empirical measure of the same law. For brevity we write $W_p$ for $W_p(\mu_n,\mu)$, and inequalities such as $\E W_p\ge Cn^{-1/2}$ hold for given $p$, some $C=C(\mu)$ and for all $n$. We also tacitly assume that $\mu\in \W_p$, i.e., it has a finite $p$-th moment, when writing $W_p$. The behaviour of $\E W_p(\mu_n,\mu)$ is qualitatively different depending on whether the underlying dimension $d>2p$ or $d<2p$. For discrete measures, $\E W_p$ is generally of the order $n^{-1/(2p)}$, independently of the dimension. In high dimensions this is better than the rate for absolutely continuous measures, which is $n^{-1/d}$; but when $d=1$, some smooth measures attain the optimal rate $n^{-1/2}$, faster than $n^{-1/(2p)}$. We first note that it is quite easy to see that $W_p\to0$ almost surely. However, even for $p=1=d$ the decay of $\E W_p$ can be arbitrarily slow; see \citet[Theorem 3.3]{bobkov2014one}. Lower bounds are easier to obtain, and here are some examples: \begin{itemize} \item \emph{[fundamental $\sqrt n$ bound]} If $\mu$ is nondegenerate, then $\E W_p\ge Cn^{-1/2}$. \item \emph{[separated support]} If $\mu(A)>0$, $\mu(B)>0$, $\mu(A\cup B)=1$ and $\mathrm{dist}(A,B)=\inf_{x\in A,y\in B}\|x-y\|>0$, then $\E W_p\ge C_pn^{-1/(2p)}$. Any finitely discrete nondegenerate measure satisfies this condition, as well as most countably discrete ones. This agrees with the rates of \cite{sommerfeld2018inference} above. \item \emph{[curse of dimensionality]} If $\mu$ is absolutely continuous on $\R^d$, then $\E W_p\ge Cn^{-1/d}$. (This result is void of content when $d\le 2$ in view of the $n^{-1/2}$ bound.) More generally, $\mu$ only needs to have an absolutely continuous part (e.g.\ a mixture of a Gaussian with a discrete measure), and the bound holds when $\mu_n$ is replaced with \emph{any} measure supported on $n$ points. Equivalently, it holds for the \emph{quantiser} of $\mu$, the $n$-point measure that is $W_p$-closest to $\mu$. \end{itemize} We briefly comment on how these bounds are obtained.
The $\sqrt n$ bound is a corollary of the central limit theorem for $f(X)$, where $X\sim \mu$ and $f$ is a suitable Lipschitz function. If $\mu$ has separated support and $k\sim B(n,\mu(A))$ is the number of points in $\mu_n$ falling in $A$, then a mass of $|k/n - \mu(A)|$ must travel at least $\mathrm{dist}(A,B)>0$ units of distance, yielding a lower bound on the Wasserstein distance. One then invokes the central limit theorem for $k$. For the curse of dimensionality, note that the number of balls of radius $\epsilon$ needed to cover the support of $\mu$ is proportional to $\epsilon^{-d}$. If we take $\epsilon=Kn^{-1/d}$ with an appropriate $K>0$, then $n$ balls of radius $\epsilon$ centred at the points of the empirical measure miss a fixed amount $\tau>0$ of the mass of $\mu$, and this mass has to travel at least $\epsilon$, so $W_p^p\ge C'\tau n^{-p/d}$. The last lower bound was derived by counting the number of balls needed in order to cover $\mu$, which turns out to be a determining quantity for the upper bounds, too. To account for unbounded supports we need to allow covering only a (large) fraction of the mass. Let \[ N(\mu,\epsilon,\tau) =\textrm{minimal number of }\epsilon\textrm{-balls whose union has }\mu\textrm{ measure at least }1-\tau. \] These \emph{covering numbers} increase as $\epsilon$ and $\tau$ approach zero, and are finite for all $\epsilon,\tau>0$. To put the next upper bound in context, we remark that any compactly supported $\mu$ on $\R^d$ satisfies $N(\mu,\epsilon,0)\le K\epsilon^{-d}$. \begin{itemize} \item If for some $d>2p$, $N(\mu,\epsilon,\epsilon^{dp/(d-2p)})\le \epsilon^{-d}$, then $\E W_p\le C_pn^{-1/d}$. \end{itemize} This covering number condition is verified if $\mu$ has a finite moment of sufficiently large order \citep[Proposition 3.4]{dudley1969speed}. The exact formulae on the real line lead to a characterisation of the measures attaining the optimal $n^{-1/2}$ rate: \begin{itemize} \item If $\mu\in \W_p(\R)$ has compact support, then $\E W_1\le Cn^{-1/2}$, and consequently $\E W_p\le C_pn^{-1/(2p)}$. \item A necessary and sufficient condition for $\E W_1\le Cn^{-1/2}$ is that \[ J_1(\mu) = J_1(F) = \ownint \R {}{\sqrt{F(t)(1-F(t))}}t < \infty. \] \item The same holds for $\E W_p$, with the integrand in $J_1$ replaced by $ [F(t)(1-F(t))]^{p/2}/[f(t)]^{p-1}$, where $f$ is the density of the absolutely continuous part of $\mu$. \end{itemize} Using the representation of $W_1$ as the integral of $|F_n - F|$, one sees that $J_1<\infty$ suffices for the $n^{-1/2}$ rate, since $F_n(t)$ has variance $n^{-1}F(t)(1-F(t))$. The condition $J_1<\infty$ is essentially a moment condition, as it implies $\E X^2<\infty$ and is a consequence of $\E |X|^{2+\delta}<\infty$ for some $\delta>0$. But for $p>1$, $J_p<\infty$ entails some smoothness of $\mu$. In particular, the above lower bounds show that $\mu$ must be supported on a (possibly unbounded) interval, and the $J_p$ condition means that the density should not vanish too quickly in the interior of the support. \subsubsection{Bibliographic Notes} The lower bounds were adapted from \cite{dudley1969speed}, \cite{fournier2015rate} and \cite{weed2017sharp}. The upper bound with the coverings dates back to \cite{dudley1969speed}, who showed it for $p=1$ and with the bounded Lipschitz metric. The version given here can be found in \cite{weed2017sharp} and extends \cite{boissard2014mean}. We emphasise that their results are not restricted to Euclidean spaces.
For Gaussian measures in a Banach space, \cite{boissard2014mean} relate $\E W_2$ to small ball probabilities. \cite{weed2017sharp} also show that absolutely continuous measures that are ``almost" low-dimensional enjoy better rates for moderate values of $n$, until eventually giving in to the curse of dimensionality. In the limiting case $d=2p$, there is an additional logarithmic term. For $p=1$ the sufficiency of this term was noted by \citet[page 44]{dudley1969speed}, and the necessity follows from a classical result of \cite{ajtai1984optimal} for $\mu$ uniform on $[0,1]^2$. For $p>1$ and $d=2p$, see for example \cite{fournier2015rate}. That absolutely continuous measures are the ``bad" measures in high dimensions was already observed by \cite{dobric1995asymptotics} in an almost sure sense: $n^{1/d}W_p$ has a positive limit if and only if $\mu$ has an absolutely continuous part. There are results for more general cost functions than powers of Euclidean distance; see \cite{talagrand1994transportation} for $\mu$ uniform on $[0,1]^d$ and \cite{barthe2013combinatorial} for a careful study of the two-sample version $W_p(\mu_n,\nu_n)$. \cite{fournier2015rate} also deal with the Euclidean case, with some emphasis on deviation bounds and the limit case $d=2p$. \cite{delBarrio1999central} showed that $J_1<\infty$ is necessary and sufficient for the empirical process $\sqrt n(F_n - F)$ to converge in distribution in $L^1(\R)$ to $\mathbb{B}\circ F$, with $\mathbb B$ a Brownian bridge. A thorough treatment of the univariate case, including but not restricted to the $J_p$ condition, can be found in \cite{bobkov2014one}, using an order statistic representation for the Wasserstein distance. One may also consult \cite{mason2016weighted} for the alternative approach of weighted Brownian bridge approximations. The topic is one of intense study, and the references here are far from exhaustive. Let us also mention some extensions for dependent data: \cite{dede2009empirical}, \cite{cuny2017invariance}, \cite{dedecker2017behavior}. \section{Optimal Transport as the Object of Inference}\label{sec:statWass} The previous section described applications of Wasserstein distances for carrying out statistical tasks such as goodness-of-fit testing. The topic of this section is a more recent trend, where one views the Wasserstein space as a sample space for statistical inference. In this setup, one observes a sample $\mu_1,\dots,\mu_n$ from a random measure $\Lambda$ taking values in the Wasserstein space $\W_p$ of measures with finite $p$-th moment, and seeks to infer some quantity pertaining to the law of $\Lambda$ using the observed data, typically in a nonparametric fashion. Such questions can be seen as part of \emph{next-generation functional data analysis}, borrowing the terminology of \citet[Section~6]{wang2016functional}. \subsection{Fr\'echet Means of Random Measures}\label{subsec:frechetmeans} Perhaps the most basic question here, as anywhere, is estimating a mean. Clearly we could estimate the mean of $\Lambda$ by the average $n^{-1}(\mu_1+\dots+\mu_n)$, which is also a probability measure. While this may often be a good estimator, in certain modern applications, such as imaging, it exhibits some unsatisfactory properties. As a simple example, consider two Dirac measures at distinct points $x\ne y$. Their average is the ``blurred" measure putting mass $1/2$ at $x$ and $y$. In contrast, as we shall see below, the Wasserstein distance leads to an average that is a Dirac measure at the midpoint $(x+y)/2$.
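In one dimension the Wasserstein average is obtained by averaging quantile functions, a fact justified in Subsection~\ref{subsec:geometry} below; a small Python sketch (ours, with hypothetical naming) makes the Dirac example concrete.
\begin{verbatim}
import numpy as np

def frechet_mean_1d(samples, grid_size=1000):
    # W_2 Frechet mean of one-dimensional samples: average the
    # empirical quantile functions on a common grid.  The output is
    # the quantile function of the Frechet mean.
    t = (np.arange(grid_size) + 0.5) / grid_size
    return np.mean([np.quantile(s, t) for s in samples], axis=0)

# Two Dirac measures at 0 and 1: the linear average is a 50/50
# mixture, whereas the Wasserstein mean is a Dirac at the midpoint.
q = frechet_mean_1d([np.array([0.0]), np.array([1.0])])
print(np.unique(q))  # -> [0.5]
\end{verbatim}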
We shall focus on the special case $p=2$, which is the most elegant, and provides the canonical setup in deformation models (see Subsection~\ref{subsec:generativeModel}). One way of giving a meaning to the notion of expectation in a general metric space is to consider the \emph{Fr\'echet mean} (better known in analysis as \emph{barycentre}), named after \cite{frechet1948elements} and defined as the minimiser of the \emph{Fr\'echet functional} \[ F(\mu) =\E W^2_2(\Lambda,\mu) =\ownint{\W_2}{}{W_2^2(\lambda,\mu)}{\P(\lambda)}, \qquad \mu\in \W_2, \] where $\P$ is the law of $\Lambda$. We shall refer to such a minimiser as the \emph{population (Fr\'echet) mean} to distinguish it from the empirical version, where $\E W^2_2(\Lambda,\mu)$ is replaced with $\sum W_2^2(\mu_i,\mu)$. Existence, uniqueness, computation, laws of large numbers and central limit theorems for Fr\'echet means with respect to general metrics have been studied extensively, under the umbrella of \emph{non-Euclidean statistics} \citep[e.g.,][]{huckemann2010intrinsic,kendall2011limit}. Even existence and uniqueness are nontrivial questions for many metrics and depend subtly on the induced geometry. It turns out that $\W_2$ induces a geometry that is very close to Riemannian (see Subsection~\ref{subsec:geometry}). Despite posing challenges in that it is infinite-dimensional, has unbounded curvature, and presents an abundance of singularities, its geometry exhibits many favourable (indeed quite unusual for nonlinear spaces) properties owing to the structure of the optimal transport problem. By means of convex analysis, \cite{agueh2011barycenters} deduce existence, uniqueness, and a characterisation of empirical Fr\'echet means in $\W_2(\R^d)$, in what has become a seminal paper. Existence always holds, whereas the mean is unique provided that one of the measures $\mu_i$ is absolutely continuous. The results extend to the population version \citep{pass2013optimal}: the condition is that with positive probability $\Lambda$ is absolutely continuous (assuming that the Fr\'echet functional $F$ is finite). A notable exception is again when $d=1$, in which case Fr\'echet means are unique with the sole restriction that $F$ is finite. A law of large numbers in Wasserstein space was proved by \cite{legouic2017existence}, in a very general setting (for arbitrary $p>1$, and for spaces more general than $\R^d$). Since $\W_2(\R^d)$ is itself a complete and separable metric space, one can view $\P$, the law of $\Lambda$, as an element in the ``second level" Wasserstein space $\W_2(\W_2(\R^d))$. Le Gouic \& Loubes show that if $\P_n$ is a sequence of laws converging to $\P$ in the second level Wasserstein space, then the Fr\'echet means of $\P_n$ converge to that of $\P$ (if unique) in the ``first level" $\W_2(\R^d)$. This setup covers the case where $\P_n$ is the empirical measure (in $\W_2(\W_2(\R^d))$) corresponding to a sample from $\Lambda$. See \cite{alvarez2018wide} for an extension to \emph{trimmed} Fr\'echet means. \subsection{Fr\'echet Means and Generative Models}\label{subsec:generativeModel} From a statistical perspective, the choice of a metric and the consideration of the corresponding Fr\'echet mean often implicitly assumes a certain underlying data-generating mechanism for the data.
In the case of the Wasserstein metric, this mechanism is inextricably linked to \emph{warping} or \emph{phase variation} \citep{ramsay2005functional,marron2015functional,wang2016functional}, where one wishes to infer the law of a process $Y$ on (say) $[0,1]$, but only has access to realisations of $\tilde{Y}=Y\circ T^{-1}$, where $T:[0,1]\to [0,1]$ is a random \emph{warp/deformation} map. This setup is quite natural in physiological data such as growth curves or spike trains, where each individual may have an intrinsic time scale, a sort of functional random effect. The problem would then be to correct for the effect of $T$ that distorts time, and recover the sample paths in the ``correct", or ``objective", time scale. Typically, it is natural to assume that $T$ is an increasing homeomorphism, on the basis that time should always move forward, rather than backward, and, for identifiability reasons, that $\E T(t)=t,\, t\in[0,1]$. Now, when the functional datum $Y$ is a random probability measure in $\W_2(\R^d)$ with intensity $\mathbb{E}[Y]=\lambda$, the warped version $\tilde{Y}=T\# Y$ is a random measure with conditional intensity $\Lambda=\mathbb{E}[\tilde{Y}|T]=T\#\lambda$. Assuming that $T$ is increasing with $\E T$ equal to the identity then implies that $\lambda$ is a Fr\'echet mean of $\Lambda$. More generally, if $\lambda\in\W_2(\R^d)$ and $T$ is a random continuous function with mean identity that can be written as the gradient of a convex function on $\R^d$, then $\lambda$ is a Fr\'echet mean of the random measure $\Lambda=T\#\lambda$. In other words, the Wasserstein geometry is canonical under the deformation model, and estimation of a Fr\'echet mean implicitly assumes a deformation model. The result in this form is due to \cite{zemel2017frechet}, but a parametric version is due to \cite{bigot2012characterization}. When $\lambda$ is absolutely continuous, and $T$ is sufficiently injective, $\Lambda=T\#\lambda$ is absolutely continuous and the Fr\'echet mean of $\Lambda$ is unique, and equals $\lambda$. In the particular case of Gaussian measures, the result even holds in infinite dimensions \citep{masarotto2018procrustes}. \subsection{Fr\'echet Means and Multicouplings}\label{sec:frechetMulticoupling} The Fr\'echet mean problem is related to a \emph{multimarginal} formulation of optimal transport considered by \cite{gangbo1998optimal}. Given $\mu_1,\dots,\mu_n\in \W_2(\R^d)$, an optimal \emph{multicoupling} is a joint distribution of a random vector $(X_1,\dots,X_n)$ such that $X_i\sim \mu_i$ and \[ \frac1{2n^2}\E \sum_{1\le i<j\le n} \|X_i - X_j\|^2 =\frac 1{2n}\E \sum_{i=1}^n \|X_i - \overline X\|^2 \] is minimised. \cite{agueh2011barycenters} show that if $(X_1,\dots,X_n)$ is an optimal multicoupling, then the law of $\overline{X}=n^{-1}\sum_i X_i$ is a Fr\'echet mean of $\{\mu_i\}_{i=1}^n$. Inspection of their argument shows that it can also give the ``only if" direction. And, when at least one measure $\mu_i$ is regular, necessity and sufficiency combined can be used to construct the optimal multicoupling as $X_i=\mathbf{t}_{\lambda}^{\mu_i}(Z)$, where $Z\sim \lambda$ and $\lambda$ is the Fr\'echet mean (see \cite{pass2013optimal} and \cite{zemel2017frechet} for more details). This illustrates how constructing the optimal multicoupling is inextricably linked to finding the Fr\'echet mean (for the latter, see Section \ref{steepest_descent}). In fact, the argument of \cite{agueh2011barycenters} extends to infinite-dimensional and even non-linear spaces.
Let $(\mathcal X,\rho)$ be a complete separable ``barycentric metric space": for any $x_1,\dots,x_n\in\mathcal X$ there exists a unique Fr\'echet mean $\overline{x}$. Fr\'echet means of given measures $\mu_1,\dots,\mu_n\in \W_2(\mathcal X)$ are precisely the laws of $\overline X$, where $(X_1,\dots,X_n)$ is an optimal multicoupling with respect to the cost $\E \sum_{i=1}^n \rho(X_i,\overline X)^2$. This relation illustrates the idea that the Wasserstein space captures the geometry of the underlying space. In particular, the Fr\'echet mean of Dirac measures is a Dirac measure at the Fr\'echet mean of the underlying points. Finally, we stress that the relation extends to any $p>1$, where $\overline x^{(p)}$ minimises $\sum \rho(x_i,x)^p$ and optimality is with respect to $\E \sum \rho(X_i,\overline{X}^{(p)})^p$. (Strictly speaking, these are not Fr\'echet means, as one minimises $\sum W_p^p(\mu_i,\mu)$ instead of $\sum W_p^2(\mu_i,\mu)$.) \subsection{Geometry of Wasserstein space}\label{subsec:geometry} A typical step in estimating Fr\'echet means in non-Euclidean settings is approximation of the manifold by a linear space, the tangent space. In the Wasserstein case, the latter is a function space. Let $\lambda$ be the Fr\'echet mean, and assume sufficient regularity, so that $\lambda$ is unique and absolutely continuous. Then convergence of a sample Fr\'echet mean $\widehat{\lambda}_n$ to $\lambda$ can be quantified by that of the optimal map $\topt \lambda{\widehat{\lambda}_n}$ to the identity map $\mathbf i$, because \[ W_2^2(\widehat{\lambda}_n,\lambda) =\ownint{\R^d}{}{\|\topt\lambda{\widehat{\lambda}_n}(x) - x\|^2}{\lambda(x)} =\|\topt\lambda{\widehat{\lambda}_n} - \mathbf i\|^2_{\mathcal L^2(\lambda)}. \] Here $\mathcal L^2(\lambda)$ is the $L^2$-like space of measurable functions $\mathbf r:\R^d\to\R^d$ such that the real-valued function $x\mapsto \|\mathbf r(x)\|$ is in $L^2(\lambda)$, and whose $L^2(\lambda)$-norm defines $\|\mathbf r\|_{\mathcal L^2(\lambda)}$. Thus, we can linearise the Wasserstein space by identifying an arbitrary measure $\mu$ with the function $\topt\lambda\mu-\mathbf i$ in the linear space $\mathcal L^2(\lambda)$; subtracting the identity ``centres" this linear space at $\lambda$. \subsubsection{The Tangent Bundle} \cite{ambrosio2008gradient} consider absolutely continuous curves in Wasserstein space, and show that optimal maps arise as minimal tangent vectors to such curves. With that in mind they define the tangent space at $\lambda$ as the span of such maps minus the identity: \[ \mathrm{Tan}_\lambda =\overline{\{t(\topt\lambda\mu-\mathbf i):\mu\in \W_2; t\in\R\}}^{\mathcal L^2(\lambda)}. \] By definition, each $\topt\lambda\mu$ (and the identity) is in $\mathcal L^2(\lambda)$, so $\mathrm{Tan}_\lambda\subseteq \mathcal L^2(\lambda)$, from which it inherits the inner product. The definition can be adapted to a non-absolutely continuous $\lambda$ by restricting $\mu$ in the definition of $\mathrm{Tan}_\lambda$ to those $\mu$ for which $\topt\lambda\mu$ exists (this optimal map might not be unique, and any possible choice of $\topt\lambda\mu$ is in the tangent space). There is an alternative equivalent definition of the tangent space in terms of gradients of smooth functions; see \citet[Definition~8.4.1 and Theorem~8.5.1]{ambrosio2008gradient}. The alternative definition highlights that it is essentially the \emph{inner product} that depends on $\lambda$, but not the elements of the tangent space.
The exponential map ${\exp}_\lambda:\mathrm{Tan}_\lambda \to \W_2$ at $\lambda$ is the restriction of the transformation that sends $\mathbf r\in \mathcal L^2(\lambda)$ to $(\mathbf r + \mathbf i)\#\lambda\in \W_2$. Specifically, \[ {\exp}_{\lambda}(t(\mathbf t - \mathbf i)) =[t(\mathbf t - \mathbf i) + \mathbf i]\#\lambda = [t\mathbf t + (1-t)\mathbf i]\#\lambda \quad(t\in\R). \] When $\lambda$ is absolutely continuous, the log map ${\log}_\lambda:\W_2 \to \mathrm{Tan}_\lambda$ is \[ \log_{\lambda}(\mu) =\topt{\lambda}{\mu} - \mathbf i, \] and is the right inverse of the exponential map (which is therefore surjective). Segments in the tangent space are retracted to the Wasserstein space under $\exp_\lambda$ to McCann's interpolant \citep{mccann1997convexity} \[ \left[t\,\topt{\lambda}{\mu} + (1-t)\mathbf i\right]\#\lambda, \] and these are the unique (constant speed) geodesics in Wasserstein space \citep[Proposition 5.32]{santambrogio2015optimal}. If $\lambda$ is singular, then the log map is only defined on a subset of Wasserstein space. See \cite{gigli2011inverse} for a description of the tangent bundle when the underlying space $\R^d$ is replaced by a Riemannian manifold. \subsubsection{Curvature and Compatible Measures} If $\mu,\nu,\rho\in\W_2$, then a coupling argument shows that \begin{equation}\label{eq:positiveCurvature} \|\log_\rho(\mu) - \log_\rho(\nu)\|_{\mathcal L^2(\rho)}^2 = \|\topt \rho\mu - \topt\rho\nu\|_{\mathcal L^2(\rho)}^2 =\ownint {}{}{\|\topt\rho\mu(x) - \topt\rho\nu(x)\|^2}{\rho(x)} \ge W_2^2(\mu,\nu). \end{equation} In differential geometry terminology, this means that $\W_2$ has nonnegative sectional curvature. In the special case $d=1$, there is equality, and the Wasserstein space is flat; the correspondence $\mu\iff \topt\rho\mu-\mathbf i$ is an \emph{isometry}, and $\W_2(\R)$ can be viewed as a subset of the Hilbert space $L^2(\rho)$. Computation of Fr\'echet means is then particularly simple: if $\mu_1,\dots,\mu_n$ are arbitrary measures in $\W_2(\R)$ and $\nu$ is any absolutely continuous measure, then the Fr\'echet mean of $(\mu_i)$ is $[(1/n)\sum \topt\nu{\mu_i}]\#\nu$; this extends to the population version. An important extension to $\R^d$ was obtained by \cite{boissard2015distribution}. Equality will hold in Equation~\ref{eq:positiveCurvature} provided some ``compatibility" holds between the measures $\mu,\nu,\rho$. The composition $\topt\rho\nu\circ \topt\mu\rho$ pushes $\mu$ forward to $\nu$ by definition, but might not be the optimal map. We say that $\mu,\nu,\rho$ are \emph{compatible} if $\topt\rho\nu\circ \topt\mu\rho$ is optimal, i.e., equals $\topt\mu\nu$. \cite{boissard2015distribution} show that if the collection $(\mu_1,\dots,\mu_n,\nu)$ is compatible (in their terminology, the optimal maps are \emph{admissible}) in this sense, then again the Fr\'echet mean is $[(1/n)\sum \topt\nu{\mu_i}]\#\nu$. This setup covers the one-dimensional setup, but also multivariate measures with structure that mimics the one-dimensional case. For example, a collection of measures having the same $d$-dimensional copula (and potentially different marginals) is compatible, and so is a collection of measures having the same ``angular" behaviour but different marginal distributions for their norms. \subsubsection{Gaussian Measures} Without such structural restrictions the Wasserstein space is positively curved, and computation of the Fr\'echet mean of a sample is not straightforward.
As an important example, if $\mu_i\sim N(0,\Sigma_i)$ are nonsingular Gaussian measures on $\R^d$, then the Fr\'echet mean is also Gaussian and its covariance is the unique nonsingular solution of the matrix equation \begin{equation}\label{eq:GaussFrechet} \Sigma =\frac 1n \sum_{i=1}^n (\Sigma^{1/2}\Sigma_i\Sigma^{1/2})^{1/2}. \end{equation} The $\mu_i$'s will be compatible if the covariances commute, in which case we have the explicit solution $\Sigma^{1/2}=n^{-1}(\Sigma_1^{1/2}+\dots+\Sigma_n^{1/2})$, but otherwise there is no explicit expression for the Fr\'echet mean. The restriction of $\W_2(\R^d)$ to Gaussian measures leads to a \emph{stratified space}, whose geometry was studied carefully by \cite{takatsu2011wasserstein}, including expressions for the curvature. In particular, the latter grows without bound as one approaches singular covariance matrices. \subsection{Fr\'echet Means via Steepest Descent}\label{steepest_descent} A common procedure for finding Fr\'echet means is differentiation of the Fr\'echet functional $F$ and moving in the negative direction of the gradient \citep{karcher1977riemannian,afsari2013convergence}. The gradient at $x$ typically takes the form \[ \nabla F(x) =-\frac1n \sum_{i=1}^n\log_{x}(x_i). \] This formula also holds true in Wasserstein space, where the log map is as given in Subsection~\ref{subsec:geometry}. Steepest descent can then be defined using the exponential map as \[ \rho_{j+1} =\exp_{\rho_j}(-\nabla F(\rho_j)) =\left[\frac 1n\sum_{i=1}^n\topt{\rho_j}{\mu_i}\right]\#\rho_j. \] The resulting iteration was independently arrived at in this steepest descent form by \cite{zemel2017frechet} and in the form of a fixed-point iteration by \cite{alvarez2016fixed}. It has the advantage of reducing the multitransport problem of finding the Fr\'echet mean to a succession of pairwise problems that are simpler in nature, in the same spirit as \emph{generalised Procrustes analysis} \citep{dryden1998statistical}. This benefit is best illustrated in the Gaussian case, where the optimal maps have the explicit expression given in Equation~\ref{eq:gaussTransport}. The algorithm converges to the unique Fr\'echet mean in this Gaussian case, and in general will at least reach a stationary point (where $\nabla F$ vanishes). There are local minima that are not global: \cite{alvarez2016fixed} construct measures $\mu_1,\dots,\mu_4,\mu$ in $\R^2$ such that the average of $\topt\mu{\mu_i}$ is the identity, but $\mu$ is not the Fr\'echet mean. Their example shows that the problem cannot be solved by smoothness conditions on the measures. But smoothness and convexity of the supports yield an optimality criterion for local minima \citep{zemel2017frechet}, roughly in that a sufficiently smooth local minimum is a global minimum. \subsection{Large Sample Statistical Theory in Wasserstein Space} The general consistency result of \cite{legouic2017existence} is the important and necessary first step in providing a sound statistical theory for random measures in Wasserstein space. The next step would be to establish the rate of convergence and a central limit theorem.
Exploiting the central limit theorem in Hilbert spaces, the one-dimensional case is well-understood, even under sampling noise: the empirical Fr\'echet mean $\widehat{\lambda}_n$, viewed in the tangent space via the rescaled log map $\sqrt n(\topt\lambda{\widehat{\lambda}_n} - \mathbf i)$, converges in distribution to a zero-mean Gaussian process whose covariance structure is that of the random element $\topt\lambda\Lambda$ \citep{panaretos2016amplitude}; see \cite{bigot2018upper} for minimax-type results in this vein. Since the Wasserstein space on $\R^d$ is embedded in a Hilbert space under the compatible setup of \cite{boissard2015distribution}, these results can certainly be extended to that setup. In fact, \cite{boissard2015distribution} use this embedding to carry out principal component analysis (PCA) in Wasserstein space. See \cite{bigot2017geodesic} for an alternative procedure, \emph{convex PCA}. The only central limit theorem-type result we know of beyond the compatible setup was found recently by \cite{agueh2017vers}. Suppose that $\Lambda$ takes finitely many values: $\P(\Lambda=\lambda_k)=p_k$, $k=1,\dots,K$, and $\lambda_k$ is Gaussian $N(0,\Sigma_k)$ with $\Sigma_k$ nonsingular. Given a sample $\mu_1,\dots,\mu_n$ from $\Lambda$, let $\widehat{p}_n(k)$ be the proportion of $(\mu_i)$'s that equal $\lambda_k$. Then $\sqrt n(\widehat{p}_n - p)$ has a Gaussian limit. Equation~\ref{eq:GaussFrechet} extends to weighted Fr\'echet means, and defines $\Sigma$ in a sufficiently smooth way, so one can invoke the delta method to obtain a central limit theorem for $\sqrt n(\widehat{\Sigma}_n-\Sigma)$. \cite{agueh2017vers} also cover the case $K=2$ and $\lambda_i$ arbitrary, though this setup falls under the umbrella of compatibility, since any pair of measures is a compatible collection. Ongoing work by \cite{kroshnin2018central} focusses on extending the results of \cite{agueh2017vers} to arbitrary random Gaussian/elliptical measures. Beyond this location-scatter setup, very recent results by \cite{ahidar2018rate} suggest that the rate of convergence of the empirical Fr\'echet mean to its population counterpart can be slower than $n^{-1/2}$. \section{Computational Aspects}\label{sec:numerics} Beyond the one-dimensional and Gaussian cases, explicit expressions for the Wasserstein distance and/or the optimal couplings are rare. When $\mu=(1/n)\sum_{i=1}^n\delta_{x_i}$ and $\nu=(1/m)\sum_{j=1}^m\delta_{y_j}$ are uniform discrete measures on $n$ and $m$ points, a coupling $\gamma$ can be identified with an $n\times m$ matrix $\Gamma$, where $\Gamma_{ij}$ represents the mass to be transferred from $x_i$ to $y_j$. The cost function reduces to a cost matrix $c_{ij}=\|x_i - y_j\|^p$, and the total cost associated with it is $\sum_{ij}\Gamma_{ij}c_{ij}$. This double sum is to be minimised over $\Gamma$ subject to the $m+n$ mass preservation constraints \[ \sum_{i=1}^n \Gamma_{ij} = 1/m \quad (j=1,\dots,m), \qquad \sum_{j=1}^m \Gamma_{ij} = 1/n \quad (i=1,\dots,n), \qquad \Gamma_{ij}\ge0. \] One can easily write the constraints in the weighted version of the problem. This optimisation problem can be solved using standard linear programming techniques. In particular, there exists an optimal solution $\Gamma$ with at most $n+m-1$ nonzero entries. In the special case of $n=m$ and uniform measures, the extremal points of the constraints polytope are the permutation matrices, and these correspond precisely to deterministic couplings, which have $n$ (rather than $2n-1$) nonzero entries.
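For illustration, this linear program can be handed to a generic solver; the following Python sketch (ours, using scipy's general-purpose solver rather than any of the specialised methods discussed next) computes $W_2$ between uniform discrete measures on the real line.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def discrete_w2(x, y):
    # Minimise sum_ij Gamma_ij c_ij with c_ij = |x_i - y_j|^2 subject
    # to the mass preservation constraints.  One equality constraint
    # is redundant, which the solver tolerates.
    n, m = len(x), len(y)
    C = np.abs(np.asarray(x)[:, None] - np.asarray(y)[None, :]) ** 2
    A_eq = np.zeros((n + m, n * m))    # Gamma flattened row-major
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1     # sum_j Gamma_ij = 1/n
    for j in range(m):
        A_eq[n + j, j::m] = 1              # sum_i Gamma_ij = 1/m
    b_eq = np.concatenate([np.full(n, 1 / n), np.full(m, 1 / m)])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return np.sqrt(res.fun), res.x.reshape(n, m)
\end{verbatim}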
The specific structure of the constraints matrix allows the development of specialised algorithms: the Hungarian method of \cite{kuhn1955hungarian} and its variant by \cite{munkres1957algorithms} are classical examples, with alternatives such as network simplex, min-flow-type algorithms and others \citep[see][Chapter~6]{luenberger2008linear}. The best algorithms have the prohibitive complexity $n^3\log n$ in the worst-case scenario. \cite{sommerfeld2018optimal} propose sampling $s\ll n$ points from $\mu$ and $\nu$ and estimating $W_p(\mu,\nu)$ by the empirical distance $W_p(\mu_s,\nu_s)$. They provide bounds on the computational and statistical trade-off regulated by $s$. The multimarginal problem can also be recast as a linear program whose solution yields the Fr\'echet mean (see Subsection \ref{sec:frechetMulticoupling}). If we have $n$ measures $\mu_i$ supported on $m_i$ points ($i=1,\dots,n$), then the number of variables in the problem is $\prod m_i$, and the number of equality constraints is $\sum m_i$, of which $n-1$ are redundant. See \cite{anderes2016discrete} for a detailed account of the problem, where they show the peculiar property that the optimal maps $\topt{\overline \mu}{\mu_i}$ exist, where $\overline \mu$ is a Fr\'echet mean. This is far from obvious, since besides the uniform discrete setup with equal number of points, the optimal coupling between discrete measures is rarely induced from a map. There are alternative formulations with fewer variables and fewer constraints: exact ones \citep{borgwardt2018improved} as well as polynomial-time approximations \citep{borgwardt2017strongly}. One can certainly approximate $W_p(\mu,\nu)$ by $W_p(\mu_n,\nu_n)$ for some $\mu_n,\nu_n$ supported on, say, $n$ points. The approximated problem can be solved exactly, as it is a finite linear program. How best to approximate a measure by discrete measures amounts to \emph{quantisation} and is treated in detail in \cite{graf2007foundations}. Unfortunately, quantisation is extremely difficult in practice, and even one-dimensional measures rarely admit explicit solutions; moreover, the computational cost of solving the $n$-point-to-$n$-point problem scales badly with $n$. Another class of algorithms is ``continuous" in nature. Recall from Subsection~\ref{subsec:geometry} that optimal maps $\topt\mu\nu$ are equivalent to the unique geodesics in $\W_2$. \cite{benamou2000computational} exploit this equivalence and develop a numerical scheme to approximate the entire geodesic. Although this dynamic formulation adds an extra ``time" dimension to the problem, it can be recast as a convex problem, unlike the formulation with the optimal map as variable. \cite{chartrand2009gradient} carry out steepest descent in the dual variable $\varphi$ in order to maximise the dual $\varphi\mapsto \ownint{}{}{\varphi}\mu+\ownint{}{}{\varphi^*}\nu$. In an influential paper, \cite{cuturi2013sinkhorn} advocated adding an \emph{entropy} penalty term $\kappa \sum \Gamma_{ij}\log \Gamma_{ij}$ to the objective function. This yields a strictly convex problem with complexity $n^2$, much smaller than the linear programming complexity $n^3\log n$. The entropy term forces $\Gamma$ to be diffuse (strictly positive), in stark contrast with the unpenalised optimal coupling, but the regularised solution converges to the sparse one as $\kappa\searrow 0$.
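A bare-bones version of the resulting scaling algorithm reads as follows (a sketch under the same uniform-marginal convention as above; numerical safeguards such as log-domain computations are omitted).
\begin{verbatim}
import numpy as np

def sinkhorn(C, kappa, n_iter=1000):
    # Alternately rescale the Gibbs kernel K = exp(-C/kappa) so that
    # Gamma = diag(u) K diag(v) has the prescribed row/column sums.
    n, m = C.shape
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    K = np.exp(-C / kappa)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    Gamma = u[:, None] * K * v[None, :]
    return np.sum(Gamma * C), Gamma  # unregularised cost of Gamma
\end{verbatim}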
This idea is extended to the Fr\'echet mean problem in \cite{cuturi2014fast}, where the Fr\'echet mean is computed with respect to the penalised Wasserstein distance, and in \cite{bigot2017penalized}, where the penalisation is imposed on the mean itself, rather than on the distance. \cite{bigot2018data} suggest a data-driven choice of the regularisation parameter according to the Goldenshluger--Lepski principle. This field of research is very active, and there are dozens of extensions and new algorithms. One can find a short survey in \cite{tameling2018computational}, and we refer to \citet[Chapter 6]{santambrogio2015optimal} and especially the forthcoming book \cite{peyre2018computational} for more details and references. \section{On Some Related Developments}\label{sec:MoreReferences} An interesting recent development that is, strictly speaking, not so much about Wasserstein distances as about measure transportation itself considers how to generalise notions related to quantiles to several dimensions. In one dimension, the quantile function $F_Y^{-1}$ is the optimal map from a uniform variable $U$ to $Y$. This observation can be used in order to define a multivariate quantile function of $Y$ using the optimal transport map $\topt UY$ from some reference random variable $U$ (e.g., uniform on the unit ball). \cite{chernozhukov2017monge} describe the resulting form of the quantile contours and the induced notions of depth and ranks, and estimate them from data. Further work by \cite{hallin2017distribution} considers extensions of the approach that do not require finite variance for $Y$ (as is the case in one dimension). This measure-transportation approach also allows one to extend quantile regression to multivariate setups \citep{carlier2016vector}. Finally, due to space considerations we have not attempted to describe the machine learning side of optimal transport, though there is a fast-growing literature on such tasks. Indicative examples include estimation of a low-dimensional measure in high-dimensional space \citep{canas2012learning}, regression in the space of histograms \citep{bonneel2016wasserstein}, dictionary learning \citep{rolet2016fast}, Gaussian processes indexed by measures on $\R$ \citep{bachoc2017gaussian} or $\R^d$ \citep{bachoc2018gaussian}, clustering in Wasserstein space \citep{delBarrio2018robust}, and unsupervised alignment of point clouds in high dimensions \citep{grave2018unsupervised}. \section*{Acknowledgments} This work was supported in part by a European Research Council Starting Grant Award to Victor M. Panaretos. Yoav Zemel is funded by Swiss National Science Foundation grant \#178220. We thank a reviewer for comments on a preliminary version of the paper.
\section{Usefulness of calculating Fourier series in the SEM} This note presents a method, given nodal values on multidimensional nonconforming spectral elements, for calculating global Fourier-series coefficients. The method is ``exact'' in that, beyond the approximation inherent in the spectral-element method (SEM), no further approximation is introduced that exceeds computer round-off error. The method is very useful when the SEM has yielded an adaptive-mesh representation of a spatial function whose global Fourier spectrum must be examined, e.g., in dynamically adaptive fluid-dynamics simulations such as \citep{RFFP2005}. \section{Derivation of an exact transform} \label{mthdlg} Suppose we have some functional problem in a spatial domain $\mathbb{D}\assign[-\pi,\pi]^d$ (possibly including toroidal geometry) and use coordinate transformations \begin{equation} \vec\vartheta_k\quad\text{from}\quad \vec\xi\in\mathbb{E}_0:=[-1,1]^d\quad\text{to}\quad\vec{x}\in\mathbb{E}_k \label{lct} \end{equation} to partition $\mathbb{D}=\bigcup_{k=1}^K\mathbb{E}_k$ by $K$ elements $\mathbb{E}_k\assign\vec\vartheta_k(\mathbb{E}_0)$ with disjoint\footnote{$\overset{\bullet}{\mathbb{E}}_k\bigcap\overset{\bullet}{\mathbb{E}}_{k'}=\varnothing$ if $k\neq{k'}$} interiors. Typically the SEM approximates the exact solution by its piecewise polynomial representation of degree $P$: \begin{equation} u^{\rm{ex}}(\vec{x})\approx u(\vec{x})=\sum_{k=1}^K\sum_{\vec\jmath\in\mathbb{J}} u_{\vec\jmath,k}\phi_{\vec\jmath,k}(\vec{x}), \label{ppr} \end{equation} where $\mathbb{J}:=\{0,\ldots,P\}^d$ indexes the values $u_{\vec\jmath,k}\assign u(\vec{x}_{\vec\jmath,k})$ and nodes $\vec{x}_{\vec\jmath,k}\assign\vec\vartheta_k(\vec\xi_{\vec\jmath})$ mapped from the $d$-dimensional Gauss-Lobatto-Legendre (GLL) quadrature nodes $\xi^\alpha_{\vec\jmath}\assign\xi_{\jmath^\alpha}\in[-1,1]$, \begin{equation} \phi_{\vec\jmath,k}(\vec{x})\assign\begin{cases} \phi_{\vec\jmath}\circ\vec\vartheta^{-1}_k(\vec{x}),&\vec{x}\in\mathbb{E}_k\\ 0,&\vec{x}\not\in\mathbb{E}_k\end{cases} \label{mappedip} \end{equation} is the $\vec{x}_{\vec\jmath,k}$-interpolating piecewise-polynomial, \begin{equation} \phi_{\vec\jmath}(\vec\xi)\assign \prod_{\alpha=1}^d\phi_{\jmath^\alpha}(\xi^\alpha) \quad\text{and}\quad \phi_j(\xi)\assign\sum_{p=0}^P\check{\phi}_{j,p}{\rm{L}}_{p}(\xi) \label{GLLip} \end{equation} are $\vec\xi_{\vec\jmath}$\:- and $\xi_j$-interpolating polynomials, $\check{\phi}_{j,p}\equiv w_j{\rm{L}}_{p}(\xi_j)/\sum_{j'=0}^Pw_{j'}{\rm{L}}_{p}(\xi_{j'})^2$ is a Legendre coefficient \citep[e.g.,][(B.3.15)]{DFM2002}, $\sqrt{p+\half}{\rm{L}}_p(\xi)$ is the orthonormal Legendre polynomial of degree $p$ on $[-1,1]$ and $w_j$ is the GLL quadrature weight.
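For the reader's convenience, the nodes $\xi_j$, weights $w_j$ and coefficients $\check{\phi}_{j,p}$ are easily tabulated; the following Python sketch (ours; the computations reported in Section~\ref{rslts} used MatLab$^{\circledR}$, and the variable names here are not from the text) does so for a single element.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

P = 5
LP = np.eye(P + 1)[P]               # coefficient vector of L_P
# GLL nodes: the endpoints together with the roots of L_P'(xi),
# with the standard weights w_j = 2 / (P (P+1) L_P(xi_j)^2).
xi = np.concatenate(([-1.0],
                     legendre.legroots(legendre.legder(LP)),
                     [1.0]))
w = 2.0 / (P * (P + 1) * legendre.legval(xi, LP) ** 2)
# V[j, p] = L_p(xi_j), so that
# chk[j, p] = w_j L_p(xi_j) / sum_j' w_j' L_p(xi_j')^2
# are the Legendre coefficients of the interpolants phi_j.
V = legendre.legvander(xi, P)
chk = (w[:, None] * V) / (w[:, None] * V ** 2).sum(axis=0)
\end{verbatim}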
In many cases a physically interesting quantity is the global Fourier-series coefficient $\hat{u}_{\vec{q}}$ at integer wavenumber components $q^\alpha$, usually approximated by $M^d$-point trigonometric $d$-cubature in such a manner as \begin{align} \hat{u}_{\vec{q}}&\assign\frac{1}{(2\pi)^d}\int_{\mathbb{D}} u(\vec{x})\e^{-{\rm{i}}\vec{q}\vdp\vec{x}}\d v(\vec{x}) \equiv\sum_{k=1}^K\sum_{\vec\jmath\in\mathbb{J}} \hat{\phi}_{\vec\jmath,k,\vec{q}}u_{\vec\jmath,k}, \label{Fsc}\\ \text{where}\quad\hat{\phi}_{\vec\jmath,k,\vec{q}}&= \frac{1}{M^d}\sum_{\vec{m}\in\mathbb{M}} \phi_{\vec\jmath,k}(\vec{x}_{\vec{m}})\e^{-{\rm{i}}\vec{q}\vdp\vec{x}_{\vec{m}}} -\mathcal{E}_{\vec{q}}\phi_{\vec\jmath,k}, \label{aFsc} \end{align} $\d v(\vec{x}):=\prod_{\alpha=1}^d\d x^\alpha$ is the volume differential and $\mathbb{M}:=\{1,\ldots M\}^d$ indexes trigonometric nodes $x^\alpha_{\vec{m}}\assign(2m^\alpha/M-1)\pi$. Note that whenever $\mathbb{D}$ is adaptively repartitioned there is an additional computation cost of $\mathcal{O}(M^d)$ per node to use \eqref{ppr} to provide in \eqref{aFsc} the values $\phi_{\vec\jmath,k}(\vec{x}_{\vec{m}})$, as well as a $d$-cubature error \citep[generalizing][theorem 4.7]{Boyd1989} \begin{align*} \mathcal{E}_{\vec{q}}u\equiv\sum_{\vec{r}\in \Zset^d\setminus\{\vec0\}}\hat{u}_{\vec{q}+M\vec{r}} \end{align*} that in general converges no faster than $\mathcal{O}(M^{-2})$, because $\mathbb{C}^1$ discontinuities of \eqref{ppr} across element boundaries cause $|\hat{u}_{\vec{q}}|$ to decay only as $\mathcal{O}(|\vec{q}|^{-2})$. We discover a more accurate method by substituting \eqref{mappedip} into \eqref{Fsc} to yield \begin{align*} \hat{\phi}_{\vec\jmath,k,\vec{q}}&=\frac{1}{(2\pi)^d} \int_{\mathbb{E}_k}\e^{-{\rm{i}}\vec{q}\vdp\vec{x}} \phi_{\vec\jmath}\circ\vec\vartheta^{-1}_k(\vec{x})\d v(\vec{x})\\ &\overset{\eqref{lct}}{=}\frac{1}{(2\pi)^d}\int_{\mathbb{E}_0} \e^{-{\rm{i}}\vec{q}\vdp\vec\vartheta_k(\vec\xi)} \phi_{\vec\jmath}(\vec\xi) \left|\frac{\partial\vec\vartheta_k}{\partial\vec\xi}\right|\d v(\vec\xi)\\ &\overset{\eqref{GLLip}}{=} \frac{1}{(2\pi)^d}\int_{\mathbb{E}_0} \e^{-{\rm{i}}\vec{q}\vdp\vec\vartheta_k(\vec\xi)} \left(\prod_{\alpha=1}^d\sum_{p=0}^P \check{\phi}_{\jmath^\alpha,p}{\rm{L}}_p(\xi^\alpha)\right) \left|\frac{\partial\vec\vartheta_k}{\partial\vec\xi}\right|\d v(\vec\xi). \end{align*} In many applications, especially when $u$-structure rather than domain geometry is guiding the mesh adaption, each $\mathbb{E}_k$ is a $d$-parallelepiped with center $\vec{a}_k$ and $d$ legs $2\vec{h}^\alpha_k$, so we have an affinity $\vec\vartheta_k(\vec\xi):=\vec{a}_k+\vec{\vec{h}}_k\vdp\vec\xi$, where the $\vec{h}^\alpha_k$ make up the columns of $\vec{\vec{h}}_k$. Then we obtain \[ \hat{\phi}_{\vec\jmath,k,\vec{q}}= \frac{1}{(2\pi)^d}\left|\vec{\vec{h}}_k\right| \e^{-{\rm{i}}\vec{q}\vdp\vec{a}_k} \prod_{\alpha=1}^d\sum_{p=0}^P \check{\phi}_{\jmath^\alpha,p}\int_{-1}^1 \e^{-{\rm{i}}\vec{q}\vdp\vec{h}^\alpha_k\xi} {\rm{L}}_p(\xi)\d\xi.
\] Finally, recalling the classical identity \citep[e.g.,][exercise 12.4.9]{Arf85} for the spherical Bessel function ${\rm{B}}_p(r)$ of the first kind, \begin{align} {\rm{B}}_p(r)&\equiv\frac{{\rm{i}}^p}{2}\int_{-1}^1 \e^{-{\rm{i}}r\xi}{\rm{L}}_p(\xi)\d\xi, \label{sBf} \\ \text{we obtain}\quad \hat{\phi}_{\vec\jmath,k,\vec{q}}&= \frac{1}{\pi^d}\left|\vec{\vec{h}}_k\right| \e^{-{\rm{i}}\vec{q}\vdp\vec{a}_k}\prod_{\alpha=1}^d\sum_{p=0}^P \check{\phi}_{\jmath^\alpha,p}{\rm{i}}^{-p} {\rm{B}}_p(\vec{q}\vdp\vec{h}^\alpha_k). \label{ltm} \end{align} Note that most expressions in \eqref{ltm} can be precomputed; objects that may vary during a dynamically adaptive computation, such as $\vec{a}_k$ or $\vec{h}^\alpha_k$, typically take values from a sparse set, e.g., a collection of powers of 2. The computation of \eqref{Fsc} now incurs no additional error beyond that of \eqref{ppr}. Note also that the generalization to the case $P=P_k^\alpha$ is straightforward. \section{Accuracy of transform for 1D \& 2D test cases} \label{rslts} Equation \eqref{ltm} was implemented in MatLab$^{\circledR}$ and tested using known results for \eqref{Fsc}. The most immediate test follows from \eqref{sBf}, namely $\hat{u}^{\rm{ex}}_q=\widehat{\rm{L}_p(\cdot/\pi)}_q ={\rm{i}}^{-p}{\rm{B}}_p(\pi q)$. In this case \eqref{Fsc} was found to reproduce \eqref{sBf} to 12--16 digits for $K=1$, $P\leq18$, implying similar performance for any polynomial $u(\vec{x})$ in this range. The next test was to put $u^{\rm{ex}}(x)=\sin x$, or $\hat{u}^{\rm{ex}}_q=(\delta_{q,1}-\delta_{q,-1})/2{\rm{i}}$. Since this is not a polynomial, we should expect at best algebraic convergence w.r.t.\ $K$ in a uniform meshing $a_k=(k-1)h_k-\pi$, $h_k=2\pi/K$, and exponential convergence w.r.t.\ $P$, as verified in Fig.\ \ref{f:Gqn2Fwnt}. Note there is no need to test $u^{\rm{ex}}(x)=\sin rx$ for $r>1$ because of scaling. \begin{figure}\begin{center} \includegraphics[height=.25\textheight,width=.5\textwidth]{Gqn2Fwnt.eps} \caption{Surface plot (blue low to red high) of log$_{10}$ relative r.m.s.\ error in \eqref{Fsc} for $u^{\rm{ex}}(x)=\sin x$, vs $\log_2K$ and $P$.} \label{f:Gqn2Fwnt}\end{center}\end{figure} We conclude by examining three 2D tests with adaptive meshing in the fashion of \citep{FR2005}, using MatLab$^{\circledR}$. Fig.\ \ref{f:GqnsiLor} confirms \eqref{Fsc} in the case \citep[(19)]{fbc2005} \begin{equation} u^{\rm{ex}}(\vec{x})\equiv\sum_{\vec{q}\in\Zset^2} \e^{b^1|q^1|+b^2|q^2|+{\rm{i}}\vec{q}\vdp\vec{\vec{l}}\vdp\vec{x}}, \label{e:GqnsiLor} \end{equation} where $b^\alpha=-\frac{2}{5}$ and $\vec{\vec{l}}\doteq\left(\begin{smallmatrix}\hphantom{-}l^1&l^2\\ -l^2&l^1\end{smallmatrix}\right)=\left(\begin{smallmatrix}\hphantom{-}1&2\\ -2&1\end{smallmatrix}\right)$ is a biperiodicity-preserving ``rotation''. As expected, the red curve (connecting the $|\hat{u}_{\vec{q}}|$ peaks) shows a power-law decay in $\vec{q}$-space. Note, in this plot and those below the $\vec{\vec{l}}$-operation helps instigate mesh adaption but has the consequence of leaving $\vec{q}$ undersampled in $\Zset^2$. In Fig.\ \ref{f:GqnBurg0} is shown an initial condition \citep[(22)]{fbc2005} \begin{equation} \vec{u}^{\rm{ex}}(0,\vec{x}):=-\vec{l}\sin\vec{l}\vdp\vec{x} \label{e:GqnBurg0} \end{equation} for the 2D Burgers equation. As expected, $\hat{u}_{\vec{q}}$ almost vanishes for $\vec{q}\neq\pm\vec{l}$. Finally, at time $t=1.6037/\pi|\vec{l}|^2$ the analytic solution generalizing \citep[(2.5)]{BDHLOPO} to 2D is shown in Fig.\ \ref{f:GqnBurg1}.
As expected for the \emph{nearly} $\mathbb{C}^0$-discontinuous fronts $\perp\vec{l}$ seen at left, $|\hat{u}^1_{\vec{q}}|$ decays slightly faster than $\mathcal{O}(|{\vec{q}}|^{-1})$ but \emph{only for wavevectors} $\vec{q}\|\vec{l}$ (red curve). \begin{figure}\begin{center} \includegraphics[width=.5\textwidth]{GqnsiLor.eps} \includegraphics[height=.5\textwidth,width=.5\textwidth]{FwnsiLor.eps} \caption{Left, $u$ \eqref{e:GqnsiLor} over the spatial $\vec{x}$ domain, increasing from blue to red; yellow lines indicate element boundaries, black lines show nodes $\vec{x}_{\vec\jmath,k}$ with $P=5$. Right, surface plot of $|\hat{u}_{\vec{q}}|$ from \eqref{Fsc} vs $q^1$ and $q^2$.} \label{f:GqnsiLor}\end{center}\end{figure} \begin{figure}\begin{center} \includegraphics[width=.5\textwidth]{GqnBurg0.eps} \includegraphics[height=.5\textwidth,width=.5\textwidth]{FwnBurg0.eps} \caption{As in Fig.\ \ref{f:GqnsiLor} but for the $t=0$ state given by \eqref{e:GqnBurg0}, in $K=2^6$ elements.} \label{f:GqnBurg0}\end{center}\end{figure} \begin{figure}\begin{center} \includegraphics[width=.5\textwidth]{GqnBurg1.eps} \includegraphics[height=.5\textwidth,width=.5\textwidth]{FwnBurg1.eps} \caption{As in Fig.\ \ref{f:GqnBurg0} but for $t=1.6037/5\pi$.} \label{f:GqnBurg1}\end{center}\end{figure}
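For readers wishing to reproduce the 1D experiment, \eqref{ltm} amounts to a few lines in any language; the following Python sketch (ours, complementing the MatLab$^{\circledR}$ implementation, and reusing the arrays \texttt{xi} and \texttt{chk} from the sketch in Section~\ref{mthdlg}) recovers $\hat{u}_1=-{\rm{i}}/2$ for $u^{\rm{ex}}(x)=\sin x$ up to the accuracy of \eqref{ppr}.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

def fourier_coeff_1d(q, a, h, chk, u):
    # Eq. (ltm) with d = 1: element k is [a_k - h_k, a_k + h_k],
    # chk[j, p] the Legendre coefficients, u[k, j] the nodal values.
    P = chk.shape[1] - 1
    p = np.arange(P + 1)
    total = 0j
    for k in range(len(a)):
        B = spherical_jn(p, q * h[k])          # B_p(q h_k)
        total += (h[k] / np.pi * np.exp(-1j * q * a[k])
                  * (u[k] @ (chk @ ((-1j) ** p * B))))
    return total

K = 8                                            # uniform mesh
a = -np.pi + (2 * np.arange(K) + 1) * np.pi / K  # element centres
h = np.full(K, np.pi / K)                        # half-widths
u = np.sin(a[:, None] + h[:, None] * xi[None, :])
print(fourier_coeff_1d(1, a, h, chk, u))         # approx -0.5j
\end{verbatim}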
\section{Introduction} Let $A$ be a connected Nakayama algebra without simple projective modules. All modules are left modules of finite length. We denote the number of simple $A$-modules by $n(A)$. Let $\gamma(S)=\tau\soc{P(S)}$ for a simple $A$-module $S$ \cite{R1}, where $P(S)$ is the projective cover of $S$ and $\tau=\mathrm{DTr}$ is the Auslander-Reiten translation \cite{ARS}. Ringel \cite{R1} defined the {\em resolution quiver} $R(A)$ of $A$ as follows: the vertices correspond to simple $A$-modules and there is an arrow from $S$ to $\gamma(S)$ for each simple $A$-module $S$. The resolution quiver gives a fast algorithm to decide whether $A$ is a Gorenstein algebra or not, and whether it is CM-free or not; see \cite{R1}. Using the map $f$ introduced in \cite{G}, the notion of resolution quiver applies to any connected Nakayama algebra. It is known that each connected component of $R(A)$ has a unique cycle. Let $A$ be a connected Nakayama algebra and $C$ be a cycle in $R(A)$. Assume that the vertices of $C$ are $S_1, S_2, \cdots , S_m$. We define the {\em weight} of $C$ to be $\frac{\sum_{k=1}^mc_k}{n(A)}$, where $c_k$ is the length of the projective cover of $S_k$. The aim of this note is to prove the following result. \begin{prop} \label{prop 1.1} Let $A$ be a connected Nakayama algebra. Then all cycles in its resolution quiver are of the same size and of the same weight. \end{prop} As a consequence of Proposition \ref{prop 1.1}, if the resolution quiver has a loop, then all cycles are loops; this result is obtained by Ringel \cite{R1, R2}. The proof of Proposition \ref{prop 1.1} uses {\em left retractions} of Nakayama algebras studied in \cite{CY}. \section{The proof of Proposition 1.1} Let $A$ be a connected Nakayama algebra. Recall that $n=n(A)$ is the number of simple $A$-modules. Let $S_1,S_2,\cdots,S_n$ be a complete set of pairwise non-isomorphic simple $A$-modules and $P_i$ be the projective cover of $S_i$. We require that $\rad{P_i}$ is a factor module of $P_{i+1}$. Here, we identify $n+1$ with $1$. Recall that $\mathbf{c}(A)=(c_1,c_2,\cdots,c_n)$ is an {\em admissible sequence} for $A$, where $c_i$ is the length of $P_i$; see \cite[Chapter IV. 2]{ARS}. We denote $p(A)=\min\{c_1,c_2,\cdots,c_n\}$. The algebra $A$ is called a {\em line algebra} if $c_n=1$ or, equivalently, the valued quiver of $A$ is a line; otherwise, $A$ is called a {\em cycle algebra} or, equivalently, the valued quiver of $A$ is a cycle. Then $A$ is a cycle algebra if and only if $A$ has no simple projective modules. Following \cite{G}, we introduce a map $f_A:\{1,2,\cdots,n\}\to\{1,2,\cdots,n\}$ such that $n$ divides $f_A(i)-(c_i+i)$ for $1\leq{i}\leq n$. The {\em resolution quiver} $R(A)$ of $A$ is defined as follows: its vertices are $1,2,\cdots,n$ and there is an arrow from $i$ to $f_A(i)$. Observe that for a cycle algebra $A$ we have $\gamma(S_i)=S_{f_A(i)}$. Then by identifying $i$ with $S_i$, the resolution quiver $R(A)$ coincides with that in \cite{R1}. Assume that $A$ is a cycle algebra which is not self-injective. After possible cyclic permutations, we may assume that its admissible sequence $\mathbf{c}(A)=(c_1,c_2,\cdots,c_n)$ is {\em normalized} \cite{CY}, that is, $p(A)=c_1=c_n-1$. Recall from \cite{CY} that there is an algebra homomorphism $\eta:A \to L(A)$ with $L(A)$ a connected Nakayama algebra such that its admissible sequence $\mathbf{c}(L(A)) = (c_1',c_2',\cdots,c_{n-1}')$ is given by $c_i'=c_i-[\frac{c_i+i-1}{n}]$ for $1\leq i\leq n-1$; in particular, $n(L(A))=n(A)-1$.
Here, for a real number $x$, $[x]$ denotes the largest integer not greater than $x$. The algebra homomorphism $\eta$ is called the {\em left retraction} \cite{CY} of $A$ with respect to $S_n$. We introduce a map $\pi:\{1,2,\cdots,n\}\to\{1,2,\cdots,n-1\}$ such that $\pi(i)=i$ for $i<n$ and $\pi(n)=1$. The following result is contained in the proof of \cite[Lemma 3.7]{CY}. \begin{lem} \label{lem 2.1} Let $A$ be a cycle algebra which is not self-injective. Then $\pi f_A(i)=f_{L(A)}\pi(i)$ for $1\leq i\leq n$. \end{lem} \begin{proof} Let $c_i+i=kn+j$ with $k\in\Natural$ and $1\leq j\leq n$. In particular, $f_A(i)=j$. For $i<n$, we have \begin{equation} \label{eq 1} c_{\pi(i)}^\prime+i=c_i+i-\bigg[\frac{c_i+i-1}{n}\bigg]=kn+j-\bigg[\frac{kn+j-1}{n}\bigg]=k(n-1)+j. \end{equation} Then $\pi f_A(i)=\pi(j)$ and $f_{L(A)}\pi(i)=f_{L(A)}(i)=\pi(j)$. For $i=n$, we have \begin{equation} \label{eq 2} c_{\pi(n)}^\prime+n=c_n-1+n-\bigg[\frac{c_n-1}{n}\bigg]=kn+j-1-\bigg[\frac{kn+j-n-1}{n}\bigg]=k(n-1)+j. \end{equation} Then $\pi f_A(n)=\pi(j)$ and $f_{L(A)}\pi(n)=f_{L(A)}(1)=\pi(j)$. \end{proof} The previous lemma gives rise to a unique morphism of resolution quivers \[ \tilde\pi:R(A)\longrightarrow R(L(A)) \] such that $\tilde\pi(i)=\pi(i)$. Then $\tilde\pi$ sends the unique arrow from $i$ to $f_A(i)$ to the unique arrow in $R(L(A))$ from $\pi(i)$ to $f_{L(A)}\pi(i)=\pi f_A(i)$. The morphism $\tilde\pi$ identifies the vertices $1$ and $n$ as well as the arrows starting from $1$ and $n$. Because $1$ and $n$ are in the same connected component of $R(A)$, we infer that $R(A)$ and $R(L(A))$ have the same number of connected components. Let $A$ be a connected Nakayama algebra and $C$ be a cycle in $R(A)$. The {\em size} of $C$ is the number of vertices in $C$. We recall that the {\em weight} of $C$ is given by $w(C)=\frac{\sum_{k}c_k}{n(A)}$, where $k$ runs over all vertices in $C$. We mention that $w(C)$ is an integer; see \eqref{eq 3}. A vertex $x$ in $R(A)$ is said to be {\em cyclic} provided that $x$ belongs to a cycle. \begin{lem} \label{lem 2.2} Let $A$ be a cycle algebra which is not self-injective. Then $\tilde\pi$ induces a bijection between the set of cycles in $R(A)$ and the set of cycles in $R(L(A))$, which preserves sizes and weights. \end{lem} \begin{proof} We observe that for two vertices $x$ and $y$ in $R(A)$, $\pi(x)=\pi(y)$ if and only if $x=y$ or $\{x,y\}=\{1,n\}$. Note that $f_A(1)=f_A(n)$. So the vertices $1$ and $n$ are in the same connected component of $R(A)$ and they are not cyclic at the same time. Let $C$ be a cycle in $R(A)$ with vertices $x_1,x_2,\cdots,x_s$ such that $x_{i+1}=f_A(x_i)$. Here, we identify $s+1$ with $1$. Since the vertices $1$ and $n$ are not cyclic at the same time, we have that $\pi(x_1),\pi(x_2),\cdots,\pi(x_s)$ are pairwise distinct and $\tilde{\pi}(C)$ is a cycle in $R(L(A))$. Hence $\tilde\pi$ induces a map from the set of cycles in $R(A)$ to the set of cycles in $R(L(A))$. Obviously the map is injective. On the other hand, recall that $R(L(A))$ and $R(A)$ have the same number of connected components, thus they have the same number of cycles. Hence $\tilde\pi$ induces a bijection between the set of cycles in $R(A)$ and the set of cycles in $R(L(A))$ which preserves sizes. It remains to prove that $w(C)=w(\tilde\pi(C))$. We assume that $c_{x_i}+x_i=k_i n+x_{i+1}$ with $k_i\in\Natural$. Then we have \begin{equation} \label{eq 3} w(C)=\frac{\sum_{i=1}^sc_{x_i}}{n}=\sum_{i=1}^s{k_i}. \end{equation} Recall that $n(L(A))=n-1$. 
We note that $c_{\pi(x_i)}^\prime+x_i= k_i(n-1)+x_{i+1}$; see \eqref{eq 1} and \eqref{eq 2}. Hence $\sum_{i=1}^sc_{\pi(x_i)}'=(n-1)\sum_{i=1}^sk_i$ and the assertion follows. \end{proof} Recall from \cite[Theorem 3.8]{CY} that there exists a sequence of algebra homomorphisms \begin{equation} \label{eq 4} A=A_0\To{\eta_0}A_1\To{\eta_1}A_2\to\cdots\to A_{r-1}\To{\eta_{r-1}}A_r \end{equation} such that each $A_i$ is a connected Nakayama algebra, $\eta_i:A_i\to A_{i+1}$ is a left retraction and $A_r$ is self-injective. We now prove Proposition \ref{prop 1.1}. \begin{proof}[\bf Proof of Proposition \ref{prop 1.1}:] Assume that $A$ is a connected self-injective Nakayama algebra with $n(A)=n$ and admissible sequence $\mathbf{c}(A)=(c,c,\cdots,c)$. Then a direct calculation shows that $R(A)$ consists entirely of cycles and each cycle is of size $\frac{n}{(n,c)}$ and of weight $\frac{c}{(n,c)}$, where $(n,c)$ is the greatest common divisor of $n$ and $c$. In particular, all cycles in $R(A)$ are of the same size and of the same weight. In general, let $A$ be a connected Nakayama algebra whose admissible sequence is $\mathbf{c}(A)=(c_1,c_2,\cdots,c_n)$. Take $A'$ to be a connected Nakayama algebra with admissible sequence $\mathbf{c}(A')=(c_1+n,c_2+n,\cdots,c_n+n)$. Then $R(A)=R(A')$ and for any cycle $C$ in $R(A)$, the corresponding cycle $C'$ in $R(A')$ satisfies $w(C')=w(C)+s(C)$, where $s(C)$ denotes the size of $C$. The statement for $A$ holds if and only if it holds for $A'$. We now assume that $A$ is a connected Nakayama algebra with $p(A)>n(A)$. One proves by induction that each $A_i$ in the sequence \eqref{eq 4} satisfies $p(A_i)>n(A_i)$. In particular, each $A_i$ is a cycle algebra. We can apply Lemma \ref{lem 2.2} repeatedly. Then the statement for $A$ follows from the statement for the self-injective Nakayama algebra $A_r$, which is already proved above. \end{proof} We conclude this note with a consequence of the above proof. \begin{cor} Let $A$ be a connected Nakayama algebra of infinite global dimension. Then we have the following statements. $\mathrm{(1)}$ The number of cyclic vertices of the resolution quiver $R(A)$ equals the number of simple $A$-modules of infinite projective dimension. $\mathrm{(2)}$ The number of simple $A$-modules of infinite projective dimension equals the number of simple $A$-modules of infinite injective dimension. \end{cor} \begin{proof} $\mathrm{(1)}$ All the algebras $A_i$ in the sequence \eqref{eq 4} have infinite global dimension; see \cite[Lemma 2.4]{CY}. In particular, they are cycle algebras. We apply Lemma \ref{lem 2.2} repeatedly and obtain a bijection between the set of cyclic vertices of $R(A)$ and the set of cyclic vertices of $R(A_r)$. Recall that all vertices of $R(A_r)$ are cyclic, and $n(A_r)$ equals $n(A)$ minus the number of simple $A$-modules of finite projective dimension; see \cite[Theorem 3.8]{CY}. Then the statement follows immediately. $\mathrm{(2)}$ Recall from \cite[Corollary 3.6]{M} that a simple $A$-module $S$ is cyclic in $R(A)$ if and only if $S$ has infinite injective dimension. Then $\mathrm{(2)}$ follows from $\mathrm{(1)}$. \end{proof} \section*{Acknowledgements} The author thanks his supervisor Professor Xiao-Wu Chen for his guidance and Professor Claus Michael Ringel for his encouragement.
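As a computational illustration of Proposition \ref{prop 1.1}, the map $f_A$ and the cycle invariants can be read off directly from an admissible sequence. The following Python sketch (an illustrative aid only; the function names are ours) enumerates the cycles of $R(A)$ and prints their sizes and weights, which indeed agree:
\begin{verbatim}
# Cycles of the resolution quiver R(A) from an admissible
# sequence c = (c_1, ..., c_n); indices are 1-based.

def f_A(i, c):
    n = len(c)
    j = (c[i - 1] + i) % n      # n divides f_A(i) - (c_i + i)
    return j if j != 0 else n

def cycles(c):
    n, result, seen = len(c), [], set()
    for start in range(1, n + 1):
        path, i = [], start
        while i not in path and i not in seen:
            path.append(i)
            i = f_A(i, c)
        if i in path:           # closed a new cycle
            cyc = path[path.index(i):]
            if not seen.intersection(cyc):
                result.append(cyc)
        seen.update(path)
    return result

c = (3, 3, 4)                   # an admissible sequence with n = 3
for cyc in cycles(c):
    weight = sum(c[k - 1] for k in cyc) / len(c)
    print(cyc, "size:", len(cyc), "weight:", weight)
# both cycles of R(A) have size 1 and weight 1
\end{verbatim}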
2,869,038,154,202
arxiv
\section{Introduction} It is known that all analytically solvable potentials in quantum mechanics have the property of shape invariance~\cite{gend}. In fact, shape invariance is an integrability condition; however, one should emphasize that it is not the most general integrability condition, as not all exactly solvable potentials appear to be shape invariant~\cite{cooper,dabro}. An interesting feature of supersymmetric quantum mechanics is that for a shape-invariant system \cite{cooper1, infeld} the entire spectrum can be determined algebraically without ever referring to the underlying differential equations.\\ In this paper we briefly describe supersymmetric quantum mechanics; then, by using the method of point canonical transformations, we find that the Coulomb and Kratzer potentials can be mapped to the Morse potential \cite{cooper2}. The Kratzer potential \cite{kratzer} we consider in this paper has played an important role in the history of molecular and quantum chemistry and has been extensively used to describe molecular structure and interactions \cite{mol}. After that we show that the P\"{o}schl-Teller potential of type I belongs to the same subclass of shape-invariant potentials as the Hulth\'{e}n potential. The Hulth\'{e}n potential \cite{h1,h2} is one of the important short-range potentials in physics. This potential is a special case of the Eckart potential \cite{eckart}, which has been widely used in several branches of physics, and its bound-state and scattering properties have been investigated by a variety of techniques. \section{Supersymmetric Quantum Mechanics and Shape Invariance} According to the factorization method~\cite{Infeld,dong}, the quantum mechanical Hamiltonian, after subtracting the ground-state energy, is written as the product of an operator $\hat{A}$ and its Hermitian conjugate, $\hat{A}^{\dag}$: \begin{equation}\label{1} \hat{H}-E_{0}=\hat{A}^{\dag}\hat{A} \end{equation} where $E_{0}$ is the ground-state energy, and $\hat{A}$, $\hat{A}^{\dag}$ are given by \begin{eqnarray}\label{2} \hat{A}=W(x)+\frac{i}{\sqrt{2\,m}}\hat{p}\\\label{3} \hat{A}^{\dag}=W(x)-\frac{i}{\sqrt{2\,m}}\hat{p} \end{eqnarray} By definition (\ref{1}), the ground-state wave function satisfies the following condition \begin{equation}\label{4} \hat{A}|\psi_{0}>=0 \end{equation} Since the ground-state wave function $\psi_{0}(x)$ for a bound state has no node, it can be written as \begin{equation}\label{5} \psi_{0}(x)=e^{-\frac{\sqrt{2\,m}}{\hbar} \int{W(x)\,dx}} \end{equation} One can then define the following supersymmetric partner Hamiltonians \begin{equation}\label{6} \hat{H}_{1}=\hat{A}^{\dag}\hat{A}, \hspace{15mm}\hat{H}_{2}=\hat{A}\hat{A}^{\dag} \end{equation} The corresponding potentials are given as \begin{equation}\label{7} V_{1}=W^{2}(x)-\frac{\hbar}{\sqrt{2\,m}}\frac{d\,W(x)}{dx} \end{equation} \begin{equation}\label{8} V_{2}=W^{2}(x)+\frac{\hbar}{\sqrt{2\,m}}\frac{d\,W(x)}{dx} \end{equation} The Hamiltonian in (\ref{1}) is called shape-invariant \cite{dabro} if the following condition is satisfied: \begin{eqnarray}\label{9} \hat{A}(a_{1})\hat{A}^{\dag}(a_{1})=\hat{A}^{\dag}(a_{2})\hat{A}(a_{2})+R(a_{1}) \end{eqnarray} where $a_{1}$ and $a_{2}$ represent the parameters of the Hamiltonian. One can rewrite the above condition in terms of the partner potentials as: \begin{eqnarray}\label{10} V_{2}(x,a_{1})=V_{1}(x,a_{2})+R(a_{1}) \end{eqnarray} The shape-invariance problem was formulated in algebraic terms in \cite{Balan}.
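Before specializing to particular potentials, we note that the condition (\ref{10}) is straightforward to verify symbolically for a given superpotential. The following Python sketch (an illustrative aid only, not part of the original treatment) checks shape invariance for the textbook superpotential $W(x)=a\tanh x$ in units $\hbar=1$, $2\,m=1$, for which $a_{2}=a_{1}-1$:
\begin{verbatim}
import sympy as sp

x, a = sp.symbols('x a', positive=True)

W = a*sp.tanh(x)            # illustrative superpotential, hbar = 1, 2m = 1
V1 = W**2 - sp.diff(W, x)   # Eq. (7)
V2 = W**2 + sp.diff(W, x)   # Eq. (8)

# Eq. (10): V2(x, a1) - V1(x, a2) with a2 = a1 - 1 must be x-independent
R = sp.simplify((V2 - V1.subs(a, a - 1)).rewrite(sp.exp))
print(R)                    # prints 2*a - 1, i.e. R(a1) = a1**2 - a2**2
\end{verbatim}
The remainder indeed depends only on the parameter, confirming Eq.~(\ref{10}) for this example.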
We assume that replacing $a_{1}$ by $a_{2}$ in a given operator can be achieved with a similarity transformation \begin{equation}\label{11} \hat{T}(a_{1}){\cal{O}}(a_{1})\hat{T}^{-1}(a_{1})={\cal O}(a_{2}) \end{equation} There are two classes of shape-invariant potentials. For the first class, the parameters $a_{1}$ and $a_{2}$ of the two supersymmetric partner potentials are related to each other by translation~\cite{cooper,Chuan} \begin{equation}\label{12} a_{2}=a_{1}+\eta \end{equation} For the second class, the parameters $a_{1}$ and $a_{2}$ are related to each other by scaling~\cite{khare1,Barc} \begin{equation}\label{13} a_{2}=q\,a_{1} \end{equation} For the first class the operator $\hat{T}(a_{1})$ of (\ref{11}) is given by \begin{equation}\label{14} \hat{T}(a_{1})=e^{\eta\frac{\partial}{\partial a_{1}}}, \hspace{15mm}\hat{T}^{-1}(a_{1})=\hat{T}^{\dag}(a_{1}) \end{equation} In the second class, the similarity transformation (\ref{11}) is given by the following operator \begin{equation}\label{15} \hat{S}(a_{1})=e^{\ln{q}\,a_{1}\frac{\partial}{\partial a_{1}}}, \hspace{15mm}\hat{S}^{-1}(a_{1})=\hat{S}^{\dag}(a_{1}) \end{equation} By introducing the new operators \begin{equation}\label{16} \hat{B}_{+}=\hat{A}^{\dag}(a_{1})\hat{T}(a_{1}), \hspace{15mm}\hat{B}_{-}=\hat{B}^{\dag}_{+}=\hat{T}^{\dag}(a_{1})\hat{A}(a_{1}) \end{equation} the Hamiltonian can be rewritten as \begin{equation}\label{17} \hat{H}-E_{0}=\hat{A}^{\dag}\hat{A}=\hat{B}_{+}\hat{B}_{-} \end{equation} Using Eqs.(\ref{9}) and (\ref{16}), one obtains the following commutation relation \begin{equation}\label{18} [\hat{B}_{-},\hat{B}_{+}]=R(a_{0}) \end{equation} where \begin{equation}\label{19} a_{n}=a_{0}+n\,\eta\hspace{15mm}or\hspace{15mm}a_{n}=q^{n}\, a_{0} \end{equation} as well as the following identities \begin{equation}\label{20} R(a_{n})=\hat{T}(a_{1})R(a_{n-1})\hat{T}^{\dag}(a_{1}),\hspace{15mm} R(a_{n})=\hat{S}(a_{1})R(a_{n-1})\hat{S}^{\dag}(a_{1}) \end{equation} valid for any $n$. By using Eqs.(\ref{16},\ref{20}) we can establish the commutation relations \begin{eqnarray}\label{21} [\hat{H},\hat{B}^{n}_{+}]&=&(R(a_{1})+R(a_{2})+\ldots+R(a_{n}))\hat{B}^{n}_{+}\\\label{22} [\hat{H},\hat{B}^{n}_{-}]&=&-\hat{B}^{n}_{-}(R(a_{1})+R(a_{2})+\ldots+R(a_{n})) \end{eqnarray} This means that $\hat{B}^{n}_{+}|\psi_{0}>$ is an eigenstate of the Hamiltonian with the eigenvalue $R(a_{1})+R(a_{2})+\ldots+R(a_{n})$. The normalized eigenstate is \begin{equation}\label{23} |\psi_{n}>=\frac{1}{\sqrt{R(a_{1})+\ldots+R(a_{n})}}\hat{B}_{+}\times \ldots\times\frac{1}{\sqrt{R(a_{1})+R(a_{2})}}\hat{B}_{+}\times\frac{1} {\sqrt{R(a_{1})}}\hat{B}_{+}|\psi_{0}> \end{equation} In addition to the oscillator-like commutation relations of Eq.~(\ref{21}) one gets the commutation relations \begin{equation}\label{24-1} [\hat{B}_{+},R(a_{0})]=\{R(a_{1})-R(a_{0})\}\hat{B}_{+} \end{equation} \begin{equation}\label{24-2} [\hat{B}_{+},[\hat{B}_{+},R(a_{0})]]=(\{R(a_{2})-R(a_{1})\}-\{R(a_{1})-R(a_{0})\}) \hat{B}^2_{+} \end{equation} and so on. \section{Mapping of the Kratzer and Coulomb potentials to the Morse potential} Consider the following potential (we use units with $\hbar=1$, $2\,m=1$)
\begin{equation}\label{24} V(x)=-\frac{\alpha}{x}+\frac{\beta}{x^2}+\gamma \end{equation} If we take $\alpha=\beta=1$ and $\gamma=0$, we obtain the Kratzer potential \begin{equation}\label{25} V(x)=-(\frac{1}{x}-\frac{1}{x^2}) \end{equation} In another case we take $\alpha=e^2$, $\beta=l(l+1)$ and $\gamma=\frac{e^4}{4(l+1)^2}$; the potential (\ref{24}) then reads \begin{equation}\label{26} V(x)=-\frac{e^2}{x}+\frac{l(l+1)}{x^2}+\frac{e^4}{4(l+1)^2} \end{equation} which is equivalent to the radial Coulomb problem of the three-dimensional Schr\"{o}dinger equation in spherical coordinates.\\ First, we briefly review the method of mapping shape-invariant potentials under point canonical transformations. For the given potential of Eq.~(\ref{24}), one can write the Schr\"{o}dinger equation as \begin{equation}\label{5-1} \{-\frac{d^2}{dx^2}+V(\alpha_{i};x)-E(\alpha_{i})\}\psi(\alpha_{i};x)=0 \end{equation} Here $\alpha_{i}$ is the set of parameters of the given potential Eq.~(\ref{24}). Under a point canonical transformation of the form \begin{equation}\label{5-2} x:=f(z),\hspace{10mm}\psi(\alpha_{i},x):=g(z)\,\widetilde{\psi}(\widetilde{\alpha}_{i};z) \end{equation} the Schr\"{o}dinger equation (\ref{5-1}) is transformed into \begin{equation}\label{5-3} \{-\frac{d^2}{dz^2}+(\frac{f''}{f'}-2\,\frac{g'}{g})\frac{d}{dz}+(\frac{g'}{g}\,\frac{f''}{f'}-\frac{g''}{g})+ f'^{2}(V(\alpha_{i};f(z))-E(\alpha_{i}))\}\widetilde{\psi}(\widetilde{\alpha}_{i};z)=0 \end{equation} or in the familiar form \begin{equation}\label{5-4} \{-\frac{d^2}{dz^2}+\widetilde{V}(\widetilde{\alpha}_{i};z)-\widetilde{E}(\widetilde{\alpha}_{i})\} \widetilde{\psi}(\widetilde{\alpha}_{i};z)=0 \end{equation} in which $\widetilde{\alpha}_{i}$ represents the set of parameters of the transformed potential, and the prime denotes differentiation with respect to the variable $z$. To remove the first-derivative term from Eq.~(\ref{5-3}), one requires \begin{equation}\label{5-5} g(z)=C\sqrt{f'(z)} \end{equation} Using Eq.~(\ref{5-5}) and comparing Eqs.~(\ref{5-3},\ref{5-4}) we obtain \begin{equation}\label{5-6} \widetilde{V}(\widetilde{\alpha}_{i};z)-\widetilde{E}(\widetilde{\alpha}_{i}) =f'^{2}\{V(\alpha_{i};f(z))-E(\alpha_{i})\}+\frac{1}{2}\{\frac{3}{2}(\frac{f''}{f'})^2-\frac{f'''}{f'}\} \end{equation} Substituting Eq.~(\ref{24}) into Eq.~(\ref{5-6}), we have \begin{eqnarray}\label{5-7} \widetilde{V}(\widetilde{\alpha}_{i};z)-\widetilde{E}(\widetilde{\alpha}_{i})&=&f'^{2}( {-\frac{\alpha}{f}+\frac{\beta}{f^2}+\gamma-E})+\frac{1}{2}\{\frac{3}{2}(\frac{f''}{f'})^2-\frac{f'''}{f'}\} \end{eqnarray} We consider \begin{equation}\label{5-8} f(z)=e^{-z} \end{equation} With this choice, the point canonical transformation reads \begin{eqnarray}\label{5-9} f(z)&=&e^{-z}\cr g(z)&=&e^{-\frac{z}{2}} \end{eqnarray} and Eq.~(\ref{5-7}) becomes \begin{eqnarray}\label{5-10} \widetilde{V}(\widetilde{\alpha}_{i};z)-\widetilde{E}(\widetilde{\alpha}_{i}) =(\gamma-E)\,e^{-2z}-\alpha\,e^{-z}+(\beta+\frac{1}{4}) \end{eqnarray} which has the form of the Morse potential. In other words, applying the point canonical transformation Eq.~(\ref{5-9}) to the potential of Eq.~(\ref{24}), which covers both the Kratzer and Coulomb potentials, yields the Morse potential. We next identify the algebra associated with this potential.
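Before doing so, the mapping itself can be checked symbolically; the following short sympy computation (an illustrative aid only) evaluates the right-hand side of Eq.~(\ref{5-6}) for $f(z)=e^{-z}$ and reproduces Eq.~(\ref{5-10}):
\begin{verbatim}
import sympy as sp

z, alpha, beta, gamma, E = sp.symbols('z alpha beta gamma E')

f = sp.exp(-z)                        # transformation of Eq. (5-8)
f1, f2, f3 = (sp.diff(f, z, k) for k in (1, 2, 3))

V = -alpha/f + beta/f**2 + gamma      # potential of Eq. (24) at x = f(z)

# right-hand side of Eq. (5-6)
rhs = f1**2*(V - E) \
      + sp.Rational(1, 2)*(sp.Rational(3, 2)*(f2/f1)**2 - f3/f1)
print(sp.expand(rhs))
# output collects to (gamma - E)*exp(-2*z) - alpha*exp(-z) + beta + 1/4
\end{verbatim}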
For the Morse potential \begin{equation}\label{5-11} \widetilde{V}(z)=e^{-2z}-2be^{-z} \end{equation} the superpotential is \begin{equation}\label{5-12} \widetilde{W}(z;a_{n})=a_{n}-e^{-z} \end{equation} Therefore the remainder in Eq.~(\ref{10}) is given by \begin{equation}\label{5-13} R(a_{n})=2(a_{n}-1) \end{equation} where \begin{equation}\label{5-14} a_{n}=b-(n+\frac{1}{2}) \end{equation} Using Eq.~(\ref{19}) one finds \begin{equation}\label{5-15} R(a_{n})-R(a_{n-1})=-2 \end{equation} Therefore, the commutation relation of Eq.~(\ref{24-2}) vanishes. Now, we define the following dimensionless operators \begin{equation}\label{5-16} \hat{K}_{0}:=\frac{1}{4}R(a_{0}) \end{equation} and \begin{equation}\label{5-17} \hat{K}_{\pm}:=\frac{1}{\sqrt{2}}\hat{B}_{\pm} \end{equation} where $\hat{B}_{\pm}$ is defined by Eq.~(\ref{16}). One can find that the shape-invariant algebra for these potentials is $SU(1,1)$ \begin{eqnarray}\label{5-18} [\hat{K}_{+},\hat{K}_{-}]&=&\frac{1}{2}[\hat{B}_{+},\hat{B}_{-}]\cr &=&2(-\frac{1}{4}R(a_{0}))\cr &=&-2\hat{K}_{0} \end{eqnarray} \begin{eqnarray}\label{5-19} [\hat{K}_{0},\hat{K}_{\pm}]&=&\frac{1}{4\sqrt{2}}[R(a_{0}),\hat{B}_{\pm}]\cr &=&\pm(\frac{4}{4\sqrt{2}}\hat{B}_{\pm})\cr &=&\pm\hat{K}_{\pm} \end{eqnarray} \section{Mapping of the Hulth\'{e}n potential into the P\"{o}schl-Teller potential of type I} The Hulth\'{e}n potential has the following form (we use units with $\hbar=1$, $2\,m=1$) \begin{equation}\label{6-1} V(r)=-\frac{e^{-r}}{1-e^{-r}} \end{equation} For the mapping of this potential, we consider \begin{equation}\label{6-2} f(z)=-2\ln{[\cos{z}]} \end{equation} With this choice, the point canonical transformation reads \begin{eqnarray}\label{6-3} f(z)&=&-2\ln{[\cos{z}]}\cr g(z)&=&\sqrt{-2\ln{[\cos{z}]}} \end{eqnarray} and Eq.~(\ref{5-7}) becomes \begin{eqnarray}\label{6-4} \widetilde{V}(\widetilde{\alpha}_{i};z)-\widetilde{E}(\widetilde{\alpha}_{i}) =4(E-1)-\frac{1}{4}(1+16\,E)\sec^2{z}+\frac{3}{4}\csc^2{z} \end{eqnarray} which has the form of the P\"{o}schl-Teller potential of type I. For the P\"{o}schl-Teller potential of type I \begin{equation}\label{6-5} \widetilde{V}(z)=-(A+B)^2+A(A-1)\sec^2{z}+B(B-1)\csc^2{z} \end{equation} the energy eigenvalues are given by \begin{equation}\label{6-6} \widetilde{E}_{n}=(A+B+2\,n)^2-(A+B)^2 \end{equation} Therefore the remainder in Eq.~(\ref{10}) is given by (one finds $R(a_{n})=\widetilde{E}_{n}-\widetilde{E}_{n-1}$) \begin{equation}\label{6-7} R(a_{n})=4(2n+A+B-1) \end{equation} Using Eq.~(\ref{19}) one finds \begin{equation}\label{6-8} R(a_{n})-R(a_{n-1})=8 \end{equation} Therefore, the commutation relation of Eq.~(\ref{24-2}) vanishes. Now, we define the following dimensionless operators \begin{equation}\label{6-9} \hat{K}_{0}:=\frac{-1}{8}R(a_{0}) \end{equation} and \begin{equation}\label{6-10} \hat{K}_{\pm}:=\frac{1}{2}\hat{B}_{\pm} \end{equation} where $\hat{B}_{\pm}$ is defined by Eq.~(\ref{16}). One can find that the shape-invariant algebra for these potentials is $SU(2)$ \begin{eqnarray}\label{6-11} [\hat{K}_{+},\hat{K}_{-}]&=&\frac{1}{4}[\hat{B}_{+},\hat{B}_{-}]\cr &=&2(-\frac{1}{8}R(a_{0}))\cr &=&2\hat{K}_{0} \end{eqnarray} \begin{eqnarray}\label{6-12} [\hat{K}_{0},\hat{K}_{\pm}]&=&\frac{-1}{16}[R(a_{0}),\hat{B}_{\pm}]\cr &=&\pm(\frac{1}{2}\hat{B}_{\pm})\cr &=&\pm\hat{K}_{\pm} \end{eqnarray} \section{Conclusion} For exactly solvable potentials of nonrelativistic quantum mechanics, eigenvalues and eigenvectors can be derived using well-known methods of supersymmetric quantum mechanics.
In this paper the Schr\"{o}dinger equation for several potentials (Coulomb and Kratzer, which map to Morse, and Hulth\'{e}n, which maps to P\"{o}schl-Teller type I) has been studied, and we have shown that such potentials can easily be inter-related within the framework of point canonical coordinate transformations, since the corresponding eigenvalues may be written down in closed form algebraically using the well-known results for shape-invariant potentials. Also we have shown that the shape-invariant algebra for the Coulomb, Kratzer, and Morse potentials is $SU(1,1)$, while the shape-invariant algebra for the P\"{o}schl-Teller type I and Hulth\'{e}n potentials is $SU(2)$. We mention that the Morse potential is also related to the $SU(2)$ group in addition to $SU(1,1)$; for the $SU(2)$ group approach refer to \cite{dong1}.
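As with the Morse case, the Hulth\'{e}n mapping of Eq.~(\ref{6-4}) can be verified symbolically; a minimal sympy check (an illustrative aid only) reads:
\begin{verbatim}
import sympy as sp

z, E = sp.symbols('z E')

f = -2*sp.log(sp.cos(z))                 # transformation of Eq. (6-2)
f1, f2, f3 = (sp.diff(f, z, k) for k in (1, 2, 3))

# Hulthen potential (6-1) at r = f(z); note exp(-f) = cos(z)**2
V = -sp.cos(z)**2/(1 - sp.cos(z)**2)

rhs = f1**2*(V - E) \
      + sp.Rational(1, 2)*(sp.Rational(3, 2)*(f2/f1)**2 - f3/f1)

expected = 4*(E - 1) - sp.Rational(1, 4)*(1 + 16*E)*sp.sec(z)**2 \
           + sp.Rational(3, 4)*sp.csc(z)**2
print(sp.simplify(rhs - expected))       # prints 0
\end{verbatim}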
2,869,038,154,203
arxiv
\section{\label{sec:Intro}Introduction} Majorana Zero Modes (MZMs) are explored as a promising platform for topological quantum computation \cite{Kitaev2001,Nayak2008,Alicea2012,Beenakker2013,Lutchyn2018}. As a direct consequence of their nonlocal nature, Majorana-based qubits are, in principle, less susceptible to decoherence and can provide better protected gates when compared to conventional qubits. Throughout the past decade much experimental progress has been made in detecting signatures of the MZMs by observing robust zero-bias conductance peaks \cite{Mourik2012,Deng2012,Das2012,Churchill2013,Finck2013,Nichele2017, Zhang2018}, a $4\pi$-periodic Josephson effect \cite{Rokhinson2012, Deacon2017, Laroche2019}, signatures of exponential length-dependence of energy splittings \cite{Albrecht2016,Vaitiekenas20}, and coherent single electron charge transfer between superconductors \cite{VanZanten2020}. Though promising, these signatures have proved inconclusive for making a definitive judgment on the presence of the MZMs in the system \cite{Liu2012, Pikulin2012, Kells2012, Liu2017, Vuik2019, Pikulin2012a, San-Jose2012,San-Jose2016}. For this reason a measurement of a topological Majorana qubit draws significant attention from both experimental and theoretical standpoints. A successful implementation of such a readout of a topological qubit would mark the transition from studying properties of the topological phase to topologically protected quantum information processing. Moreover, as physically moving MZMs \cite{Alicea2011} currently appears to be practically challenging, measurement-based schemes \cite{Bonderson2008,Bonderson2009} come to the forefront as the most likely means of operating a Majorana-based topological quantum computer. \begin{figure} \includegraphics[width = 8.5 cm, height = 5.6 cm]{setup} \caption{Schematic of the measurement setup of multi-MZM qubit islands. Only the measured MZMs of the qubit are labeled. (a) 2-MZM (single qubit) measurement setup. (b) 4-MZM (two qubit) measurement setup.} \label{fig:Setup} \end{figure} Various theoretical proposals for Majorana qubits and their readout procedures have been put forward \cite{Flensberg2011,Hyart2013,Aasen2016,Plugge2017,Karzig2017,Grimsmo2019,Szechenyi20,Manousakis2020}. Here we concentrate on the design for the qubit that features a superconducting island in the Coulomb-blockaded regime \cite{Plugge2017,Karzig2017} consisting of two or more one-dimensional topological superconductors -- realized for example in proximitized semiconductor nanowires \cite{Lutchyn2010,Oreg2010} -- connected by a trivial superconductor. Each topological superconductor carries two MZMs at the ends. The qubit state is encoded by the parity of pairs of MZMs, e.g., $\sigma^z=i\gamma_i\gamma_j$, where $\sigma^z$ is a Pauli operator in the computational space of the qubit and $\gamma_{i/j}$ are the corresponding Majorana operators. The total parity of a qubit island is conserved, which fixes the parity of the other two MZMs in 4-MZM islands. Measurements of the qubits are performed by coupling two (for single-qubit measurements, see Fig.~\ref{fig:Setup}(a)) or four (for two-qubit measurements, see Fig.~\ref{fig:Setup}(b)) MZMs to quantum dots (QDs) while using parity-dependent shifts of the QD charge or capacitance as the readout signal. Such QD-based measurements are particularly promising since they can be embedded in scalable designs for the operation of topological qubits \cite{Karzig2017}.
Motivated by this prospect, experimental studies of QD measurements in materials suitable for topological qubits are emerging \cite{deJong2019,vanVeen2019}. Despite the topological protection of Majorana qubits, quantum information storage and measurements are never perfect in practice due to sources of noise intrinsic and extrinsic to the qubit system. Quantifying the effect of noise is thus essential to understand the prospective performance of topological qubits. The effect of noise within the topological superconductors has been considered as the cause of the slow decoherence of idle qubits \cite{Knapp2018, Aseev2019} or as a possible reduction of the visibility of 2-MZM measurements \cite{Munk2019}. Crucially, coupling the Majorana qubit to the QDs of the readout apparatus introduces new sources of noise. The desired effect of this noise is to collapse the qubit state into the outcome of the measurement \cite{Steiner2020,Munk2020}. However, noise coupling to the QDs can also have negative effects on the visibility of the measurement, and given the known susceptibility of QDs to charge noise, one might wonder whether QDs are a suitable platform for high-fidelity measurements. In this paper we study the effect of such noise on the measurement visibility and show that typical strengths of QD noise allow for high-fidelity qubit measurements. To study the optimal operation point of measurements we will pay particular attention to the regime where the QD and the qubit island are tuned close to resonance (i.e., the energy detuning between the two is much smaller than the MZM-QD coupling), in contrast to the widely applied far-detuned regime where the MZM-QD coupling is much smaller than the energy detuning and can be considered perturbatively \cite{Karzig2017,Plugge2017}. Such careful tuning to resonance can be particularly beneficial for 4-MZM measurements, which were not previously discussed in this regime. The rest of the paper is organized as follows. First, we review the single-qubit measurements, paying particular attention to the regime of the resonantly coupled island-QD system. We then extend this analysis to two-qubit measurements. Next, focusing on the single-qubit measurement case, we analyze how noise in the island-QD detuning affects the measurement visibility by calculating the signal-to-noise ratio (SNR) of the measurements. The Appendix presents details of calculations and the treatment of the subleading noise sources -- flux and coupling noise. \section{\label{sec:QD measurement}QD-based measurements} We start by reviewing how coupling a single QD to a pair of MZMs leads to a measurable change in the properties of the coupled MZM-QD system that depend on the parity of the MZMs, before generalizing to measurements of four MZMs. As we show below, the regime of maximal measurement visibility is typically achieved when the QD and the qubit island are tuned so that the energy configurations of an electron occupying the QD or the qubit island are close to degeneracy. We therefore pay particular attention to this regime, which we refer to as the resonant regime, and discuss how careful tuning enhances the visibility of 4-MZM measurements to be of a similar order as for the 2-MZM measurements. \subsection{\label{subsec:2MZM measurement}2-MZM measurement} A typical setup for a 2-MZM single-qubit measurement is depicted in Fig.~\ref{fig:Setup}(a). The effective low-energy Hamiltonian of the qubit-QD system is given by \begin{equation} \hat{H}=\hat{H}_\text{C}+\hat{H}_\text{QD}+\hat{H}_\text{QD-MZM}.
\label{eq:H_2MZM} \end{equation} Here $\hat{H}_\text{C}$ is the charging energy Hamiltonian of the superconducting island, $\hat{H}_\text{QD}$ is the Hamiltonian of the QD, and $\hat{H}_\text{QD-MZM}$ is a term describing tunneling between the island and the QD through MZMs. Both $\hat{H}_\text{C}$ and $\hat{H}_\text{QD}$ contain charging energy contributions due to capacitance to the ground and between the subsystems. Additionally, $\hat{H}_\text{QD}$ contains the energy of the single-particle level on the QD. Due to charge conservation these contributions can be combined into: \begin{align} \hat{H}_\text{C+QD} &= \varepsilon_\text{C}(\hat{n} - n_\text{g})^2, \label{eq:H_total_charging_energy} \end{align} with $\hat{n}$ being the charge occupation of the QD while $\varepsilon_\text{C}$ and $n_\text{g}$ denote the effective charging energy and effective dimensionless gate voltage of the island-QD system. Expressions for the effective parameters in terms of the original parameters of $\hat{H}_\text{C}$ and $\hat{H}_\text{QD}$ are given in Appendix \ref{sec:effective_H}. Here we assumed a single-level QD without spin degeneracy, which is a valid assumption in a high external magnetic field for a small enough QD, where the energy difference between the two lowest levels of the dot is larger than the MZM-QD coupling. The tunneling Hamiltonian reads: \begin{equation} \label{eq:T-MZM} \hat{H}_\text{QD-MZM}=e^{-i\hat{\phi}}(t_1f^{\dagger}\gamma_1+t_2f^{\dagger}\gamma_2) + \text{h.c.} \end{equation} where $t_{\alpha}$, $\alpha=1,2$, are the coupling matrix elements of the MZMs to the fermionic mode on the QD described by the creation operator $f^\dagger$. Note that, since the Majorana operators are chargeless, charge conservation is ensured by the operator $e^{i\hat{\phi}}$, which raises the charge of the island by one electron charge. The couplings can be written as \begin{equation} t_{\alpha}=|t_{\alpha}|e^{i\phi_{\alpha}},\qquad \alpha=1,2, \end{equation} where the gauge-invariant phase difference $\phi_1-\phi_2$ depends on microscopic details of the matrix elements but can be tuned by varying the magnetic flux penetrating the enclosed area of the interference loop (see Fig.~\ref{fig:Setup}). We now focus on the regime close to the QD-Majorana-island resonance, where $n_\text{g}= 1/2 + \Delta/2\varepsilon_\text{C}$ and the detuning $\Delta$ between the island and the QD level satisfies $\Delta \ll \varepsilon_\text{C}$. The low-energy subspace is then spanned by four states $\ket{n,p}$, where $n=0,1$ and $p=\pm 1$ are eigenvalues of the QD occupation and of the combined parity $p=p_{12}(-1)^n$, with $p_{12}=i\gamma_1\gamma_2$ being the MZM parity. In contrast to the case of large detuning, where the charge of the qubit island is fixed except for virtual tunneling events \cite{Plugge2017,Karzig2017}, it is important to note that in the presence of the QD $p_{12}$ is no longer conserved. A non-demolition measurement therefore cannot directly determine $p_{12}$. Instead, the measurement outcome depends on the parity of the combined MZM-QD system $p$, which is a constant of motion in the absence of exponentially weak qubit dynamics \cite{Flensberg2011,Munk2020,Steiner2020}. Within the model discussed here, this manifests itself in a block-diagonal form of the Hamiltonian.
Using the basis $|n,p\rangle$ with $\ket{1,p}=e^{-i\hat{\phi}}f^\dagger \gamma_1|0,p\rangle$ the elements of the Hamiltonian blocks of given $p$ can be directly read off from Eqs.\eqref{eq:H_total_charging_energy} and \eqref{eq:T-MZM} with the parity dependence entering via $\langle 1,p|t_2 e^{-i\hat{\phi}}f^\dagger \gamma_2|0,p\rangle=-ipt_2$ such that \begin{equation} \hat{H}_{p}= \begin{pmatrix} \Delta/2 & \bar{t}_p^* \\ \bar{t}_p & -\Delta/2 \end{pmatrix}\,. \label{eq:2MZM_mx} \end{equation} Here we introduced the effective MZM-QD coupling $\bar{t}_p=t_1 -ipt_2$. Equation~\eqref{eq:2MZM_mx} allows for a straightforward interpretation of the effect of $p$ on the MZM-QD system. Due to the interference of the two different paths that couple the QD and the qubit island, $p$ controls the strength of the effective coupling $|\bar{t}_p|=\sqrt{|t_1|^2+|t_2|^2+2p|t_1t_2|\sin\phi}$ where $\phi=\phi_2-\phi_1$. The parity dependence of the coupling has measurable consequences for several observables and is used to diagnose the parity of the MZMs. The energy spectrum of the system takes the form \begin{equation} \varepsilon_{p,\pm} = \pm\frac{1}{2}\sqrt{\Delta^2+4|\bar{t}_p|^2}. \label{eq:2MZM_energies} \end{equation} Figure~\ref{fig:2MZM}(a) illustrates the energy spectrum in the case of $\phi=\pi/2$ and $|t_2|=1.5|t_1|$. Even though optimal visibility is achieved when $|t_1|=|t_2|$, where $\bar{t}_p$ is either maximal or zero depending on the parity $p$, here we present plots away from this fine-tuned point, since a certain degree of coupling asymmetry is expected in QD-based readout experiments. Using the ground state of \eqref{eq:2MZM_energies}, the corresponding charge expectation value of the QD in the $\Delta \ll \varepsilon_\text{C}$ limit can be obtained as \begin{equation} \langle n_{\text{QD},p} \rangle = n_\text{g}-\frac{1}{2\varepsilon_\text{C}}\frac{\partial \varepsilon_{p,-}}{\partial n_\text{g}}=\frac{1}{2}+\frac{\Delta}{2\sqrt{\Delta^2+4|\bar{t}_p|^2}}\,. \label{eq:nQD} \end{equation} The differential capacitance in the same limit takes the form \begin{equation} \frac{C_{\text{diff},p}}{C_\text{g}^2/C_{\rm \Sigma,D}}=\frac{1}{2\varepsilon_\text{C}}\frac{\partial^2 \varepsilon_{p,-}}{\partial n_\text{g}^2}=- \frac{4 \varepsilon_\text{C} |\bar{t}_p|^2 }{(\Delta^2+4|\bar{t}_p|^2)^{3/2}} \label{eq:Cdiff} \end{equation} where $C_\text{g}$ is the capacitance between the gate and the QD and $C_{\rm \Sigma,D}\equiv e^2/2\varepsilon_\text{C}$ is the total capacitance of the QD. \begin{figure} \includegraphics[width = 0.9\columnwidth]{2MZM_spectrum} \includegraphics[width = 0.9\columnwidth]{2MZM_dnQD_W} \includegraphics[width = 0.9\columnwidth]{2MZM_dC_W} \caption{(a) MZM-parity-dependent part of the energies of the two lowest QD-MZM levels \eqref{eq:2MZM_energies} as a function of island-QD detuning $\Delta$ in units of the MZM-QD hopping $t$. (b) Average QD charge difference between the two parity states $\delta\langle n_{\text{QD}}\rangle=\langle n_{\text{QD},p=+1}\rangle-\langle n_{\text{QD},p=-1}\rangle$ as a function of detuning. (c) Differential capacitance difference between the two parity states $\delta C_{\text{diff}}=C_{\text{diff,+}}-C_{\text{diff,-}}$ as a function of detuning. We set $|t_1|=t,\ |t_2|=1.5t$ for (a)-(c) and $C_\text{g}/C_{\rm \Sigma,D}=2$, $\varepsilon_\text{C}=5t$ for (c).} \label{fig:2MZM} \end{figure} These two observables \eqref{eq:nQD}-\eqref{eq:Cdiff} can be measured in charge-sensing or quantum-capacitance measurements, respectively.
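These parity-dependent expressions are straightforward to evaluate; the short Python sketch below (an illustrative aid only; parameter values follow Fig.~\ref{fig:2MZM}) tabulates the ground-state energy, QD charge, and differential capacitance of Eqs.~\eqref{eq:2MZM_energies}-\eqref{eq:Cdiff} for both parities.
\begin{verbatim}
import numpy as np

# parameters as in Fig. 2: |t1| = t, |t2| = 1.5 t, phi = pi/2, eps_C = 5 t
t = 1.0
t1, t2, phi = t, 1.5*t, np.pi/2
eps_C = 5*t

def t_bar(p):
    # |t_p|^2 = |t1|^2 + |t2|^2 + 2 p |t1 t2| sin(phi)
    return np.sqrt(t1**2 + t2**2 + 2*p*t1*t2*np.sin(phi))

for Delta in (0.0, 1.0, 3.0):       # detuning in units of t
    for p in (+1, -1):
        root = np.sqrt(Delta**2 + 4*t_bar(p)**2)
        e_gs = -0.5*root                       # Eq. (eq:2MZM_energies)
        n_qd = 0.5 + Delta/(2*root)            # Eq. (eq:nQD)
        c_diff = -4*eps_C*t_bar(p)**2/root**3  # Eq. (eq:Cdiff), in Cg^2/C_D
        print(f"Delta={Delta:4.1f} p={p:+d} E={e_gs:+.3f} "
              f"<n>={n_qd:.3f} Cdiff={c_diff:+.3f}")
\end{verbatim}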
Here we do not consider the details of the corresponding measurements but instead use the observables as a proxy for the measurement outcomes. Fig.~\ref{fig:2MZM}(b)-(c) depict the $\Delta$-dependence of the charge expectation and differential capacitance for the ground state of the system at different parities $p$, for various values of the phase $\phi$. In the absence of noise the parity dependence of the observables is strongest at $\phi=\pi/2$ and at or close to zero detuning. In the most favorable regime close to zero detuning it becomes particularly important that it is $p$ that is measured, while we are ultimately interested in $p_{12}$ of the island decoupled from the QD. Failure to correctly infer $p_{12}$ from the measured value of $p$ would result in a measurement error and ultimately decrease readout and (in the case of measurement-based topological quantum computing) gate fidelity. Connecting the measurement of $p$ to $p_{12}$ requires a well-defined initialization and finalization procedure of the measurement where the QD charge before and after the measurement is known. Charge conservation then allows one to infer $p_{12}$ of the decoupled system from the measured $p$. A possible procedure is given by adiabatic tuning where the QD starts out and ends up far-detuned from resonance before and after the measurement to ensure a fixed charge state. The measurement is then initiated by first turning the MZM-QD coupling on and then tuning the system to resonance, while the decoupling proceeds in the opposite order. An alternative to this adiabatic tuning procedure would be to explicitly check the QD charge before and after the measurement by a separate charge measurement. Indeed, even if close-to-adiabatic tuning is attempted, such an additional measurement might be required when one is aiming at very high measurement fidelities. \subsection{\label{subsec:4MZM measurement}4-MZM measurement} The setup for a 4-MZM measurement is shown in Fig.~\ref{fig:Setup}(b). 4-MZM measurements can be done utilizing only one QD. Here we consider two QDs since they provide greater tunability and are likely the generic case in scalable designs \cite{Karzig2017}. Similarly to the 2-MZM situation, the effective low-energy Hamiltonian of this system has the form of \eqref{eq:H_2MZM}. The $\hat{H}_\text{C}$ and $\hat{H}_\text{QD}$ contributions are given in Appendix \ref{sec:effective_H}, while the tunneling Hamiltonian reads: \begin{align} \hat{H}_\text{QD-MZM}=&e^{-i\hat{\phi}_1}(t_1f^{\dagger}_1\gamma_1+t_2f^{\dagger}_2\gamma_2) \nonumber \\ &+e^{-i\hat{\phi}_2}(t_3f^{\dagger}_1\gamma_3+t_4f^{\dagger}_2\gamma_4)+ \text{h.c.} \end{align} where $t_{\alpha}$ are the couplings of the QDs, described by fermionic operators $f^\dagger_\beta$, to the respective MZMs, and $e^{i\hat{\phi}_{\beta}}$ is the raising operator of the charge of island $\beta$. For concreteness we consider the case where the system is tuned such that the lowest energy states are given by the 4 configurations of a single excess electron located on one of the QDs or islands. We denote the corresponding energies in the absence of tunnel coupling as $\varepsilon_{\alpha}$ with $\alpha\in\{\rm i1,i2,d1,d2\}$ denoting the position of the electron. These energies are determined by the individual and mutual charging energies of the islands and QDs, and by the single-electron levels on the QDs. As in the 2-MZM case we will be particularly interested in the resonant regime where the energies $\varepsilon_{\alpha}$ become small.
This requires tuning three parameters in general and can be done by tuning gate voltages on the two QDs and one of the islands. Given that couplings of the low-energy subspace to MZMs other than $\gamma_1 \dots \gamma_4$ are exponentially small, the total parity $p=p_{12}p_{34}(-1)^{n_1+n_2}$, where $n_{\beta}=f^\dagger_\beta f_\beta$, is conserved. We thus denote the low-energy states as $|\alpha,p\rangle$: \begin{align} \ket{\text{i1},p}&=e^{-i \phi_1}e^{i\hat{\phi}_1}\gamma_1 f_1 \ket{\text{d1},p} \\ \ket{\text{i2},p}&=e^{-i \phi_3}e^{i\hat{\phi}_2}\gamma_3 f_1 \ket{\text{d1},p} \\ \ket{\text{d2},p}&=e^{i \phi_2}e^{-i\hat{\phi}_1}f_2^\dagger \gamma_2 \ket{\text{i1},p} \end{align} where we included the phases of the tunnel matrix elements $t_{\alpha}=|t_{\alpha}|e^{i\phi_{\alpha}}$ for convenience. In the above basis the Hamiltonian takes the form \begin{equation} H= \begin{pmatrix} \varepsilon_{\rm d1} & |t_1| & |t_3| & 0 \\ |t_1| & \varepsilon_{\rm i1} & 0 & |t_2| \\ |t_3| & 0 & \varepsilon_{\rm i2} & -p|t_4|e^{i\phi} \\ 0 & |t_2| & -p|t_4|e^{-i\phi} & \varepsilon_{\rm d2} \end{pmatrix}, \label{eq:H_4MZM_mx} \end{equation} where $\phi=\phi_1-\phi_2-\phi_3+\phi_4$. From the form of the Hamiltonian \eqref{eq:H_4MZM_mx} it becomes clear that the energies of the system are independent of the individual 2-MZM parities and instead depend, via $\phi$, on the flux passing through the loop of the 4 tunneling junctions and on the overall parity $p$, which acts as a $\pi$ phase shift of $\phi$. Since the goal of the measurement is ultimately to determine the 4-MZM parity $p_{12} p_{34}$, a tuning procedure similar to that for 2-MZM measurements is required to fix the QD occupation. In fact, while the relation between $p$ and $p_{12} p_{34}$ suggests that the tuning procedure only needs to ensure that the joint QD parity $(-1)^{n_{1}+n_2}$ is the same before and after the measurement, the charge occupation of all islands and QDs needs to remain unchanged by the measurement. The reason is that the measurement should determine $p_{12}p_{34}$ while not otherwise disturbing the quantum state of the qubits. Any net transfer of electrons between the islands or between the QDs relative to their state before the measurement would result in applying the corresponding operators involved in the electron transfer $\gamma_i\gamma_j$ to the qubit states. Without a tuning procedure that ensures the occupation of the final configuration or an additional measurement to determine the configuration, the application of unknown pairs of Majorana operators would lead to dephasing. A possible tuning procedure from the resonant measurement configuration would work in a circular way: first detune QD 1 to favor an occupation $n_1=1$, then tune island 1 to favor the empty state, followed by tuning QD 2 and island 2 to the empty state as well. Tuning all the couplings to zero then ensures a well-defined charge configuration. The initialization procedure would be done in the opposite order. \begin{figure*} \includegraphics[width = 5.9 cm, height = 4.5 cm]{4MZM_spectrum_Wdd_Wdi0} \includegraphics[width = 5.9 cm, height = 4.5 cm]{4MZM_spectrum_Wdd_Wdim1} \includegraphics[width = 5.9 cm, height = 4.5 cm]{4MZM_spectrum_Wdd_Wdim5} \caption{Eigenenergies of the Hamiltonian \eqref{eq:H_4MZM_mx} for different parities $p$ and as a function of QD-QD detuning $\Delta_\text{dd}$ for various values of QD-island detuning $\Delta_\text{di}$. Here we set $|t_1|=|t_2|=1.5t,\ |t_3|=|t_4|=t$.
Panel (a) is given by the analytical expressions of Eq.~\eqref{eq:4MZM_energies}. Legends are the same as in Fig.~\ref{fig:4MZM_spectrum_Wdd_beta}.} \label{fig:4MZM_spectrum_Wdd_Deltadi} \end{figure*} Exact diagonalization of \eqref{eq:H_4MZM_mx} for arbitrary parameters involves cumbersome expressions. To gain intuition about the behavior of the energy levels we keep $\varepsilon_{\rm i1}=\varepsilon_{\rm i2}$ while introducing the QD detuning $\Delta_\text{dd}=\varepsilon_{\rm d1}-\varepsilon_{\rm d2}$ and the average detuning $\Delta_\text{di}=(\varepsilon_{\rm d1}+\varepsilon_{\rm d2})/2- \varepsilon_{\rm i1}$ between the QDs and islands. For now, we will also set $\Delta_\text{di}=0$. The energy eigenvalues $\varepsilon$ are then given by the equation \begin{equation} \varepsilon^4-\varepsilon^2 \left(\frac{1}{4}\Delta_\text{dd}^2+t_\Sigma^2\right)-\frac{1}{2}\varepsilon\Delta_\text{dd}t_\delta^2+\bar{t}_p^{(4)}(\phi)^4=0 \label{4MZM_spectrum_eq} \end{equation} in terms of $t_\Sigma^2=\sum_{\alpha=1}^4|t_{\alpha}|^2$, $t_\delta^2=|t_1|^2+|t_3|^2-|t_2|^2-|t_4|^2$ and the interference term \begin{equation} \bar{t}_p^{(4)}(\phi)^4=|t_1t_4|^2+|t_2t_3|^2+2p|t_1t_2t_3t_4|\cos\phi \label{eq:4MZM_t}\,. \end{equation} The qualitative behavior is already captured by the case $t_\delta=0$, which can be solved analytically, yielding \begin{equation} \varepsilon_p^{(4)}(\phi) = \pm \frac{1}{\sqrt{2}}\sqrt{\frac{\Delta_\text{dd}^2}{4}+t_\Sigma^2 \pm \sqrt{\left(\frac{\Delta_\text{dd}^2}{4}+t_\Sigma^2\right)^2 - 4\bar{t}_p^{(4)}(\phi)^4 }}. \label{eq:4MZM_energies} \end{equation} The case of $t_\delta\neq 0$ is considered in Appendix \ref{sec:4MZM_details}. Going beyond $\Delta_\text{di}=0$, Fig.~\ref{fig:4MZM_spectrum_Wdd_Deltadi} shows plots of the energy eigenvalues of \eqref{eq:H_4MZM_mx} as functions of $\Delta_\text{dd}$ for $\phi=0$, $t_\delta=0$ and various values of island-QD detuning $\Delta_\text{di}$. For large negative $\Delta_\text{di}/t$ we recover the perturbative regime obtained in \cite{Karzig2017}, where the parity-dependent energy shift is of order $t^2/\Delta_\text{di}$. Figure~\ref{fig:4MZM_spectrum_Wdd_Deltadi} demonstrates that the energy difference between the ground states of different parity is maximal when the MZM-QD couplings are symmetric $|t_\alpha|=t$ and the system is on resonance $\Delta_\text{dd}=\Delta_\text{di}=0$. By appropriately tuning the 4-MZM measurement system close to these parameters it becomes possible to reach a similarly strong parity dependence as in the case of 2-MZM measurements. Specifically, in the case of $\phi=0$, $|t_\alpha|=t$, and $\Delta_\text{dd}=\Delta_\text{di}=0$ one finds $\varepsilon^{(4)}_{+,\text{gs}}-\varepsilon^{(4)}_{-,\text{gs}}=(2-\sqrt{2})t$ which is of similar order as for the 2-MZM case. For a more explicit comparison of the capacitive response see App.~\ref{sec:24compare}. For the purpose of the following sections we note that the low-energy part of the 4-MZM system spectrum in Fig.~\ref{fig:4MZM_spectrum_Wdd_Deltadi} (energies $\varepsilon_1$ and $\varepsilon_2$) qualitatively resembles that of the 2-MZM system, see Fig.~\ref{fig:2MZM}(a).
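This correspondence, and the resonant-point splitting quoted above, can be checked by direct diagonalization of \eqref{eq:H_4MZM_mx}; a minimal Python sketch (an illustrative aid only) reads:
\begin{verbatim}
import numpy as np

def H4(p, eps, ts, phi):
    """4-MZM Hamiltonian, Eq. (eq:H_4MZM_mx), basis (d1, i1, i2, d2)."""
    e1, e2, e3, e4 = eps            # eps_d1, eps_i1, eps_i2, eps_d2
    t1, t2, t3, t4 = ts             # coupling magnitudes |t_alpha|
    w = -p*t4*np.exp(1j*phi)
    return np.array([[e1, t1, t3, 0],
                     [t1, e2, 0, t2],
                     [t3, 0, e3, w],
                     [0, t2, np.conj(w), e4]], dtype=complex)

t = 1.0
eps = (0.0, 0.0, 0.0, 0.0)          # resonance: Delta_dd = Delta_di = 0
ts = (t, t, t, t)                   # symmetric couplings
gs = {p: np.linalg.eigvalsh(H4(p, eps, ts, phi=0.0))[0] for p in (1, -1)}
print(gs[1] - gs[-1], (2 - np.sqrt(2))*t)   # both ~ 0.586 t
\end{verbatim}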
Since all measurement visibility properties we consider in the next section are derived from the low-energy part of the spectrum, we conclude that the 4-MZM and 2-MZM cases are qualitatively similar in this regard and thus concentrate on the simpler 2-MZM case \footnote{Technically, the 4-MZM measurement is performed by measuring the charge/capacitance of one of the dots, and in order to compare it to the 2-MZM case one needs to plot the spectra of Fig.~\ref{fig:4MZM_spectrum_Wdd_Deltadi} as functions of the variables $\varepsilon_{\rm d1},\varepsilon_{\rm d2}$ rather than $\Delta_{\text{dd}},\Delta_{\text{di}}$. However, the two variable sets are related to each other by a simple linear transformation, and thus the spectra as a function of, for example, $\varepsilon_{\rm d1}$ look rotated with respect to the ones in Fig.~\ref{fig:4MZM_spectrum_Wdd_Deltadi}, such that the ground-state part still qualitatively resembles the one in Fig.~\ref{fig:2MZM}(a)}. \section{\label{sec:Noise}Noise and its effects on measurement visibility} We now describe the noise that will broaden the distribution of the observables. Here, we pay particular attention to intrinsic noise sources and their dependence on the system parameters. External noise sources, like amplifier noise, do not depend on the system parameters and are uncorrelated with the system noise -- therefore they can be added straightforwardly. The leading internal noise source in the measurement setup of Fig.~\ref{fig:Setup} is likely charge noise, which affects the on-site energy and thus the detuning of the QDs. In our study we assume a $1/f$ power spectrum of the charge noise, as has been reported in other QD-based devices, most notably semiconductor charge qubits~\cite{Buizert2008,Petersson2010,Paladino2014}. We discuss noise in the strength of the tunnel couplings, as well as flux noise, which affects the phase $\phi$, in Appendices \ref{sec:Couplings noise} and \ref{sec:Phase noise}. Using noise estimates from related experimental setups we conclude that these noise sources likely play a subleading role for the visibility of the measurement compared to the charge noise considered in the main text. We first formulate the general framework of how we treat noise. Consider an observable $\hat{y}(x(t))$ that depends on the parameter $x(t)=x+\delta x(t)$, where $x$ is the fixed setting of the parameter and $\delta x(t)$ is the time-dependent noise. We describe the noise perturbatively by considering the second-order expansion in the noise: \begin{equation} \hat{y}(x(t)) = \hat{y}_0(x) + \hat{y}_{1}(x) \delta x(t) + \frac{1}{2}\hat{y}_{2}(x) \delta x(t)^2\,, \label{eq:noise_expansion} \end{equation} where $\hat{y}_0$ is the unperturbed observable and $\hat{y}_{1}$, $\hat{y}_{2}$ are the first and second derivatives of $\hat{y}_0$ with respect to $x$. Since measurements are recorded over a finite measurement time $\tau_\text{m}$, we are ultimately interested in the time-averaged quantities $\hat{Y}=\frac{1}{\tau_\text{m}}\int_0^{\tau_\text{m}} dt\, \hat{y}(x(t))$. We use the expectation value $Y=\langle \hat{Y} \rangle$ and variance $\sigma_Y^2=\langle \hat{Y}^2\rangle - \langle \hat{Y}\rangle^2$ to determine the measurable signal and internal noise. The above expectation value $\langle \dots \rangle$ is taken with respect to the environment of the noisy parameter. There are two opposing limits of how to incorporate a finite temperature in the expectation values of the system operators.
(1) The operator $\hat{Y}$ is temperature independent and the expectation value is taken with respect to the full density matrix of the system, which includes both finite-temperature and noise effects; (2) the operator $\hat{Y}$ is already the temperature-averaged observable (i.e. the expectation value with respect to the unperturbed finite-temperature density matrix has already been taken), in which case taking the expectation value $\langle\dots\rangle$ amounts to only performing noise averaging. Method (1) would give a finite variance even in the absence of noise due to temperature fluctuations, while (2) only includes fluctuations due to noise. These differences only become important for temperatures that allow excitations above the ground state. In the case when there is a significant occupation of the excited state, the timescales involved in the temperature fluctuations determine which of the two methods is more appropriate in capturing the variance of the measurement outcomes. If during the measurement time the system transitions frequently between the ground and excited state, the measurement will probe temperature-averaged quantities (2), while for transitions slower than the measurement time the distribution of measurement outcomes would be broadened by temperature (1). To focus on the effect of the noise we take the limit (2) of long measurement times. We assume that the expectation values of the fluctuations are fully described by the spectral function $S_x(\omega)$ of the noise via $\langle \delta x(0) \delta x(t)\rangle=\int d \omega e^{i\omega t} S_x(\omega)$. For the $1/f$ noise assumed below, $S_x(\omega)=\alpha_x/|\omega|$, we find up to second order in the noise \begin{eqnarray} Y &=& y_0+ y_2 \alpha_x\big(1-\gamma - \log(\omega_{\text{min}}\tau_\text{c}/2)\big) \label{eq:Y} \\ \sigma_Y^2 &=&y_1^2 \alpha_x c+\frac{y_2^2}{2}\alpha_x^2\left(5+c^2 \right)\,\label{eq:sigma} \end{eqnarray} where $\gamma\approx 0.577$ is Euler's constant and $c=3-2\gamma - 2 \log(\omega_{\rm min}\tau_\text{m})$, see Appendix~\ref{sec:noise_app} for details. Note that the nature of $1/f$ noise requires introducing low- and high-frequency cutoffs of the noise in addition to the finite measurement time. The cutoffs can be physically motivated. The high-frequency cutoff arises due to the finite correlation time of the noise. For short times $t \ll \tau_\text{c}$ one expects $\langle\delta x(0) \delta x(t)\rangle$ to approach a constant. The specific value of this time scale is not important due to the weak logarithmic dependence. We associate $\tau_\text{c}^{-1}$ with the highest frequency that the measurement apparatus can possibly resolve. Noise at higher frequencies simply averages out and cannot be detected during the measurement. The measurements are performed by coupling resonators to the quantum dot and observing shifts in the resonance frequency. This frequency thus provides a natural cutoff for the time scale the detector can resolve. Typical resonator frequencies are $\sim 1$ GHz and thus we set $\tau_\text{c}^{-1}=1$ GHz. The low-frequency cutoff $\omega_{\rm min}$ is given by the inverse timescale at which the system is recalibrated, since very slow components of the noise act as drift, which can be removed by calibration. While the dependence on $\omega_{\rm min}$ is weak, it should be noted that Eqs.~\eqref{eq:Y},\eqref{eq:sigma} emphasize that, similar to conventional qubits, the measurement apparatus of topological qubits needs to be regularly recalibrated.
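To give a feeling for the magnitudes involved, Eqs.~\eqref{eq:Y} and \eqref{eq:sigma} can be evaluated directly; the short sketch below (an illustrative aid only; cutoff and noise values anticipate the estimates used below, while the sensitivities $y_1$, $y_2$ are placeholder numbers) computes the noise-induced shift and broadening of a generic observable.
\begin{verbatim}
import numpy as np

def noise_stats(y0, y1, y2, alpha_x,
                tau_m=1e-6, tau_c=1e-9, cal_time=1e-5):
    """Mean and std of the time-averaged observable, Eqs. (eq:Y)-(eq:sigma).

    alpha_x: 1/f noise strength; tau_m: measurement time;
    tau_c: detector correlation time; cal_time = 1/omega_min.
    """
    g = np.euler_gamma
    w_min = 1.0/cal_time
    c = 3 - 2*g - 2*np.log(w_min*tau_m)
    Y = y0 + y2*alpha_x*(1 - g - np.log(w_min*tau_c/2))
    var = y1**2*alpha_x*c + 0.5*y2**2*alpha_x**2*(5 + c**2)
    return Y, np.sqrt(var)

# example: O(1) sensitivities and alpha_x = 1e-4
Y, sigma = noise_stats(y0=0.5, y1=0.3, y2=-0.2, alpha_x=1e-4)
print(Y, sigma)
\end{verbatim}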
For numerical estimates we use $\omega_{\rm min}^{-1}=10\tau_\text{m}$ with $\tau_\text{m}=1\ \mu$s. For longer recalibration times the noise will grow slowly as $\sqrt{\log(\omega_\text{min}\tau_\text{m})}$. This effect becomes relevant when considering very long time intervals between calibrations, which might be desirable for quantum computation. For example, for $\omega_\text{min}^{-1}\sim 1$ day with $\tau_\text{m}=1\ \mu$s the noise would be increased by a factor $\sim 3$ relative to our estimates. \begin{figure} \includegraphics[width=0.7\columnwidth,height=0.23\columnwidth]{SNR} \caption{Diagram explaining the definition of the signal and noise given by Eqs. \eqref{eq:signal},\eqref{eq:noise}. $Y$ is a measured quantity which depends on $p$; black lines indicate the respective values of $Y$. The red line is the level broadening due to the noise with a standard deviation $\sigma_Y$.} \label{fig:SNR} \end{figure} During the qubit readout the goal is to be able to differentiate between the parity $p=+1$ and $p=-1$ states by measuring the observables discussed in the previous section. This is schematically illustrated in Fig.~\ref{fig:SNR}. Thus, for the particular case of measurement visibility analysis, we define the signal $\cal{S}$ and the noise $\cal{N}$ in the variable $Y$ as \begin{align} \mathcal{S}_Y&= |Y(p=+1)-Y(p=-1)|\label{eq:signal}\\ \mathcal{N}_Y&=\sigma_Y(p=+1)+\sigma_Y(p=-1).\label{eq:noise} \end{align} \section{\label{sec:Detuning noise}Detuning noise} The dominant source of noise in the island-QD detuning $\Delta$ is noise in the QD gate charge $n_\text{g}$, which is typically dominated by $1/f$ charge noise \begin{equation} S_\Delta(\omega)=\varepsilon_\text{C}^2\frac{\alpha_\text{C}}{|\omega|}. \label{charge_noise} \end{equation} Here we explicitly wrote the coupling strength of the noise to the system, which is controlled by the charging energy $\varepsilon_\text{C}$, and the strength of the noise described by the dimensionless parameter $\alpha_\text{C}$ that depends on the environment that is causing the charge noise. The latter depends on the experimental setup and materials. We estimate $\alpha_\text{C}$ by considering the strength of dephasing of charge qubits in InAs/Al hybrid systems that are investigated for their potential use as building blocks for Majorana qubits. Reference \cite{VanZanten2020} reports the coherence time of an InAs/Al-based superconducting charge qubit with $\varepsilon_\text{C}/h \sim 10$ GHz to be $T_2^{\ast}\sim 1$ ns. A simple estimate for the dephasing caused by charge noise is given by $T_2^{\ast}\sim \hbar/(\varepsilon_\text{C} \sqrt{\alpha_\text{C}})$ \cite{Knapp2018}. We use this relation to estimate the experimentally relevant $\sqrt{\alpha_\text{C}}\sim 0.01$. Typical values for the charging energy of InAs QDs are $\varepsilon_\text{C} \sim 100\,\mu$eV \cite{vanVeen2019, deJong2019}, which leads to $\varepsilon_\text{C} \sqrt{\alpha_\text{C}} \sim 1\,\mu$eV. Note that this gives a conservative estimate for the strength of charge noise for the topological qubit, as it assumes no optimization of noise as compared to current experimental capabilities. Similar estimates for charge qubits in the much more mature GaAs-based systems yield $\sqrt{\alpha_\text{C}}\sim 10^{-4}$ \cite{Petersson2010,Knapp2018}. The perturbative treatment of the noise in Eq.~\eqref{eq:noise_expansion} close to $\Delta=0$ is justified for the charge noise as long as $\sqrt{\alpha_\text{C}}\varepsilon_\text{C} \ll |\bar{t}_p|$.
The above estimate of $\sqrt{\alpha_\text{C}} \ll 1$ therefore justifies the perturbative treatment as long as the effective tunnel couplings $\bar{t}_p$ are not too small compared to $\varepsilon_\text{C}$. For our numerical estimates we take $|t_1|=t,\ |t_2|=1.5t,\ t=\varepsilon_\text{C}/5$, which guarantees the validity of the perturbative treatment for the entire parameter range of $\Delta$ and $\phi$. The coupling asymmetry $|t_2|/|t_1|=1.5$ does not correspond to the case of maximum visibility (which is reached for $|t_2|/|t_1|=1$). However, a certain degree of coupling asymmetry is expected in QD-based readout experiments, as fine-tuning the couplings might pose a challenge. Using the expressions of Eq.~\eqref{eq:nQD} (Eq.~\eqref{eq:Cdiff}) for the average QD charge (differential capacitance of the QD) in the 2-MZM measurement case, we plot the zero-temperature dependence of the signal $\mathcal{S}_n$ ($\mathcal{S}_\text{C}$) and noise $\mathcal{N}_n$ ($\mathcal{N}_\text{C}$) given by Eqs.~\eqref{eq:signal} and \eqref{eq:noise} in terms of the phase and detuning in Figs. \ref{fig:SNR_Delta_Delta} and \ref{fig:SNR_Delta_phi}. The temperature dependence of the detuning noise is analyzed in Appendix~\ref{sec:Temperature dependence}. \begin{figure} \includegraphics[width=0.48\columnwidth]{SN_nQD_Delta_phipi4} \includegraphics[width=0.48\columnwidth]{SN_nQD_Delta_phipi2} \includegraphics[width=0.48\columnwidth]{SN_Cdiff_Delta_phipi4} \includegraphics[width=0.48\columnwidth]{SN_Cdiff_Delta_phipi2} \caption{Signal \eqref{eq:signal} and noise \eqref{eq:noise} for the 2-MZM measurements of the average QD charge $\langle n_{\text{QD}} \rangle$ (a)-(b) and differential QD capacitance $C_{\text{diff}}/C_{\rm \Sigma,D}$ (c)-(d) as a function of detuning $\Delta$ for different values of $\phi$. Here we assume that the system is in its ground state ($T=0$) and set $|t_1|=t,\ |t_2|=1.5t,\ t=\varepsilon_\text{C}/5=0.02\text{ meV}$, $C_\text{g}/C_{\rm \Sigma,D}=2$, and the noise is detuning noise of strength $\sqrt{\alpha_C}=0.01$.} \label{fig:SNR_Delta_Delta} \end{figure} The dependence on the detuning $\Delta$ shows that the charge signal $\mathcal{S}_n$ takes its maximal value for $\Delta=\Delta_n^\text{max}$ with $\Delta_n^\text{max} \sim t$. This follows from the suppression of the signal at $\Delta=0$ ($\Delta \rightarrow \infty$) due to the QD charge reaching $\langle n_{\rm QD} \rangle =1/2$ ($\langle n_{\rm QD} \rangle =1$) independent of parity. Neglecting noise one can find analytically $\Delta_n^\text{max} = 2|\bar{t}_-^2 \bar{t}_+|^{1/3}/\sqrt{1+|\bar{t}_-/\bar{t}_+|^{2/3}}$. We checked numerically that for our choice of parameters the noise-induced term in $\mathcal{S}_n$ is perturbative, i.e. much smaller than the noise-free term, and thus produces small corrections to this analytical result. In the regime of perturbative noise and $T=0$ the differential capacitance signal $\mathcal{S}_\text{C}$ is always maximal at $\Delta=0$ while vanishing at $\Delta=\Delta_\text{C}^\text{min}=\Delta_n^\text{max}$. The latter marks the point where the differential capacitance corresponding to the smaller of $|\bar{t}_-|$ and $|\bar{t}_+|$, which is generally dominating around small detuning due to a larger curvature, is equal to the differential capacitance of the larger coupling, which dominates in the regime of large detuning. Note that at finite temperature and in the presence of noise $\Delta=0$ might not always be the point of maximal signal and $\Delta_{C}^\text{min}$ might differ from $\Delta_n^\text{max}$.
Consider for example the regime of extreme fine-tuning where $|\bar{t}_-| \ll T,\sigma_\Delta$ while $|\bar{t}_+| \gg T,\sigma_\Delta$. In that limit the contribution of the $p=-1$ parity to the differential capacitance would vanish as $C_{\text{diff},-}\propto |\bar{t}_-|/(T\sigma_\Delta)$. A derivation of this expression is given in Appendix~\ref{sec:Cdiff_-}. Approaching this regime, $\Delta_\text{C}^\text{min}$ would shift to values smaller than $\Delta_n^\text{max}$ and eventually reach $\Delta_\text{C}^\text{min}=0$. Further reducing $|\bar{t}_-|$ would make $\mathcal{S}_\text{C}$ be dominated by the $p=+1$ branch independent of $\Delta$, thus restoring $\Delta=0$ as the point of maximal signal. Naturally, the limit of very small $|\bar{t}_-|$ breaks the perturbative treatment of noise used in this paper. Nevertheless, as long as the noise $\sigma_\Delta$ is weak compared to $|\bar{t}_+|$, results for the limit $|\bar{t}_-|\rightarrow 0$ can be obtained within our formalism by replacing $\mathcal{S}_\text{C} \rightarrow C_{\text{diff},+}$ and $\mathcal{S}_n \rightarrow \langle n_{\text{QD},+}\rangle- \langle n_{\sigma}\rangle$, where $\langle n_{\sigma}\rangle$ is the charge expectation value for vanishing coupling, broadened by the noise \footnote{For Gaussian noise of width $\sigma_\Delta$ the charge of the ground ($-$) and excited ($+$) state is broadened via $n_{\sigma,\pm}=(1\mp\erf(\Delta/\sqrt{2}\sigma_{\Delta}))/2$. The expectation value \unexpanded{$\langle n_{\sigma} \rangle$} is then given by appropriately temperature-averaging the ground- and excited-state contributions.}. The noise $\mathcal{N}_n$ is maximal at $\Delta=0$ and falls off quickly at large detuning. From the perspective of pure charge noise the SNR would thus be largest at large detuning, where, however, the signal is also suppressed. The presence of other noise sources will limit this behavior. For example, the effect of external amplifier noise is typically minimized for the strongest signal. At the maximal signal, i.e., $\Delta=\Delta_n^\text{max}$, we find a charge-noise-limited SNR of ${\approx}12$ for $\phi=\pi/2$. Thus, as long as the integration times are not sufficiently long to push the amplifier-limited SNR beyond 12, the point of maximal experimental SNR will be close to $\Delta=\Delta_n^\text{max}$. \begin{figure} \includegraphics[width=0.48\columnwidth]{SN_nQD_phi_Delta2} \includegraphics[width=0.48\columnwidth]{SN_nQD_phi_Delta5} \includegraphics[width=0.48\columnwidth]{SN_Cdiff_phi_Delta0} \includegraphics[width=0.48\columnwidth]{SN_Cdiff_phi_Delta5} \caption{Signal \eqref{eq:signal} and noise \eqref{eq:noise} for the 2-MZM measurement of the average QD charge $\langle n_{\text{QD}} \rangle$ (a)-(b) and differential QD capacitance $C_{\text{diff}}/C_{\rm \Sigma,D}$ (c)-(d) as a function of phase $\phi$ for different values of $\Delta/t$. We used the same parameters as in Fig.~\ref{fig:SNR_Delta_Delta}.} \label{fig:SNR_Delta_phi} \end{figure} The noise $\mathcal{N}_\text{C}$ shows a local minimum at $\Delta=0$ due to the absence of the first-order contribution of the charge noise. This emphasizes that for capacitive measurements $\Delta=0$ is likely the optimal operation point. The only exception is the above-mentioned regime where the smaller of the effective couplings, say $|\bar{t}_-|$, is accidentally of the order of $T \sigma_\Delta/|\bar{t}_+|$. For the parameters we used, we find a charge-noise-limited SNR of ${\approx}20$ for $\phi=\pi/2$.
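The quoted charge-noise-limited SNRs follow directly from the definitions of Eqs.~\eqref{eq:signal} and \eqref{eq:noise}; a minimal helper (our sketch, with hypothetical argument names) reads:
\begin{verbatim}
def signal(y_plus, y_minus):
    """S_Y = |Y(p=+1) - Y(p=-1)|, Eq. (signal)."""
    return abs(y_plus - y_minus)

def noise(sigma_plus, sigma_minus):
    """N_Y = sigma_Y(p=+1) + sigma_Y(p=-1), Eq. (noise)."""
    return sigma_plus + sigma_minus

def snr(y_plus, y_minus, sigma_plus, sigma_minus):
    return signal(y_plus, y_minus) / noise(sigma_plus, sigma_minus)
\end{verbatim}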
Figure~\ref{fig:SNR_Delta_phi} shows the $\phi$-dependence of the signal and noise. The main effect of changing $\phi$ is to increase the difference between $|\bar{t}_+|$ and $|\bar{t}_-|$ as $\phi$ approaches $\pi/2$. This generally increases the signal for all observables as long as the noise remains perturbative. Away from $\Delta=0$, changing $\phi$ has only a relatively weak effect on the noise, which means that for charge measurements $\phi \rightarrow \pi/2$ is always preferable. In the case of capacitance measurements operated at $\Delta=0$, approaching $\phi=\pi/2$ increases not only the signal but also the noise. The optimal SNR can thus be obtained away from $\phi=\pi/2$. Similar to the discussion of the effect of noise on the charge measurements at large detuning, external constraints will determine whether the increase in the signal or the reduction of the noise is more important for obtaining the best experimental SNR. \section{\label{sec:Conclusion}Conclusion} In the present work we identified detuning charge noise as the dominant source of intrinsic noise that affects the measurement visibility of Majorana qubits probed by QDs. We studied the Hamiltonian for 2-MZM and 4-MZM measurements non-perturbatively in the tunnel coupling and emphasized the similarity of their description in the regime of small detunings, which in general optimizes the SNR in the presence of external noise. 4-MZM measurements require more tuning and more manipulations to bring the system into the optimal measurement regime, but can produce a signal of the \textit{same} order as 2-MZM measurements. We thus analyzed the SNR of the 2-MZM measurement in detail and expect the 4-MZM one to behave similarly. Generally we obtain large SNRs $\gtrsim 10$ under conservative assumptions on the charge noise for a system tuned to the optimal measurement regime. Since we did not explicitly treat external noise sources like amplifiers, our SNRs should be understood as the limiting SNRs that can be obtained after long measurement times. The large obtained SNRs indicate that charge noise will likely not limit the fidelity of measurements of topological qubits. We make concrete predictions for the visibility of the topological qubit measurement, but our results are relevant for, and can be tested in, simpler setups. For example, test devices replacing the qubit island with another QD show similar interference effects. Our SNRs can then be understood as describing the difference between measurements where the enclosed phases of the tunnel couplings are $\phi$ and $\phi+\pi$. \textit{Note added.} Recently a related manuscript appeared addressing the effects of charge noise on 2-MZM measurements \cite{Maman2020}. The authors treat noise in the detuning non-perturbatively by convolving the signal with a phenomenological Gaussian broadening. The explicit treatment of the $1/f$ character of the charge noise presented here could be used to better inform the parameters of the Gaussian broadening. \begin{acknowledgments} We thank Roman Lutchyn and Bela Bauer for useful discussions. \end{acknowledgments}
\section{Introduction} The Singular Value Decomposition (SVD) can factorize a matrix into orthogonal eigenbases and non-negative singular values, serving as an essential step for many matrix operations. Recently in computer vision and deep learning, many approaches have integrated the SVD as a meta-layer in neural networks to perform differentiable spectral transformations, such as the matrix square root and inverse square root. The applications arise in a wide range of methods, including Global Covariance Pooling (GCP)~\cite{li2017second,song2021approximate,gao2021temporal}, decorrelated Batch Normalization (BN)~\cite{huang2018decorrelated,huang2021group,song2022fast}, Whitening and Coloring Transform (WCT) for universal style transfer~\cite{li2017universal,chiu2019understanding,wang2020diversified}, and Perspective-n-Point (PnP) problems~\cite{brachmann2017dsac,campbell2020solving,dang2020eigendecomposition}. For the input feature map ${\mathbf{X}}$ passed to the SVD meta-layer, one often first computes the covariance of the feature as ${\mathbf{X}}\mX^{T}$. This ensures that the covariance matrix is both symmetric and positive semi-definite, which rules out negative eigenvalues and makes the left and right eigenvector matrices identical. However, it is observed that inserting the SVD layer into deep models typically makes the covariance very ill-conditioned~\cite{song2021approximate}, with deleterious consequences for the stability and optimization of the training process. For a given covariance ${\mathbf{A}}$, its conditioning is measured by the condition number: \begin{equation} \kappa({\mathbf{A}}) = \sigma_{max}({\mathbf{A}}) \sigma_{min}^{-1}({\mathbf{A}}) \end{equation} where $\sigma_{max}(\cdot)$ and $\sigma_{min}(\cdot)$ denote the largest and smallest eigenvalues of the matrix. Mathematically speaking, the condition number measures how sensitive the SVD is to errors in the input. Matrices with low condition numbers are considered \textbf{well-conditioned}, while matrices with high condition numbers are said to be \textbf{ill-conditioned}. Specific to neural networks, ill-conditioned covariance matrices are harmful to the training process in several aspects, which we will analyze in detail later. This phenomenon was first observed in the GCP methods by~\cite{song2021approximate}, and we find that it generally extends to other SVD-related tasks, such as decorrelated BN. Fig.~\ref{fig:cover_cond} depicts the covariance conditioning of these two tasks throughout the training. As can be seen, the integration of the SVD layer makes the generated covariance very ill-conditioned (${\approx}1e12$ for decorrelated BN and ${\approx}1e16$ for GCP). By contrast, the conditioning of the approximate solver, \emph{i.e.,} the Newton-Schulz iteration (NS iteration)~\cite{higham2008functions}, is about $1e5$ for decorrelated BN and around $1e15$ for GCP, while the standard BN only has a condition number of $1e3$. \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/cover_cond.jpg} \caption{The covariance conditioning of the SVD meta-layer during the training process in the tasks of decorrelated BN (\emph{left}) and GCP (\emph{right}). The decorrelated BN is based on ResNet-50 and CIFAR100, while ImageNet and ResNet-18 are used for the GCP.} \label{fig:cover_cond} \end{figure} Ill-conditioned covariance matrices can harm the training of the network in both the forward pass (FP) and the backward pass (BP).
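The condition number defined above is straightforward to monitor in practice. As a minimal PyTorch sketch (our illustration, not code released with the paper; the helper name is hypothetical):
\begin{verbatim}
import torch

def covariance_condition_number(X, eps=0.0):
    """Return kappa = sigma_max / sigma_min of the covariance X X^T.

    X: feature matrix of shape (C, N), e.g. channels x (batch*H*W).
    """
    cov = X @ X.t()                        # symmetric PSD covariance (C, C)
    eigvals = torch.linalg.eigvalsh(cov)   # real eigenvalues, ascending
    return (eigvals[-1] / (eigvals[0] + eps)).item()

# a random feature map is typically well-conditioned; features that are
# nearly collinear across channels blow the condition number up
X = torch.randn(64, 1024)
print(covariance_condition_number(X))
\end{verbatim}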
For the FP, mainly the SVD solver is affected, in terms of stability and accuracy. Since the ill-conditioned covariance has many trivially small eigenvalues, it is difficult for an SVD solver to estimate them accurately, and large round-off errors are likely to be triggered, which might hurt the network performance. Moreover, the very imbalanced eigenvalue distribution can easily make the SVD solver fail to converge and cause training failure~\cite{wang2021robust,song2021approximate}. For the BP, as pointed out in~\cite{lecun2012efficient,wiesler2011convergence,huang2018decorrelated}, the feature covariance is closely related to the Hessian matrix during backpropagation. As the error curvature is given by the eigenvalues of the Hessian matrix~\cite{sutskever2013importance}, for an ill-conditioned Hessian the Gradient Descent (GD) step would bounce back and forth in high-curvature directions (large eigenvalues) and make slow progress in low-curvature directions (small eigenvalues). As a consequence, the ill-conditioned covariance could cause slow convergence and oscillations in the optimization landscape. The generalization abilities of a deep model are thus harmed. Due to the data-driven learning nature and the highly non-linear transforms of deep neural networks, directly deriving the analytical form of the covariance conditioning is intractable. Some simplifications have to be performed to ease the investigation. Since the covariance is generated and passed from the previous layer, the previous layer is likely to be the most relevant to the conditioning. Therefore, we naturally limit our focus to the Pre-SVD layer, \emph{i.e.,} the layer before the SVD layer. To further simplify the analysis, we study the Pre-SVD layer in two consecutive training steps, which can be considered a mimic of the whole training process. Throughout the paper, we mainly investigate some meaningful manipulations of the weight, the gradient, and the learning rate of the Pre-SVD layer in two sequential training steps. \textit{Under our Pre-SVD layer simplifications, one promising direction to improve the conditioning is enforcing orthogonality on the weights.} Orthogonal weights have the norm-preserving property, which could improve the conditioning of the feature matrix. This technique has been widely studied in the literature on stable training and Lipschitz networks~\cite{mishkin2016all,wang2020orthogonal,singla2021skew}. We select some representative methods and validate their effectiveness in the task of decorrelated BN. Our experiment reveals that these orthogonal techniques can greatly improve the covariance conditioning, but bring only marginal performance improvements or even slight degradation. \textit{This indicates that when the representation power of the weight is limited, the improved conditioning does not necessarily lead to better performance. Orthogonalizing only the weight is thus insufficient to improve the generalization.} Instead of seeking orthogonality constraints on the weights, we propose our Nearest Orthogonal Gradient (NOG) and Optimal Learning Rate (OLR). These two techniques explore orthogonality possibilities for the gradient and the learning rate. More specifically, our NOG modifies the gradient of the Pre-SVD layer into its nearest-orthogonal form while keeping the GD direction unchanged.
On the other hand, the proposed OLR dynamically changes the learning rate of the Pre-SVD layer at each training step such that the updated weight is as close to an orthogonal matrix as possible. The experimental results demonstrate that the proposed two techniques not only significantly improve the covariance conditioning but also bring obvious improvements in the validation accuracy of both GCP and decorrelated BN. Moreover, when combined with the orthogonal weight treatments, the performance improves further. Besides the application to the differentiable SVD, we propose that our orthogonality techniques can also be used for unsupervised latent disentanglement of Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative}. Recent works~\cite{zhu2021low,shen2021closed} revealed that the latent disentanglement of GANs is closely related to the gradient or weight of the first projector after the latent code. In particular, the eigenvectors of the gradient or weight can be viewed as closed-form solutions for interpretable directions~\cite{shen2021closed}. This raises the need for enforcing orthogonal constraints on the projector. \textit{As shown in Fig.~\ref{fig:ortho_illu}, compared with non-orthogonal matrices, orthogonal matrices can lead to more disentangled representations and more precise attributes due to the property of equally important eigenvectors.} Motivated by this observation, we propose to enforce our NOG and OLR as orthogonality constraints in generative models. Extensive experiments on various architectures and datasets demonstrate that our methods indeed improve the disentanglement ability of identifying semantic attributes and achieve state-of-the-art performance against other disentanglement approaches. The main contributions are summarized below: \begin{itemize} \item We systematically study the problem of how to improve the covariance conditioning of the SVD meta-layer. We propose our Pre-SVD layer simplification to investigate this problem from the perspective of orthogonal constraints. \item We explore different techniques of orthogonal weights to improve the covariance conditioning. Our experiments reveal that these techniques could improve the conditioning but would harm the generalization abilities due to the limitation on the representation power of the weight. \item We propose the nearest orthogonal gradient and the optimal learning rate. The experiments on GCP and decorrelated BN demonstrate that these methods can attain better covariance conditioning and improved generalization. Their combinations with weight treatments can further boost the performance. \item We show that our proposed orthogonality approaches can be applied to the GAN projector for an improved latent disentanglement ability of discovering precise semantic attributes, which opens the way for new applications of orthogonality techniques. \end{itemize} This paper is an extension of our previous conference paper~\cite{song2022improving}. In~\cite{song2022improving}, we proposed two orthogonality techniques and demonstrated that these methods can simultaneously improve the covariance conditioning and the generalization abilities of the SVD meta-layer. This journal extension motivates and proposes that these techniques can also be applied to generative models for better latent disentanglement. This point is validated through extensive experiments on various generative architectures and datasets.
Moreover, we also investigate how often our OLR is triggered throughout the training and show that the evaluation results agree well with our theoretical analysis. The rest of the paper is organized as follows: Sec.~\ref{sec:related} describes the related work in differentiable matrix decomposition, orthogonality applications, and unsupervised latent disentanglement. Sec.~\ref{sec:pre_svd_and_weight} introduces our Pre-SVD layer simplification and orthogonal weight treatments, and Sec.~\ref{sec:NOG_OLR} presents the proposed orthogonality techniques. Sec.~\ref{sec:ortho_latent} motivates why orthogonality can improve latent disentanglement. Sec.~\ref{sec:exp} provides experimental results and some in-depth analysis. Finally, Sec.~\ref{sec:conclusion} summarizes the conclusions. \section{Related Work} \label{sec:related} \subsection{Differentiable Matrix Decomposition} Differentiable matrix decomposition is widely used in neural networks as a spectral meta-layer. Ionescu~\emph{et al.}~\cite{ionescu2015matrix,ionescu2015training} first proposed the theory of matrix back-propagation and laid the foundation for follow-up research. In deep neural networks, the transformations of the matrix square root and its inverse are often desired due to their appealing spectral properties. Their applications cover a wide range of computer vision tasks~\cite{song2022fast,song2022fast2}. To avoid the huge time consumption of the SVD, some iterative methods have also been developed to approximate the solution~\cite{higham2008functions,song2022fast,song2022fast2}. Recently, Song~\emph{et al.}~\cite{song2022batch} proposed a dedicated eigen-solver for improving the computation speed on batched matrices. In~\cite{huang2018decorrelated,chiu2019understanding,huang2019iterative,huang2020investigation,huang2021group,song2022fast}, the inverse square root is used in the ZCA whitening transform to whiten the feature map, which is also known as decorrelated BN. The Global Covariance Pooling (GCP) models~\cite{li2017second,li2018towards,wang2020deep,xie2021so,song2021approximate,gao2021temporal,song2022eigenvalues} compute the matrix square root of the covariance as a spectral normalization, which achieves impressive performance on recognition tasks, including large-scale visual classification~\cite{li2017second,song2021approximate,xie2021so,song2022fast}, fine-grained visual categorization~\cite{li2017second,li2018towards,song2022eigenvalues}, and video action recognition~\cite{gao2021temporal}. The Whitening and Coloring Transform (WCT), which uses both the matrix square root and the inverse square root, is usually adopted in image generation tasks such as neural style transfer~\cite{li2017universal,wang2020diversified}, image translation~\cite{ulyanov2017improved,cho2019image}, and domain adaptation~\cite{abramov2020keep,choi2021robustnet}. In geometric vision problems, the differentiable SVD is usually applied to estimate the fundamental matrix and the camera pose~\cite{ranftl2018deep,dang2020eigendecomposition,campbell2020solving}. Besides SVD-based factorization, differentiable Cholesky decomposition~\cite{murray2016differentiation} and some low-rank decompositions are used to approximate the attention mechanism~\cite{geng2020attention,xiong2021nystromformer,lu2021soft} or to learn constrained representations~\cite{chan2015pcanet,yang2017admm}.
\subsection{Orthogonality in Neural Network} Orthogonal weights have the benefit of the norm-preserving property, \emph{i.e.,} the relation $||{\mathbf{W}}{\mathbf{A}}||_{\rm F}{=}||{\mathbf{A}}||_{\rm F}$ holds for any orthogonal ${\mathbf{W}}$. When it comes to deep neural networks, such a property can ensure that the signal propagates stably through deep networks without exploding or vanishing gradients~\cite{bengio1994learning,glorot2010understanding}, which could speed up convergence and encourage robustness and generalization. In general, there are three ways to enforce orthogonality on a layer: orthogonal weight initialization~\cite{saxe2014exact,mishkin2016all,xiao2018dynamical}, orthogonal regularization~\cite{rodriguez2016regularizing,bansal2018can,qi2020deep,wang2020orthogonal}, and explicit orthogonal weights via the Cayley transform or the matrix exponential~\cite{maduranga2019complex,trockman2020orthogonalizing,singla2021skew}. Among these techniques, orthogonal regularization and explicit orthogonal weights are the most commonly used, as they often bring practical improvements in generalization. Since the covariance is closely related to the weight matrix of the Pre-SVD layer, enforcing an orthogonality constraint could help to improve the covariance conditioning of the SVD meta-layer. We will choose some representative methods and validate their impact in Sec.~\ref{sec:general_orthogonality}. Notice that the focus of the existing literature differs from ours. The orthogonality constraints are often used to improve the Lipschitz constants of the neural network layers, which is expected to improve the visual quality in image generation~\cite{brock2018large,miyato2018spectral}, to allow for better adversarial robustness~\cite{tsuzuku2018lipschitz,singla2021skew}, and to improve generalization abilities~\cite{sedghi2018singular,wang2020orthogonal}. Our work is concerned with improving the covariance conditioning and the generalization performance. Moreover, the orthogonality literature mainly investigates how to enforce orthogonality on weight matrices, whereas less attention is paid to the gradient and the learning rate. In Sec.~\ref{sec:NOG_OLR}, we will explore such possibilities and propose our solutions: the nearest orthogonal gradient, and an optimal learning rate chosen such that the updated weight is as close to an orthogonal matrix as possible. \subsection{Unsupervised Latent Disentanglement of GANs} Interpreting the latent spaces of GAN models in an unsupervised manner has received wide attention recently~\cite{bau2019gan,jahanian2020steerability,voynov2020unsupervised,tzelepis2021warpedganspace}. This can help to identify semantic attributes of the image and to precisely control the generation process, which could benefit both local and global image editing tasks~\cite{shen2020interpreting,zhu2021low}. Voynov~\emph{et al.}~\cite{voynov2020unsupervised} proposed to jointly learn a set of directions and an extra classifier such that the interpretable directions can be recognized. In~\cite{harkonen2020ganspace}, the authors proposed to perform PCA on the sampled data to capture the interpretable directions. More recently, Shen~\emph{et al.}~\cite{shen2021closed} and Zhu~\emph{et al.}~\cite{zhu2021low} pointed out that the semantic attributes are characterized by the eigenvectors of the weight or gradient of the first projector after the latent code.
Motivated by this observation, we propose to enforce our orthogonality techniques on the gradient and weight matrices. Besides our orthogonality techniques, a few works have applied implicit orthogonality in the training process of GANs to attain more disentangled representations~\cite{peebles2020hessian,zhu2020learning,he2021eigengan,wei2021orthogonal}. In~\cite{peebles2020hessian,wei2021orthogonal}, the authors proposed to add an orthogonal Hessian/Jacobian penalty to encourage disentanglement. He~\emph{et al.}~\cite{he2021eigengan} designed a dedicated GAN architecture where multi-level latent codes and orthogonal weight constraints are applied. Different from previous approaches, our orthogonality treatments do not rely on any implicit regularization. Instead, our NOG explicitly maps the original gradient into its nearest-orthogonal form, while our OLR keeps the updated weight as close as possible to an orthogonal matrix. \section{Pre-SVD Layer and Weight Treatments} \label{sec:pre_svd_and_weight} In this section, we first motivate our simplification of the Pre-SVD layer, and then validate the efficacy of some representative weight treatments. \subsection{Pre-SVD Layer Simplification} \label{sec:pre_svd} A neural network consists of a sequence of non-linear layers where the learning of each layer is data-driven. Stacking these layers leads to a highly non-linear and complex transform, which makes directly analyzing the covariance conditioning intractable. To solve this issue, we have to perform some simplifications. Our simplifications involve limiting the analysis only to the layer previous to the SVD layer (which we dub the Pre-SVD layer) in two consecutive training steps. The Pre-SVD layer directly determines the conditioning of the generated covariance, while the two successive training steps are a mimic of the whole training process. The idea is to simplify the complex transform by analyzing the sub-model (two layers) and the sub-training (two steps), which can be considered an ``abstract representation'' of the deep model and its complete training. Let ${\mathbf{W}}$ denote the weight matrix of the Pre-SVD layer. Then for the input ${\mathbf{X}}_{l}$ passed to the layer, we have: \begin{equation} {\mathbf{X}}_{l+1} = {\mathbf{W}}{\mathbf{X}}_{l} + \mathbf{b} \end{equation} where ${\mathbf{X}}_{l+1}$ is the feature passed to the SVD layer, and $\mathbf{b}$ is the bias vector. Since the bias $\mathbf{b}$ has little influence here, we omit it for simplicity. The covariance in this step is computed as ${\mathbf{W}}{\mathbf{X}}_{l}{\mathbf{X}}_{l}^{T}{\mathbf{W}}^{T}$. After the BP, the weight matrix is updated as $\mathbf{W}{-}{\eta}\frac{\partial l}{\partial \mathbf{W}}$, where $\eta$ denotes the learning rate of the layer. Let ${\mathbf{Y}}_{l}$ denote the passed-in feature of the next training step.
Then the covariance is calculated as: \begin{equation} \begin{aligned} {\mathbf{C}} &= \Big( (\mathbf{W}-\eta\frac{\partial l}{\partial \mathbf{W}})\cdot{\mathbf{Y}}_{l} \Big)\Big( (\mathbf{W}-\eta\frac{\partial l}{\partial \mathbf{W}})\cdot{\mathbf{Y}}_{l} \Big)^{T}\\ &=\begin{gathered} {\mathbf{W}}{\mathbf{Y}}_{l}{\mathbf{Y}}_{l}^{T}{\mathbf{W}}^{T} {-} \eta\frac{\partial l}{\partial \mathbf{W}}{\mathbf{Y}}_{l}{\mathbf{Y}}_{l}^{T}{\mathbf{W}}^{T}\\ {-} \eta{\mathbf{W}}{\mathbf{Y}}_{l}{\mathbf{Y}}_{l}^{T}(\frac{\partial l}{\partial \mathbf{W}})^{T} {+} \eta^{2}\frac{\partial l}{\partial \mathbf{W}}{\mathbf{Y}}_{l}{\mathbf{Y}}_{l}^{T}(\frac{\partial l}{\partial \mathbf{W}})^{T} \end{gathered} \label{eq:problem} \end{aligned} \end{equation} where ${\mathbf{C}}$ denotes the generated covariance of the second step. The problem now becomes how to stop the new covariance ${\mathbf{C}}$ from becoming worse-conditioned than ${\mathbf{W}}{\mathbf{X}}_{l}{\mathbf{X}}_{l}^{T}{\mathbf{W}}^{T}$. In eq.~\eqref{eq:problem}, three variables could influence the conditioning: the weight ${\mathbf{W}}$, the gradient of the last step $\frac{\partial l}{\partial {\mathbf{W}}}$, and the learning rate $\eta$ of this layer. Among them, the weight ${\mathbf{W}}$ seems to be the most important, as it contributes to three terms of eq.~\eqref{eq:problem}. Moreover, the first term ${\mathbf{W}}{\mathbf{Y}}_{l}{\mathbf{Y}}_{l}^{T}{\mathbf{W}}^{T}$, computed from ${\mathbf{W}}$, is not attenuated by $\eta$ or $\eta^{2}$ like the other terms. Therefore, it is natural to first consider manipulating ${\mathbf{W}}$ such that the conditioning of ${\mathbf{C}}$ could be improved. \subsection{General Treatments on Weights} \label{sec:general_orthogonality} In the literature on enforcing orthogonality in neural networks, there are several techniques to improve the conditioning of the weight ${\mathbf{W}}$. We now introduce some representative methods and validate their impact. \subsubsection{Spectral Normalization (SN)} In~\cite{miyato2018spectral}, the authors propose a normalization method to stabilize the training of generative models~\cite{goodfellow2014generative} by dividing the weight matrix by its largest singular value. The process is defined as: \begin{equation} {\mathbf{W}} / \sigma_{max}({\mathbf{W}}) \end{equation} Such a normalization ensures that the spectral norm of ${\mathbf{W}}$ is always $1$, \emph{i.e.,} $\sigma_{max}({\mathbf{W}}){=}1$. This could help to reduce the condition number of the covariance, since we have $\sigma_{max}({\mathbf{W}}{\mathbf{Y}}_{l}){\leq}\sigma_{max}({\mathbf{Y}}_{l})$ after the spectral normalization. \subsubsection{Orthogonal Loss (OL)} Besides limiting the spectral radius of ${\mathbf{W}}$, enforcing an orthogonality constraint could also improve the covariance conditioning. As orthogonal matrices are norm-preserving (\emph{i.e.,} $||{\mathbf{W}}{\mathbf{Y}}_{l}||_{\rm F}{=}||{\mathbf{Y}}_{l}||_{\rm F}$ for orthogonal ${\mathbf{W}}$), many methods have been proposed to encourage orthogonality on weight matrices for more stable training and a better signal-preserving property~\cite{pascanu2013difficulty,bansal2018can,wang2020orthogonal,trockman2020orthogonalizing,singla2021skew}. One common technique is to apply \emph{soft} orthogonality~\cite{wang2020orthogonal} via the following regularization: \begin{equation} l=||{\mathbf{W}}\mW^{T}-{\mathbf{I}}||_{\rm F} \end{equation} This extra loss is added to the optimization objective to encourage more orthogonal weight matrices.
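As an illustration, the soft orthogonality regularizer above amounts to a few lines of PyTorch (a minimal sketch; \texttt{lambda\_ortho} and the layer name in the usage comment are assumptions for the example):
\begin{verbatim}
import torch

def soft_orthogonality_loss(W):
    """Soft orthogonality penalty ||W W^T - I||_F of the weight W."""
    I = torch.eye(W.shape[0], device=W.device, dtype=W.dtype)
    return torch.linalg.matrix_norm(W @ W.t() - I)  # Frobenius norm

# usage inside a training step:
# loss = task_loss + lambda_ortho * soft_orthogonality_loss(pre_svd.weight)
\end{verbatim}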
However, since the constraint is achieved by regularization, the weight matrix is not exactly orthogonal at each training step. \subsubsection{Orthogonal Weights (OW)} Instead of applying \emph{soft} orthogonality by regularization, some methods explicitly enforce \emph{hard} orthogonality on the weight matrices~\cite{trockman2020orthogonalizing,singla2021skew}. The technique of~\cite{singla2021skew} is built on the mathematical property that the matrix exponential of any skew-symmetric matrix is an orthogonal matrix: \begin{equation} \exp({\mathbf{W}}-{\mathbf{W}}^{T})\exp({\mathbf{W}}-{\mathbf{W}}^{T})^{T}={\mathbf{I}} \end{equation} where the operation ${\mathbf{W}}{-}{\mathbf{W}}^{T}$ makes the matrix skew-symmetric, \emph{i.e.,} the relation ${\mathbf{W}}{-}{\mathbf{W}}^{T}{=}-({\mathbf{W}}{-}{\mathbf{W}}^{T})^{T}$ always holds. Then $\exp({\mathbf{W}}{-}{\mathbf{W}}^{T})$ is used as the weight. This technique explicitly constructs the weight as an orthogonal matrix, and the orthogonality constraint is thus always satisfied during training. \begin{figure}\CenterFloatBoxes \begin{floatrow} \ffigbox{% \includegraphics[width=0.99\linewidth]{imgs/dbn_weight.jpg} }{% \caption{Covariance conditioning during the training process. All weight treatments can improve the conditioning.}% \label{fig:ortho_weight} } \capbtabbox{% \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c} \toprule Methods & mean$\pm$std & min \\ \hline SVD & 19.99$\pm$0.16 &19.80 \\ \hline SVD + SN & 19.94$\pm$0.33 &19.60 \\ SVD + OL & \textbf{19.73$\pm$0.28} & \textbf{19.54} \\ SVD + OW & 20.06$\pm$0.17 &19.94 \\ \hline\hline NS iteration &19.45$\pm$0.33&19.01\\ \bottomrule \end{tabular} } } { \caption{Performance of different weight treatments on ResNet-50 and CIFAR100 based on $10$ runs.}% \label{tab:ortho_weight} } \end{floatrow} \end{figure} We apply the above three techniques in the experiment of decorrelated BN. Fig.~\ref{fig:ortho_weight} displays the covariance conditioning throughout the training, and Table~\ref{tab:ortho_weight} presents the corresponding validation errors. As can be seen, all of these techniques attain much better conditioning, but the performance improvements are not encouraging. The SN reduces the condition number to around $10^{5}$, while the validation error improves only marginally. The \emph{soft} orthogonality of the OL brings a slight performance improvement despite some fluctuations in the conditioning. These fluctuations occur because the orthogonality constraint imposed by regularization is not strictly enforced. Among the weight treatments, the \emph{hard} orthogonality of the OW achieves the best covariance conditioning, continuously maintaining the condition number around $10^{3}$ throughout the training. However, the OW slightly hurts the validation error. This implies that better covariance conditioning does not necessarily correspond to improved performance, and that orthogonalizing only the weight cannot improve the generalization. \textit{We conjecture that enforcing strict orthogonality only on the weight might limit its representation power.} Nonetheless, as will be discussed in Sec.~\ref{sec:NOG}, this side effect can be canceled when we simultaneously orthogonalize the gradient. \section{Nearest Orthogonal Gradient and Optimal Learning Rate} \label{sec:NOG_OLR} This section introduces our proposed techniques for modifying the gradient and the learning rate of the Pre-SVD layer. The combinations with weight treatments are also discussed.
\subsection{Nearest Orthogonal Gradient (NOG)} \label{sec:NOG} As discussed in Sec.~\ref{sec:pre_svd}, the covariance conditioning is also influenced by the gradient $\frac{\partial l}{\partial {\mathbf{W}}}$. However, the existing literature mainly focuses on orthogonalizing the weights. To make the gradient also orthogonal, we propose to find the nearest-orthogonal gradient of the Pre-SVD layer. Different matrix nearness problems have been studied in~\cite{higham1988matrix}, and the nearest-orthogonal problem is defined as: \begin{equation} \min_{{\mathbf{R}}} ||\frac{\partial l}{\partial {\mathbf{W}}}-{\mathbf{R}} ||_{\rm F}\quad \text{subject to}\quad {\mathbf{R}}\mR^{T}={\mathbf{I}} \end{equation} where ${\mathbf{R}}$ is the sought solution. To obtain such an orthogonal matrix, we can construct the error function as: \begin{equation} e({\mathbf{R}}) = Tr\Big((\frac{\partial l}{\partial {\mathbf{W}}}-{\mathbf{R}})^{T}(\frac{\partial l}{\partial {\mathbf{W}}}-{\mathbf{R}})\Big) + Tr\Big(\mathbf{\Sigma} ({\mathbf{R}}^{T}{\mathbf{R}} -{\mathbf{I}}) \Big) \end{equation} where $Tr(\cdot)$ denotes the trace, and $\mathbf{\Sigma}$ denotes the symmetric matrix of Lagrange multipliers. The closed-form solution is given by: \begin{equation} {\mathbf{R}} = \frac{\partial l}{\partial {\mathbf{W}}} \Big(( \frac{\partial l}{\partial {\mathbf{W}}})^{T} \frac{\partial l}{\partial {\mathbf{W}}}\Big)^{-\frac{1}{2}} \end{equation} The detailed derivation is given in the supplementary material. If we have the SVD of the gradient (${\mathbf{U}}{\mathbf{S}}{\mathbf{V}}^{T}{=}\frac{\partial l}{\partial {\mathbf{W}}}$), the solution can be further simplified as: \begin{equation} {\mathbf{R}} = {\mathbf{U}}{\mathbf{S}}{\mathbf{V}}^{T} ({\mathbf{V}}{\mathbf{S}}^{-1}{\mathbf{V}}^{T})={\mathbf{U}}{\mathbf{V}}^{T} \end{equation} As indicated above, the nearest orthogonal gradient is obtained by setting the singular value matrix to the identity, \emph{i.e.,} setting ${\mathbf{S}}$ to ${\mathbf{I}}$. Notice that only the gradient of the Pre-SVD layer is changed, while that of the other layers is not modified. Our proposed NOG brings several practical benefits. \subsubsection{Orthogonal Constraint and Optimal Conditioning} The orthogonal constraint is exactly enforced on the gradient, as we have $({\mathbf{U}}{\mathbf{V}}^{T})^{T}{\mathbf{U}}{\mathbf{V}}^{T}{=}{\mathbf{I}}$. Since we explicitly set all the singular values to $1$, the optimal conditioning is also achieved, \emph{i.e.,} $\kappa(\frac{\partial l}{\partial {\mathbf{W}}}){=}1$. This could help to improve the covariance conditioning. \subsubsection{Keeping Gradient Descent Direction Unchanged} In the high-dimensional optimization landscape, the curvature directions (GD directions) are characterized by the singular vectors of the gradient (${\mathbf{U}}$ and ${\mathbf{V}}$). Although our modification changes the gradient, the singular vectors and hence the GD directions are untouched. In other words, our NOG only adjusts the step size in each GD direction. This indicates that the modified gradients will not harm the performance. \subsubsection{Combination with Weight Treatments} \label{sec:ol_nog_combination} Our orthogonal gradient and the previous weight treatments are complementary: they can be jointly used to simultaneously orthogonalize the gradient and the weight. In the following, we validate their joint impact on the conditioning and the performance.
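In code, the NOG update is a one-liner on top of a reduced SVD of the gradient; the sketch below (our illustration, assuming PyTorch and a square Pre-SVD weight matrix) replaces the gradient by ${\mathbf{U}}{\mathbf{V}}^{T}$ right before the optimizer step:
\begin{verbatim}
import torch

@torch.no_grad()
def nearest_orthogonal_gradient(grad):
    """Map the gradient to its nearest orthogonal matrix R = U V^T."""
    U, S, Vh = torch.linalg.svd(grad, full_matrices=False)
    return U @ Vh   # singular values set to 1; GD directions unchanged

# usage sketch, applied to the Pre-SVD layer only:
# layer.weight.grad.copy_(nearest_orthogonal_gradient(layer.weight.grad))
# optimizer.step()
\end{verbatim}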
\begin{figure}\CenterFloatBoxes \begin{floatrow} \ffigbox{% \includegraphics[width=0.99\linewidth]{imgs/dbn_gradient.jpg} }{% \caption{Covariance conditioning during the training process using the orthogonal gradient and weight treatments.}% \label{fig:gradient} } \capbtabbox{% \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c} \toprule Methods & mean$\pm$std & min \\ \hline SVD & 19.99$\pm$0.16 &19.80 \\ SVD + NOG & 19.43$\pm$0.24 &19.15\\\hline SVD + NOG + SN & 19.43$\pm$0.21 &19.20 \\ SVD + NOG + OL & 20.14$\pm$0.39 &19.54 \\ SVD + NOG + OW & \textbf{19.22$\pm$0.28}&\textbf{18.90} \\ \hline\hline NS iteration &19.45$\pm$0.33&19.01\\ \bottomrule \end{tabular} } }{% \caption{Performance of gradient treatments on ResNet-50 and CIFAR100. Each result is based on $10$ runs.}% \label{tab:gradient} } \end{floatrow} \end{figure} Fig.~\ref{fig:gradient} and Table~\ref{tab:gradient} present the covariance conditioning of decorrelated BN and the corresponding validation errors, respectively. As we can observe, solely using the proposed NOG already largely improves the covariance conditioning, decreasing the condition number from $10^{12}$ to $10^6$. Though this improvement is not as significant as that of the orthogonal constraints (\emph{e.g.,} OL and OW), our NOG benefits the generalization abilities more, improving the validation error by $0.6\%$. Combining the SN with our NOG does not lead to obvious improvements in either the conditioning or the validation errors, whereas the joint use of NOG and OL harms the network performance. This is because the orthogonality constraint imposed by the loss might not be enforced under the gradient manipulation. When our NOG is combined with the OW, the side effect of using only OW is eliminated and the performance is further boosted by $0.3\%$. This phenomenon demonstrates that when the gradient is orthogonal, applying the orthogonality constraint to the weight can also benefit the generalization. \subsection{Optimal Learning Rate (OLR)} So far, we have only considered orthogonalizing ${\mathbf{W}}$ and $\frac{\partial l}{\partial {\mathbf{W}}}$ separately, but how to jointly optimize ${\mathbf{W}}{-}{\eta}\frac{\partial l}{\partial {\mathbf{W}}}$ has not been studied yet. It is desirable to choose an appropriate learning rate $\eta$ such that the updated weight is close to an orthogonal matrix. To this end, we need to achieve the following objective: \begin{equation} \min_{\eta} ||({\mathbf{W}}-{\eta}\frac{\partial l}{\partial {\mathbf{W}}})({\mathbf{W}}-{\eta}\frac{\partial l}{\partial {\mathbf{W}}})^{T}-{\mathbf{I}}||_{\rm F} \end{equation} This optimization problem can be more easily solved in vector form. Let $\mathbf{w}$, ${\mathbf{i}}$, and $\mathbf{l}$ denote the vectorized ${\mathbf{W}}$, ${\mathbf{I}}$, and $\frac{\partial l}{\partial {\mathbf{W}}}$, respectively. Then we construct the error function as: \begin{equation} e(\eta) = \Big((\mathbf{w}-\eta\mathbf{l})^{T}(\mathbf{w}-\eta\mathbf{l})-\mathbf{i}\Big)^{T}\Big((\mathbf{w}-\eta\mathbf{l})^{T}(\mathbf{w}-\eta\mathbf{l})-\mathbf{i}\Big) \end{equation} Expanding this expression and differentiating it w.r.t.
$\eta$ leads to: \begin{equation} \begin{gathered} \frac{d e(\eta)}{d \eta} \approx -4{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}} + 4\eta{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}} + 8\eta{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}} =0\\ \eta^{\star} \approx \frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}} \label{optimal_lr} \end{gathered} \end{equation} where some higher-order terms are neglected. The detailed derivation is given in the supplementary material. Though the proposed OLR theoretically yields the updated weight nearest to an orthogonal matrix, the value of $\eta^{\star}$ is unbounded for arbitrary ${\mathbf{w}}$ and ${\mathbf{l}}$, and directly using $\eta^{\star}$ might cause unstable training. To avoid this issue, we propose to use the OLR only when its value is smaller than the learning rate of the other layers. Let $lr$ denote the learning rate of the other layers. The switch process is defined as: \begin{equation} \eta =\begin{cases} \eta^{\star} & \text{if}\ \eta^{\star}<lr\\ lr & \text{otherwise} \end{cases} \end{equation} \subsubsection{Combination with Weight/Gradient Treatments} When either the weight or the gradient is orthogonal, our OLR needs to be used carefully. When only ${\mathbf{W}}$ is orthogonal, ${\mathbf{w}}^{T}{\mathbf{w}}$ is a small constant and it is very likely that ${\mathbf{w}}^{T}{\mathbf{w}}{\ll}{\mathbf{l}}^{T}{\mathbf{w}}$. Consequently, we have ${\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}{\ll}{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}$ and $\eta^{\star}$ will attenuate to zero. Similarly, for an orthogonal gradient we have ${\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}{\ll}{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}$, which will also drive $\eta^{\star}$ close to zero. Therefore, the proposed OLR cannot work when only the weight or only the gradient is orthogonal. Nonetheless, we note that if both ${\mathbf{W}}$ and $\frac{\partial l}{\partial {\mathbf{W}}}$ are orthogonal, our $\eta^{\star}$ is bounded. Specifically, we have: \begin{prop} When both ${\mathbf{W}}$ and $\frac{\partial l}{\partial {\mathbf{W}}}$ are orthogonal, $\eta^{\star}$ is both upper and lower bounded. The upper bound is $\frac{N^2}{N^2 + 2}$ and the lower bound is $\frac{1}{N^{2}+2}$ where $N$ denotes the row dimension of ${\mathbf{W}}$. \end{prop} We give the detailed proof in the supplementary material. Obviously, the upper bound of $\eta^{\star}$ is smaller than $1$. For the lower bound, since the row dimension $N$ is often large (\emph{e.g.,} $64$), the lower bound of $\eta^{\star}$ can accordingly be very small (\emph{e.g.,} ${\approx}2e{-}4$). This indicates that our proposed OLR can also yield a small learning rate even in the later stage of the training process. In summary, the optimal learning rate is chosen such that the updated weight becomes as close to an orthogonal matrix as possible. In particular, it is suitable when both the gradient and the weight are orthogonal.
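The OLR of Eq.~\eqref{optimal_lr}, together with the switch rule above, amounts to only a few lines of code (a minimal sketch; the variable names are ours):
\begin{verbatim}
import torch

def optimal_learning_rate(W, G, lr):
    """Return eta* of Eq. (optimal_lr), clipped by the global rate lr."""
    w, l = W.reshape(-1), G.reshape(-1)   # vectorized weight and gradient
    ww, lw, ll = w @ w, l @ w, l @ l
    eta_star = (ww * lw) / (ww * ll + 2.0 * lw * lw)
    return min(eta_star.item(), lr)       # switch rule: eta* only if smaller
\end{verbatim}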
\begin{figure}\CenterFloatBoxes \begin{floatrow} \ffigbox{% \includegraphics[width=0.99\linewidth]{imgs/dbn_lr.jpg} }{% \caption{Covariance conditioning during the training process using the optimal learning rate and hybrid treatments.}% \label{fig:olr} } \capbtabbox[\Xhsize]{% \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c} \toprule Methods & mean$\pm$std & min \\ \hline SVD & 19.99$\pm$0.16 &19.80 \\ SVD + OLR & 19.50$\pm$0.39 &18.95 \\\hline SVD + NOG + OLR & 19.77$\pm$0.27 &19.36 \\ SVD + OW + OLR & 20.61$\pm$0.22 &20.43 \\ \makecell[c]{SVD + NOG \\+ OW +OLR} & \textbf{19.05$\pm$0.31}&\textbf{18.77} \\ \hline\hline NS iteration &19.45$\pm$0.33&19.01\\ \bottomrule \end{tabular} } }{% \caption{Performance of the optimal learning rate on ResNet-50 and CIFAR100 based on $10$ runs.}% \label{tab:olr} } \end{floatrow} \end{figure} We give the covariance conditioning and the validation errors in Fig.~\ref{fig:olr} and Table~\ref{tab:olr}, respectively. Our proposed OLR significantly reduces the condition number to $10^{4}$ and improves the validation error by $0.5\%$. When combined with either the orthogonal weight or the orthogonal gradient alone, there is a slight degradation in the validation errors. This meets our expectation, as $\eta^{\star}$ would attenuate to zero in both cases. However, when both ${\mathbf{W}}$ and $\frac{\partial l}{\partial {\mathbf{W}}}$ are orthogonal, jointly using our OLR achieves the best performance, outperforming OLR alone by $0.5\%$ and beating OW$+$NOG by $0.2\%$. This observation confirms that the proposed OLR works well when ${\mathbf{W}}$ and $\frac{\partial l}{\partial {\mathbf{W}}}$ are simultaneously orthogonal. \section{Orthogonality for Unsupervised Latent Disentanglement} \label{sec:ortho_latent} In this section, we motivate why orthogonal treatments (orthogonal weight or gradient) help in unsupervised latent disentanglement of GANs. \subsection{Image Manipulation in the Latent Space of GANs} The latent space of GANs encodes rich semantic information, which can be used for image editing via the vector arithmetic property~\cite{radford2015unsupervised}. Consider a generator $G(\cdot)$ and the latent code $\mathbf{z}{\in}\mathbb{R}^{d}$. Image manipulation is achieved by finding a semantically meaningful direction $\mathbf{n}$ such that \begin{equation} \texttt{edit}(G(\mathbf{z}))=G(\mathbf{z}+\alpha\mathbf{n}) \label{eq:g} \end{equation} where $\texttt{edit}(\cdot)$ denotes the image editing process, and $\alpha$ represents the perturbation strength. That is, moving the latent code $\mathbf{z}$ along the interpretable direction $\mathbf{n}$ should change the targeted semantic concept of the image. Since the generator $G(\cdot)$ is highly non-linear and complex, directly analyzing $G(\mathbf{z}+\alpha\mathbf{n})$ is intractable. To avoid this issue, existing approaches simplify the analysis by considering only the first projector matrix $G_{1}(\cdot)$ or by performing a local Taylor expansion~\cite{shen2021closed,zhu2021low,zhu2022region,balakrishnan2022rayleigh}. \noindent\textbf{Eigenvectors of the first projector.} In SeFa~\cite{shen2021closed}, the authors propose to seek interpretable directions among the eigenvectors of the first projector matrix.
Specifically, they consider the affine transformation of the layer as: \begin{equation} G_{1}(\mathbf{z}+\alpha\mathbf{n}) = \mathbf{A}\mathbf{z} + \mathbf{b} + \alpha\mathbf{A}\mathbf{n} = G_{1}(\mathbf{z}) + \alpha\mathbf{A}\mathbf{n} \label{eq:g_1} \end{equation} where $\mathbf{A}$ is the weight matrix. Intuitively, a meaningful direction should lead to large variations of the generated image, so the problem can be cast as the optimization problem: \begin{equation} \mathbf{n}^{\star} = \arg\max ||\mathbf{A}\mathbf{n}||^{2} \label{eq:argmax_n} \end{equation} All possible closed-form solutions correspond to the eigenvectors of $\mathbf{A}^{T}\mathbf{A}$. The top-$k$ eigenvectors are thus selected as the interpretable directions for image manipulation. \noindent\textbf{Eigenvectors of the Jacobian.} LowRankGAN~\cite{zhu2021low} proposes to linearly approximate $G(\mathbf{z}+\alpha\mathbf{n})$ by the Taylor expansion: \begin{equation} G(\mathbf{z}+\alpha\mathbf{n}) \approx G(\mathbf{z}) + \alpha\mathbf{J}_{\mathbf{z}}\mathbf{n} \label{eq:g_jacob} \end{equation} where $\mathbf{J}_{\mathbf{z}}$ is the Jacobian matrix w.r.t. the latent code $\mathbf{z}$. Similarly to the deduction of eq.~\eqref{eq:argmax_n}, the closed-form solutions are given by the eigenvectors of $\mathbf{J}_{\mathbf{z}}^{T}\mathbf{J}_{\mathbf{z}}$. The above two formulations illustrate how the weight and gradient matrices are related to interpretable direction discovery. Currently, most GAN models do not enforce orthogonality in their architectures. We now turn to explaining the concrete benefit of introducing orthogonality for latent disentanglement. \subsection{Usefulness of Orthogonality} \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/orthogonal_illustration.jpg} \caption{Illustration of the benefit of orthogonality in latent disentanglement. As revealed in~\cite{shen2021closed,zhu2021low}, the interpretable directions of latent codes are the eigenvectors of weight or gradient matrices. For non-orthogonal matrices, the principal eigenvector is of the most importance, which would make this direction correspond to many semantic attributes. The other eigenvectors might fail to capture any semantic information. By contrast, the eigenvectors of orthogonal matrices are equally important. A network with an orthogonal weight/gradient is likely to learn more disentangled representations. } \label{fig:ortho_illu} \end{figure} Though a few previous works have applied implicit orthogonality as regularization in GANs~\cite{voynov2020unsupervised,peebles2020hessian,he2021eigengan,wei2021orthogonal}, there is no generally accepted explanation of how orthogonality relates to disentangled representations. Here we give an intuitive explanation. As discussed in the image manipulation modeling above, the eigenvectors of the weight and gradient matrices naturally imply the interpretable directions for latent disentanglement. For common non-orthogonal matrices, the importance of each eigenvector is characterized by the corresponding eigenvalue. The eigenvectors are not equally important, and the first few dominate the spectrum. This imbalance would cause most semantic attributes to be entangled in the first few directions. Fig.~\ref{fig:ortho_illu} (top) illustrates this phenomenon: \emph{moving the latent code along the top-1 eigenvector direction triggers changes of many semantic attributes. On the contrary, a direction with a small eigenvalue does not indicate any semantic changes.
The learned representations are thus deemed entangled.} Orthogonal matrices can greatly alleviate this issue thanks to their flat spectrum and equally important eigenvectors. As shown in Fig.~\ref{fig:ortho_illu} (bottom), when our NOG and OLR are applied, each direction of the orthogonal matrix is equally important and corresponds to one semantic attribute. Shifting the latent code in one direction only changes the targeted semantic concept, while the identity and the other attributes are untouched. Enforcing orthogonality can therefore lead to superior disentanglement of the learned representations. Our proposed NOG and OLR serve as a strict orthogonal gradient constraint and a \textit{relaxed} orthogonal weight constraint, respectively. Enforcing them on the first layer after the latent code during the training process is thus very likely to lead to more disentangled representations. In Sec.~\ref{sec:latent}, we apply these two techniques to various GAN architectures and benchmarks for unsupervised latent disentanglement. \section{Experiments} \label{sec:exp} \subsection{Covariance Conditioning} We validate the proposed approaches in two applications: GCP and decorrelated BN. These two tasks are very representative because they use the SVD meta-layer differently. The GCP uses the matrix square root, while the decorrelated BN applies the inverse square root. In addition, decorrelated BN models often insert the SVD meta-layer at the beginning of the network, whereas GCP models integrate the layer before the FC layer. \subsubsection{Decorrelated Batch Normalization} \begin{figure}[htbp] \centering \includegraphics[width=0.4\linewidth]{imgs/zca_arch.jpg} \caption{The scheme of the modified ResNet for decorrelated BN. We reduce the kernel size of the first convolution layer from $7{\times}7$ to $3{\times}3$. The BN after this layer is replaced with our decorrelated BN layer.} \label{fig:zca_arch} \end{figure} We use ResNet-50~\cite{he2016deep} as the backbone for the experiments on CIFAR10 and CIFAR100~\cite{krizhevsky2009learning}. The kernel size of the first convolution layer of ResNet is $7{\times}7$, which might not suit the low resolution of these two datasets (the images are only of size $32{\times}32$). To avoid this issue, we reduce the kernel size of the first convolution layer to $3{\times}3$, and the stride is decreased from $2$ to $1$. The BN layer after this layer is replaced with our decorrelated BN layer (see Fig.~\ref{fig:zca_arch}). Let ${\mathbf{X}}{\in}\mathbb{R}^{C{\times}BHW}$ denote the reshaped feature. The whitening transform is performed as: \begin{equation} {\mathbf{X}}_{whitened} = ({\mathbf{X}}\mX^{T})^{-\frac{1}{2}} {\mathbf{X}} \end{equation} Compared with the vanilla BN, which only standardizes the data, the decorrelated BN further eliminates the correlation between dimensions.
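For reference, this whitening step can be sketched in a few lines of PyTorch (our illustration; the small $\epsilon$ added for numerical stability is an assumption, and the GCP layer of the next subsection applies the $+\frac{1}{2}$ power instead):
\begin{verbatim}
import torch

def zca_whiten(X, eps=1e-5):
    """Whiten the reshaped feature X (C x BHW) by (X X^T)^{-1/2} X."""
    cov = X @ X.t()                  # covariance, shape (C, C)
    s, U = torch.linalg.eigh(cov)    # eigenvalues (ascending) and eigenvectors
    inv_sqrt = U @ torch.diag((s + eps).rsqrt()) @ U.t()
    return inv_sqrt @ X
\end{verbatim}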
\begin{table}[ht] \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c|c|c} \toprule \multirow{2}*{Methods} & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c}{CIFAR100} \\ \cline{2-5} &mean$\pm$std & min &mean$\pm$std & min\\ \hline SVD &4.35$\pm$0.09&4.17&19.99$\pm$0.16&19.80 \\ \hline SVD + SN &4.31$\pm$0.10 &4.15 &19.94$\pm$0.33 &19.60 \\ SVD + OL &4.28$\pm$0.07 &4.23 &19.73$\pm$0.28 &19.54\\ SVD + OW &4.42$\pm$0.09& 4.28&20.06$\pm$0.17&19.94\\ \hline SVD + NOG &\textbf{\textcolor{green}{4.15$\pm$0.06}}&\textbf{\textcolor{cyan}{4.04}} &\textbf{\textcolor{green}{19.43$\pm$0.24}} &\textbf{\textcolor{cyan}{19.15}} \\ SVD + OLR &\textbf{\textcolor{cyan}{4.23$\pm$0.17}}&\textbf{\textcolor{blue}{3.98}} &\textbf{\textcolor{cyan}{19.50$\pm$0.39}}&\textbf{\textcolor{green}{18.95}} \\ \hline SVD + NOG + OW &\textbf{\textcolor{blue}{4.09$\pm$0.07}}& \textbf{\textcolor{green}{4.01}} &\textbf{\textcolor{blue}{19.22$\pm$0.28}}&\textbf{\textcolor{blue}{18.90}} \\ SVD + NOG + OW + OLR &\textbf{\textcolor{red}{3.93$\pm$0.09}}&\textbf{\textcolor{red}{3.85}} &\textbf{\textcolor{red}{19.05$\pm$0.31}}&\textbf{\textcolor{red}{18.77}} \\ \hline\hline NS iteration &4.20$\pm$0.11 &4.11 &19.45$\pm$0.33&19.01\\ \bottomrule \end{tabular} } \caption{Performance comparison of decorrelated BN methods on CIFAR10/CIFAR100~\cite{krizhevsky2009learning} based on ResNet-50~\cite{he2016deep}. We report each result based on $10$ runs. The best four results are highlighted in \textbf{\textcolor{red}{red}}, \textbf{\textcolor{blue}{blue}}, \textbf{\textcolor{green}{green}}, and \textbf{\textcolor{cyan}{cyan}}, respectively.} \label{tab:zca_res50} \end{table} Table~\ref{tab:zca_res50} compares the performance of each method on CIFAR10/CIFAR100~\cite{krizhevsky2009learning} based on ResNet-50~\cite{he2016deep}. Both our NOG and OLR achieve better performance than the weight treatments and the SVD baseline. Moreover, when hybrid treatments are adopted, we can observe step-wise steady improvements in the validation errors. Among these techniques, the joint usage of OLR with NOG and OW achieves the best performance across metrics and datasets, outperforming the SVD baseline by $0.4\%$ on CIFAR10 and by $0.9\%$ on CIFAR100. This demonstrates that these treatments are complementary and can benefit each other.
\begin{table}[ht] \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c|c|c} \toprule \multirow{2}*{Methods} & \multicolumn{2}{c|}{DenseNet-121~\cite{huang2017densely}} & \multicolumn{2}{c}{MobileNet-v2~\cite{howard2017mobilenets}} \\ \cline{2-5} &mean$\pm$std & min &mean$\pm$std & min\\ \hline SVD &27.37$\pm$0.54&26.88 & 34.35$\pm$0.32 &34.00 \\ \hline SVD + SN &27.05$\pm$0.44 &26.51 & 34.19$\pm$0.37 & 33.82 \\ SVD + OL &27.41$\pm$0.35 &26.99 &34.58$\pm$0.43 &34.15 \\ SVD + OW & 27.25$\pm$0.47 &26.67 &34.27$\pm$0.46 & 33.77\\ \hline SVD + NOG & \textbf{\textcolor{green}{25.14$\pm$0.39}} & \textbf{\textcolor{green}{24.65}} & \textbf{\textcolor{green}{33.42$\pm$0.41}} & \textbf{\textcolor{cyan}{32.91}} \\ SVD + OLR & \textbf{\textcolor{cyan}{25.34$\pm$0.28}} & \textbf{\textcolor{cyan}{25.01}} &\textbf{\textcolor{cyan}{33.59$\pm$0.64}} &\textbf{\textcolor{green}{32.84}} \\ \hline SVD + NOG + OW & \textbf{\textcolor{blue}{24.49$\pm$0.43}} &\textbf{\textcolor{blue}{23.97}} &\textbf{\textcolor{blue}{33.13$\pm$0.55}} & \textbf{\textcolor{blue}{32.61}} \\ SVD + NOG + OW + OLR &\textbf{\textcolor{red}{23.74$\pm$0.24}} &\textbf{\textcolor{red}{23.41}} &\textbf{\textcolor{red}{32.83$\pm$0.48}} & \textbf{\textcolor{red}{32.33}} \\ \hline\hline NS iteration &25.87$\pm$0.43 &25.31 &33.67$\pm$0.51 &33.24\\ \bottomrule \end{tabular} } \caption{Performance comparison of decorrelated BN methods on CIFAR100~\cite{krizhevsky2009learning} with DenseNet-121~\cite{huang2017densely} and MobileNet-v2~\cite{howard2017mobilenets} based on $10$ runs. The best four results are highlighted in \textbf{\textcolor{red}{red}}, \textbf{\textcolor{blue}{blue}}, \textbf{\textcolor{green}{green}}, and \textbf{\textcolor{cyan}{cyan}}, respectively.} \label{tab:zca_dense_mobile} \end{table} {Table~\ref{tab:zca_dense_mobile} presents the validation errors on CIFAR100 with DenseNet-121~\cite{huang2017densely} and MobileNet-v2~\cite{howard2017mobilenets}. The results are coherent with those on ResNet-50~\cite{he2016deep}: our methods bring consistent performance improvements over the ordinary SVD on different architectures. This demonstrates the model-agnostic property of the proposed orthogonality approaches. Fig.~\ref{fig:cvgc_improvement} displays the corresponding best validation accuracy during the training process. Our methods also accelerate the convergence of the training process; the acceleration is particularly significant in the initial training stage. } \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/cvgc_improvement.jpg} \caption{{The best validation accuracy during the training process. Our proposed techniques can consistently improve the convergence speed and help the model to achieve better accuracy within fewer training epochs.}} \label{fig:cvgc_improvement} \end{figure} {Finally, we would like to note that the performance gain of our methods depends on the specific architecture and on how ill-conditioned the covariance is. Generally speaking, the larger the model is, the worse-conditioned the covariance is, and the larger the performance gain would be. Taking the above decorrelated BN experiments as an example, the accuracy improvement on MobileNet is around $1.5\%$, while the performance gain on the larger DenseNet is about $4.0\%$.} \subsubsection{Global Covariance Pooling} \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{imgs/gcp_arch.jpg} \caption{The architecture of a GCP model~\cite{li2017second,song2021approximate}.
After all the convolution layers, the covariance square root of the feature is computed and used as the final representation.} \label{fig:gcp_arch} \end{figure} We use ResNet-18~\cite{he2016deep} for the GCP experiment and train it from scratch on ImageNet~\cite{deng2009imagenet}. Fig.~\ref{fig:gcp_arch} gives an overview of a GCP model. For the ResNet backbone, the last Global Average Pooling (GAP) layer is replaced with our GCP layer. Consider the final batched convolutional feature ${\mathbf{X}}{\in}\mathbb{R}^{B{\times}C{\times}HW}$. We compute the matrix square root of its covariance as: \begin{equation} {\mathbf{Q}} = ({\mathbf{X}}\mX^{T})^{\frac{1}{2}} \end{equation} where ${\mathbf{Q}}{\in}\mathbb{R}^{B{\times}C{\times}C}$ is used as the final representation and directly passed to the fully-connected (FC) layer. \begin{table}[htbp] \centering \caption{Performance comparison of different GCP methods on ImageNet~\cite{deng2009imagenet} based on ResNet-18~\cite{he2016deep}. The failure times denote the total number of non-convergence failures of the SVD solver during one training process. The best four results are highlighted in \textbf{\textcolor{red}{red}}, \textbf{\textcolor{blue}{blue}}, \textbf{\textcolor{green}{green}}, and \textbf{\textcolor{cyan}{cyan}} respectively.} \resizebox{0.99\linewidth}{!}{ \begin{tabular}{r|c|c|c} \toprule Method & \makecell[c]{Failure\\ Times} & Top-1 Acc. (\%) & Top-5 Acc. (\%) \\ \hline SVD & 5 & 73.13 & 91.02 \\ \hline SVD + SN &2 &73.28 ($\uparrow$ 0.2) &91.11 ($\uparrow$ 0.1)\\ SVD + OL & 1& 71.75 ($\downarrow$ 1.4) &90.20 ($\downarrow$ 0.8)\\ SVD + OW &2 &73.07 ($\downarrow$ 0.1) &90.93 ($\downarrow$ 0.1)\\ \hline SVD + NOG & 1 &\textbf{\textcolor{green}{73.51}} (\textbf{\textcolor{green}{$\uparrow$ 0.4}}) & \textbf{\textcolor{green}{91.35}} (\textbf{\textcolor{green}{$\uparrow$ 0.3}})\\ SVD + OLR & 0 & \textbf{\textcolor{cyan}{73.39}} (\textbf{\textcolor{cyan}{$\uparrow$ 0.3}}) & \textbf{\textcolor{cyan}{91.26}} (\textbf{\textcolor{cyan}{$\uparrow$ 0.2}})\\ \hline SVD + NOG + OW & 0 & \textbf{\textcolor{blue}{73.71}} (\textbf{\textcolor{blue}{$\uparrow$ 0.6}})& \textbf{\textcolor{blue}{91.43}} (\textbf{\textcolor{blue}{$\uparrow$ 0.4}})\\ SVD + NOG + OW + OLR & 0 & \textbf{\textcolor{red}{73.82}} (\textbf{\textcolor{red}{$\uparrow$ 0.7}})& \textbf{\textcolor{red}{91.57}} (\textbf{\textcolor{red}{$\uparrow$ 0.6}})\\ \hline\hline NS iteration & 0 & 73.36 ($\uparrow$ 0.2) & 90.96 ($\downarrow$ 0.1)\\ \bottomrule \end{tabular} } \label{tab:gcp_res18} \end{table} Table~\ref{tab:gcp_res18} presents the total failure times of the SVD solver in one training process and the validation accuracy on ImageNet~\cite{deng2009imagenet} based on ResNet-18~\cite{he2016deep}. The results are highly consistent with our decorrelated BN experiment. Among the weight treatments, the OL and OW hurt the performance, while the SN improves the SVD baseline by $0.2\%$. Our proposed NOG and OLR outperform the weight treatments and improve the SVD baseline by $0.4\%$ and by $0.3\%$, respectively. Moreover, the combinations with the orthogonal weight further boost the performance. Specifically, combining NOG and OW surpasses the SVD by $0.6\%$. The joint use of OW with NOG and OLR achieves the best performance among all the methods and beats the SVD by $0.7\%$. \begin{figure*}[t] \centering \includegraphics[width=0.99\linewidth]{imgs/eigengan_animeface_compare.jpg} \caption{Latent traversal on AnimeFace~\cite{chao2019/online}.
The EigenGAN has entangled attributes in the identified interpretable directions, while our methods achieve better disentanglement, and each direction corresponds to a unique attribute. } \label{fig:animeface_eigengan_compare} \end{figure*} \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/gcp_cond.jpg} \caption{The covariance conditioning of GCP methods in the later stage of the training. The periodic spikes are caused by the evaluation on the validation set after every epoch.} \label{fig:gcp_cond} \end{figure} Fig.~\ref{fig:gcp_cond} depicts the covariance conditioning in the later training stage. Our OLR and the OW both reduce the condition number by around 1e15, whereas the proposed NOG improves the condition number by 2e15. When hybrid treatments are used, combining NOG and OW attains better conditioning than either separate usage. Furthermore, simultaneously using all the techniques leads to the best conditioning and improves the condition number by 5e15. The covariance conditioning of GCP tasks is not improved as much as that of decorrelated BN. This might stem from the unique architecture of GCP models: the covariance is directly used as the final representation and fed to the FC layer. We conjecture that this setup might cause the covariance to have a high condition number. The approximate solver (NS iteration) does not have well-conditioned matrices either (${\approx}1e15$), which partly supports our conjecture. \subsubsection{Computational Cost} \begin{table}[htbp] \centering \caption{Time consumption of each forward pass (FP) and backward pass (BP) measured on a RTX A6000 GPU. The evaluation is based on ResNet-50 and CIFAR100.} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{c|cc} \toprule Methods & FP (ms) & BP (ms) \\ \midrule SVD & 44 & 95 \\ SVD + NOG & 44 & 97 (+2)\\ SVD + OLR & 44 & 96 (+1) \\ SVD + OW & 48 (+4) & 102 (+7) \\ SVD + OW + NOG + OLR & 49 (+5) &106 (+11) \\ NS Iteration & 43 & 93\\ \midrule Vanilla ResNet-50 & 42 & 90 \\ \bottomrule \end{tabular} } \label{tab:time} \end{table} Table~\ref{tab:time} compares the time consumption of a single training step for the decorrelated BN experiment. Our NOG and OLR add negligible computational cost to the BP ($2\%$ and $1\%$, respectively), while the FP is unaffected. Even when all techniques are applied, the overall time cost increases by only about $10\%$. Notice that NOG and OLR have no impact on the inference speed. \subsection{Latent Disentanglement} \label{sec:latent} In this subsection, we first introduce the experimental setup, followed by the evaluation results on different GAN architectures and datasets. We defer the implementation details to the Supplementary Material. \subsubsection{Experimental Setup} \noindent\textbf{Models.} We evaluate our methods on EigenGAN~\cite{he2021eigengan} and vanilla GAN~\cite{goodfellow2014generative}. EigenGAN~\cite{he2021eigengan} is a particular GAN architecture dedicated to latent disentanglement. It progressively injects orthogonal subspaces into each layer of the generator, which can mine controllable semantic attributes in an unsupervised manner. For the vanilla GAN~\cite{goodfellow2014generative}, we adopt the basic GAN model that consists of stacked convolutional layers and do not make any architectural modifications. \noindent\textbf{Datasets.} For EigenGAN, we use AnimeFace~\cite{chao2019/online} and FFHQ~\cite{kazemi2014one} datasets.
AnimeFace~\cite{chao2019/online} comprises $63,632$ aligned anime faces with resolution varying from $90{\times}90$ to $120{\times}120$. FFHQ~\cite{kazemi2014one} consists of $70,000$ high-quality face images that have considerable variation in identity and good coverage of common accessories. Since the vanilla GAN has a smaller architecture and fewer parameters, we use the relatively simpler CelebA~\cite{liu2018large} and LSUN Church~\cite{yu2015lsun} datasets. CelebA~\cite{liu2018large} contains $202,599$ face images of $10,177$ celebrities, while LSUN Church~\cite{yu2015lsun} has $126,227$ scene images of churches. \noindent\textbf{Metrics.} We use Frechet Inception Distance (FID)~\cite{heusel2017gans} to quantitatively evaluate the quality of generated images. For the performance of latent disentanglement, we use Variational Predictability (VP)~\cite{zhu2020learning} as the quantitative metric. The VP metric adopts the few-shot learning setting to measure the generalization abilities of a simple neural network in classifying the discovered latent directions. \noindent\textbf{Baselines.} For the EigenGAN model that already has inherent orthogonality constraints and good disentanglement abilities, we compare the ordinary EigenGAN with the modified version augmented by our proposed orthogonal techniques (NOG and OLR). For the vanilla GAN that suffers from limited disentanglement, we compare our NOG and OLR against other disentanglement schemes used in GANs, including (1) Hessian Penalty (HP)~\cite{peebles2020hessian}, (2) Orthogonal Jacobian Regularization (OrJar)~\cite{wei2021orthogonal}, and (3) Latent Variational Predictability (LVP)~\cite{zhu2020learning}. \subsubsection{EigenGAN Architecture and Modifications} \begin{figure*}[t] \centering \includegraphics[width=0.99\linewidth]{imgs/ffhq_finegrained.jpg} \caption{Visualization of some fine-grained attributes learned by our method on the FFHQ~\cite{kazemi2014one} dataset. Our method can learn very subtle and fine-grained attributes while keeping the identity unchanged.} \label{fig:ffhq_finegrained} \end{figure*} Fig.~\ref{fig:eigengan_arch} displays the overview of the EigenGAN. At each layer, the latent code $\mathbf{z}_{i}$ is multiplied by the orthogonal basis $\mathbf{U}_{i}$ and the diagonal importance matrix $\mathbf{L}_{i}$ to inject a weighted orthogonal subspace for disentangled representation learning. The original EigenGAN~\cite{he2021eigengan} adopts the OL loss $||\mathbf{U}_{i}\mathbf{U}_{i}^{T}{-}\mathbf{I}||_{\rm F}$ to enforce \emph{relaxed} orthogonality on each subspace $\mathbf{U}_{i}$. Instead, we apply our NOG and OLR to achieve gradient and weight orthogonality, respectively. {Notice that when our NOG and OLR are applied, we do not use the OL loss of EigenGAN. This is because the \emph{soft} orthogonality introduced by the OL loss might not be enforced under the gradient manipulation of our NOG, which matches our experimental findings for decorrelated BN (see Sec.~\ref{sec:ol_nog_combination}).} \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{imgs/eigengan.jpg} \caption{Overview of the EigenGAN architecture. } \label{fig:eigengan_arch} \end{figure} \subsubsection{Results on EigenGAN} \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/eigengan_animeface_finegrained.jpg} \caption{Subtle semantic attributes mined by our method.
} \label{fig:animeface_eigengan_finegrained} \end{figure} \noindent\textbf{Qualitative Evaluation.} Fig.~\ref{fig:animeface_eigengan_compare} compares the latent traversal results of the ordinary EigenGAN and our methods on AnimeFace. The interpretable directions of EigenGAN contain many entangled attributes; the identity is poorly preserved during the latent traversal. By contrast, moving along a direction discovered by our method only introduces changes in a single semantic attribute. This demonstrates that our interpretable directions have more precisely-controlled semantics and our orthogonality techniques indeed help the model to learn more disentangled representations. Moreover, thanks to the power of orthogonality, our methods can mine many subtle and fine-grained attributes. Fig.~\ref{fig:animeface_eigengan_finegrained} displays such attributes that are precisely captured by our method but are not learned by EigenGAN. These attributes include very subtle local details of the image, such as facial blush, facial shadow, and mouth openness. \begin{figure}[tbp] \centering \includegraphics[width=0.99\linewidth]{imgs/ffhq_quantitative.jpg} \caption{Qualitative comparison on FFHQ. The attributes are entangled in one latent direction of EigenGAN, while our method can avoid this and discover orthogonal concepts.} \label{fig:ffhq_quantiative} \end{figure} Fig.~\ref{fig:ffhq_quantiative} compares the exemplary latent traversal on FFHQ. Similar to the results on AnimeFace, the interpretable directions carry more disentangled attributes when our orthogonality techniques are used. Since FFHQ covers a wide range of image attributes, our method is able to learn very fine-grained attributes (\emph{e.g.,} angle and thickness of eyebrow) of a given super attribute (\emph{e.g.,} eyebrow). We give a few examples in Fig.~\ref{fig:ffhq_finegrained}. As can be observed, our method can precisely control the subtle details of the image while keeping other attributes unchanged. \begin{table}[htbp] \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{c|cc|cc} \toprule \multirow{2}*{Methods} & \multicolumn{2}{c|}{AnimeFace~\cite{chao2019/online}} & \multicolumn{2}{c}{FFHQ~\cite{kazemi2014one}} \\ \cmidrule{2-5} & FID ($\downarrow$) & VP ($\uparrow$) & FID ($\downarrow$) & VP ($\uparrow$)\\ \midrule EigenGAN & 23.59 & 37.01 & 36.81 & 31.79\\ \midrule EigenGAN+NOG &19.48 &43.53 &33.34 &37.27 \\ EigenGAN+OLR &18.30 &43.99 &31.42 &37.23 \\ EigenGAN+OLR+NOG &\textbf{16.31} &\textbf{45.48} &\textbf{30.06} &\textbf{39.32} \\ \bottomrule \end{tabular} } \caption{Quantitative evaluation on EigenGAN.} \label{tab:eigengan_results} \end{table} \noindent\textbf{Quantitative Evaluation.} Table~\ref{tab:eigengan_results} compares the performance of EigenGAN on the AnimeFace and FFHQ datasets. Our proposed NOG and OLR can improve both the image quality score (FID) and the disentanglement score (VP). Furthermore, combining the two techniques achieves the best performance across metrics and datasets. This implies that enforcing simultaneous gradient and weight orthogonality allows for the learning of more disentangled representations and improved image fidelity. \noindent\textbf{Discussion.} Both the quantitative and qualitative evaluations on the two datasets demonstrate that our orthogonality approaches lead to better latent disentanglement than the inherent orthogonality loss of EigenGAN.
This behavior is consistent with our previous decorrelated BN experiment: the proposed NOG and OLR also outperform OL in that case. This further confirms the general applicability of our orthogonality methods. \subsubsection{Vanilla GAN Architecture} For the vanilla GAN model, we use simple convolutional layers as building blocks. The orthogonality techniques are applied to the first convolution layer after the latent code. \subsubsection{Results on Vanilla GAN} \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/celeba_comparison.jpg} \caption{Qualitative comparison on CelebA. For HP~\cite{peebles2020hessian}, the latent traversal in one direction introduces many attribute changes. By contrast, the image identity of our method is well preserved and only the target attribute varies.} \label{fig:celeba} \end{figure} \noindent\textbf{Qualitative Evaluation.} Fig.~\ref{fig:celeba} presents the qualitative evaluation results on CelebA~\cite{liu2018large} against HP~\cite{peebles2020hessian}. The semantic factors discovered by our methods control the traversal process more precisely; only a single attribute is changed when one latent code is modified. By contrast, an interpretable direction mined by HP~\cite{peebles2020hessian} sometimes corresponds to multiple attributes. This implies that the learned representations and attributes of our NOG and OLR are more disentangled. Fig.~\ref{fig:lsun} displays some attributes learned by our methods. The complex scenes and structures of the churches are well preserved, and each semantic factor precisely controls one image attribute. This also demonstrates that our disentanglement method applies to diverse domains beyond face analysis. \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{imgs/church_comparison.jpg} \caption{Latent traversal of our NOG on LSUN Church.} \label{fig:lsun} \end{figure} \begin{table}[htbp] \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{c|c|cc|cc} \toprule \multirow{2}*{Methods} &\multirow{2}*{Time (ms)} & \multicolumn{2}{c|}{CelebA~\cite{liu2018large}} & \multicolumn{2}{c}{LSUN Church~\cite{yu2015lsun}} \\ \cmidrule{3-6} & & FID ($\downarrow$) & VP ($\uparrow$) & FID ($\downarrow$) & VP ($\uparrow$)\\ \midrule OrJar~\cite{wei2021orthogonal} &23 & 32.43 & 24.24 & 38.96 & 11.62 \\ HP~\cite{peebles2020hessian} &30 & 31.65 & 24.67 & 39.20 & \textbf{\textcolor{green}{13.73}} \\ LVP~\cite{zhu2020learning} &16 & 34.36 & 23.49 & 41.24 & 12.58\\ \midrule NOG &\textbf{8} &\textbf{\textcolor{red}{29.69}} &\textbf{\textcolor{green}{25.33}} & \textbf{\textcolor{green}{37.22}} & 13.43 \\ OLR &\textbf{8} &\textbf{\textcolor{green}{33.29}}&\textbf{\textcolor{blue}{27.22}} & \textbf{\textcolor{blue}{37.83}} & \textbf{\textcolor{blue}{14.50}} \\ NOG+OLR & \textbf{9} &\textbf{\textcolor{blue}{30.65}} &\textbf{\textcolor{red}{28.74}} & \textbf{\textcolor{red}{35.20}} & \textbf{\textcolor{red}{16.98}} \\ \bottomrule \end{tabular} } \caption{Quantitative evaluation on vanilla GAN. We measure the time consumption of a single forward pass and backward pass. The best three results are highlighted in \textbf{\textcolor{red}{red}}, \textbf{\textcolor{blue}{blue}}, and \textbf{\textcolor{green}{green}} respectively.} \label{tab:vanillagan_results} \end{table} \noindent\textbf{Quantitative Evaluation.} Table~\ref{tab:vanillagan_results} reports the quantitative evaluation results on the vanilla GAN.
Our proposed orthogonality techniques outperform other disentanglement schemes in terms of both FID and VP, achieving state-of-the-art performance in unsupervised latent disentanglement. Moreover, our approaches are much more efficient than the other baselines thanks to their marginal computational cost. \begin{table}[htbp] \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{c|ccc|ccc} \toprule Datasets & OrJar~\cite{wei2021orthogonal} & HP~\cite{peebles2020hessian} & LVP~\cite{zhu2020learning} & NOG & OLR & NOG+OLR\\ \midrule CelebA~\cite{liu2018large}&2.75 &2.67 &2.78 &2.28 &2.14 &\textbf{2.01} \\ Church~\cite{yu2015lsun}&2.48 &2.57 &2.66 &2.13 &2.09 &\textbf{1.93} \\ \bottomrule \end{tabular} } \caption{{Condition number of the first convolution weight in vanilla GANs on CelebA~\cite{liu2018large} and LSUN Church~\cite{yu2015lsun}.}} \label{tab:cond_gan} \end{table} \noindent\textbf{{Condition Number in Vanilla GANs.}} {Similar to our previous experiments, we measure the condition number of the first convolution weight in vanilla GANs (\emph{i.e.,} the projection matrix that maps latent codes to features). Table~\ref{tab:cond_gan} presents the evaluation results on CelebA~\cite{liu2018large} and LSUN Church~\cite{yu2015lsun}. As can be observed, our methods (NOG, OLR, and NOG+OLR) outperform the other baselines and have much better condition numbers. This demonstrates that our methods can also improve the conditioning of the weight matrix of vanilla GANs. Notice that the convolution weight matrix is small in dimensionality. The corresponding condition numbers are thus much smaller than the covariance condition numbers in the previous experiments.} \section{Conclusion} \label{sec:conclusion} In this paper, we explore different approaches to improve the covariance conditioning of the SVD meta-layer. Existing treatments on orthogonal weight are first studied. Our experiments reveal that these techniques could improve the conditioning but might hurt the performance due to the limited representation power. To avoid the side effect of orthogonal weight, we propose the nearest orthogonal gradient and the optimal learning rate, both of which could simultaneously attain better covariance conditioning and improved generalization abilities. Moreover, their combinations with orthogonal weight further boost the performance. Beyond the SVD meta-layer, we show that our proposed orthogonality approaches can benefit generative models for better latent disentanglement. \section*{Acknowledgements} This research was supported by the EU H2020 projects AI4Media (No. 951911) and SPRING (No. 871245) and by the PRIN project CREATIVE Prot. 2020ZSL9F9. \section{Background: SVD Meta-Layer} This section presents the background knowledge about the propagation rules of the SVD meta-layer. \subsection{Forward Pass} Given the reshaped feature ${\mathbf{X}}{\in}\mathbb{R}^{d{\times}N}$ where $d$ denotes the feature dimensionality (\emph{i.e.,} the number of channels) and $N$ represents the number of features (\emph{i.e.,} the product of spatial dimensions of features), an SVD meta-layer first computes the sample covariance as: \begin{equation} {\mathbf{P}} = {\mathbf{X}} {\mathbf{J}} {\mathbf{X}}^{T}, {\mathbf{J}} =\frac{1}{N}({\mathbf{I}}-\frac{1}{N}\mathbf{1}\mathbf{1}^{T}) \end{equation} where ${\mathbf{J}}$ is the centering matrix, $\mathbf{I}$ the identity matrix, and $\mathbf{1}$ a column vector of all ones.
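For concreteness, the covariance computation can be sketched in a few lines of NumPy (a minimal sketch with our own function names and shapes, not the authors' released implementation):
\begin{verbatim}
import numpy as np

def centered_covariance(X):
    """P = X J X^T for a reshaped feature X of shape (d, N)."""
    d, N = X.shape
    J = (np.eye(N) - np.ones((N, N)) / N) / N   # centering matrix J
    return X @ J @ X.T                          # (d, d), symmetric PSD

def centered_covariance_fast(X):
    # Equivalent form that avoids materializing the N x N matrix:
    # center the features first, then average the outer products.
    Xc = X - X.mean(axis=1, keepdims=True)
    return (Xc @ Xc.T) / X.shape[1]
\end{verbatim}
The two forms agree because the centered features satisfy $\mathbf{X}_{c}\mathbf{1}=\mathbf{0}$, so the mean terms drop out of the product.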
The covariance is always positive semi-definite (PSD) and thus has no negative eigenvalues. Afterward, the eigendecomposition is performed using the SVD: \begin{equation} {\mathbf{P}}={\mathbf{U}}\mathbf{\Lambda}{\mathbf{U}}^{T},\ \mathbf{\Lambda}=\rm{diag}(\lambda_{1},\dots,\lambda_{d}) \label{SVD} \end{equation} where $\mathbf{U}$ is the orthogonal eigenvector matrix, ${\rm diag}(\cdot)$ denotes transforming a vector to a diagonal matrix, and $\mathbf{\Lambda}$ is the diagonal matrix in which the eigenvalues are sorted in non-increasing order, \emph{i.e.}, $\lambda_i {\geq} \lambda_{i+1}$. Then depending on the application, the matrix square root or the inverse square root is calculated as: \begin{equation} \begin{gathered} \mathbf{Q}\triangleq\mathbf{P}^{\frac{1}{2}}=\mathbf{U}\mathbf{\Lambda}^{\frac{1}{2}} \mathbf{U}^{T}, \mathbf{\Lambda}^{\frac{1}{2}}={\rm diag}(\lambda_{1}^{\frac{1}{2}},\dots,\lambda_{d}^{\frac{1}{2}}) \\ \mathbf{S}\triangleq\mathbf{P}^{-\frac{1}{2}}=\mathbf{U}\mathbf{\Lambda}^{-\frac{1}{2}} \mathbf{U}^{T}, \mathbf{\Lambda}^{-\frac{1}{2}}={\rm diag}(\lambda_{1}^{-\frac{1}{2}},\dots,\lambda_{d}^{-\frac{1}{2}}) \end{gathered} \end{equation} The matrix square root ${\mathbf{Q}}$ is often used in GCP-related tasks~\cite{li2017second,xie2021so,song2021approximate}, while decorrelated BN~\cite{huang2018decorrelated,siarohin2018whitening} relies on the inverse square root ${\mathbf{S}}$. In certain applications such as the whitening and coloring transform (WCT), both ${\mathbf{Q}}$ and ${\mathbf{S}}$ are required. \subsection{Backward Pass} Let $\frac{\partial l}{\partial{\mathbf{Q}}}$ and $\frac{\partial l}{\partial{\mathbf{S}}}$ denote the partial derivative of the loss $l$ w.r.t.\ the matrix square root ${\mathbf{Q}}$ and the inverse square root ${\mathbf{S}}$, respectively. Then the gradient passed to the eigenvector is computed as: \begin{equation} \frac{\partial l}{\partial \mathbf{U}}\Big|_{{\mathbf{Q}}}=(\frac{\partial l}{\partial \mathbf{Q}} + (\frac{\partial l}{\partial \mathbf{Q}})^{T})\mathbf{U}\mathbf{\Lambda}^{\frac{1}{2}},\ \frac{\partial l}{\partial \mathbf{U}}\Big|_{{\mathbf{S}}}=(\frac{\partial l}{\partial \mathbf{S}} + (\frac{\partial l}{\partial \mathbf{S}})^{T})\mathbf{U}\mathbf{\Lambda}^{-\frac{1}{2}} \label{vec_de} \end{equation} Notice that the gradient equations for ${\mathbf{Q}}$ and ${\mathbf{S}}$ are different. For the eigenvalues, the gradient is calculated as: \begin{equation} \begin{gathered} \frac{\partial l}{\partial \mathbf{\Lambda}}\Big|_{{\mathbf{Q}}}=\frac{1}{2}\rm{diag}(\lambda_{1}^{-\frac{1}{2}},\dots,\lambda_{d}^{-\frac{1}{2}})\mathbf{U}^{T} \frac{\partial \it{l}}{\partial \mathbf{Q}} \mathbf{U}, \\ \frac{\partial l}{\partial \mathbf{\Lambda}}\Big|_{{\mathbf{S}}}=-\frac{1}{2}\rm{diag}(\lambda_{1}^{-\frac{3}{2}},\dots,\lambda_{d}^{-\frac{3}{2}})\mathbf{U}^{T} \frac{\partial \it{l}}{\partial \mathbf{S}} \mathbf{U} \end{gathered} \end{equation} Subsequently, the derivative of the SVD step can be calculated as: \begin{equation} \frac{\partial l}{\partial \mathbf{P}}=\mathbf{U}( (\mathbf{K}^{T}\circ(\mathbf{U}^{T}\frac{\partial l}{\partial \mathbf{U}}))+ (\frac{\partial l}{\partial \mathbf{\Lambda}})_{\rm diag})\mathbf{U}^{T} \label{COV_de} \end{equation} where $\circ$ denotes the matrix Hadamard product, and the matrix $\mathbf{K}$ consists of entries $K_{ij}{=}{1}/{(\lambda_{i}{-}\lambda_{j})}$ if $i{\neq}j$ and $K_{ij}{=}0$ otherwise. This step is the same for both ${\mathbf{Q}}$ and ${\mathbf{S}}$. Finally, the gradient passed to the feature ${\mathbf{X}}$ is: \begin{equation} \frac{\partial l}{\partial \mathbf{X}}=(\frac{\partial l}{\partial \mathbf{P}}+(\frac{\partial l}{\partial \mathbf{P}})^{T})\mathbf{X}{\mathbf{J}} \label{X_de} \end{equation} With the above rules, the SVD function can be easily inserted into any neural network and trained end-to-end as a meta-layer.
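To make these propagation rules concrete, the square-root branch can be transcribed as a simplified single-matrix PyTorch sketch (our own transcription under stated assumptions; a production version would batch over $B$ and guard against numerically close eigenvalues):
\begin{verbatim}
import torch

def sqrt_forward(P):
    # Eigendecomposition of the symmetric PSD covariance; for such matrices
    # this coincides with the SVD (eigh returns ascending eigenvalues,
    # which is harmless for the formulas below).
    lam, U = torch.linalg.eigh(P)            # P = U diag(lam) U^T
    Q = U @ torch.diag(lam.sqrt()) @ U.T     # matrix square root Q = P^(1/2)
    return Q, U, lam

def sqrt_backward(dl_dQ, U, lam, X, J):
    # Gradients passed to the eigenvectors and eigenvalues
    dl_dU = (dl_dQ + dl_dQ.T) @ U @ torch.diag(lam.sqrt())
    dl_dlam = 0.5 * lam.rsqrt() * torch.diagonal(U.T @ dl_dQ @ U)
    # K_ij = 1/(lam_i - lam_j) off the diagonal and 0 on it; the inf values
    # produced by 1/diff on the diagonal are masked out by torch.where
    diff = lam.unsqueeze(1) - lam.unsqueeze(0)
    K = torch.where(diff == 0, torch.zeros_like(diff), 1.0 / diff)
    # Assemble the gradient w.r.t. P, then chain through P = X J X^T
    dl_dP = U @ (K.T * (U.T @ dl_dU) + torch.diag(dl_dlam)) @ U.T
    return (dl_dP + dl_dP.T) @ X @ J
\end{verbatim}
A transcription like this is best verified against \texttt{torch.autograd} on small random SPD inputs before use.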
\section{Mathematical Derivation and Proof} \subsection{Derivation of Nearest Orthogonal Gradient} The problem of finding the nearest orthogonal gradient can be defined as: \begin{equation} \min_{{\mathbf{R}}} ||\frac{\partial l}{\partial {\mathbf{W}}}-{\mathbf{R}} ||_{\rm F}\ subject\ to\ {\mathbf{R}}\mR^{T}={\mathbf{I}} \end{equation} To solve this constrained optimization problem, we can construct the following error function: \begin{equation} e({\mathbf{R}}) = Tr\Big((\frac{\partial l}{\partial {\mathbf{W}}}-{\mathbf{R}})^{T}(\frac{\partial l}{\partial {\mathbf{W}}}-{\mathbf{R}})\Big) + Tr\Big(\mathbf{\Sigma} ({\mathbf{R}}^{T}{\mathbf{R}} -{\mathbf{I}}) \Big) \end{equation} where $Tr(\cdot)$ denotes the matrix trace, and $\mathbf{\Sigma}$ denotes the symmetric matrix Lagrange multiplier. Setting the derivative to zero leads to: \begin{equation} \begin{gathered} \frac{d e({\mathbf{R}})}{d {\mathbf{R}}} = -2 (\frac{\partial l}{\partial {\mathbf{W}}}-{\mathbf{R}}) + 2{\mathbf{R}}\mathbf{\Sigma} = 0 \\ \frac{\partial l}{\partial {\mathbf{W}}} = {\mathbf{R}}({\mathbf{I}} + \mathbf{\Sigma} ),\ {\mathbf{R}} = \frac{\partial l}{\partial {\mathbf{W}}}({\mathbf{I}} + \mathbf{\Sigma})^{-1} \label{inter_result} \end{gathered} \end{equation} The term $({\mathbf{I}} + \mathbf{\Sigma})$ can be represented using $\frac{\partial l}{\partial {\mathbf{W}}}$. Consider the covariance of $\frac{\partial l}{\partial {\mathbf{W}}}$: \begin{equation} \begin{gathered} (\frac{\partial l}{\partial {\mathbf{W}}})^{T}\frac{\partial l}{\partial {\mathbf{W}}} = ({\mathbf{I}} + \mathbf{\Sigma} )^{T}{\mathbf{R}}^{T}{\mathbf{R}}({\mathbf{I}} + \mathbf{\Sigma} ) = ({\mathbf{I}} + \mathbf{\Sigma} )^{T}({\mathbf{I}} + \mathbf{\Sigma} ) \\ ({\mathbf{I}} + \mathbf{\Sigma} ) = \Big((\frac{\partial l}{\partial {\mathbf{W}}})^{T}\frac{\partial l}{\partial {\mathbf{W}}}\Big)^{\frac{1}{2}} \end{gathered} \end{equation} Substituting the term $({\mathbf{I}} + \mathbf{\Sigma})$ in eq.~\eqref{inter_result} with the above equation leads to the closed-form solution of the nearest orthogonal gradient: \begin{equation} {\mathbf{R}} = \frac{\partial l}{\partial {\mathbf{W}}} \Big(( \frac{\partial l}{\partial {\mathbf{W}}})^{T} \frac{\partial l}{\partial {\mathbf{W}}}\Big)^{-\frac{1}{2}} \end{equation}
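In practice, the closed form is conveniently evaluated with one extra SVD of the gradient itself; a minimal PyTorch sketch (our own helper, not the authors' released code):
\begin{verbatim}
import torch

def nearest_orthogonal_gradient(G):
    """Map the gradient G to its nearest orthogonal matrix
    R = G (G^T G)^(-1/2).

    With the SVD G = U S V^T we get (G^T G)^(-1/2) = V S^(-1) V^T,
    hence R = U V^T: the orthogonal factor of the polar decomposition.
    """
    U, S, Vh = torch.linalg.svd(G, full_matrices=False)
    return U @ Vh
\end{verbatim}
Note that $\mathbf{R}$ keeps the singular vectors of the gradient while whitening its spectrum to unit singular values.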
\subsection{Derivation of Optimal Learning Rate} To jointly optimize the updated weight ${\mathbf{W}}{-}{\eta}\frac{\partial l}{\partial {\mathbf{W}}}$, we need to achieve the following objective: \begin{equation} \min_{\eta} ||({\mathbf{W}}{-}{\eta}\frac{\partial l}{\partial {\mathbf{W}}})({\mathbf{W}}{-}{\eta}\frac{\partial l}{\partial {\mathbf{W}}})^{T}-{\mathbf{I}}||_{\rm F} \end{equation} This optimization problem is more easily solved in vector form. Let $\mathbf{w}$, ${\mathbf{i}}$, and $\mathbf{l}$ denote the vectorized ${\mathbf{W}}$, ${\mathbf{I}}$, and $\frac{\partial l}{\partial {\mathbf{W}}}$, respectively. Then we construct the error function as: \begin{equation} e(\eta) =\Big( (\mathbf{w}-\eta\mathbf{l})^{T}(\mathbf{w}-\eta\mathbf{l})-{\mathbf{i}}\Big)^{T}\Big( (\mathbf{w}-\eta\mathbf{l})^{T}(\mathbf{w}-\eta\mathbf{l})-{\mathbf{i}}\Big) \end{equation} Expanding the equation leads to: \begin{equation} e(\eta)=(\mathbf{w}^{T}\mathbf{w}-2\eta\mathbf{l}^{T}\mathbf{w}+\eta^{2}\mathbf{l}^{T}\mathbf{l}-\mathbf{i})^{T}(\mathbf{w}^{T}\mathbf{w}-2\eta\mathbf{l}^{T}\mathbf{w}+\eta^{2}\mathbf{l}^{T}\mathbf{l}-\mathbf{i}) \end{equation} Differentiating $e(\eta)$ w.r.t. $\eta$ yields: \begin{equation} \begin{gathered} \frac{d e(\eta)}{d \eta} = -4{\mathbf{w}}\mw^{T}{\mathbf{l}}^{T}{\mathbf{w}}+ 4\eta{\mathbf{w}}\mw^{T}{\mathbf{l}}^{T}{\mathbf{l}}\\+ 8\eta{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}-12\eta^{2}{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+4{\mathbf{l}}{\mathbf{w}}^{T}{\mathbf{i}} +4\eta^{3}{\mathbf{l}}\ml^{T} - 4\eta{\mathbf{i}}{\mathbf{l}}\ml^{T} \end{gathered} \end{equation} Since $\eta$ is typically very small, the higher-order terms (\emph{e.g.,} $\eta^{2}$ and $\eta^{3}$) are negligible. After omitting these terms, the derivative becomes: \begin{equation} \frac{d e(\eta)}{d \eta}{\approx}-4{\mathbf{w}}\mw^{T}{\mathbf{l}}^{T}{\mathbf{w}} + 4\eta{\mathbf{w}}\mw^{T}{\mathbf{l}}^{T}{\mathbf{l}} + 8\eta{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}} +4{\mathbf{l}}{\mathbf{w}}^{T}{\mathbf{i}} - 4\eta{\mathbf{i}}{\mathbf{l}}\ml^{T}\\ \end{equation} Setting the derivative to zero leads to the optimal learning rate: \begin{equation} \eta^{\star} \approx \frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}-{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{i}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}} - {\mathbf{l}}^{T}{\mathbf{l}}{\mathbf{i}}} \end{equation} Notice that ${\mathbf{i}}$ is the vectorization of the identity matrix ${\mathbf{I}}$, which means that ${\mathbf{i}}$ is very sparse ($\emph{i.e.,}$ lots of zeros) and its impact can be neglected. The optimal learning rate can be further simplified as: \begin{equation} \eta^{\star} \approx \frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}} \end{equation}
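The simplified $\eta^{\star}$ amounts to a few dot products of the flattened weight and gradient; a minimal sketch (our own naming; how $\eta^{\star}$ is interleaved with the ordinary schedule is a separate design choice not shown here):
\begin{verbatim}
import torch

def optimal_learning_rate(W, G):
    # eta* = (w^T w)(l^T w) / ((w^T w)(l^T l) + 2 (l^T w)^2)
    w, l = W.flatten(), G.flatten()
    ww, lw, ll = w @ w, l @ w, l @ l
    return (ww * lw) / (ww * ll + 2 * lw * lw)
\end{verbatim}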
\subsection{Proof of the learning rate bounds} \begin{duplicate} When both ${\mathbf{W}}$ and $\frac{\partial l}{\partial {\mathbf{W}}}$ are orthogonal, $\eta^{\star}$ is both upper and lower bounded. The upper bound is $\frac{N^2}{N^2 + 2}$ and the lower bound is $\frac{1}{N^{2}+2}$, where $N$ denotes the row dimension of ${\mathbf{W}}$. \label{prop:lr_bounds} \end{duplicate} \begin{proof} Since the vector product is equivalent to the matrix Frobenius inner product, we have the relation: \begin{equation} {\mathbf{l}}^{T}{\mathbf{w}} = \langle\frac{\partial l}{\partial {\mathbf{W}}},{\mathbf{W}}\rangle_{\rm F} \end{equation} For a given matrix pair ${\mathbf{A}}$ and ${\mathbf{B}}$, the Frobenius product $\langle\cdot\rangle_{\rm F}$ is defined as: \begin{equation} \langle{\mathbf{A}},{\mathbf{B}}\rangle_{\rm F}=\sum A_{i,j}B_{i,j}\leq \sigma_{1}({\mathbf{A}})\sigma_{1}({\mathbf{B}})+\dots+\sigma_{N}({\mathbf{A}})\sigma_{N}({\mathbf{B}}) \end{equation} where $\sigma_{i}(\cdot)$ represents the $i$-th largest singular value, $N$ denotes the matrix size, and the inequality is given by Von Neumann’s trace inequality~\cite{mirsky1975trace,grigorieff1991note}. The equality holds only when ${\mathbf{A}}$ and ${\mathbf{B}}$ share the same singular vectors. When both ${\mathbf{W}}$ and $\frac{\partial l}{\partial {\mathbf{W}}}$ are orthogonal, \emph{i.e.,} their singular values are all $1$, we have the following relation: \begin{equation} \langle\frac{\partial l}{\partial {\mathbf{W}}},\frac{\partial l}{\partial {\mathbf{W}}}\rangle_{\rm F}=N,\ \langle\frac{\partial l}{\partial {\mathbf{W}}},{\mathbf{W}}\rangle_{\rm F}\leq N \end{equation} This directly leads to: \begin{equation} \langle\frac{\partial l}{\partial {\mathbf{W}}},{\mathbf{W}}\rangle_{\rm F}\leq\langle\frac{\partial l}{\partial {\mathbf{W}}},\frac{\partial l}{\partial {\mathbf{W}}}\rangle_{\rm F},\ {\mathbf{l}}^{T}{\mathbf{w}} \leq {\mathbf{l}}^{T}{\mathbf{l}} \end{equation} Exploiting this inequality, the optimal learning rate satisfies: \begin{equation} \begin{aligned} \eta^{\star} \approx \frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}} \leq \frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}} \end{aligned} \label{eq:upper_bound_1} \end{equation} For ${\mathbf{l}}^{T}{\mathbf{w}}$, we have the inequality: \begin{equation} \begin{aligned} {\mathbf{l}}^{T}{\mathbf{w}} &= \langle\frac{\partial l}{\partial {\mathbf{W}}},{\mathbf{W}}\rangle_{\rm F}=\sum_{i,j} \frac{\partial l}{\partial {\mathbf{W}}}_{i,j}{\mathbf{W}}_{i,j}\\ &\geq \sigma_{min}(\frac{\partial l}{\partial {\mathbf{W}}})\sigma_{min}({\mathbf{W}})=1 \end{aligned} \label{eq:inequality_lw} \end{equation} Then we have the upper bound of $\eta^{\star}$ as: \begin{equation} \begin{aligned} \eta^{\star} &\leq \frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}\\ &= \frac{N^2}{N^2+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}} < \frac{N^2}{N^2 + 2} \end{aligned} \end{equation} For the lower bound, since we also have ${\mathbf{l}}^{T}{\mathbf{w}}{\leq}{\mathbf{w}}^{T}{\mathbf{w}}$, $\eta^{\star}$ can be re-written as: \begin{equation} \begin{aligned} \eta^{\star} &\approx \frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}\\ &\geq
\frac{{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}+2{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}\\ &= \frac{1}{\frac{{\mathbf{w}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{l}}}{{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}+2} \\ &=\frac{1}{\frac{N^{2}}{{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}+2} \end{aligned} \label{eq:lower_bound_1} \end{equation} Injecting eq.~\eqref{eq:inequality_lw} into eq.~\eqref{eq:lower_bound_1} leads to the further simplification: \begin{equation} \eta^{\star} \approx \frac{1}{\frac{N^{2}}{{\mathbf{l}}^{T}{\mathbf{w}}{\mathbf{l}}^{T}{\mathbf{w}}}+2} \geq \frac{1}{N^{2}+2} \end{equation} As indicated above, the optimal learning rate $\eta^{\star}$ has a lower bound of $\frac{1}{N^{2}+2}$. \end{proof} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{imgs/olr_value.jpg} \caption{Learning-rate scheme during the training process of decorrelated BN. For orthogonal weight and gradient, our OLR occurs with much higher probability and enforces a stronger orthogonality constraint.} \label{fig:olr_value} \end{figure*} \section{Detailed Experimental Settings} In this section, we introduce the implementation details and experimental settings. \subsection{Covariance Conditioning} \subsubsection{Decorrelated Batch Normalization} The training lasts $350$ epochs and the learning rate is initialized to $0.1$. The SGD optimizer is used with momentum $0.9$ and weight decay $5e{-}4$. We decrease the learning rate by a factor of $10$ every $100$ epochs. The batch size is set to $128$. We use the technique proposed in~\cite{song2021approximate} to compute the stable SVD gradient. The Pre-SVD layer in this experiment is the $3{\times}3$ convolution layer. \subsubsection{Global Covariance Pooling} The training process lasts $60$ epochs and the learning rate is initialized to $0.1$. We decrease the learning rate by a factor of $10$ at epoch $30$ and epoch $45$. The SGD optimizer is used with momentum $0.9$ and weight decay $1e{-}4$. The model weights are randomly initialized and the batch size is set to $256$. The images are first resized to $256{\times}256$ and then randomly cropped to $224{\times}224$ before being passed to the model. Random horizontal flipping is used for data augmentation. We use the technique proposed in~\cite{song2021approximate} to compute the stable SVD gradient. The Pre-SVD layer denotes the convolution transform of the previous layer. \subsection{Latent Disentanglement} \subsubsection{EigenGAN} The input image is resized to $128{\times}128$ for AnimeFace~\cite{chao2019/online} and to $256{\times}256$ for FFHQ~\cite{kazemi2014one}. We set the batch size to $128$, and the training process lasts $500,000$ steps. The subspace dimension of each layer is set to $6$, \emph{i.e.,} each layer has $6$ interpretable directions. All the orthogonality techniques are enforced on the projection matrix $\mathbf{U}_{i}$ at each layer. \subsubsection{Vanilla GAN} For both CelebA~\cite{liu2018large} and LSUN Church~\cite{yu2015lsun}, we resize the input image to the resolution of $128{\times}128$. The training lasts $200$ epochs for CelebA and $400$ epochs for LSUN Church. We set the batch size to $128$ and the latent dimension to $30$. \section{Occurrence of OLR} Since our proposed OLR needs manual tuning during the training, it would be interesting to investigate its probability of occurrence in different settings.
Fig.~\ref{fig:olr_value} depicts the learning rate schemes of decorrelated BN with ordinary learning rate (\emph{left}), OLR for non-orthogonal weight/gradient (\emph{middle}), and OLR for orthogonal weight/gradient (\emph{right}). As can be seen, in both settings (orthogonal and non-orthogonal weight/gradient), our OLR occurs with a reasonable probability during the training, which enforces a \emph{relaxed} orthogonality constraint on the weight. When the weight and gradient are non-orthogonal, our OLR mainly occurs in the first training stage, where the ordinary learning rate is relatively large. For orthogonal gradient and weight, the OLR happens more frequently and consistently occurs throughout all the training stages. This agrees with our theoretical analysis in Prop.~\ref{prop:lr_bounds}: our OLR is best suited to simultaneously orthogonal weight and gradient. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} For quite a while now, a $7 \sigma$ discrepancy has existed between the proton rms charge radius (\ensuremath{r_{\mathrm p}}) determined using electrons and muons. On the one hand, the value from laser spectroscopy of the exotic muonic hydrogen atom (\ensuremath{\mu \mathrm{p} }), \begin{equation} \label{eq:Rp_mup} \ensuremath{r_{\mathrm p}}(\ensuremath{\mu \mathrm{p} }) = 0.8409\,(4)\,\mathrm{fm} \end{equation} has been reported by the CREMA collaboration~\cite{Pohl:2010:Nature_mup1,Antognini:2013:Science_mup2}. On the other hand, the most recent CODATA-2010 ``world average'' value \begin{equation} \label{eq:Rp_CODATA} \ensuremath{r_{\mathrm p}}(\mathrm{\mbox{CODATA-2010}}) = 0.8775\,(51)\,\mathrm{fm} \end{equation} has been determined by a self-consistent least-squares adjustment (LSA) of the fundamental physical constants~\cite{Mohr:2012:CODATA10}. The discrepancy of $\sim 7\sigma$ between these two values has been coined the ``Proton Radius Puzzle''~\cite{Pohl:2013:ARNPS,Carlson:2015:Puzzle}. The CREMA collaboration has just published a value of the deuteron charge radius \ensuremath{r_{\mathrm d}}\ from laser spectroscopy of muonic deuterium (\ensuremath{\mu \mathrm{d} })~\cite{Pohl:2016:Science_mud} \begin{equation} \label{eq:Rd_mud} \ensuremath{r_{\mathrm d}}(\ensuremath{\mu \mathrm{d} }) = 2.1256\,(8)\,\mathrm{fm}, \end{equation} again more than $7 \sigma$ smaller than the CODATA-2010 value of \ensuremath{r_{\mathrm d}} \begin{equation} \label{eq:Rd_CODATA} \ensuremath{r_{\mathrm d}}(\mathrm{\mbox{CODATA-2010}}) = 2.1424\,(21)\,\mathrm{fm}. \end{equation} However, comparison of the new \ensuremath{r_{\mathrm d}}(\ensuremath{\mu \mathrm{d} }) value with the CODATA-2010 value may be considered inadequate or redundant, because the CODATA values of \ensuremath{r_{\mathrm d}}\ and \ensuremath{r_{\mathrm p}}\ are highly correlated, with a correlation coefficient c(\ensuremath{r_{\mathrm p}},\ensuremath{r_{\mathrm d}}) = 0.9989 (see Ref.~\cite{Mohr:2012:CODATA10}, Eq.\,(92)). This large correlation is the result of the very precisely measured isotope shift of the $1S \rightarrow 2S$ transition in atomic hydrogen (H) and deuterium (D) \cite{Huber:1998:HydrIsoShift,Parthey:2010:PRL_IsoShift}, which yields a very accurate value for the {\em difference} of the (squared) deuteron and proton charge radii~\cite{Jentschura:2011:IsoShift} \begin{equation} \label{eq:HDiso} \ensuremath{r_{\mathrm d}}^2-\ensuremath{r_{\mathrm p}}^2 = 3.82007(65)\,\mathrm{fm^2}. \end{equation} One could thus argue that the CODATA deuteron charge radius is larger than the muonic deuterium value only because the correlated, and very accurately determined, proton charge radius is larger than the muonic hydrogen value. Here we use the available data on spectroscopy of atomic deuterium to deduce a precise value of \ensuremath{r_{\mathrm d}}\ which does {\em not} depend on \ensuremath{r_{\mathrm p}}\ through Eq.\,(\ref{eq:HDiso}). In our analysis we use a value of the $1S \rightarrow 2S$ transition in atomic deuterium (see Tab.~\ref{tab:D}) that has not been used by CODATA. Its value can either be inferred from published data or found in a PhD thesis~\cite{Udem:PhD}. This $1S \rightarrow 2S$ value helps improve the accuracy of the deuteron charge radius by a factor of five, compared to the CODATA Partial Adjustment~10~\cite{Mohr:2012:CODATA10}. \subsection{CODATA Partial Adjustments} The final CODATA-2010 recommended values of the fundamental constants are deduced in the so-called ``Adjustment~3''. 
As detailed in Sec.~XIII.B.2 on page~1577\,ff.\ of the CODATA-2010 report~\cite{Mohr:2012:CODATA10}, there are additional adjustments that use only a subset of the available input data. ``Adjustments 6-12'' are the ones relevant for \ensuremath{r_{\mathrm p}}, \ensuremath{r_{\mathrm d}}\ and the Rydberg constant \ensuremath{R_{\infty}}, and the results are summarized in Tab.~XXXVIII of Ref.~\cite{Mohr:2012:CODATA10}. These auxiliary Partial Adjustments serve two purposes: On the one hand, they verify the internal consistency of the CODATA LSA, as results from different subsets of the data are in good agreement with each other. On the other hand, these adjustments provide uncorrelated values of \ensuremath{r_{\mathrm p}}\ and \ensuremath{r_{\mathrm d}}. These can then be compared with their muonic counterparts to obtain a clearer picture of the issues surrounding the ``proton radius puzzle''. For the proton, the value of \ensuremath{r_{\mathrm p}}\ that is deduced from data obtained by precision spectroscopy in atomic hydrogen alone (omitting both elastic electron-proton (e-p) scattering results and measurements in deuterium) is determined in Adjustment~8, see Tab.~XXXVIII of Ref.~\cite{Mohr:2012:CODATA10}: \begin{equation} \label{eq:Rp_H_CODATA} \ensuremath{r_{\mathrm p}}\mathrm{(H~spectr.,~CODATA)} = 0.8764(89)\,\mathrm{fm}. \end{equation} This value is in excellent agreement with Eq.\,(\ref{eq:Rp_CODATA}), and only slightly less accurate, see Fig.~\ref{fig:Rp}. The ``atomic physics'' part of the proton radius puzzle is the $4.0\sigma$ discrepancy between Eq.\,(\ref{eq:Rp_mup}) and Eq.\,(\ref{eq:Rp_H_CODATA}). It is unaffected by the problems that may exist in the analysis of e-p scattering data~\cite{Hill:2010:ModelIndepScatt,Lorenz:2012:Closing_in,Kraus:2014:polyFits,Higinbotham:2016:Rp_ep,Horbatsch:2016:ep_scatt}. The situation is somewhat less favorable for the deuteron charge radius \ensuremath{r_{\mathrm d}}. The CODATA-2010 value from the full Adjustment~3 given in Eq.\,(\ref{eq:Rd_CODATA}) is very precise: $\ensuremath{r_{\mathrm d}}\mathrm{(CODATA)} = 2.1424(21)$\,fm. The value from laser spectroscopy of atomic deuterium from Adjustment~10, on the other hand, is less so~\footnote{\label{fn:extra_precision} The CODATA-2010 report quotes 2.121(25)\,fm, but we list all charge radii with 4 decimal figures to make the different accuracies immediately obvious. The numbers in Eq.\,(\ref{eq:Rd_D_CODATA}) were provided by Barry N.\ Taylor and David B.\ Newell from CODATA/NIST.}: \begin{equation} \label{eq:Rd_D_CODATA} \ensuremath{r_{\mathrm d}}\mathrm{(D~spectr.,~CODATA)} = 2.1214(253)\,\mathrm{fm}. \end{equation} This value is not accurate enough for a useful comparison with the new result from muonic deuterium, see Fig.~\ref{fig:Rd}. \subsection{The ``missing'' $\boldsymbol{1S\rightarrow2S}$ measurement in D} The reason for this significantly worse accuracy of \ensuremath{r_{\mathrm d}}\ in Eq.\,(\ref{eq:Rd_D_CODATA}) is the apparent lack of a precise measurement of the $1S \rightarrow 2S$ transition in atomic deuterium. Only the isotope shift, {\it i.e.} the {\em difference} of the $1S \rightarrow 2S$ transitions in H and D, is used in the CODATA LSA, see Ref.~\cite{Mohr:2012:CODATA10}, Tab.\,XI. This is perfectly valid for the ``full'' CODATA Adjustment~3 using all available input data. However, for Adjustment~10 of spectroscopy data in D, the lack of a precise value for the $1S \rightarrow 2S$ transition in D results in a much larger uncertainty. 
In this note we argue that the $1S \rightarrow 2S$ transition frequency in atomic deuterium has been measured very accurately by some of the authors at MPQ. The published isotope shifts~\cite{Huber:1998:HydrIsoShift,Parthey:2010:PRL_IsoShift} are in fact the calculated differences of the measured $1S \rightarrow 2S$ transitions in atomic deuterium and hydrogen. We can thus proceed to deduce a precise value of the deuteron radius from deuterium spectroscopy alone, combining the $1S \rightarrow 2S$ transition in D, measured by some of the authors at MPQ, with the $2S \rightarrow 8S$, $8D$, and $12D$ transitions in D, measured by some of the authors at LKB. The new value is five times more precise than the one in Eq.\,(\ref{eq:Rd_D_CODATA}), and can be usefully compared to the muonic deuterium value of \ensuremath{r_{\mathrm d}}~\cite{Pohl:2016:Science_mud}. Next we proceed with a pedagogical introduction to the theory of the energy levels in atomic H and D. We determine the {\em proton} charge radius from atomic hydrogen data alone. Our value is in excellent agreement with the one from CODATA Adjustment~8. Afterwards we apply the same formalism to the deuterium data. \section{Energy levels in hydrogen and deuterium} \begin{table*}[t!] \caption{\label{tab:CorrH} Values of \ensuremath{\Delta(n,\ell,j)}\ in kHz for relevant energy levels in atomic hydrogen. \ensuremath{\Delta(n,\ell,j)}\ includes all relevant corrections to the energy levels from fine structure splittings and QED effects. The uncertainties are taken from Ref.~\cite{Mohr:2012:CODATA10}, Tab.~XVIII. They arise mostly from the estimated uncertainty of uncalculated two-loop corrections~\cite{Biraben:2009:SpectrAtHyd}. An uncertainty of ``(0)'' denotes ``negligibly small''.} \begin{tabular}{l | d d d d d} \hline \hline $n$ & \multicolumn{1}{c}{$S_{1/2}$} & \multicolumn{1}{c}{$P_{1/2}$} & \multicolumn{1}{c}{$P_{3/2}$} & \multicolumn{1}{c}{$D_{3/2}$} & \multicolumn{1}{c}{$D_{5/2}$} \\ \hline 1 & -35\,626\,637.5(2.5) \\ 2 & -12\,636\,167.73(31) & -13\,693\,861.67(3) & -2\,724\,820.10(3) \\ 3 & -4\,552\,757.02(9) & & & -1\,622\,832.29(0) & -539\,495.09(0) \\ 4 & -2\,091\,350.05(4) & -2\,224\,408.70(0) & -853\,278.87(0) & -855\,566.25(0) & -398\,533.10(0) \\ 8 & -293\,431.56(1) & & & -138\,996.24(0) & -81\,867.09(0) \\ 12 & & & & -44\,349.61(0) & -27\,422.46(0) \\ \hline \hline \end{tabular} \end{table*} \begin{table*}[t!] \caption{\label{tab:CorrD} Values of \ensuremath{\Delta(n,\ell,j)}\ in kHz for relevant energy levels in atomic deuterium.
The caption of Tab.~\ref{tab:CorrH} applies.} \begin{tabular}{l | d d d d d} \hline \hline $n$ & \multicolumn{1}{c}{$S_{1/2}$} & \multicolumn{1}{c}{$P_{1/2}$} & \multicolumn{1}{c}{$P_{3/2}$} & \multicolumn{1}{c}{$D_{3/2}$} & \multicolumn{1}{c}{$D_{5/2}$} \\ \hline 1 & -35\,621\,512.1(2.3) \\ 2 & -12\,638\,504.55(29) & -13\,696\,839.80(3) & -2\,724\,804.25(3) \\ 3 & -4\,553\,743.34(9) & & & -1\,623\,126.89(0) & -539\,493.99(0) \\ 4 & -2\,091\,828.14(4) & -2\,224\,966.95(0) & -853\,462.87(0) & -855\,752.49(0) & -398\,594.64(0) \\ 8 & -293\,502.94(1) & & & -139\,031.16(0) & -81\,886.41(0) \\ 12 & & & & -44\,361.10(0) & -27\,429.34(0) \\ \hline \hline \end{tabular} \end{table*} The energy levels in H and D, expressed as frequencies $E/h$ in kHz with $h$ the Planck constant, can be parameterized~\cite{Biraben:2009:SpectrAtHyd} as a function of the principal quantum number $n$, the orbital quantum number $\ell$, and the total angular momentum $j$, as \begin{equation} \label{eq:Etot} E(n,\ell,j)/h ~ = ~ - \dfrac{c\ensuremath{R_{\infty}}}{n^2} \dfrac{\ensuremath{m_\mathrm{red}}}{m_e} ~ ~ + \dfrac{\ensuremath{E_{NS}}}{n^3} \, \delta_{\ell 0} ~ ~ + \ensuremath{\Delta(n,\ell,j)}. \end{equation} The first term on the right hand side is the famous Bohr result for the energy levels of an electron orbiting an infinitely heavy nucleus, $-\ensuremath{R_{\infty}}/n^2$, corrected for the leading-order nuclear motion by the reduced mass ratio $\ensuremath{m_\mathrm{red}}/m_e$. Here, \ensuremath{R_{\infty}}\ denotes the Rydberg constant, $c$ is the speed of light in vacuum, and the reduced mass of the atom with an electron of mass $m_e$ and a nucleus of mass $m_N$ is given by \begin{equation} \label{eq:mred} \ensuremath{m_\mathrm{red}} = \dfrac{m_e \, m_N}{m_e + m_N} = \dfrac{m_e}{1 + \frac{m_e}{m_N}}. \end{equation} The mass ratios $m_e/m_N$ are tabulated in Ref.~\cite{Mohr:2012:CODATA10}. The second term in Eq.\,(\ref{eq:Etot}) is the finite nuclear size correction, whose leading order is given in kHz by~\cite{Mohr:2012:CODATA10,Biraben:2009:SpectrAtHyd} \begin{equation} \label{eq:NS} \ensuremath{E_{NS}}^{(0)} ~ = ~ \dfrac{2}{3 h} \left( \dfrac{\ensuremath{m_\mathrm{red}}}{m_e} \right)^3 (Z \alpha)^4 m_e c^2 \left( \dfrac{\rN}{\mbox{\sout{\ensuremath{\lambda}}\ensuremath{_C}}} \right) ^2 . \end{equation} Here, $\alpha \approx 1/137.036$ is the fine structure constant, $Z = 1$ is the nuclear charge for H and D, $\mbox{\sout{\ensuremath{\lambda}}\ensuremath{_C}} \approx 386.16$\,fm is the reduced Compton wavelength of the electron, and \rN\ is the rms charge radius of the nucleus, {\it i.e.} \ensuremath{r_{\mathrm p}}\ for H and \ensuremath{r_{\mathrm d}}\ for D. The charge radius contribution \ensuremath{E_{NS}}\ is significant only for S-states ($\ell = 0$), as indicated by the Kronecker symbol $\delta_{\ell 0}$ in Eq.\,(\ref{eq:Etot}). The $1/n^3$ dependence of \ensuremath{E_{NS}}\ in Eq.\,(\ref{eq:Etot}) originates from the overlap of the electron's wave function with the extended nuclear charge distribution. For our purposes it is convenient to sum $\ensuremath{E_{NS}}^{(0)}$ and all other finite nuclear size effects that are proportional to $1/n^3$. These higher-order nuclear size corrections are $2\times10^{-4}$ of \ensuremath{E_{NS}}\, and thus very small, see Ref.~\cite{Mohr:2012:CODATA10} Eqs.\,(75), (77) and (78).
We obtain \begin{eqnarray} \label{eq:NS_H} \ensuremath{E_{NS}}\mathrm{(H)} & = & 1\,564.60 \times \ensuremath{r_{\mathrm p}}^2 \quad \mathrm{kHz/fm^2}, \\ \label{eq:NS_D} \ensuremath{E_{NS}}\mathrm{(D)} & = & 1\,565.72 \times \ensuremath{r_{\mathrm d}}^2 \quad \mathrm{kHz/fm^2}, \end{eqnarray} both with negligible uncertainty on the level of a few Hz/fm$^2$. For reference, $E_{\rm NS}$ amounts to approx.\ 1100\;kHz and 7100\;kHz for the 1S ground state in H and D, respectively. The third ingredient of Eq.\,(\ref{eq:Etot}), \ensuremath{\Delta(n,\ell,j)}, summarizes all the remaining corrections. The largest part of \ensuremath{\Delta(n,\ell,j)}\ is due to the use of the Dirac equation instead of the simple Bohr formula. Other contributions are the fine- and hyperfine-splittings, the relativistic, QED, radiative, recoil and Darwin-Foldy corrections, finite size corrections for $P$-states, nuclear polarizability, and many higher-order contributions. These are listed in Sec.~IV.A.1 of Ref.~\cite{Mohr:2012:CODATA10}. The \ensuremath{\Delta(n,\ell,j)}\ can be calculated very accurately using the detailed formulas found e.g.\ in Refs.~\cite{Eides:2006:Book,Mohr:2012:CODATA10,Horbatsch:2016:Tab_H}. We list in Tab.~\ref{tab:CorrH} and Tab.~\ref{tab:CorrD} the values of \ensuremath{\Delta(n,\ell,j)}\ for relevant states in H and D, respectively. For reference, the sums of all so-called QED corrections included in $\Delta(1,0,1/2)$ of the $1S$ ground state in H and D amount to $8\,171\,663.8 \pm 2.5$\,kHz and $8\,176\,795.7 \pm 2.3$\,kHz, respectively. The dominant uncertainties arise from the two-loop corrections~\cite{Biraben:2009:SpectrAtHyd}, and they are responsible for almost all of the uncertainties of the $\Delta(n,\ell,j)$. The hyperfine splittings of the $1S$ and $2S$ states have been measured very accurately~\cite{Cheng:1980:HFS_H1S,Kolachevsky:2004:HFS_H2S,Karshenboim:2005:PPS}. All constants except \ensuremath{R_{\infty}}\ and the radii \rN\ in Eqs.\,(\ref{eq:Etot})-(\ref{eq:NS_D}) are known with sufficient accuracy~\cite{Mohr:2012:CODATA10} from measurements other than H or D spectroscopy. This leaves \ensuremath{R_{\infty}}\ and \rN\ to be determined from H or D spectroscopy. Note that we will later only be concerned with {\em transition frequencies} between different energy levels, so the Planck constant $h$ on the left hand side of Eq.\,(\ref{eq:Etot}) drops out. The Rydberg constant \ensuremath{R_{\infty}}\ appears in Eq.\,(\ref{eq:Etot}) explicitly only for the 1st (Bohr) term. This is to emphasize that the full accuracy of $\sim10^{-12}$ is required only for the Bohr term, because only the measurements of optical transitions between levels with different principal quantum number $n$ are accurate on the $10^{-12}$ level or better, see Tab.~\ref{tab:H}. These measurements achieve accuracies in the kHz range or better, for transition frequencies of several hundred THz. Technically, also the 2nd (finite size) and 3rd ($\Delta(n,\ell,j)$) terms contain the Rydberg constant, acting as a ``unit converter'' between atomic units, used in the calculation of \ensuremath{E_{NS}}\ and $\Delta(n,\ell,j)$, and the SI unit of frequency, in which the measurements are done. The accuracy required in the latter terms is much lower, on the order of a few times $10^{-8}$. This becomes obvious from the kHz accuracy required for \ensuremath{E_{NS}}\ (1100\,kHz and 7100\,kHz for H and D, respectively), or for $\Delta(1S)$ ($-35.6\times 10^6$\,kHz).
Thus, these terms do not require the full $10^{-12}$ accuracy in \ensuremath{R_{\infty}}. Instead, one can {\em calculate} \ensuremath{R_{\infty}}\ with an accuracy of a few parts in $10^{8}$ from the definition \begin{equation} \label{eq:Ryd} \ensuremath{R_{\infty}} = \dfrac{\alpha^2 m_e c}{2 h}, \end{equation} and the values of $\alpha$, $m_e$ and $h$ from measurements other than spectroscopy of H or D~\cite{Hanneke:2008,Aoyama:2012:10th_order_g_2,Bouchendira:2011:h_MRb,Stock:2012:WattBalance,Sturm:2014:m_e}. The CODATA-2010 report lists 24 transition frequencies in H and D that enter the LSA, see Ref.~\cite{Mohr:2012:CODATA10}, Tab.\,XI. We reproduce the most relevant numbers, and a few more, in Tabs.~\ref{tab:H}, \ref{tab:iso} and \ref{tab:D}. In particular, we list several measurements of the $1S\rightarrow2S$ transition frequency in D. Next we introduce the {\em modified} transition frequencies \begin{equation} \label{eq:nuCorr} \widetilde{\nu}[ (n,\ell,j) \rightarrow (n',\ell',j') ] = \nu_\mathrm{meas} + \ensuremath{\Delta(n,\ell,j)} - \ensuremath{\Delta(n',\ell',j')} \end{equation} where all fine-, hyperfine-, and QED contributions (except for the finite size effect of $S$ states) have been removed. These {\em modified} transition frequencies can then be used to extract \rN\ and \ensuremath{R_{\infty}}\ using \begin{multline} \label{eq:nuC} \widetilde{\nu}[ (n,\ell,j) ~ \rightarrow ~ (n',\ell',j') ] ~ = \\ c\ensuremath{R_{\infty}} \dfrac{\ensuremath{m_\mathrm{red}}}{m_e} \left(\dfrac{1}{n^2} - \dfrac{1}{n'^2}\right) - \ensuremath{E_{NS}} \left(\dfrac{\delta_{\ell 0}}{n^3} - \dfrac{\delta_{\ell' 0}}{n'^3} \right), \end{multline} which of course follows from Eq.\,(\ref{eq:Etot}). \section{Proton radius from hydrogen spectroscopy} \begin{table*}[t] \caption{\label{tab:H} Some recent measurements in atomic hydrogen. An asterisk following the reference denotes items considered in the most recent CODATA-2010 report. Following our nomenclature, the $2S\rightarrow2P_{1/2}$ transition must be assigned a negative frequency, because the final state $(n',\ell',j') = 2P_{1/2}$ is {\em lower} than the initial $(n,\ell,j) = 2S_{1/2}$ state.} \begin{tabular}{l | l l l l l} \hline \hline \# & $(n,\ell,j)-(n',\ell',j')$ & \multicolumn{1}{c}{$\nu_\mathrm{meas}$ (kHz)} & rel. unc. & Source & Ref. 
\\ \hline H1 & $2S_{1/2} \rightarrow ~2P_{1/2}$ & \qquad~~~\, -1 057 862(20) & $1.9\times10^{-5}$ & Sussex 1979 & \cite{Newton:1979:H2S2P} *\\ H2 & & \qquad~~~\, -1 057 845.0(9.0) & $8.5\times10^{-6}$ & Harvard 1986 & \cite{Lundeen:1986:LS} *\\ H3 & $2S_{1/2} \rightarrow ~2P_{3/2}$ & \qquad\quad~ 9 911 200(12) & $1.2\times10^{-6}$ & Harvard 1994 & \cite{Hagley:1994:FShyd} *\\ \hline H4 & $2S_{1/2} \rightarrow ~8S_{1/2}$ & ~~ 770 649 350 012.0(8.6) & $1.1\times10^{-11}$ & LKB 1997 & \cite{Beauvoir:1997:H2S8SD} * \\ H5 & $2S_{1/2} \rightarrow ~8D_{3/2}$ & ~~ 770 649 504 450.0(8.3) & $1.1\times10^{-11}$ & LKB 1997 & \cite{Beauvoir:1997:H2S8SD} * \\ H6 & $2S_{1/2} \rightarrow ~8D_{5/2}$ & ~~ 770 649 561 584.2(6.4) & $8.3\times10^{-12}$ & LKB 1997 & \cite{Beauvoir:1997:H2S8SD} * \\ H7 & $2S_{1/2} \rightarrow 12D_{3/2}$ & ~~ 799 191 710 472.7(9.4) & $1.1\times10^{-11}$ & LKB 1999 & \cite{Schwob:1999:Hydr2S12D} * \\ H8 & $2S_{1/2} \rightarrow 12D_{5/2}$ & ~~ 799 191 727 403.7(7.0) & $8.7\times10^{-12}$ & LKB 1999 & \cite{Schwob:1999:Hydr2S12D} * \\ \hline H9 & $1S_{1/2} \rightarrow ~2S_{1/2}$ & 2 466 061 413 187.103(46) & $1.9\times10^{-14}$& MPQ 2000 & \cite{Niering:2000:Hy1S2S} \\ H10& & 2 466 061 413 187.080(34) & $1.4\times10^{-14}$& MPQ 2004 & \cite{Fischer:2004:DriftFundConst} * \\ H11& & 2 466 061 413 187.035(10) & $4.2\times10^{-15}$& MPQ 2011 & \cite{Parthey:2011:PRL_H1S2S}\\ H12& & 2 466 061 413 187.018(11) & $4.5\times10^{-15}$& MPQ 2013 & \cite{Matveev:2013:H1S2S} \\ H13& $1S_{1/2} \rightarrow ~3S_{1/2}$ & 2 922 743 278 678(13) & $4.4\times10^{-12}$ & LKB 2010 & \cite{Arnoult:2010:1S3S} * \\ H14& & 2 922 743 278 659(17) & $5.8\times10^{-12}$ & MPQ 2016 & \cite{Yost:2016:1S3S} \\ \hline \hline \end{tabular} \end{table*} Table~\ref{tab:H} lists 14 transition frequencies in atomic hydrogen. These can be separated into three blocks. \subsubsection{Radio-frequency measurements within $\boldsymbol{n=2}$} \label{sec:H2S2P} The first block in Tab.~\ref{tab:H}, items H1-H3, consists of radio-frequency measurements of $2S \rightarrow 2P$ transition frequencies in H. Modifying the measured frequencies by $\Delta(2S_{1/2}) - \Delta(2P_{j'})$ from Tab.~\ref{tab:CorrH}, each of these three measurements can be used individually to determine a value of the proton charge radius \ensuremath{r_{\mathrm p}}\ from Eq.\,(\ref{eq:NS_H}), using \begin{equation} \label{eq:2S2P} \widetilde{\nu}(2S_{1/2} \rightarrow 2P_{1/2}) ~ = ~ \dfrac{1}{8} \, \ensuremath{E_{NS}}. \end{equation} Each of the three measurements H1-H3 thus yields a value of \ensuremath{r_{\mathrm p}}, listed in Tab.~\ref{tab:Rp}. As explained above, these three \ensuremath{r_{\mathrm p}}\ values are in fact independent of the {\em exact} value of the Rydberg constant: The relative uncertainties of the radio-frequency measurements are on the order of $10^{-6}$, so only the 6 most significant digits of \ensuremath{R_{\infty}}\ enter the calculation. The ``proton radius puzzle'' could ultimately require a change of \ensuremath{R_{\infty}}\ by $7 \sigma$, or $10^{-11}$, as explained below. But such a change would not affect the \ensuremath{r_{\mathrm p}}\ values obtained from items H1-H3.
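To make this ``Rydberg-free'' extraction concrete, a minimal sketch (Python) inverts Eqs.\,(\ref{eq:2S2P}) and (\ref{eq:NS_H}); the input value of $\widetilde{\nu}$ is an illustrative placeholder consistent with item H2, i.e.\ the measured frequency plus $\Delta(2S_{1/2})-\Delta(2P_{1/2})$ from Tab.~\ref{tab:CorrH}:
\begin{verbatim}
# "Rydberg-free" proton radius from the 2S -> 2P_{1/2} Lamb shift.
nu_tilde = 151.1                 # kHz, hypothetical placeholder (item H2)
E_NS = 8.0 * nu_tilde            # invert Eq. (2S2P)
r_p = (E_NS / 1564.60) ** 0.5    # invert Eq. (NS_H)  ->  ~0.879 fm
\end{verbatim}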
\subsubsection{Optical measurements between levels with different $\boldsymbol{n}$} The 2nd block in Tab.~\ref{tab:H}, items H4-H8, lists the five most accurate measurements of transition frequencies between the metastable 2S state and higher-$n$ ``Rydberg'' states with $n$=8 or 12. Because these transitions are between levels with different principal quantum number $n$, one has to combine each of these measurements with a 2nd measurement to obtain a pair of values for \ensuremath{r_{\mathrm p}}\ and \ensuremath{R_{\infty}}, using Eq.\,(\ref{eq:Etot}). Ideally, one combines each of the items H4-H8 with a measurement of the $1S \rightarrow 2S$ transition from block 3 in Tab.~\ref{tab:H}, solving pairs of equations like \begin{eqnarray} \label{eq:1s2s} \widetilde{\nu}(1S\rightarrow2S) & = & ~\dfrac{3}{4} c\ensuremath{R_{\infty}}\ ~ - ~ ~\dfrac{7}{8} \ensuremath{E_{NS}} \\ \label{eq:2s8s} \widetilde{\nu}(2S\rightarrow8S) & = & \dfrac{15}{64} c\ensuremath{R_{\infty}}\ ~ - ~ \dfrac{63}{512} \ensuremath{E_{NS}} . \end{eqnarray} Considering the uncertainties of the experimental values in Tab.~\ref{tab:H} and of the \ensuremath{\Delta(n,\ell,j)}\ in Tab.~\ref{tab:CorrH}, one sees immediately that the dominant uncertainty is always given by the $2S \rightarrow n\ell$ measurements with their experimental uncertainty of the order of $\sim 7$\,kHz. Several measurements of the $1S \rightarrow 2S$ transition exist with uncertainties of much less than 1\,kHz. Hence one can choose any of the items H9-H12 to reach the same conclusion.
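The extraction behind each such pair is a $2\times2$ linear solve followed by Eq.\,(\ref{eq:NS_H}). A minimal sketch (Python; the inputs are the modified frequencies of Eq.\,(\ref{eq:nuCorr}) in kHz, and the reduced-mass factor of Eq.\,(\ref{eq:nuC}) is taken as absorbed into $c\ensuremath{R_{\infty}}$, as in Eqs.\,(\ref{eq:1s2s})-(\ref{eq:2s8s})); the second helper is the ``trivial'' inverse-variance average used below, which ignores correlations:
\begin{verbatim}
import numpy as np

def r_p_from_pair(nu_1s2s, nu_2s8s):
    # Solve Eqs. (1s2s)-(2s8s) for (c*R_inf, E_NS), inputs in kHz.
    A = np.array([[3.0 / 4.0,   -7.0 / 8.0],
                  [15.0 / 64.0, -63.0 / 512.0]])
    cR_inf, E_NS = np.linalg.solve(A, [nu_1s2s, nu_2s8s])
    return (E_NS / 1564.60) ** 0.5        # fm, via Eq. (NS_H)

def weighted_average(values, sigmas):
    # Trivial inverse-variance average, ignoring the correlations
    # between the 2S -> nl measurements discussed in the text.
    w = 1.0 / np.asarray(sigmas) ** 2
    return float(np.sum(w * values) / np.sum(w)), float(np.sum(w)) ** -0.5
\end{verbatim}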
\begin{table}[b] \caption{\label{tab:Rp} Proton charge radii from hydrogen. The row labeled ``CODATA Adjustment~8'' is the value using all hydrogen data, listed in Ref.~\cite{Mohr:2012:CODATA10}, Tab.\,XXXVIII. Also given are the radii from combining the $2S \rightarrow n\ell$ transitions in H with either $1S \rightarrow 2S$ or $1S \rightarrow 3S$. All values agree very well. ``avg'' denotes the average of all values in the rows above, also considering correlations. } \begin{center} \begin{tabular}{c c c c} \hline \hline \# & Transition(s) & \ensuremath{r_{\mathrm p}}\ [fm]\\ \hline H1 & $2S \rightarrow 2P_{1/2}$ & $0.9270 \pm 0.0553$ \\ H2 & $2S \rightarrow 2P_{1/2}$ & $0.8788 \pm 0.0262$ \\ H3 & $2S \rightarrow 2P_{3/2}$ & $0.8688 \pm 0.0354$ \\ \hline H10 + H4 & $1S \rightarrow 2S$ + $2S \rightarrow \;8S_{1/2}$ & $0.8666 \pm 0.0211$ \\ H10 + H5 & $1S \rightarrow 2S$ + $2S \rightarrow \;8D_{3/2}$ & $0.8789 \pm 0.0204$ \\ H10 + H6 & $1S \rightarrow 2S$ + $2S \rightarrow \;8D_{5/2}$ & $0.8911 \pm 0.0155$ \\ H10 + H7 & $1S \rightarrow 2S$ + $2S \rightarrow 12D_{3/2}$ & $0.8551 \pm 0.0222$ \\ H10 + H8 & $1S \rightarrow 2S$ + $2S \rightarrow 12D_{5/2}$ & $0.8641 \pm 0.0164$ \\[0.4ex] \hline \multicolumn{2}{l}{\raisebox{0mm}[3mm][1.5mm]{} $1S \rightarrow 2S$ (H10) \quad + \hfill all H($2S\rightarrow n\ell$)} & $0.8747 \pm 0.0091$ & avg.\\ \hline \multicolumn{2}{l}{\raisebox{0mm}[3mm][1.5mm]{} $1S \rightarrow 3S$ (H13+H14) + all H($2S\rightarrow n\ell$)} & $0.8780 \pm 0.0108$ \\ \hline \multicolumn{2}{l}{\raisebox{0mm}[3mm][1.5mm]{} CODATA Adj.~8 } & $0.8764 \pm 0.0089$ & ~ Eq.\,(\ref{eq:Rp_H})\\ \hline \hline \end{tabular} \end{center} \end{table} We choose the 2004 measurement~\cite{Fischer:2004:DriftFundConst} H10 with an uncertainty of 0.034\,kHz, which was also used in CODATA-2010. The results are summarized in Tab.~\ref{tab:Rp}. A trivial weighted average of all individual \ensuremath{r_{\mathrm p}}\ values in Tab.~\ref{tab:Rp} yields the proton radius from H spectroscopy alone, $\ensuremath{r_{\mathrm p}}(\mathrm{H}) = 0.8746 \pm 0.0076$\,fm, $4.4 \sigma$ larger than the \ensuremath{\mu \mathrm{p} }\ value. This number is in good agreement with a recent evaluation~\cite{Horbatsch:2016:Tab_H}, which finds a 0.035(7)\,fm, or $4.9\sigma$, difference between H and \ensuremath{\mu \mathrm{p} }. However, relevant correlations exist between the various measurements of block~2, see Ref.~\cite{Mohr:2012:CODATA10}, Tab.\,XIX. These correlations increase the uncertainty, yielding $\ensuremath{r_{\mathrm p}}(\mathrm{H})= 0.8747(91)$\,fm. Alternatively, one can, instead of the $1S \rightarrow 2S$ transition (H10), combine the $1S \rightarrow 3S$ transitions (H13 and H14) with all $2S \rightarrow n\ell$ transitions. This yields (including correlations) $\ensuremath{r_{\mathrm p}}(\mathrm{H'}) = 0.8780(108)$\,fm, in very good agreement with the value above, and only slightly less accurate. A reliable value for the proton rms charge radius deduced from H data alone, which takes into account all data in H listed in Tab.\,XI of Ref.~\cite{Mohr:2012:CODATA10}, as well as the correlations between all input parameters, is given in Adjustment~8 of the CODATA-2010 LSA, see Ref.~\cite{Mohr:2012:CODATA10}, Tab.\,XXXVIII: \begin{equation} \label{eq:Rp_H} \ensuremath{r_{\mathrm p}}\mathrm{(H~spectroscopy)} = 0.8764 (89)\,\mathrm{fm}. \end{equation} This value is $4.0\sigma$ larger than the value from muonic hydrogen, see Fig.~\ref{fig:Rp}. \begin{figure}[b!] \includegraphics[width=1.0\columnwidth]{fig1.eps} \caption{Proton rms charge radii from muonic hydrogen ($\mu$H, the stripe includes the uncertainty) and muonic deuterium~\cite{Pohl:2016:Science_mud} (``$\mu$D + iso'', obtained using Eq.\,(\ref{eq:HDiso})), in comparison with the CODATA-2010 value (Eq.\,(\ref{eq:Rp_CODATA})), the value from hydrogen spectroscopy alone (Eq.\,(\ref{eq:Rp_H})), and the alternative value from using the $1S\rightarrow 3S$ measurement in hydrogen instead of the $1S\rightarrow 2S$ transition, see text. Also shown are the individual values from $2S\rightarrow 2P$ and from combining $1S\rightarrow 2S$ and $2S\rightarrow n\ell$, see Tab.~\ref{tab:Rp}. \vspace{2.5ex}\mbox{~}} \label{fig:Rp} \end{figure} Considering elastic electron-proton (e-p) scattering data together with H spectroscopy, as done in Adjustment~9 of the CODATA-2010 LSA, yields \ensuremath{r_{\mathrm p}}(H and e-p) = $0.8796(56)$\,fm, which is $6.9\sigma$ larger than the \ensuremath{\mu \mathrm{p} }\ value. This is the ``proton radius puzzle'' between measurements with electrons and muonic hydrogen. \section{Deuteron radius from deuterium spectroscopy alone} The principle of determining the deuteron radius from deuterium spectroscopy is exactly analogous to the one described for hydrogen above. However, not all of the measurements performed in hydrogen have a counterpart in deuterium. Table~\ref{tab:D} lists the relevant deuterium data. First, we note that there are no radio-frequency measurements of $2S \rightarrow 2P$ transitions (i.e.\ no ``block 1''). Thus there are no ``Rydberg-free'' \ensuremath{r_{\mathrm d}}\ values such as the \ensuremath{r_{\mathrm p}}\ values H1-H3. Moreover, no measurement of the $1S \rightarrow 2S$ transition in ``deuterium only'' is listed in the CODATA list of measurements, see Ref.~\cite{Mohr:2012:CODATA10}, Tab.\,XI. Only the $1S \rightarrow 2S$ isotope shift, {\it i.e.} the difference of the $1S \rightarrow 2S$ transition frequencies in D and H, is listed there. We give the two most recent values of the H/D isotope shift in Tab.~\ref{tab:iso}.
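Combining these isotope shifts with the hydrogen $1S \rightarrow 2S$ measurements of Tab.~\ref{tab:H} immediately yields $1S \rightarrow 2S$ frequencies in D. A minimal sketch (Python) reproducing item D12 of Tab.~\ref{tab:D} below from items H11 and I2, with the uncertainties added linearly as described in the text:
\begin{verbatim}
# D(1S->2S) = H(1S->2S) + (H/D isotope shift); all values in kHz.
h11, u_h11 = 2_466_061_413_187.035, 0.010     # item H11
i2,  u_i2  =       670_994_334.606, 0.015     # item I2
d12, u_d12 = h11 + i2, u_h11 + u_i2           # linear, conservative
# d12 = 2 466 732 407 521.641 kHz, u_d12 = 0.025 kHz  (item D12)
\end{verbatim}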
This apparent lack of a precise measurement of the $1S \rightarrow 2S$ transition in D seems to make it impossible to apply the procedure outlined above for hydrogen, in which pairs of (\ensuremath{R_{\infty}}, \ensuremath{r_{\mathrm d}}) are obtained by combining $1S \rightarrow 2S$ and $2S \rightarrow n\ell$ measurements. CODATA instead performs its Adjustment~10 of all ``deuterium only'' measurements using only $2S \rightarrow n\ell$ measurements (plus some much less accurate differences of $2S \rightarrow 4S/D$ and $1/4$ of the $1S \rightarrow 2S$ transition~\cite{Weitz:1995:LS_HD}, which we omit here for brevity). This has the serious drawback that the ``long lever-arm'' provided by the extremely accurate $1S \rightarrow 2S$ transition is lost, which is reflected by the large uncertainty of the resulting radius in Adjustment~10 of CODATA-2010, \ensuremath{r_{\mathrm d}}\ = 2.1207(253)\,fm, see Eq.\,(\ref{eq:Rd_D_CODATA}). Several very precise values for the $1S \rightarrow 2S$ transition in atomic deuterium exist, however, see Tab.~\ref{tab:D}. The most precise value is obtained by simply adding the $1S \rightarrow 2S$ transition frequency in H and the $1S \rightarrow 2S$ H/D isotope shift (as sketched above). Indeed, the published values of the H/D isotope shift are obtained by subtracting two frequency measurements of $1S \rightarrow 2S$ transitions in H and D~\cite{Huber:1998:HydrIsoShift,Parthey:2010:PRL_IsoShift}. For the full CODATA Adjustment~3, this choice makes no difference. However, without the $1S \rightarrow 2S$ transition in D one does not obtain the best possible deuteron radius from D spectroscopy in Adjustment~10. Any frequency measurement is nothing more than a frequency {\em comparison}. The so-called ``absolute frequency measurements'' are characterized by a comparison to a Cs clock~\cite{Bize:2004:FOM}. Technically, all these comparisons between H and Cs involve intermediate comparisons with ``transfer oscillators''. \begin{table*}[t] \caption{\label{tab:iso} Some recent measurements of the H-D isotope shift. An asterisk following the reference denotes items considered in the most recent CODATA-2010 report.} \begin{tabular}{l | l l l l l} \hline \hline \# & Transitions & \quad Frequency (kHz) & rel. unc. & Source & Ref. \\ \hline I1 & D($1S_{1/2} \rightarrow ~2S_{1/2}$) - H($1S_{1/2} \rightarrow ~2S_{1/2}$) & 670 994 334.64(15) & $2.2\times10^{-10}$& MPQ 1998 & \cite{Huber:1998:HydrIsoShift} \\ I2 & & 670 994 334.606(15) & $2.2\times10^{-11}$& MPQ 2010 & \cite{Parthey:2010:PRL_IsoShift} *\\ \hline \hline \end{tabular} \end{table*} \begin{table*} \caption{\label{tab:D} Some recent measurements in atomic deuterium. An asterisk following the reference denotes items considered in the most recent CODATA-2010 report. Items D9 and D10 are direct measurements using a CH$_4$-stabilized He:Ne laser as a transfer oscillator, while D11 and D12 have been measured using the $1S\rightarrow 2S$ transition in hydrogen and a hydrogen maser as transfer oscillators.} \begin{tabular}{l | l l l l l} \hline \hline \# & $(n,\ell,j)-(n',\ell',j')$ & \multicolumn{1}{c}{$\nu_\mathrm{meas}$ (kHz)} & rel. unc. & Source & Ref.
\\ \hline D4 & $2S_{1/2} \rightarrow ~8S_{1/2}$ & ~~ 770 859 041 245.7(6.9) & $8.9\times10^{-12}$ & LKB 1997 & \cite{Beauvoir:1997:H2S8SD} * \\ D5 & $2S_{1/2} \rightarrow ~8D_{3/2}$ & ~~ 770 859 195 701.8(6.3) & $8.2\times10^{-12}$ & LKB 1997 & \cite{Beauvoir:1997:H2S8SD} * \\ D6 & $2S_{1/2} \rightarrow ~8D_{5/2}$ & ~~ 770 859 252 849.5(5.9) & $7.7\times10^{-12}$ & LKB 1997 & \cite{Beauvoir:1997:H2S8SD} * \\ D7 & $2S_{1/2} \rightarrow 12D_{3/2}$ & ~~ 799 409 168 038.0(8.6) & $1.1\times10^{-11}$ & LKB 1999 & \cite{Schwob:1999:Hydr2S12D} * \\ D8 & $2S_{1/2} \rightarrow 12D_{5/2}$ & ~~ 799 409 184 966.8(6.8) & $8.5\times10^{-12}$ & LKB 1999 & \cite{Schwob:1999:Hydr2S12D} * \\ \hline \hline D9 & $1S_{1/2} \rightarrow ~2S_{1/2}$ & 2 466 732 407 521.8(1.5) & $6.1\times10^{-13}$& MPQ 1997 & \cite{Udem:PhD} \\ D10& & 2 466 732 407 522.88(91) & $3.7\times10^{-13}$& MPQ 1997 & \cite{Udem:PhD} \\ D11& & 2 466 732 407 521.74(20) & $7.9\times10^{-14}$& MPQ 1998/2000 & H9~+I1 \\ D12& & 2 466 732 407 521.641(25) & $1.0\times10^{-14}$& MPQ 2010/2011 & H11+I2 \\ \hline \hline \end{tabular} \end{table*} For example, items I1, D9 and D10 used a CH$_4$-stabilized HeNe laser, which was then transported to the German Standards Institute PTB for comparison with a Cs clock. In between, a plethora of local oscillators were used in two ``frequency chains''~\cite{Udem:PhD}. More recently, items H9-H12 used a hydrogen maser as a transfer oscillator. This maser was then compared to a Cs fountain clock~\cite{Bize:2004:FOM}. The isotope shift measurement I2 is a frequency comparison between D($1S \rightarrow 2S$) and the same hydrogen maser, using GPS calibration. The maser was then compared to the hydrogen $1S \rightarrow 2S$ transition. The practical reason to use hydrogen as an intermediate transfer oscillator to the Cs SI clock was that it did not require the availability of a primary Cs frequency standard at MPQ. \begin{figure}[b!] \includegraphics[width=1.0\columnwidth]{fig2.eps} \caption{Deuteron rms charge radii from spectroscopy of deuterium alone, see Tab.~\ref{tab:Rd}, and muonic atoms. Also shown are the CODATA value Eq.\,(\ref{eq:Rd_CODATA}), and the value from CODATA Adjustment~10 (Eq.\,(\ref{eq:Rd_D_CODATA})) that does not use the value for the $1S \rightarrow 2S$ transition in D (see text). The value ``$\mu$H + iso''~\cite{Antognini:2013:Science_mup2} is obtained from Eq.\,(\ref{eq:HDiso}) using the proton charge radius from muonic hydrogen Eq.\,(\ref{eq:Rp_mup}). The discrepancy is the same ``proton radius puzzle'' as the one in Fig.~\ref{fig:Rp}. The new deuteron radius from muonic deuterium~\cite{Pohl:2016:Science_mud} ($\mu$D) is \ensuremath{3.5 \sigma}\ smaller than the average value from deuterium spectroscopy (Eq.\,(\ref{eq:Rd_D})). } \label{fig:Rd} \end{figure} Thus, we combine items H9 and I1, and H11 and I2, to obtain two values for the D($1S \rightarrow 2S$) transition frequency, D11 and D12. This avoids double-counting, because item H10 has been used above to determine the proton radius. For simplicity, we add the uncertainties linearly, although a more rigorous evaluation of the combined uncertainty, including all correlations, would certainly yield a smaller uncertainty of the D($1S \rightarrow 2S$) transition frequency. \begin{table}[t] \caption{\label{tab:Rd} Deuteron charge radii from deuterium. The value labeled ``Eq.\,(\ref{eq:Rd_D})'' is our result. 
It is the average of the individual values above it, taking into account the known correlations between the $2S \rightarrow n\ell$ measurements. The next two values use items D9 and D10, which have not been measured using atomic hydrogen as a transfer oscillator (see text).} \begin{center} \begin{tabular}{c c c c} \hline \hline \# & Transitions & \ensuremath{r_{\mathrm d}}\ [fm]\\ \hline D12 + D4 & $1S \rightarrow 2S$ + $2S \rightarrow \;8S_{1/2}$ & $2.1451 \pm 0.0068$ \\ D12 + D5 & $1S \rightarrow 2S$ + $2S \rightarrow \;8D_{3/2}$ & $2.1435 \pm 0.0064$ \\ D12 + D6 & $1S \rightarrow 2S$ + $2S \rightarrow \;8D_{5/2}$ & $2.1465 \pm 0.0059$ \\ D12 + D7 & $1S \rightarrow 2S$ + $2S \rightarrow 12D_{3/2}$ & $2.1385 \pm 0.0081$ \\ D12 + D8 & $1S \rightarrow 2S$ + $2S \rightarrow 12D_{5/2}$ & $2.1358 \pm 0.0064$ \\ \hline \multicolumn{2}{c}{\raisebox{0mm}[3.5mm][1.5mm]{} D12 + all D($2S\rightarrow n\ell$)} & $2.1415 \pm 0.0045$ & ~ Eq.\,(\ref{eq:Rd_D})\\ \hline \multicolumn{2}{c}{\raisebox{0mm}[3mm][1.5mm]{} D9\,+ all D($2S\rightarrow n\ell$)} & $2.1414 \pm 0.0045$ \\[0.2ex] \multicolumn{2}{c}{ D10 + all D($2S\rightarrow n\ell$)} & $2.1411 \pm 0.0045$ \\ \hline \hline \end{tabular} \end{center} \end{table} If one wishes, one could also use the values D9 or D10, which can be found in the PhD thesis of Th.~Udem~\cite{Udem:PhD}. These values are ``absolute'' frequency measurements without the use of hydrogen as a transfer oscillator. All four values D9-D12 are sufficiently accurate to proceed with the determination of \ensuremath{r_{\mathrm d}}\ values from combining $1S \rightarrow 2S$ and $2S \rightarrow n\ell$ for $n$=8,12, see Tab.~\ref{tab:Rd}. The trivial weighted average of the values in Tab.~\ref{tab:Rd} is \ensuremath{r_{\mathrm d}}\ = \valerrRdTrivial\,fm, {\it i.e.} $5.3\sigma$ larger than the \ensuremath{\mu \mathrm{d} }\ value. Again, however, correlations~\footnote{See Ref.~\cite{Mohr:2012:CODATA10}, Tab.\,XIX} between the $2S \rightarrow n\ell$ measurements increase the uncertainty. Taking into account these correlations we obtain \begin{equation} \label{eq:Rd_D} \ensuremath{r_{\mathrm d}}\mathrm{(D~spectroscopy)} = \ensuremath{2.1415(45)}\,\mathrm{fm}. \end{equation} This value is \ensuremath{3.5 \sigma}\ larger than the new value from muonic deuterium. For comparison, using, instead of D12, the $1S\rightarrow2S$ measurements D9 or D10 yields $\ensuremath{r_{\mathrm d}} = 2.1414(45)$\,fm and $\ensuremath{r_{\mathrm d}} = 2.1411(45)$\,fm, respectively, including the correlations. The agreement of these three values shows that it is not important which of the available D($1S\rightarrow2S$) measurements is chosen (see Tab.~\ref{tab:Rd}). Moreover, this ``D spectroscopy'' value is in excellent agreement with the global CODATA value from Adjustment~3, \ensuremath{r_{\mathrm d}}\ = $2.1424 \pm 0.0021$\,fm. This is a strong indication of the internal consistency of the CODATA LSA. This consistency is also evident in the agreement of the Rydberg constants from H spectroscopy on the one hand, and D spectroscopy on the other. This is further discussed in Sec.~\ref{sec:Ryd}. We emphasize again that this \ensuremath{3.5 \sigma}\ discrepancy between muonic and electronic deuterium spectroscopy measurements is as independent as possible of any measurement used in the proton charge radius determination. Correlations may exist because of unidentified systematic shifts in any of the electronic or muonic measurements, or missing or wrong theory contributions in electronic or muonic atoms.
In the absence of any indication for such an unknown correlation, the new \ensuremath{\mu \mathrm{d} }\ measurement~\cite{Pohl:2016:Science_mud} constitutes an independent discrepancy. \section{The deuteron structure radius} In the preceding sections we were concerned with hidden or implicit correlations between the (CODATA) values of \ensuremath{r_{\mathrm p}}\ and \ensuremath{r_{\mathrm d}}, which originate from the nature of performing a least-squares adjustment using all available input data in H and D. Above, we have provided values of \ensuremath{r_{\mathrm p}}\ and \ensuremath{r_{\mathrm d}}\ that are ``as uncorrelated as possible'' by separating the analyses of the H and D data. Physics, on the other hand, is also the source of an explicit correlation between \ensuremath{r_{\mathrm p}}\ and \ensuremath{r_{\mathrm d}}, simply because the deuteron contains a proton. The deuteron charge radius is related to the proton charge radius by~\cite{Buchmann:1996:Rdeut,Jentschura:2011:IsoShift} \begin{equation} \label{eq:Dstruct} \ensuremath{r_{\mathrm d}}^2 = \ensuremath{r_{\mathrm{struct.}}}^2 + \ensuremath{r_{\mathrm p}}^2 + \rn^2 + \dfrac{3 \hbar^2}{4 m_p^2 c^2}, \end{equation} where $\ensuremath{r_{\mathrm{struct.}}} = 1.97507(78)$\,fm~\cite{Jentschura:2011:IsoShift} is the deuteron structure radius, {\it i.e.} the proton-neutron separation, $\langle \rn^2 \rangle = -0.114(3)$\,fm$^2$~\cite{Kopecki:1995:Rneutron,Kopecki:1997:Rneutron} is the neutron mean square charge radius, and the rightmost term is the Darwin-Foldy correction of 0.0331\,fm$^2$ due to the zitterbewegung of the proton, see~\cite{Jentschura:2011:IsoShift} and also the Appendix of Ref.~\cite{Krauth:2016:Annals}. The 0.8\% smaller deuteron charge radius from muonic deuterium in Eq.~(\ref{eq:Rd_mud}) is very consistent with the 4\% smaller proton radius from muonic hydrogen, Eq.~(\ref{eq:Rp_mup}), inserted in Eq.~(\ref{eq:Dstruct}). This is the reason why the new \ensuremath{r_{\mathrm d}}(\ensuremath{\mu \mathrm{d} }) is understood to confirm the smaller proton radius from muonic hydrogen~\cite{Pohl:2016:Science_mud}. \section{The Rydberg constant} \label{sec:Ryd} The correlation coefficient between the proton radius \ensuremath{r_{\mathrm p}}\ and the Rydberg constant \ensuremath{R_{\infty}}\ is as large as 0.989 in the CODATA LSA. Therefore, a change of \ensuremath{r_{\mathrm p}}\ by $x \sigma$ will normally result in a change of \ensuremath{R_{\infty}}\ by almost the same $x \sigma$. This can be understood by considering Eq.\,(\ref{eq:Etot}) and the accuracy of the measurements in H listed in Tab.~\ref{tab:H}: the accuracy of each of the $2S \rightarrow n\ell$ transitions ($n=8, 12$), which determine the accuracy of \ensuremath{R_{\infty}}, is about 1 part in $10^{11}$. As a consequence, the uncertainty of the Rydberg constant in CODATA-2010 is about 6 parts in $10^{12}$. The $1S \rightarrow 2S$ transition, on the other hand, has been measured with an uncertainty of 4 parts in $10^{15}$, {\it i.e.} a factor of 1000 more accurately. A look at Eq.\,(\ref{eq:1s2s}) reveals the correlation: the left side is measured with an accuracy of 0.010\,kHz. The 1st term on the right side is known only to $\sim 10$\,kHz (3/4 of the 17\,kHz uncertainty of the CODATA value of $c\ensuremath{R_{\infty}}$)~\cite{Mohr:2012:CODATA10}. \begin{figure}[t!]
\includegraphics[width=1.0\columnwidth]{fig3.eps} \caption{Rydberg constant from CODATA-2010 \cite{Mohr:2012:CODATA10}, Eq.~(\ref{eq:Ryd_CODATA10}), and CODATA-2014 \cite{Mohr:2016:CODATA14}, from spectroscopy of regular H and D, Eqs.~(\ref{eq:Ryd_H}) and (\ref{eq:Ryd_D}), respectively, and from combining the muonic charge radii of the proton and the deuteron with the measurements of the $1S \rightarrow 2S$ transition in H and D, Eqs.~(\ref{eq:Ryd_mup}) and (\ref{eq:Ryd_mud}), respectively. Also shown is the result from spectroscopy of high-lying ($n = 27...30$) circular Rydberg states of atomic hydrogen~\cite{deVries:PhD}, Eq.~(\ref{eq:Ryd_Ryd}).} \label{fig:Ryd} \end{figure} Adopting the muonic values of \ensuremath{r_{\mathrm p}}\ and \ensuremath{r_{\mathrm d}}\ in \ensuremath{E_{NS}}\ will thus shift the central value of \ensuremath{E_{NS}}, which must immediately be compensated by a corresponding change in \ensuremath{R_{\infty}}\ because of the 1000-fold more precisely determined left side of Eq.\,(\ref{eq:1s2s}). At the same time, the smaller uncertainty of the muonic charge radii will yield more accurate values of \ensuremath{R_{\infty}}, when combined with the electronic $1S \rightarrow 2S$ transitions: \begin{multline} \label{eq:Ryd_mup} \ensuremath{R_{\infty}}\ [ H(1S \rightarrow 2S); \ensuremath{r_{\mathrm p}}(\ensuremath{\mu \mathrm{p} }) ] = \\ 3.\,289\,841\,960\,249\,(3) \times 10^{12}\;\mathrm{kHz/c} \end{multline} from electronic and muonic hydrogen~\cite{Antognini:2013:Science_mup2}, and \begin{multline} \label{eq:Ryd_mud} \ensuremath{R_{\infty}}\ [ D(1S \rightarrow 2S); \ensuremath{r_{\mathrm d}}(\ensuremath{\mu \mathrm{d} }) ] = \\ 3.\,289\,841\,960\,234\,(6) \times 10^{12}\;\mathrm{kHz/c} \end{multline} from electronic and muonic deuterium~\cite{Pohl:2016:Science_mud}. The value in Eq.\,(\ref{eq:Ryd_mup}) is in good agreement with the one from CODATA Adjustment~11, \begin{multline} \ensuremath{R_{\infty}}~\mathrm{(Adjustment~11)} = \\ 3.\,289\,841\,960\,255\,(4) \times 10^{12}\;\mathrm{kHz/c} \end{multline} see Tab.~XXXVIII of Ref.~\cite{Mohr:2012:CODATA10}, which includes \ensuremath{r_{\mathrm p}}\ from muonic hydrogen in the global LSA. Because of its tiny uncertainty, the muonic \ensuremath{r_{\mathrm p}}\ value dominates Adjustment~11, yielding $\ensuremath{r_{\mathrm p}}~\mathrm{(Adjustment~11)} = 0.84225(65)$\,fm, and this change of \ensuremath{r_{\mathrm p}}\ is accompanied by a change of \ensuremath{R_{\infty}}, as described above. For reference, the CODATA recommended value of the Rydberg constant is \begin{multline} \label{eq:Ryd_CODATA10} \ensuremath{R_{\infty}}~\mathrm{(CODATA-2010)} = \\ 3.\,289\,841\,960\,364(17) \times 10^{12}\;\mathrm{kHz/c} \end{multline} which is $7 \sigma$ larger. For completeness, the value of the Rydberg constant from hydrogen data alone, taken from CODATA Adjustment~8, is \begin{multline} \label{eq:Ryd_H} \ensuremath{R_{\infty}}~\mathrm{(H~spectroscopy)} = \\ 3.\,289\,841\,960\,361(28) \times 10^{12}\;\mathrm{kHz/c.} \end{multline} The one we deduce from deuterium data alone, including the $1S\rightarrow 2S$ transition, is \begin{multline} \label{eq:Ryd_D} \ensuremath{R_{\infty}}~\mathrm{(D~spectroscopy)} = \\ 3.\,289\,841\,960\,357(35) \times 10^{12}\;\mathrm{kHz/c.} \end{multline} A measurement of transition frequencies between high-lying circular Rydberg states of atomic H, with $n = 27...30$, which are insensitive to the proton charge radius, yielded~\cite{deVries:PhD}
\begin{multline} \label{eq:Ryd_Ryd} \ensuremath{R_{\infty}}~\mathrm{(Rydberg-H)} = \\ 3.\,289\,841\,960\,306(69) \times 10^{12}\;\mathrm{kHz/c.} \end{multline} This result is unfortunately not accurate enough to discriminate between the muonic and the ``purely electronic'' values, see Fig.~\ref{fig:Ryd}. New insight into the ``proton radius puzzle'' is expected from several new atomic physics measurements: the $2S \rightarrow 4P$ transitions in H~\cite{Beyer:2013:AdP_2S4P,Beyer:2013:Conf:ICOLS} will yield an independent value of the Rydberg constant. A new measurement of the classical Lamb shift in H~\cite{Vutha:2012:H2S2P} will yield a proton charge radius that is independent of the exact value of \ensuremath{R_{\infty}}, see Sec.~\ref{sec:H2S2P}. Improved measurements of the $1S\rightarrow 3S$ transition in H are underway at MPQ and LKB~\cite{Galtier:2015:JPCRD,Fleurbaey:2016:CPEM}. Measurements of the 1S-2S transition in H-like He$^+$ ions~\cite{Herrmann:2009:He1S2S,Kandula:2010:XUV_comb_metrology,Morgenweg:2014:RamseyComb} will, when combined with a new value of the alpha particle charge radius from muonic helium spectroscopy~\cite{Antognini:2011:Conf:PSAS2010}, yield a Rydberg constant or test higher-order QED contributions. The Rydberg constant can also be determined from high-precision spectroscopy of molecules and molecular ions of hydrogen isotopes~\cite{Liu:2009:H2Diss,Schiller:2014:MolClock,Dickenson:2013:H2vib,Biesheuvel:2015:HDplus,Karr:2016:HmolIon}, combined with improved calculations~\cite{Pachucki:2016:H2Schrodinger}. One-electron ions in circular Rydberg states~\cite{Jentschura:Mohr:fundamental:constants:2008,Tan:NIST:2011} will also yield a Rydberg constant free from nuclear radius effects. As a final remark, we may attribute the small $2.2\sigma$ difference between the two Rydberg values obtained using the muonic radii (Eq.\,(\ref{eq:Ryd_mup}) and Eq.\,(\ref{eq:Ryd_mud})) to the deuteron polarizability contribution~\cite{Pachucki:2011:PRL106_193007,Friar:2013:PRC88_034004,Hernandez:2014:PLB736_344,Carlson:2014:PRA89_022504,Pachucki:2015:PRA91_040503}, summarized in Ref.~\cite{Pohl:2016:Science_mud}. \section{Conclusions} The most accurate value of the deuteron rms charge radius from laser spectroscopy of regular (electronic) deuterium only is \ensuremath{r_{\mathrm d}} = \ensuremath{2.1415(45)}\,fm. It is obtained using a value for the $1S \rightarrow 2S$ transition in atomic deuterium which can be inferred from published data~\cite{Parthey:2010:PRL_IsoShift,Parthey:2011:PRL_H1S2S}, or found in a PhD thesis~\cite{Udem:PhD}. Our value is in excellent agreement with the CODATA value~\cite{Mohr:2012:CODATA10}, and only a factor of two less accurate. In contrast to the CODATA value, the deuteron radius above is as uncorrelated as possible with the measurements that determine the proton rms charge radius \ensuremath{r_{\mathrm p}}. The CODATA Adjustment~10, which is also independent of \ensuremath{r_{\mathrm p}}, is five times less accurate than the value above, because of a more conservative treatment of the deuterium $1S \rightarrow 2S$ measurements. \paragraph*{Note added:} After the submission of this manuscript, the updated CODATA-2014 paper was published~\cite{Mohr:2016:CODATA14}. The numbering of the partial Adjustments remained the same. What was Tab.~XXXVIII in CODATA-10 is now Tab.~XXIX on page~54 of CODATA-14.
The partial Adjustments~8 (H spectroscopy) and 10 (D spectroscopy) yield values identical to those of CODATA-10, our Eqs.~(\ref{eq:Rp_H_CODATA}) and (\ref{eq:Rd_D_CODATA}), respectively. The only new input is our item H12, the 2013 measurement of the $1S \rightarrow 2S$ transition from MPQ. The change of the recommended values of \ensuremath{r_{\mathrm p}}, \ensuremath{r_{\mathrm d}}, and \ensuremath{R_{\infty}}\ (from the full Adjustment~3) stems exclusively from a reassessment of the uncertainty of the electron scattering data~\cite{ArringtonSick:2015:JPCRD}. None of the conclusions of the present manuscript are changed. \section{Acknowledgments} We thank Ingo Sick for insightful comments, Peter J.\ Mohr, Barry N.\ Taylor and David B.\ Newell from NIST for providing us with more accurate results of the CODATA LSA, which were very valuable for cross-checking our code, and Kjeld~S.E.\ Eikema for useful remarks. R.P.\ acknowledges support from the European Research Council through ERC StG.\ 279765, A.A.\ from the SNF, Projects 200020\_159755 and 200021\_165854, and T.W.H.\ from the Max Planck Society and the Max Planck Foundation. H.F.\ thanks the LABEX Cluster of Excellence FIRST-TF (ANR-10-LABX-48-01), within the Program ``Investissements d'Avenir'' operated by the French National Research Agency (ANR), for financial support.
\section{Introduction} \IEEEPARstart{A}{s} one of the core research branches in social network analysis, influence maximization (\textbf{IM}), proposed by Kempe, Kleinberg and Tardos \cite{kempe2003maximizing}, studies the problem of launching information cascades such that the resulting influence is maximized. Inspired by influence maximization, various topics in online social networks have been investigated, such as misinformation control, online friending, and viral marketing \cite{li2018influence,zhang2014recent,sun2011survey,aslay2018influence}. The classic influence maximization problem adopts two settings: (a) \textit{non-adaptive strategy}: the seed users are all computed before the diffusion process, and (b) \textit{unlimited time steps}: the influence is counted without a time limit. These classic settings are elegant, but they are incapable of modeling many real applications. First, in order to optimize the seed selection, one often prefers to deploy seed nodes adaptively, which is formulated as the Adaptive Influence Maximization (\textbf{AIM}) problem \cite{golovin2010adaptive,tong2017adaptive,vaswani2016adaptive,han2018efficient,chen2019adaptivity}. Adaptive seeding enables us to identify the best seed node(s) conditioned on the observed diffusion results, and it can therefore result in a higher influence under the budget constraint. For example, a higher profit would be expected if our online advertisements were posted adapted to customer feedback \cite{kazienko2007adrosa}. Second, time-critical applications are commonly seen in online social networks, and in such cases, only the influence resulting before the deadline matters. For instance, launching a positive cascade to counter misinformation is expected to take effect expeditiously \cite{farajtabar2017fake,tong2018misinformation}. For such scenarios, we would like to maximize the influence under a time constraint, which is termed the Time-constrained Influence Maximization (\textbf{TIM}) problem \cite{liu2012time,liu2013influence,han2017time,xie2015dynadiffuse,chen2012time}. In order to support time-critical tasks through adaptive seeding methods, we propose in this paper the Time-constrained Adaptive Influence Maximization (\textbf{TAIM}) problem. \textbf{Problem Formulation.} An adaptive seeding process alternates between seeding steps and diffusing steps: in each seeding step, we select a set of seed users to trigger more influence, and the decision is made adapted to the observed diffusion results. An adaptive seeding policy essentially consists of two modules, \textit{seeding pattern} and \textit{node selection rule}, where the seeding pattern specifies the size of the seed set while the node selection rule determines which nodes to select. Given two integers $K, T \in \mathbb{Z}^+$, the TAIM problem asks for a policy to deploy $K$ seed nodes in an adaptive manner such that the total influence resulting in the first $T$ diffusion rounds is maximized. In this paper, we study the TAIM problem, aiming at both theoretical analysis and practical solution design. \textbf{A Key Trade-off.} The TAIM problem is a natural combination of AIM and TIM, both of which have been extensively studied and have been shown to admit the ($1-1/e$)-approximation subject to controllable sampling errors.
For the AIM problem, the optimal seeding policy follows the full-adoption feedback model \cite{golovin2010adaptive} in which (a) before making the next seeding decision, we always keep observing the diffusion process until it terminates, and (b) we always use one unit of budget whenever a seeding action is performed. Such a seeding pattern is intuitively optimal as it maximally obtains observations before selecting the next seed node. However, when a time constraint is enforced, one can see that the full-adoption feedback model is no longer optimal, because waiting for more diffusion rounds, though it brings more feedback, incurs the loss of future diffusion rounds. In short, waiting is not ``free'' in TAIM. Consequently, the critical issue is to determine the balance between (a) waiting for more feedback and (b) performing a seeding action at an early stage. We observe that resolving such a trade-off optimally is theoretically hard, which makes TAIM different from the existing problems. Through appropriate methods designed in this paper for achieving a reasonable trade-off, we have been able to design seeding policies that solve the TAIM problem effectively. \textbf{Contributions.} This paper presents a systematic analysis of the TAIM problem, and the contributions are briefly summarized as follows: \begin{itemize} \item We perceive the adaptive seeding process as a procedure alternating between seeding steps and diffusing steps, based on which we propose the Time-constrained Adaptive Influence Maximization (TAIM) problem, which asks for a policy that computes a seed set in each seeding step, subject to a budget constraint, such that the influence within a time limit is maximized. \item Theoretically, we prove that the TAIM problem exhibits a unique hardness that is different from that of existing problems such as IM or AIM. Furthermore, we provide the first result on the adaptive gap for the time-constrained case and prove a lower bound of $\frac{e^2-2}{e-1}$. \item Towards solving TAIM effectively, we design a sampling method to enable an efficient greedy node selection rule for the time-constrained case, based on which we propose a collection of seeding policies, from basic to advanced, including a static seeding policy, a greedy seeding policy, and several foresight seeding policies. We experimentally evaluate the proposed policies through simulations on real-world graphs, in terms of effectiveness, efficiency and robustness. As a minor part, we contribute a new Reddit dataset for studying information diffusion. Our source code and data will be made available online. \end{itemize} \textbf{Roadmap.} The related work is introduced in Sec. \ref{sec: relate}. We provide the preliminaries in Sec. \ref{sec: pre}, including the diffusion model and the formulation of the TAIM problem. The theoretical analysis is given in Sec. \ref{sec: theory}, and the designed seeding policies are then described in Sec. \ref{sec: strategy}. In Sec. \ref{sec: exp}, we present the experimental study. Sec. \ref{sec: con} concludes. \section{Related Work} \label{sec: relate} Influence maximization and its variants have been extensively studied. In this section, we survey the works most germane to ours. \textbf{IM, TIM, and AIM.} The IM problem \cite{kempe2003maximizing} investigates the strategy to launch an information cascade in social networks, with the goal of maximizing the resulting influence.
It has been proved that the IM problem is monotone and submodular under the classic diffusion models (e.g., the independent cascade model and the linear threshold model), and therefore a $(1-1/e)$-approximation can be readily obtained by the greedy strategy due to the celebrated results of Nemhauser \textit{et al.} \cite{nemhauser1978analysis}. However, the objective function (i.e., the influence) of the IM problem is \#$P$-hard to compute, so efficient heuristics were designed using various methods (e.g., \cite{chen2009efficient,chen2010scalable}). Borgs \textit{et al.} \cite{borgs2014maximizing} later invented the reverse sampling technique, resulting in an efficient algorithm without sacrificing the performance guarantees. The reverse sampling technique was further improved by a series of works \cite{tang2014influence,tang2015influence,nguyen2016stop}, and currently the IM problem can be solved efficiently on even very large networks. In order to support time-critical applications, researchers have further considered the IM problem with a time constraint \cite{liu2012time, liu2013influence,han2017time,xie2015dynadiffuse,chen2012time,dinh2013cost}. The TIM problem remains monotone and submodular, so the greedy algorithm still gives an effective approximation solution. Because the diffusion process is stochastic, it is possible to adopt an adaptive seeding policy where we compute the seed nodes after observing the diffusion feedback, which was first considered by Golovin \textit{et al.} \cite{golovin2011adaptive} using the technique of adaptive submodularity. Under a budget constraint, a non-adaptive seeding policy computes a single subset of nodes of a specified size, while an adaptive seeding policy computes a seed set in each seeding step according to the observations, subject to the budget constraint. Without a time constraint, it has been shown that the full-adoption feedback model \cite{golovin2010adaptive} combined with the greedy node selection rule gives a $(1-1/e)$-approximation for the AIM problem under the budget constraint \cite{tong2017adaptive}. However, with a time constraint, the problem is not adaptive submodular \cite{vaswani2016adaptive,golovin2011adaptive}, and thus the existing techniques cannot be applied. Vaswani \textit{et al.} \cite{vaswani2016adaptive} (Arxiv.org) considered both the AIM and TAIM problems\footnote{They refer to the time constraint as a bounded time horizon.} and suggested using sequential model based optimization (SMBO) \cite{hutter2011sequential} to deal with the general case. While their ideas are intuitive, they did not provide a detailed implementation, and their experiments for TAIM focused on examining the average adaptivity gain rather than the efficacy in solving TAIM. Other works have studied the AIM problem under specific feedback models \cite{sun2018multi,salha2018adaptive,seeman2013adaptive, horel2015scalable,tong2019adaptive} or considered the trade-off between delay and efficiency \cite{tang2020influence, stein2017heuristic} based on partial feedback, but their settings still allow the diffusion process to complete and therefore cannot meet a hard time constraint. \textbf{Adaptive Gap.} Another important concept is the adaptive gap, which measures the ratio between the influence of the optimal adaptive policy and that of the optimal non-adaptive policy. Despite the recent results in \cite{peng2019adaptive,chen2019adaptivity,fujii2019beyond}, the adaptive gap under most of the feedback models is still open.
In this paper, we provide the first result on the lower bound for the time-constrained case. \section{Preliminaries} \label{sec: pre} This section provides the preliminaries to the rest of the paper. We first describe the considered diffusion model and then present the formulation of the TAIM problem. \subsection{Diffusion Model} We consider the classic independent cascade model in which a social network is given by a directed graph $G=(V, E)$, and associated with each edge $e \in E$ there is a propagation probability $p_e \in (0,1]$. We use $n \in \neqZ$ and $m \in \neqZ$ to denote the number of nodes and edges, respectively. A cascade is triggered by the seed users, who become \textit{active} once selected. When a user $u$ becomes active, they have one chance to activate each inactive neighbor $v$ with a success probability of $p_{(u,v)}$.\footnote{For the purpose of analysis, we assume that $p_{(u,v)}$ is positive, as we can remove the edges with zero propagation probability.} We assume that the diffusion process goes round by round. \begin{definition}[\textbf{Round}] In each diffusion \textit{round}, the users who were activated either by their neighbors in the last diffusion round or by being selected as new seed users attempt to activate their inactive neighbors. \end{definition} The diffusion process can thus be viewed as a stochastic BFS process. In this paper, the diffusion time is measured by the number of rounds. \subsection{Seeding Process and Policy} A seeding process consists of seeding steps and diffusing steps. In a seeding step, we can observe the activation results of the previous diffusion rounds, which can be equivalently represented by the states of the edges. In particular, we say the edge $(u,v)$ is \textit{live} if $u$ can successfully activate $v$; otherwise, we say it is \textit{dead}. Therefore, an intermediate stage of a diffusion process is fully determined by the current active nodes and the sets of live and dead edges. We introduce the concept of status for such a purpose. \begin{definition}[\textbf{Status}] A \textit{status} $U=(\dA(U), \dL(U), \dD(U)) \in 2^V\times 2^E \times 2^E$ is given by a three-tuple with $\dL(U) \cap \dD(U) = \emptyset$, where $\dA(U)$ denotes the set of currently active nodes, and $\dL(U)$ and $\dD(U)$ are, respectively, the sets of live edges and dead edges. An edge has not been observed yet iff it is not in $\dL(U) \cup \dD(U)$. We use $\Phi$ to denote the status space. \end{definition} We employ the next concept to describe the scenario where the diffusion process terminates spontaneously. \begin{definition}[\textbf{Final Status}] A status $U=(\dA(U), \dL(U), \dD(U))$ is \textit{final} if all the edges from $\dA(U)$ to $V\setminus \dA(U)$ are dead. That is, $\{(u,v)\in E: u \in \dA(U),~ v \in V\setminus \dA(U)\} \subseteq \dD(U)$, which implies that no node can be further activated unless new seed nodes are selected. \end{definition} \begin{definition}[\textbf{State}] We use $(U,t,k) \in \Phi \times \neqZ \times \neqZ$ to denote a \textit{state} of the seeding process, implying that the current status is $U$, the number of remaining diffusion rounds is $t$, and the remaining budget is $k$. \end{definition} \begin{definition}[\textbf{Policy}] Given a state $(U,t,k)$ in a seeding step, a \textit{policy} $\pi$ computes a seed set $\pi(U,t,k) \subseteq V$ to be selected, with $|\pi(U,t,k)|\leq k$. A policy $\pi$ is \textit{non-adaptive} if it has $|\pi(U,t,k)|=k$ for each state $(U,t,k)$.
\end{definition} \begin{definition}[\textbf{Seeding Process}] For a diffusion model with a time constraint $T\in \neqZ$ and a budget $K \in \neqZ$, the \textit{seeding process} under a policy $\pi$ is described as follows: \begin{itemize} \item Set $U=(\emptyset,\emptyset,\emptyset), k=K$ and $t=T$. Iterate the following process $T$ times. \begin{itemize} \item \textbf{Seeding Step.} Compute and launch the seed set $\pi(U, t, k)$. Set $k=k-|\pi(U, t, k)|$. \item \textbf{Diffusing Step.} Observe the diffusion process for one round. Set $t=t-1$ and update $U$ as the observed status. \end{itemize} \item Output the influence (i.e., the number of active nodes). \end{itemize} \end{definition} We use $f(\pi, K, T)$ to denote the expected influence associated with a policy $\pi$. For a non-adaptive policy that selects a particular set $S \subseteq V$ as the seed nodes, we denote the resulting influence as $f(S, K, T)$. \begin{remark} It is possible that no seed node is selected in a certain seeding step (i.e., $\pi(U, t, k)=\emptyset$), which means that the policy waits for more diffusion rounds. \end{remark} \subsection{Problem Formulation} The problem considered in this paper is stated below. \begin{problem}[\textbf{TAIM Problem}] \label{problem: t-aim} Given a diffusion model, a time constraint $T\in \neqZ$, and a budget $K \in \neqZ$, design a policy $\pi$ such that $f(\pi, K, T)$ is maximized. \end{problem} \begin{remark}[\textbf{Special Cases}] \label{remark: relate} The TAIM problem is closely related to several problems that have been considered in the existing literature. \begin{itemize} \item When $T=\infty$, it is exactly the unconstrained AIM problem, which admits a $(1-1/e)$-approximation achieved by combining the full-adoption feedback model and the greedy node selection rule. \item When we have $p_e=1$ for each edge $e \in E$ (i.e., the deterministic model) or we have $T=1$, the optimal policy must be non-adaptive, and therefore the greedy node selection rule provides a $(1-1/e)$-approximation. \end{itemize} A policy for the TAIM problem, if not optimal, should ideally provide the best possible solution when applied to these special cases. \end{remark} \section{Theoretical Analysis} \label{sec: theory} \subsection{Hardness} The complexity of a seeding policy is measured by the computability of $\pi(U,t,k)$. When solving the TAIM problem, we essentially consider two questions: (a) how many seed nodes to select and (b) which nodes to select. We refer to the solution of the first question as a \textit{seeding pattern}. Not very surprisingly, both questions are computationally hard, and thus efficient optimal solutions are unlikely to exist. First, although TAIM does not literally generalize IM, we can use a reduction similar to that in \cite{kempe2003maximizing}. In particular, when the underlying graph is directed and bipartite with $p_e=1$ for each edge $e$, the TAIM problem generalizes the maximum coverage problem in a straightforward manner. Second, there exists an instance of TAIM whose hardness results from designing optimal seeding patterns rather than from selecting seed nodes, which indicates that TAIM is combinatorially different from IM and AIM. \begin{lemma} \label{lemma: hardness_2} Even if the optimal seed nodes can be computed in polynomial time, the optimal policy for TAIM is not polynomial-time computable unless the decision version of s-t connectedness can be solved in polynomial time. \end{lemma} \begin{proof} See Appendix.
\end{proof} \subsection{Adaptive Gap} For an instance $\I$ of TAIM, let $A_{opt}^{\I}\define \max_{\pi}f(\pi, K, T)$ be the maximum influence achieved by any policy, and \[N_{opt}^{\I}\define \max_{S \subseteq V, |S|\leq K}f(S, K, T)\] be the maximum influence achieved by a non-adaptive policy. The adaptive gap is defined as $\sup \frac{A_{opt}^{\I}}{N_{opt}^{\I}}$ over the instances of TAIM, which measures the worst-case performance of the optimal non-adaptive policy compared to the optimal adaptive policy. Since non-adaptive policies can often be computed efficiently, one can adopt a non-adaptive one if the adaptive gap is small. In this paper, we provide a lower bound on the adaptive gap. \begin{lemma} \label{lemma: lower_bound} The adaptive gap for the TAIM problem is at least $\frac{e^2-2}{e-1}$. \end{lemma} \begin{proof} The proof is inspired by the analysis in \cite{chen2019adaptivity} for the unconstrained case, while our problem involves a time constraint. For a certain integer $N \in \neqZ$, let us consider a directed line with $2N+1$ nodes and edges $(v_i,v_{i+1})$, $i \in \{1,...,2N\}$, where each edge has the probability $p=1-1/N$. Suppose that the time constraint is $2N$ and $K=2$. For each node $v_i$ with $i\leq N$, the expected influence resulting from $v_i$ within $t$ diffusion rounds is \[S(t)=\sum_{i=1}^{t}p^{i-1}\cdot (1-p)\cdot i+p^{t} \cdot (t+1)=\frac{1-p^t}{1-p}+p^t.\] Let us first consider an adaptive policy that (a) selects $v_1$ as the first seed node, (b) waits until the diffusion process terminates or the time limit is reached, and (c) selects the inactive node that is closest to $v_1$ as the second seed node. The resulting influence would be \begin{align*} \Delta_{ad} &\define\sum_{i=1}^{2N-1}p^{i-1}(1-p)(i+S(2N-i))+p^{2N-1}(2N+1)\\ &=2N(1-(1-\frac{1}{N})^{2N-1})-(2N-1)(1-\frac{1}{N})^{2N}\\ &\hspace{4cm}+2(1-\frac{1}{N})^{2N-1}. \end{align*} For the non-adaptive case, the probability that a node $v_i$ can be activated is determined by its distance from the closest seed node. Therefore, the optimal non-adaptive policy should select $v_1$ and $v_{N}$ as the seed nodes, which follows from the fact that, supposing another two nodes $v_{j_1}$ and $v_{j_2}$ were selected with $j_1 < j_2$, we could obtain a higher influence by first replacing $v_{j_1}$ by $v_1$ and then replacing $v_{j_2}$ by the middle node $v_N$.\footnote{Note that selecting $v_1$ and $v_{N+1}$ is another optimal non-adaptive policy.} Therefore, the optimal influence under a non-adaptive policy will be \begin{align*} \Delta_{non-ad} &\define \sum_{i=0}^{N-1}p^i+\sum_{i=0}^{N}p^i\\ &=2N(1-(1-1/N)^N)+(1-1/N)^N. \end{align*} Now we have that $\frac{A_{opt}^{\I}}{N_{opt}^{\I}}$ is no less than $\frac{\Delta_{ad}}{\Delta_{non-ad}}$, of which the limit is $\frac{e^2-2}{e-1} \approx 3.14$. \end{proof} \section{Seeding Policy Design} \label{sec: strategy} In this section, we present several seeding policies. We first discuss the node selection rule and then design seeding policies based on different seeding patterns. Given the hardness results in Sec. \ref{sec: theory}, we aim at solutions that are (a) approximation solutions for the special cases in Remark \ref{remark: relate} and (b) effective heuristics for the general cases. \subsection{Node Selection Rule} \label{subsec: nodeselection} When a seed set of a given size $k \in \mathbb{Z}^+$ is to be selected, the greedy rule is the most common method used in the existing studies.
Supposing that $U$ is the current status, we use $g(U,S,t)$ to denote the expected number of active nodes after $t \in \neqZ$ rounds following $U$ with $S \subseteq V$ being selected as the seed set. The locally optimal solution would be \begin{equation} \label{eq: greedy} \argmax_{|S|=k} g(U,S,t). \end{equation} Computing the above equation is $NP$-hard, but the greedy rule, as shown in Alg. \ref{alg: greedy}, gives a $(1-1/e)$-approximation. Due to the $\#P$-hardness in computing $g(U,S,t)$, such a greedy rule is often implemented through stochastic optimization, in which the key ingredients are (a) an unbiased estimator of the objective function $g(U,S,t)$ and (b) an estimate of a lower bound of $\max_{|S|=k} g(U,S,t)$. In the rest of this part, we will show how to obtain these ingredients. An unbiased estimator of $g(U,S,t)$ can be obtained from the samples generated in the following sampling process. \begin{definition}[\textbf{RR-set}] \label{def: rr-set} Given the status $U$ and the remaining diffusion rounds $t$, an RR-set $\R$ is generated by: \begin{itemize} \item Step 1: select a node $v$ from $V$ uniformly at random. \item Step 2: simulate the diffusion process from $v$ in a \textit{reverse} direction in the manner of BFS. The simulation process terminates if (a) any node in $\dA(U)$ is encountered, (b) no node can be further reached, or (c) $t$ rounds of BFS have been executed. \item Step 3: if the simulation terminates under case (a) in Step 2, return $\R=V$ as the output. Otherwise, let $\R \subseteq V$ be the set of the nodes traversed during the simulation process, and return $\R$. \end{itemize} \end{definition}
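In code form, one draw of this sampling process may be sketched as follows (a minimal sketch in Python; \texttt{nodes}, \texttt{in\_neighbors}, \texttt{p}, and \texttt{A} are assumed encodings of $V$, the reverse adjacency lists, the probabilities $p_e$, and $\dA(U)$):
\begin{verbatim}
import random

def sample_rr_set(nodes, in_neighbors, p, A, t):
    v = random.choice(nodes)              # Step 1: uniform random root
    if v in A:                            # root already active: case (a)
        return set(nodes)
    R, frontier = {v}, {v}
    for _ in range(t):                    # Step 2: at most t reverse rounds
        nxt = set()
        for w in frontier:
            for u in in_neighbors[w]:
                if u not in R and random.random() <= p[(u, w)]:
                    if u in A:            # case (a): active node reached
                        return set(nodes) # Step 3: return R = V
                    R.add(u)
                    nxt.add(u)
        if not nxt:                       # case (b): nothing new reachable
            break
        frontier = nxt
    return R                              # case (b) or (c)
\end{verbatim}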
For each $V_1, V_2 \subseteq V$, we use \[ \I(V_1 \cap V_2)\define \begin{cases} 1 & \hspace{0mm} \hspace{-0.5mm} \text{if $V_1 \cap V_2 \neq \emptyset$} \\ 0 & \hspace{0mm} \hspace{-0.5mm} \text{else } \end{cases} \] to indicate whether their intersection is nonempty, and for each $S \subseteq V$, let us consider the random variable $\I(S \cap \R)$. It turns out that $n\cdot \I(S\cap \R)$ is an unbiased estimator of $g(U,S,t)$. \begin{lemma} \label{lemma: rrset} For each status $U$, $t \in \mathbb{Z}^+$, and $S \subseteq V$, we have \begin{equation} \label{eq: lemma_rreset} n \cdot \E [\I(S\cap \R)]=g(U,S,t) \end{equation} where the expectation is taken over all possible $\R$, or equivalently over all the states of the edges. \end{lemma} \begin{proof} For each $v \in V$, let $g_v(U,S,t)$ be the probability that $v$ is active after $t$ rounds following $U$ when $S$ is selected as the seed set, and thus we have $g(U,S,t)=\sum_{v \in V}g_v(U,S,t)$ due to the linearity of expectation. On the other hand, let $\R_v$ be the random RR-set when $v$ is selected in Step 1 of Def. \ref{def: rr-set}. By the sampling process, we have $\E [\I(S \cap \R)]=\sum_{v \in V}\frac{\E [\I(S \cap \R_v)]}{n}$. Therefore, it suffices to prove $\E [\I(S \cap \R_v)]=g_v(U,S,t)$. Since the expectation is taken over all the possible states of the edges, it further suffices to show that $\I(S \cap \R_v)=g_v(U,S,t)$ in each possible outcome of the edge states.\footnote{There are in total $2^{|E|}$ possible outcomes.} Now suppose that the states of the edges are fixed, and consequently $\I(S \cap \R_v)$ and $g_v(U,S,t)$ are binary-valued. According to the diffusion process of the IC model, $g_v(U,S,t)=1$ iff there is a path of at most $t$ live edges from $S \cup \dA(U)$ to $v$. According to the sampling process in Def. \ref{def: rr-set}, $\I(S \cap \R_v)=1$ iff $\dA(U) \cap \R_v \neq \emptyset$ (case (a) in Step 2) or $S \cap \R_v \neq \emptyset$ (cases (b) and (c) in Step 2), that is, $(\dA(U)\cup S) \cap \R_v \neq \emptyset$. Since $\R_v$ contains exactly the nodes that have a path of at most $t$ live edges to $v$, we have $g_v(U,S,t)=1$ iff $\I(S \cap \R_v)=1$. \end{proof} \begin{algorithm}[t] \caption{SOF Policy}\label{alg: t-rr} \begin{algorithmic}[1] \State \textbf{Input:} the current status $U$, the remaining diffusion rounds $t$, and the remaining budget $k$; \State Step 1: uniformly select a random node that is not active in $U$; \State $k^*= \argmax_{i} \overline{\beta}(U,k,t, i)$; \State Return $\gr(U,k^*,t)$; \end{algorithmic} \end{algorithm} Suppose that a collection $\R_l$ of $l$ RR-sets is randomly generated; Lemma \ref{lemma: rrset} immediately implies that when $l$ is sufficiently large, the $S$ that maximizes \begin{equation} \label{eq: estimate} \frac{\sum_{\R \in \R_l} n \cdot \I(S\cap \R)}{l} \end{equation} should also (approximately) maximize $g(U,S,t)$. We can easily see that Eq. (\ref{eq: estimate}) is submodular with respect to $S$ for each $\R_l$, and therefore the greedy algorithm gives a $(1-1/e)$-approximation to $\argmax_{|S|=k}\frac{\sum_{\R \in \R_l} n \cdot \I(S\cap \R)}{l}$. Using concentration inequalities (e.g., the Chernoff bound) to bound the estimation accuracy requires a lower bound of $\max_{|S|=k} g(U,S,t)$, which is, however, not known to us in advance. Fortunately, because $g(U,S,t)$ is bounded within $[|\dA(U)|,n]$ and its estimate Eq. (\ref{eq: estimate}) can be approximated within a factor of $1-1/e$, we can utilize the adaptive sampling method designed in \cite{tang2014influence} to search for a lower bound that is within a constant factor of $\max_{|S|=k} g(U,S,t)$. Following the standard reverse sampling analysis, we have the following result. \begin{lemma}[\hspace{1sp}\cite{tang2014influence}] \label{lemma: greedy} There exists a greedy algorithm that can produce a $(1-1/e-\epsilon)$-approximation to Eq. (\ref{eq: greedy}) with probability at least $1-n^{-l}$ within time $O(\frac{(k+l)(m+n)\log n}{\epsilon^2})$, for each $\epsilon \in (0,1)$ and $l\geq 1$. \end{lemma} \begin{proof} The proof follows directly from the analysis in Sec. 3 of \cite{tang2014influence}, with the only difference being that the new RR-set defined in Def. \ref{def: rr-set} is used. \end{proof} We use $\gr(U,t,k)=\{v_1,...,v_k\}$ to denote the output of the greedy algorithm in Lemma \ref{lemma: greedy}, and we assume that the indexes follow the order in which the nodes were selected (e.g., $v_1$ was the first node added to the solution). In this paper, we will utilize this algorithm for node selection, enabling us to focus primarily on designing seeding patterns. One plausible reason for doing so is that, for the special cases discussed in Remark \ref{remark: relate}, the greedy node selection rule is the best polynomial-time method in terms of the approximation ratio.
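For illustration, the node-selection core of $\gr$ reduces to greedy maximum coverage over the sampled RR-sets; a minimal sketch (Python; \texttt{rr\_sets} is a pre-sampled collection, and the sample-size search of Lemma \ref{lemma: greedy} is omitted):
\begin{verbatim}
def gr_from_samples(rr_sets, k):
    # Greedy max coverage over pre-sampled RR-sets; returns up to k nodes.
    covered = [False] * len(rr_sets)
    S = []
    for _ in range(k):
        gain = {}
        for i, R in enumerate(rr_sets):
            if not covered[i]:
                for v in R:
                    gain[v] = gain.get(v, 0) + 1
        if not gain:
            break                           # every sample already covered
        v_star = max(gain, key=gain.get)    # largest marginal coverage
        S.append(v_star)
        for i, R in enumerate(rr_sets):
            if v_star in R:
                covered[i] = True
    return S
\end{verbatim}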
\end{definition} \begin{definition}[\textbf{Future Status} $\U_t(U)$] \label{def: utu} For a status $U$, we use $\U_t(U)$ to denote the set of all possible statuses after $t \in \neqZ$ diffusion rounds following $U$ without selecting any new seed node, and we use $\Dis({\U}_t(U))$ to denote the associated distribution over $\U_t(U)$. \end{definition} \begin{algorithm}[t] \caption{Greedy Node Selection Rule $(U, t, k)$} \label{alg: greedy} \begin{algorithmic}[1] \State \textbf{Input:} $(U, t, k)$; \State \textbf{Output:} a user set $S$; \State $S \leftarrow \emptyset$; \For {${i=1:k}$} \State $v^* \leftarrow \argmax_v g(U, S+v, t)$; \State $S \leftarrow S+v^*$; \EndFor \State Return $S$; \end{algorithmic} \end{algorithm} \subsection{Seeding Policy} If we always use the greedy rule for node selection, the remaining problem is to decide the seeding pattern: given a status, how many seed nodes should be selected in each seeding step? As mentioned earlier, in each seeding step we would like to select as few seed nodes as possible to maximally utilize the merits of adaptive seeding, while the time constraint pushes us to select as many seed nodes as possible early so that they have more future diffusion rounds. With such intuitions in mind, we propose five seeding policies. \subsubsection{{Basic Seeding Policy}} In general, a seeding pattern can be either specified before the seeding process or dynamically constructed during the seeding process. For the TAIM problem, an immediate solution is to utilize a \textit{static seeding pattern}, which is given by a sequence $(a_1,\ldots,a_T)$ of non-negative integers where $a_i \in \neqZ$ is the size of the seed set in the $i$-th seeding step. Under the budget constraint, we have $\sum a_i \leq K$. Combined with the greedy node selection rule, we have a basic policy: \begin{policy} [\textbf{Static Policy} $(a_1,...,a_T)$] \label{policy: static} In the $i$-th seeding step with state $(U, t, k)$, select $\gr(U,t,a_i)$ as the seed set. \end{policy} A static policy is relatively simple to implement, but one drawback is that we have little guidance for finding the sequence $(a_1,\ldots,a_T)$ that maximizes the influence. Note that the search space is exponential, and the hardness of finding the optimal static pattern can additionally be seen from the proof of Lemma \ref{lemma: hardness_2}. As a preliminary solution, we propose the \textit{$k$-filter uniform pattern}, where the seeding actions are uniformly distributed over the diffusion period and each seeding step selects the same number of seed nodes. Formally, given the filter size $k \in \neqZ$, we aim to achieve the pattern where $a_{1}=a_{1+k}=...=a_{1+(d-1)k}=\lfloor K/d \rfloor$ with $d=\lfloor T/k \rfloor$. For example, under $T=10$, $K=50$ and $k=2$, we have the pattern $(10,0,10,0,10,0,10,0,10,0)$. In case there is remaining budget due to the rounding, we use it right before the last diffusion round. The static policy cannot fully utilize the merits of the adaptive policy, as the seeding patterns are fixed. Next, we present the \textit{greedy seeding pattern}, in which we keep observing the diffusion process until a final status is reached or no diffusion round is left. More specifically, when there is more than one diffusion round left, we select one seed node if the current status is final, and otherwise wait for more results; when there is only one diffusion round left, we immediately use up the remaining budget.
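As an implementation aside, the RR-set sampler of Def. \ref{def: rr-set} and the greedy maximization of Eq. (\ref{eq: estimate}) underlying $\gr()$ can be sketched in a few lines of Python. This is an illustrative sketch under simplifying assumptions, not the implementation evaluated in Sec. \ref{sec: exp}: the graph is stored as reverse adjacency lists of (in-neighbour, probability) pairs, and the status is reduced to its active set.
\begin{verbatim}
import random

def sample_rr_set(n, in_nbrs, active, t):
    # One RR-set: reverse BFS of depth at most t from a uniformly
    # random root; returns V (here range(n)) once an active node is hit.
    root = random.randrange(n)
    if root in active:
        return set(range(n))                 # case (a) at the root
    visited, frontier = {root}, [root]
    for _ in range(t):                       # at most t rounds of BFS
        nxt = []
        for v in frontier:
            for u, p in in_nbrs[v]:
                if u not in visited and random.random() < p:
                    if u in active:
                        return set(range(n)) # case (a): active node hit
                    visited.add(u)
                    nxt.append(u)
        if not nxt:
            break                            # case (b): nothing new reached
        frontier = nxt
    return visited                           # case (b) or (c)

def greedy_over_rr_sets(n, rr_sets, k):
    # Greedy maximization of the empirical objective in Eq. (estimate):
    # repeatedly pick the node covering the most uncovered RR-sets.
    covered, seeds = [False] * len(rr_sets), []
    for _ in range(k):
        gain = [0] * n
        for j, rr in enumerate(rr_sets):
            if not covered[j]:
                for v in rr:
                    gain[v] += 1
        best = max(range(n), key=lambda v: gain[v])
        seeds.append(best)
        for j, rr in enumerate(rr_sets):
            covered[j] = covered[j] or best in rr
    return seeds
\end{verbatim}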
\begin{policy}[\textbf{Greedy Policy}] \label{policy: greedy} In each seeding step with state $(U, t, k)$: \begin{itemize} \item If $t=1$, select $\gr(U,t,k)$ as the seed set. \item If $t>1$ and $U$ is final, select $\gr(U,t,1)$ as the seed set. \item If $t>1$ and $U$ is not final, no seeding action is performed. \end{itemize} \end{policy} Compared to the static seeding pattern, the greedy policy does not require any input prior to the seeding process, and it always attempts to observe as much of the diffusion as possible. However, because it does not consider the time constraint until the last diffusion round, it might be the case that the majority of the seed nodes are used right before the last diffusion round and thus have very limited time to spread widely, which can be observed later in the experiments in Sec. \ref{sec: exp}. \subsubsection{{One-step Foresight Seeding Policy}} Further generalizing the greedy seeding pattern, we propose the \textit{one-step foresight seeding pattern}, in which we estimate the profit of selecting a particular number of seed nodes in each step. Given the state $(U,t,k)$, we consider the scenario in which $k_1\leq k$ nodes are first selected in the current step and $k_2=k-k_1$ nodes will be selected after one round of diffusion. When $k_1$ nodes are selected in the current seeding step by the greedy node selection rule, the optimal profit, denoted as $\beta(U,k, t,k_1)$, will be \begin{align} \label{eq: beta} &\beta(U,k, t,k_1) \nonumber\\ &\define \sum_{U^* \in {\U}_1(U_{t, k_1})}\Pr[U^*|U_{t, k_1}] \cdot \max_{|S|=k-k_1} g(U^*,S,t-1)\nonumber \\ &=\E_{U^* \sim \Dis({\U}_1(U_{t, k_1}))} \Big[\max_{|S|=k-k_1} g(U^*,S,t-1)\Big] \end{align} where $\Pr[U^*|U_{t, k_1}]$ is the probability that $U^*$ happens conditioned on $U_{t, k_1}$, and $U_{t, k_1}$ and ${\U}_1(U_{t, k_1})$ are given by Defs. \ref{def: utk} and \ref{def: utu}. Consequently, the optimal $k_1$ under this pattern is \begin{equation} \label{eq: time_size} K(U,t, k)\define\argmax_{k_1\leq k} \beta(U,k, t,k_1), \end{equation} based on which we design the One-step Foresight (\textbf{OF}) policy: \begin{policy}[\textbf{One-step Foresight (OF) Policy}] In each seeding step with state $(U, t, k)$, select $\gr(U,t,k^*)$ as the seed set where $k^*=K(U,t,k)$. \end{policy} While the OF policy only looks one step ahead, it indeed provides the best possible approximation guarantee for all the special cases mentioned in Remark \ref{remark: relate}. \begin{lemma} \label{lemma: OF} The OF policy provides a $(1-1/e)$-approximation if either (a) $T=1$, (b) $T=\infty$ or (c) $p_e=1$ for each edge $e \in E$. \end{lemma} \begin{proof} First, it produces a $(1-1/e)$-approximation when $T=1$. This is because (a) $K(U,t, k)=k$ when $t=1$ and (b) the nodes are selected by the greedy node selection rule. Second, it is a $(1-1/e)$-approximation when $T=\infty$. First, at a certain seeding step when the status $U$ is not final, we always have $\beta(U,k,\infty,i)\leq \beta(U,k,\infty,0)$ for each $i\leq k$, which follows from the fact that waiting for the diffusion to complete is always optimal if there is no time constraint. Therefore, the OF policy will always wait for the diffusion to reach a final status before selecting the next seed set.
Second, at a certain seeding step when the status $U$ is final, we have $\beta(U,k,\infty,i)\leq \beta(U,k,\infty,1)$ for each $i \leq k$, and therefore we have $K(U,\infty,k)=1$, which means we always wait for the diffusion to complete and always select one seed node whenever a seed set should be selected. As a result, OF follows the full-adoption feedback model and selects each seed node in a greedy manner, which yields a $(1-1/e)$-approximation. Finally, under the deterministic independent cascade model, any adaptive policy is in fact non-adaptive as the diffusion process has no uncertainty, so we must have $\beta(U,k,t,k)\geq \beta(U,k,t,i)$ for each $i \leq k$. Combined with the greedy node selection rule, OF again gives a $(1-1/e)$-approximation. \end{proof} One can see that neither the static policy (Policy \ref{policy: static}) nor the greedy policy (Policy \ref{policy: greedy}) can always guarantee the same for those special cases. While the OF policy can take the time constraint into account in each seeding step, it is not practically feasible in terms of computability, because $\beta(U,k, t,k_1)$ is hard to compute due to the fact that (a) computing $\max_{|S|=k-k_1} g(U^*,S,t-1)$ is $NP$-hard and (b) there can be an exponential number of terms in ${\U}_1(U_{t, k_1})$. To deal with the first issue, we use the $(1-1/e)$-approximation as an estimate of $\max_{|S|=k-k_1} g(U^*,S,t-1)$, and therefore the quantity we are interested in is \begin{align*} &\overline{\beta}(U,k,t,k_1) \define\\ &\E_{U^* \sim \Dis({\U}_1(U_{t, k_1}))} \Big[g(U^*,\gr(U^*,t-1, k-k_1),t-1)\Big]. \end{align*} For the second issue, we can estimate $\overline{\beta}(U,k,t,k_1)$ through sampling. In particular, given the input $(U,t, k)$ and $k_1$, the estimate can be obtained from samples generated by the following procedure: \begin{enumerate} \item obtain $U_{t,k_1}$ using $\gr(U,t,k_1)$, \item sample a status $U^*$ following $\Dis({\U}_1(U_{t, k_1}))$ by simulating the diffusion process for one round, and \item compute $g(U^*,\gr(U^*,t-1,k-k_1),t-1)$ as an estimate of $\overline{\beta}(U, k,t,k_1)$. \end{enumerate} Supposing $L$ simulations are used for each estimation, the resulting policy is shown in Alg. \ref{alg: OF policy}, denoted as the Sampling-enhanced One-step Foresight (\textbf{SOF}) policy. \begin{policy}[\textbf{Sampling-enhanced One-step Foresight (SOF) Policy} $(L \in \neqZ)$] \label{policy: SFOM} In each seeding step with state $(U, t, k)$, select the seed set obtained by running Alg. \ref{alg: OF policy} with input $(U, t, k)$ and $L$. \end{policy} \begin{algorithm}[t] \caption{SOF Policy}\label{alg: OF policy} \begin{algorithmic}[1] \State \textbf{Input:} $(U, t, k)$ and $L$ \For {$i=0:k$} \State Estimate $\overline{\beta}(U,k,t, i)$ by $L$ simulations; \EndFor \State $k^*= \argmax_{i} \overline{\beta}(U,k,t, i)$; \State Return $\gr(U,t,k^*)$; \end{algorithmic} \end{algorithm} Compared to the static policy and the greedy policy, the SOF policy is more sophisticated, but it incurs a higher complexity because we have to invoke the greedy node selection rule in each sampling. In our experiments, we have observed that the SOF policy does not scale to large datasets. \subsubsection{Fast Foresight Seeding Policy} To support high-volume datasets, we finally present a fast foresight seeding policy. Our design is driven by considering a fine-grained trade-off between seeding and observing.
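For contrast with the lightweight design developed next, the sampling loop of SOF (the three enumerated items above) can be sketched as follows. This is a sketch of ours: \texttt{gr}, \texttt{simulate\_one\_round}, \texttt{spread}, and \texttt{with\_seeds} are hypothetical stand-ins for the greedy rule, one round of IC simulation, an estimate of $g$, and seed insertion. Its cost, one greedy call per sample and per candidate $k_1$, is precisely what FF is designed to avoid.
\begin{verbatim}
from statistics import mean

def estimate_beta_bar(U, t, k, k1, L, gr, simulate_one_round, spread):
    # Monte-Carlo estimate of beta_bar(U,k,t,k1) following items 1-3.
    U_tk1 = U.with_seeds(gr(U, t, k1))      # item 1: status U_{t,k1}
    samples = []
    for _ in range(L):
        U_star = simulate_one_round(U_tk1)  # item 2: U* ~ Dis(U_1(U_{t,k1}))
        S = gr(U_star, t - 1, k - k1)       # item 3: greedily spend the rest
        samples.append(spread(U_star, S, t - 1))
    return mean(samples)

def sof_step(U, t, k, L, gr, simulate_one_round, spread):
    # One SOF seeding step: pick the best split k1 and seed that many now.
    best_k1 = max(range(k + 1),
                  key=lambda k1: estimate_beta_bar(
                      U, t, k, k1, L, gr, simulate_one_round, spread))
    return gr(U, t, best_k1)
\end{verbatim}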
Given the state $(U,t,k)$ with $\gr(U,t,k)=\{v_1,...,v_k\}$ being the local greedy solution and $S_i\define \{v_1,...,v_i\}$, our method considers the nodes one by one from $v_1$ to $v_k$ and determines whether each of them should be selected in the current step. This framework is formally described in Alg. \ref{alg: framework}, where $\ifadd(U,S_i,v_{i+1},t) \in \{\true,\false\}$ is a module for determining whether $v_{i+1}$ should be selected given that $S_i$ has already been selected. Once a node $v_{i+1}$ is rejected by $\ifadd()$, the process terminates and takes $S_i$ as the seed set of the current seeding step. Such a framework enables us to concentrate on analyzing the marginal effect of adding an individual node, i.e., on designing the $\ifadd()$ module. To this end, we propose a novel and efficient $\ifadd()$ designed through two quantities, $\Ma(U,S_i,v_{i+1}, t) \in [0,1]$ and $\Mt(U,S_i,v_{i+1}, t) \in [0,1]$, which reveal the gain or loss of selecting or not selecting $v_{i+1}$. In particular, when $\Ma(U,S_i,v_{i+1}, t)$ or $\Mt(U,S_i,v_{i+1}, t)$ approaches $1$, it is a strong indicator for selecting $v_{i+1}$ as another seed node. When $\Ma(U,S_i,v_{i+1}, t)$ or $\Mt(U,S_i,v_{i+1}, t)$ is close to $0$, it is a strong indicator for not selecting $v_{i+1}$. In what follows, we present these two metrics in detail. \begin{algorithm}[t] \caption{Fast Seeding Policy Framework}\label{alg: framework} \begin{algorithmic}[1] \State \textbf{Input:} $(U, t, k)$ \State $\{v_1,...,v_k\} \leftarrow \gr(U,t,k)$; \State $S_i \define \{v_1,...,v_i\}$ for each $i$; \For {$i=0:k-1$} \If {$\ifadd(U, S_{i}, v_{i+1}, t) == \false$} \State Return $S_i$; \EndIf \EndFor \State Return $S_k$; \end{algorithmic} \end{algorithm} The first metric $\Ma()$ is designed by measuring the correlation between the influences triggered by different seed nodes. Due to the diminishing marginal return of influence, the marginal contribution of a node will decrease after other nodes have been selected, which is an essential reason that a higher influence can be achieved by adaptive seeding. However, if the influences resulting from different seed nodes were independent, observing the feedback would not be useful, and consequently an adaptive policy would only incur the loss of diffusion rounds. For example, if $v_1$ and $v_2$ are in different connected components of the graph, observing the influence resulting from $v_1$ does not alter the capability of $v_2$ in terms of influencing other users. From this perspective, the next lemma gives a sufficient condition for testing such independence. \begin{lemma} \label{lemma: sufficient} For a seeding step with state $(U,t,k)$ and a seed set $S^* \subseteq V$, observing the cascade resulting from any subset $S_1 \subseteq S^*$ will not alter the marginal contribution of $S^* \setminus S_1$, provided that \begin{equation} \label{eq: sufficient} g(U, S^*, t)-g(U,\emptyset, t)=\sum_{v\in S^*}(g(U, v, t)-g(U, \emptyset, t)). \end{equation} \end{lemma} \begin{proof} Let us use $\E_{G_l \sim\G_U}[g_u(U, S^*, t|G_l)]$ to denote the probability that a node $u$ can be activated after $t$ rounds following $U$ when $S^*$ is selected, where $G_l$ is a sampled live-edge graph conditioned on $U$ and $g_u(U, S^*, t|G_l) \in \{0,1\}$ is an indicator function denoting whether there exists a path in $G_l$ from $\dA(U)\cup S^*$ to $u$ with no more than $t$ live edges. Due to the linearity of expectation, we have $g(U, S^*, t)=\sum_u \E_{G_l\sim\G_U}[g_u(U, S^*, t|G_l)]$.
Because $g_u(U, S^*, t|G_l)$ is submodular, $g_u(U, S^*, t|G_l)-g_u(U, \emptyset, t|G_l)$ is no larger than $\sum_{v\in S^*}(g_u(U, v, t|G_l)-g_u(U, \emptyset, t|G_l))$. Combined with Eq. (\ref{eq: sufficient}), this implies that \begin{align} \label{eq: equal} g_u(U, S^*, t|G_l)&-g_u(U, \emptyset, t|G_l) \nonumber \\ &= \sum_{v\in S^*}(g_u(U, v, t|G_l)-g_u(U, \emptyset, t|G_l)) \end{align} holds for each $G_l$ and $u$. Therefore, whenever the LHS of Eq. (\ref{eq: equal}) is equal to 1, there must be exactly one term on the RHS of Eq. (\ref{eq: equal}) that is equal to 1. Taking $G_l$ as the live graph where all the edges in $E \setminus (\dD(U)\cup \dL(U))$ are live, this implies that no inactive node can be connected to two nodes in $S^*$ through paths in $E \setminus (\dD(U)\cup \dL(U))$ with at most $t$ edges. As a result, the cascades triggered by the nodes in $S^*$ are totally independent, and therefore observing the cascade resulting from any subset $S_1 \subseteq S^*$ does not change the marginal gain of any node in $S^* \setminus S_1$. \end{proof} The intuition behind Lemma \ref{lemma: sufficient} is that the overlap of the contributions can be evaluated by testing submodularity. For our problem, to determine whether $v_{i+1}$ should be selected given $(U_{t,i},t,k)$, Lemma \ref{lemma: sufficient} suggests that we can do so by comparing $g(U, S_i+v_{i+1}, t)-g(U,S_i, t)$ with $g(U,v_{i+1}, t)-g(U,\emptyset, t)$. As a result, we can leverage the quantity \[\Ma(U,S_i, v_{i+1}, t)\define \dfrac{g(U, S_i+v_{i+1}, t)-g(U,S_i, t)}{g(U,v_{i+1}, t)-g(U,\emptyset, t)},\] which measures the benefit of including $v_{i+1}$ in the seed set of the current seeding step. When we have $\Ma(U,S_i, v_{i+1}, t)=1$, there is a good reason to include $v_{i+1}$ in the current step, because observing more diffusion results will not decrease the marginal gain of $v_{i+1}$. When $\Ma(U,S_i, v_{i+1}, t)$ is close to $0$, most of the standalone contribution of $v_{i+1}$ overlaps with that of $S_i$, and therefore we may wait for more observations to find better seed nodes. The second metric $\Mt()$ is designed by looking into the one-step loss of the influence resulting from $v_{i+1}$. Given $(U,t,k)$, if the total influence resulting from $v_{i+1}$ can always complete within $t-1$ diffusion rounds, we have a good reason not to select it in the current seeding step, in that there is no loss in waiting for another diffusion round. Formally, consider the status $U_{t,i}=(\dA(U)\cup S_i,\dL(U), \dD(U))$ with $\U_t(U_{t,i})$ being its future statuses after $t$ diffusion rounds. When the influence from $v_{i+1}$ is allowed to spread for $t^* \in \neqZ$ diffusion rounds, the marginal gain of selecting $v_{i+1}$ would be { \begin{align*} &h(U,S_{i},v_{i+1},t,t^*)\\ &\define \E_{U^* \sim \Dis(\U_t(U_{t,i}))}\Big[g(U^*,v_{i+1},t^*)-|\dA(U^*)|\Big]. \end{align*} } The above formula can be read as follows: we first simulate the influence from $S_i$ for $t$ rounds to obtain a status $U^* \in \U_t(U_{t,i})$, and conditioning on $U^*$ we then simulate the influence from $v_{i+1}$ for $t^*$ rounds. Viewing the diffusion process from such a multi-step perspective shares the same insight as \cite{mossel2007submodularity}. Now let us utilize the quantity \begin{align*} &\Mt(U,S_{i},v_{i+1},t)\\ &\define \frac{h(U,S_{i},v_{i+1},t,t)-h(U,S_{i},v_{i+1},t,t-1)}{h(U,S_{i},v_{i+1},t,t)} \end{align*} to measure the loss incurred by seeding $v_{i+1}$ with a delay of one round.
If $\Mt(U,S_{i},v_{i+1},t)=0$, there would be no such loss, and we thus should not select $v_{i+1}$ in the current seeding step. On the other hand, if $\Mt(U,S_i,v_{i+1},t)$ is close to $1$ (i.e., $h(U,S_{i},v_{i+1},t,t-1)$ is small), it means that $v_{i+1}$ can hardly trigger any influence if seeded one round later, and therefore we prefer to select it immediately. For example, we always have $\Mt(U,S_i,v_{i+1},t)=1$ when there is only one remaining diffusion round (i.e., $t=1$). With the metrics $\Ma()$ and $\Mt()$, let us consider the quantity \begin{align*} &\Ind(U,S_i,v_{i+1},t)\\ &\define\alpha(t)\cdot \Ma(U,S_i, v_{i+1}, t)+\big(1-\alpha(t)\big)\cdot\Mt(U,S_i, v_{i+1}, t), \end{align*} where $\alpha(t) \define 1-1/t \in [0,1]$. According to the above design, given the state $(U,t,k)$, we would select $v_{i+1}$ when $\Ind(U,S_i,v_{i+1},t)$ approaches $1$, and not select $v_{i+1}$ when it approaches $0$. We can see that $\alpha(t)$ is a balancing parameter: it becomes smaller when fewer diffusion rounds remain, which increases the weight of $\Mt()$ when the time constraint is severe. As a result, the module $\ifadd()$ can be constructed as \begin{equation} \label{eq: ind} \ifadd(U,S_i, v_{i+1}, t) \define \begin{cases} \true & \text{if $\Ind(U,S_i, v_{i+1}, t) \geq \theta$} \\ \false & \text{otherwise} \end{cases} \end{equation} where $\theta \in [0,1]$ is a controllable threshold that can reflect certain prior knowledge or preference. For instance, adopting a small $\theta$ results in more seed nodes being selected in the first several diffusion rounds. The effect of $\theta$ will be further investigated through experiments. We denote the resulting policy as the Fast Foresight (\textbf{FF}) policy: \begin{policy}[\textbf{Fast Foresight (FF) Policy $(\theta \in (0,1))$}] \label{policy: FOFM} In each seeding step with state $(U,t,k)$, the seed set is computed by Alg. \ref{alg: framework} with $\ifadd()$ given by Eq. (\ref{eq: ind}). \end{policy} In addition to the subroutine $\gr()$, implementing the FF policy requires computing $\Ma()$ and $\Mt()$, which can again be estimated by sampling. Although both the SOF policy and the FF policy involve a sampling procedure, it can be shown both theoretically and experimentally that FF is more efficient than SOF. \begin{table}[t] \caption{Time Complexity.} \centering \label{table: time} \begin{tabular}{@{}c|l@{}} \toprule Static Policy & $O\big((K+l) (m+n)\cdot \log n/\epsilon^2\big)$ \\ Greedy Policy & $O\big((K+l)(m+n)\cdot \log n/\epsilon^2\big)$ \\ SOF Policy & $O\big((K^2+Kl) (m+n)\cdot \log n \cdot L/\epsilon^2\big)$ \\ FF Policy & $O\big((K+l) (m+n)\cdot \log n/\epsilon^2+LK^2\cdot (m+n)\big)$ \\ \bottomrule \end{tabular} \end{table} \begin{lemma} \label{lemma: complexity} Suppose that we use $L \in \neqZ$ samples for each estimation in SOF and FF, and the parameters used in $\gr()$ are $\epsilon$ and $l$. The complexity of the policies is given in Table \ref{table: time}. \end{lemma} \begin{proof} Recall that for an adaptive seeding policy, we measure its complexity by the running time of computing one seed set. The static policy invokes the greedy node selection rule in each step, so its time complexity is $O((K+l) (m+n)\cdot \log n/\epsilon^2)$. The greedy policy first examines whether the status is final, which can be done in $O(m+n)$, and therefore the total time complexity is again $O((K+l)(m+n)\cdot \log n/\epsilon^2)$.
In the SOF policy, each estimation in line 3 of Alg. \ref{alg: OF policy} consists of two parts: the simulation, running in $O(m+n)$, and the greedy node selection rule, running in $O((K+l)(m+n)\cdot \log n /\epsilon^2)$. Therefore, Alg. \ref{alg: OF policy} runs in $O((K^2+K l) (m+n)\cdot \log n\cdot L/\epsilon^2)$. In the FF policy, line 2 runs in $O((K+l)(m+n)\cdot \log n/\epsilon^2)$, and each estimation of $\Ma()$ and $\Mt()$ can be done in $O(KL(m+n))$. Therefore, the complexity of FF is $O((K+l)(m+n)\cdot \log n/\epsilon^2+K^2L\cdot(m+n))$. \end{proof} \begin{table}[t] \caption{Dataset.} \centering \label{table: data} \begin{tabular}{@{} c|l l l l @{}} \toprule & Power & Wiki & Reddit & Youtube \\ \midrule Nodes &2,500& 8,300 & 124,960 & 1,157,900 \\ Edges &26,449& 103,689 & 624,349 & 5,975,248\\ \bottomrule \end{tabular} \end{table} \section{Experiments} \label{sec: exp} In this section, we report the results of the experiments conducted to study the practical performance of the proposed policies, aiming to examine (a) their ability to achieve a large influence, (b) the running time, and (c) the robustness of the seeding pattern. \textbf{Datasets.} We adopt four datasets: (a) \textit{Power}: a synthetic power-law graph \cite{cowendigg}, (b) \textit{Wiki}: a Wikipedia voting network \cite{leskovec2010signed}, (c) \textit{Reddit}: a graph inferred from the Reddit social networking platform, and (d) \textit{Youtube}: a social network extracted from Youtube.com \cite{yang2015defining}. Reddit is a new dataset created in this paper. We collected 1,000 threads, each of which has at least 1,500 replies, from the News subreddit in August 2019 and constructed a graph over the users who have participated in at least two threads. A brief summary of the datasets is given in Table \ref{table: data}. \begin{table*}[t] \centering \caption{Influence Resulting from Different Policies.
} \label{table: influence} \begin{tabular}{@{}c|c c c c c c|c|c c c|c|c@{}} \toprule & \multicolumn{6}{c|}{FF}& \multirow{1}{*}{SOF} & \multicolumn{3}{c|}{Static} & \multirow{1}{*}{Greedy} & \multirow{1}{*}{\makecell{NonAd}}\\ \midrule & $\theta=0.01$ & $\theta=0.2$ & $\theta=0.26$ & $\theta=0.4$ & $\theta=0.6$ & $\theta=0.7$ & & $k=1$ & $k=2$ &$k=5$ & &\\ Power &$963.2$ &$971.9$ & $\textbf{981.1}$ &$878.7$ &$729.3$ &$653.4$ & $\textbf{979}$ & $975.7$& $\textbf{994.7}$ & $\textbf{994.4}$ & $507.4$ & $932.3$ \\ \hline & $\theta=0.4$ & $\theta=0.5$ & $\theta=0.6$ & $\theta=0.7$ & $\theta=0.8$ & $\theta=0.9$& & $k=1$ & $k=2$ &$k=5$ & &\\ Wiki &$681.0$ &$686.4$ &$\textbf{694.8}$ &$665.2$ &$538.6$ &$239.8$ & $\textbf{694.5}$ & $687.6$ & $687.2$ & $688.6$ & $493.1$ & $669.1$ \\ \hline & $\theta=0.2$ & $\theta=0.3$ & $\theta=0.4$ & $\theta=0.5$ & $\theta=0.6$ & $\theta=0.7$ & & $k=1$ & $k=2$ &$k=5$ & &\\ Reddit &$749.8$ &$761.0$ &$759.7$ &$\textbf{772.2}$ &$684.6$ & $471.3$ & $\textbf{810.5}$ & $657.9$ & $674.7$ & $748.3$ & $181.6$ & $734.4$ \\ \midrule \multirow{2}{*}{\makecell{Youtube\\$(10, 20, 0.01)$}} & $\theta=1$E-$4$& $\theta=1$E-$3$& $\theta=5$E-$3$ & $\theta=0.01$& $\theta=0.1$ & $\theta=0.2$ & & $k=1$ &$k=2$ &$k=5$ & &\\ &$\textbf{7806}$ & $7791$& $7762$& $7762$& $7633$ & $7504$ & n/a &$7647$ & $7747$& $\textbf{7873}$& $6800$& $7774$\\ \hline \multirow{2}{*}{\makecell{Youtube\\$(10, 10, 0.005)$}} & $\theta=0.2$ & $\theta=0.3$& $\theta=0.4$ & $\theta=0.5$& $\theta=0.6$ & $\theta=0.7$ & & $k=1$ & $k=2$ &$k=5$ & &\\ & $898.3$& $912.5$ & $911.2$ &$\textbf{955.2}$ &$\textbf{962.6}$ & $868.8$ & n/a &$913.1$ &$903.1$ & $922.3$& $694.5$& $927.4$\\ \hline \multirow{2}{*}{\makecell{Youtube\\$(10, 10, 0.001)$}} & $\theta=0.1$& $\theta=0.2$ & $\theta=0.5$& $\theta=0.6$ & $\theta=0.7$ & $\theta=0.8$ & & $k=1$ & $k=2$ & $k=5$ & &\\ & $53.6$ & $54.1$ & $53.9$ &$54.3$ & $56.0$ & $54.2$ & n/a &$\textbf{82.3}$ & $\textbf{78.9}$ & $63.5$ & $71.8$& $54.6$\\ \bottomrule \multicolumn{13}{r}{*Competitive results are in bold. } \end{tabular} \end{table*} \textbf{Settings.} For Power and Wiki, we consider the weighted-cascade setting where $p_{(u,v)}=1/\InDeg(v)$ with $\InDeg(v)$ being the in-degree of $v$. For Reddit, the probability $p_{(u,v)}$ on an edge is proportional to the contact frequency between $u$ and $v$, measured by the number of threads in which both $u$ and $v$ have participated. On Power, Wiki and Reddit, we consider the setting $(T,K)=(10,50)$. We adopt a short period in order to examine the ability of each policy to deal with a severe time constraint. For Youtube, each edge $e \in E$ has the same propagation probability $p_e$, and we adopt three settings: $(T,K,p_e)=(10,20,0.01)$, $(10,10,0.005)$, and $(10,10,0.001)$. We will shortly see how these settings help us investigate the properties of the FF policy. The implementation of the greedy node selection rule follows the vanilla reverse sampling framework \cite{tang2015influence}. To ensure that each policy could run in a reasonable time, $500$ (resp., $50$) samples were used for each estimation in FF (resp., SOF). We tested the $k$-filter uniform pattern for the static policy with $k\in \{1,2,5\}$. We also tested the greedy policy and the non-adaptive policy, denoted as Greedy and NonAd, which can be taken as two baselines. For each dataset and each seeding policy, we repeated the experiment 300 times and report the average result.
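For reference, the weighted-cascade assignment used for Power and Wiki takes only a few lines; the following is a sketch with our own variable names, not our experiment code.
\begin{verbatim}
from collections import Counter

def weighted_cascade_probs(edges):
    # p_(u,v) = 1 / indeg(v); `edges` is a list of directed (u, v) pairs.
    indeg = Counter(v for _, v in edges)
    return {(u, v): 1.0 / indeg[v] for u, v in edges}
\end{verbatim}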
Our experiments were done on an Intel Xeon Platinum 8000 Series processor with parallelization. We wish to note that, to the best of our knowledge, there is no existing algorithm for the AIM problem that can meet a hard deadline. The analysis in \cite{vaswani2016adaptive} provided a general framework, but no specific algorithm for AIM was studied in their experiments. Therefore, we focus on experimentally examining the performance of the policies proposed in this paper. \subsection{Results} The resulting influence under each policy is given in Table \ref{table: influence}, and the running time is shown in Table \ref{table: running_time}, in which the Youtube row refers to $(T,K,p_e)=(10,10,0.005)$. The results of SOF on Youtube are not reported because it was not able to complete within five days. Notice that all the methods are heuristics, and we did not have any prior expectation of which policy would provide the best performance or of the seeding patterns dynamically constructed during the seeding process. \textbf{Results on Power, Wiki, and Reddit.} First, according to Table \ref{table: influence}, the SOF policy provides the most competitive performance, but it may take hours to compute one seed set, making it unsuitable for time-sensitive tasks. Second, the FF policy is reasonably good, provided that an appropriate $\theta$ is used. On Reddit, the FF policy outperforms the static policy by an evident margin. The static policy can also produce moderate performance, and it gives the best result on Power with $k=2$. However, on Reddit, the static policy is worse than the non-adaptive policy. Finally, the baseline methods, the greedy policy and the non-adaptive policy, are not effective. \textbf{Results on Youtube.} In order to test the extreme cases, we first consider Youtube with $(T, K, p_e)=(10,20,0.01)$. In this case, we see that the non-adaptive policy has relatively good performance, while the greedy policy is very ineffective. This is because the resulting influence is very large, so the time constraint dominates the merits of adaptive seeding. Therefore, TAIM reduces to the TIM problem, and the need for adaptive seeding is low. In the other extreme case, when we have $(T,K,p_e)=(10,10,0.001)$, the influence can hardly spread for more than one round due to the low propagation probability, and therefore TAIM is close to AIM, for which the Greedy policy is relatively good; this is supported by the results in Table \ref{table: influence}. In such a case, FF is again not effective due to the construction of $\ifadd()$. Note that such extreme cases are constructed artificially, and for the settings between those extreme cases, the FF policy can be effective, as can be seen from the results in Table \ref{table: influence} for the setting $(T,K,p_e)=(10, 10, 0.005)$. \begin{table}[t] \caption{Running Time.
Each cell gives the average running time of computing one seed set.} \centering \label{table: running_time} \begin{tabular}{@{}c|l|l|l|l@{}} \toprule & Static & Greedy & SOF & FF\\ \midrule Power & $1.3$s & $<1$s & $21$min &$6.0$s\\ Wiki & $<1$s & $<1$s & $55$min & $2.5$s \\ Reddit &$4.9$s & $4.9$s & $81.6$min & $50.1$s \\ Youtube & $45.0$s & $44.0$s & n/a & $51.5$s\\ \bottomrule \end{tabular} \vspace{-3mm} \end{table} \begin{figure*}[t] \centering \subfloat[{[Power, 0.2]} ]{\label{fig: power_02_pattern} \includegraphics[width=0.16\textwidth]{images/exp/power_02_pattern.pdf}} \subfloat[{[Power, 0.5]} ]{\label{fig: power_05_pattern} \includegraphics[width=0.16\textwidth]{images/exp/power_05_pattern.pdf}} \subfloat[{[Wiki, 0.6]} ]{\label{fig: wiki_06_pattern} \includegraphics[width=0.16\textwidth]{images/exp/wiki_06_pattern.pdf}} \subfloat[{[Reddit, 0.2]} ]{\label{fig: reddit_02_pattern} \includegraphics[width=0.16\textwidth]{images/exp/reddit_02_pattern.pdf}} \subfloat[{[Reddit, 0.5]} ]{\label{fig: reddit_05_pattern} \includegraphics[width=0.16\textwidth]{images/exp/reddit_05_pattern.pdf}} \subfloat[{[Youtube-0.005, 0.6]} ]{\label{fig: youtube_06_pattern} \includegraphics[width=0.15\textwidth]{images/exp/youtube_06_pattern.pdf}} \caption{Cumulative pattern of seed set size. Each subgraph is labeled as {[$\text{dataset}, \theta$]}, and it gives the results of ten random experiments where the point $(x,y)$ shows the total budget $y$ used by the $x$-th seeding step.} \label{fig: ff_pattern} \vspace{-4mm} \end{figure*} \textbf{Analysis of FF Policy.} According to the construction of $\ifadd()$ and $\Ind()$ in Eq. (\ref{eq: ind}), a $\theta$ close to either $0$ or $1$ is not desirable, which can be seen from Table \ref{table: influence}. One interesting observation from Table \ref{table: influence} is that the performance of FF is unimodal with respect to $\theta$. For instance, the performance on Reddit is monotone increasing on $[0, a]$ with $a\approx 0.5$ and monotone decreasing after $\theta=a$. We can see that the optimal point varies over different datasets. For example, the best performance is given at $\theta \approx 0.26$ on Power, but for Reddit the optimal point is at $\theta \approx 0.5$. While we do not have any prior estimate of the optimal $\theta$, such a unimodal pattern suggests that a binary search can be effective. Second, since the seeding patterns are constructed in real time, we are interested in whether such patterns are robust. To this end, for each setting, we plot the patterns generated in ten random simulations, as shown in Fig. \ref{fig: ff_pattern}. As shown in the figure, while the seeding patterns are not exactly the same in different simulations, they do exhibit a similar shape under the same $\theta$. For example, on Power, most of the budget is used by the $6$-th seeding step under $\theta=0.2$, while more than half of the budget is used after the $8$-th seeding step under $\theta=0.5$. \textbf{Summary.} Overall, the FF policy is cost-effective in most cases except for the extreme settings, and it results in meaningful and robust seeding patterns controlled by $\theta$. The static policy is worse than FF on average, but it can deal with extreme cases such as Youtube with $(T,K,p_e)=(10,20,0.01)$ or $(T,K,p_e)=(10,10,0.001)$. The SOF policy is effective on small datasets but time-consuming, so reducing its time complexity could make it a desirable practical solution for large datasets.
Finally, the baselines, Greedy and NonAd, only perform well in certain extreme cases where TAIM reduces to AIM or TIM. \section{Conclusion} \label{sec: con} In this paper, we have studied the time-constrained adaptive influence maximization (TAIM) problem. The outcomes include a hardness result for computing the optimal policy, a lower bound on the adaptive gap, and a series of seeding policies. In particular, we show new hardness results for the TAIM problem and observe a critical trade-off in designing effective seeding policies. \appendices \section{{Proof of Lemma \ref{lemma: hardness_2}}} \label{proof: lemma: hardness_2} Given a graph and two nodes $s_1$ and $s_2$, the s-t connectedness problem asks for the number of subgraphs in which $s_1$ and $s_2$ are connected. Its decision version is given as follows. \begin{problem}[\textbf{s-t connectedness}] \label{problem: s-t-decision} Given a directed graph $G_s=(V_s, E_s)$, an integer $k$ and two nodes $s_1$ and $s_2$, decide whether the number of $s_1$-$s_2$ connected subgraphs is no larger than $k$. \end{problem} An oracle for Problem \ref{problem: s-t-decision} can be used to answer the s-t connectedness problem by a binary search, where the oracle is called $O(|E_s|)$ times because the maximum number of s-t connected subgraphs is $2^{|E_s|}$. Since the s-t connectedness problem is \#P-complete \cite{valiant1979complexity}, a polynomial algorithm for Problem \ref{problem: s-t-decision} would yield $NP=P$. Next, we give a reduction from Problem \ref{problem: s-t-decision} to TAIM. Let us consider an instance of Problem \ref{problem: s-t-decision} given by $(G_s, s_1, s_2, k)$. Let $n_s$ and $m_s$ be the number of nodes and edges in $G_s$, respectively. Without loss of generality, we assume that there is no edge pointing out from $s_2$ and no edge pointing into $s_1$. \textbf{Reduction.} We construct an instance of TAIM as follows. Let $p_1$ and $p_2$ be small real numbers in $(0,1)$, and let $A, B, C$ and $D$ be integers where $C=4n_s$, $D=\frac{1-\frac{k}{2^{m_s}}\cdot p_2}{(1-\frac{k}{2^{m_s}})\cdot p_1\cdot p_2}\cdot C$, $B=4D$ and $A=4B$.\footnote{We omit the rounding issue as it is not critical.} We intend to make the following relationship hold: \begin{equation} \label{eq: reduction} A\gg B \gg D >C \gg n_s. \end{equation} The social network structure $G=(V, E)$ is shown in Fig. \ref{fig: reduction}, built through the following steps: \begin{itemize} \item Copy the graph $G_s$ with $s_1$ and $s_2$. For each edge $e$ in graph $G_s$, set $p_e$ to $0.5$. \item (\textbf{Path $P_s$}) Insert $n_s-1$ new nodes with $n_s$ edges so that the nodes form a simple path from $s_1$ to $s_2$. Let one of the $n_s$ added edges have propagation probability $p_1$ and the other edges have propagation probability $1$. We denote this path as $P_s$. \item (\textbf{Group A}) Insert $A$ new nodes into the graph and let them be connected from $s_1$. We set $p_e=1$ for each added edge $e$. \item Insert a new node labeled as $s_3$ and an edge $(s_2,s_3)$ with $p_{(s_2,s_3)}=p_2$. \item (\textbf{Group B}) Insert $B$ new nodes and let them be connected from $s_3$. We set $p_e=1$ for each added edge $e$. \item (\textbf{Group C}) Select a node inserted in the last step and label it as $s_4$. Insert $C$ new nodes and let them be connected from $s_4$. We set $p_e=1$ for each added edge $e$. \item (\textbf{Group D}) Insert another new node $s_5$ and let it connect to $D$ new nodes. We set $p_e=1$ for each added edge $e$.
\end{itemize} We set the budget to $K=2$ and the time constraint to $T=n_s+2$. This completes the TAIM instance. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{images/reduction.pdf} \end{center} \vspace{-2mm} \caption{\textbf{Reduction}} \label{fig: reduction} \vspace{-6mm} \end{figure} \textbf{Optimal Policy.} In the optimal policy, to maximize the number of active nodes, only the nodes in $\{s_1,s_3,s_4,s_5\}$ will be selected as seed nodes, due to Eq. (\ref{eq: reduction}) as well as the fact that the budget is two and $p_1$ and $p_2$ are small. For the same reason, $s_1$ must be selected in an optimal policy, so without loss of generality we assume that it is selected as the first seed node in the first seeding step. Now the remaining problem is to decide when to use the other unit of budget. As we will only select further seed nodes from $\{s_3,s_4,s_5\}$, the only event that affects our decision is whether $s_3$ is activated. After any of the first $n_s$ diffusion rounds, once $s_3$ is activated, we should select $s_5$ as the second seed node. If $s_3$ is not activated, we can either wait for more diffusion rounds or select $s_3$ to maximize the number of active nodes. Because the time constraint is $n_s+2$ and two remaining diffusion rounds are sufficient for $s_3$ to activate all the nodes connected from it (Group B and then Group C), it is optimal to wait until the $n_s$-th diffusion round. After the $n_s$-th diffusion round, if $s_3$ has been activated, it is clear that we should select $s_5$ as the second seed node. If $s_3$ is not activated yet, we have two choices: (a) select the second seed node immediately or (b) wait for another diffusion round and then select the second seed node. Note that there are only two diffusion rounds left, so the optimal policy must be one of these choices. Now we calculate the resulting influence. \textbf{Policy a.} Suppose the second seed node must be selected right after $n_s$ diffusion rounds. In this case, the profit is \begin{equation} A+E_s+B+\Pr[\leq n_s]\cdot (C+D)+(1-\Pr[\leq n_s])\cdot C+O(1), \end{equation} where $\Pr[\leq n_s]$ is the probability that $s_3$ is activated within $n_s$ rounds of diffusion, and $E_s$ is the expected number of active nodes in $G_s \cup P_s$ resulting from $s_1$. \textbf{Policy b.} We wait for another diffusion round even if $s_3$ is not activated after $n_s$ diffusion rounds. Since the time constraint is $n_s+2$, this seed node must be selected right after the $(n_s+1)$-th diffusion round. Under this policy, when $s_3$ is activated before the $(n_s+1)$-th diffusion round, we select $s_5$ as the second seed node, and therefore the total profit is $A+E_s+B+C+D+2$. If $s_3$ is activated exactly in the $(n_s+1)$-th diffusion round, we should select $s_5$ as the seed node, because there is only one round left and $D>C$. In this case, the total profit is $A+E_s+B+D+2$. If $s_3$ is not activated after the $(n_s+1)$-th diffusion round, we should select $s_3$ to maximize the profit, as $B$ is larger than $C$ or $D$, and therefore the total profit is $A+E_s+B+1$. In summary, the total profit under the second policy is \begin{equation} A+E_s+B+ \Pr[\leq n_s]\cdot (C+D)+\Pr[= n_s+1]\cdot D+O(1), \end{equation} where $\Pr[=n_s+1]$ is the probability that $s_3$ is activated exactly after $n_s+1$ rounds of diffusion. Comparing the above two policies, for sufficiently large $n_s$, Policy a is better than Policy b if and only if $\frac{1-\Pr[\leq n_s]}{\Pr[=n_s+1]}\geq \frac{D}{C}$.
Let $p^*$ be the probability that $s_2$ can be activated from $s_1$ through $G_s$. Because the longest simple path from $s_1$ to $s_2$ in $G_s$ has at most $n_s-1$ edges and the path $P_s$ has $n_s$ edges, $\Pr[\leq n_s]$ is the probability that $s_2$ is first activated by $s_1$ through $G_s$ but not $P_s$, and that $s_3$ is then activated by $s_2$, which means $\Pr[\leq n_s]=p^*\cdot p_2$. Similarly, $\Pr[= n_{s}+1]$ is the probability that $s_2$ is first activated by $s_1$ through the path $P_s$ but not $G_s$, and $s_3$ is then activated by $s_2$, implying that $\Pr[= n_{s}+1]=(1-p^*)\cdot p_1 \cdot p_2$. Therefore, Policy a is better than Policy b if and only if $\frac{1-p^*\cdot p_2}{(1-p^*)\cdot p_1\cdot p_2}\geq \frac{D}{C} \iff p^* \geq \frac{k}{2^{m_s}}$, since the left-hand side is increasing in $p^*$ and equals $D/C$ at $p^*=\frac{k}{2^{m_s}}$ by the choice of $D$. Because the probability of each edge in $G_s$ is uniformly 0.5, $p^*$ is equal to $\frac{n^*}{2^{m_s}}$, where $n^*$ is the number of $s_1$-$s_2$ connected subgraphs. Thus, deciding which policy is better is equivalent to determining whether the number of $s_1$-$s_2$ connected subgraphs is at least $k$, i.e., whether it is no larger than $k-1$, which decides Problem \ref{problem: s-t-decision} and completes the proof.
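As a quick numerical illustration (ours, not part of the proof) of how the constants in the reduction separate, one can instantiate them for an arbitrary choice of the parameters:
\begin{verbatim}
n_s, m_s, k = 50, 200, 2**10      # an arbitrary instance
p1 = p2 = 0.01                    # small propagation probabilities

q = k / 2**m_s                    # the threshold k / 2^{m_s}
C = 4 * n_s
D = (1 - q * p2) / ((1 - q) * p1 * p2) * C
B, A = 4 * D, 16 * D              # B = 4D and A = 4B

assert A > B > D > C > n_s        # the ordering of Eq. (reduction)
print(C, D, B, A)                 # 200, ~2.0e6, ~8.0e6, ~3.2e7
\end{verbatim}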
\section{Introduction} \label{sec-intro} Cryogenic electron microscopy (cryo-EM), low-energy electron holography (LEEH) and scanning probe microscopy (SPM) are complementary imaging techniques to probe the structure and conformation of biomolecules at sub-nm resolution \cite{renaud_cryo-em_2018,longchamp_imaging_2017,muller_atomic_2008,abb_carbohydrate_2019,wu_imaging_2020}. Cryo-EM has evolved into a leading method for high-resolution imaging of biological macromolecules \cite{kuhlbrandt_resolution_2014,bai_how_2015,yip_atomic-resolution_2020}. LEEH is a low-energy electron, single-particle microscopy method that allows imaging of highly flexible proteins in their individual conformations \cite{ochner_low-energy_2021}. SPM reveals the connectivity of branched oligosaccharides \cite{wu_imaging_2020} and gives access to the electronic structure of individual molecules \cite{kahle_quantum_2012, kley_atomic-scale_2014}. All three methods require samples produced to the highest standard to work optimally. LEEH and high-resolution SPM require ultra-pure, UHV-compatible substrate conditions and greatly profit from chemical purity of the adsorbate \cite{longchamp_imaging_2017}. For cryo-EM, the preparation of homogeneous, high-quality samples can be challenging, especially for complex biomolecules. Conventional sample preparation for cryo-EM proceeds through the plunge-freezing method, which has been enormously successful but can be time-consuming and resource-intensive, and homogeneity is limited by solution-based purification techniques \cite{agard_chapter_2014,drulyte_approaches_2018,noble_routine_2018,chorev_use_2020}. Electrospray ion beam deposition (ES-IBD) is a preparative mass spectrometry \cite{cyriac_low-energy_2012,johnson_soft-_2016} technique capable of producing highly purified molecular samples for single-molecule imaging. It is routinely used for SPM with smaller (bio)molecules \cite{wu_imaging_2020,rauschenbach_electrospray_2006,hamann_ultrahigh_2011, rauschenbach_mass_2016, walz_navigate_2021, abb_two-dimensional_2016,deng_close_2012,rinke_active_2014} and has also been demonstrated for TEM \cite{vats_electron_2018,vats_catalyzing_2021, prabhakaran_rational_2016, mikhailov_mass-selective_2014}, LEEH \cite{ochner_low-energy_2021,longchamp_imaging_2017}, and recently cryo-EM \cite{esser_mass-selective_2021, westphall_3d_2021}. In contrast to organic molecular beam epitaxy (OMBE) \cite{mccray_mbe_2007,koma_molecular_1995}, ES-IBD is not limited to small and volatile molecules. In ES-IBD, molecules are ionised in an electrospray ion source, transferred into the gas phase, and mass-analysed in vacuum. Then, the ion beam is mass-to-charge-ratio filtered and deposited with a controlled landing energy onto a suitable substrate. ES-IBD is often referred to as ``soft landing'' at lower collision energies, or ``reactive landing'' at higher collision energies or if the collision results in the formation of a covalent bond to the surface. It enables new reaction pathways \cite{krumbein_fast_2021,yang_anionanion_2021} and surface modifications \cite{su_design_2019}. In addition to the requirements for ESI mass spectrometry, ES-IBD needs an intense ion beam \cite{pauly_hydrodynamically_2014,rauschenbach_electrospray_2006,su_multiplexing_2021, bernier_transfer_2020} with a well-defined energy distribution to enable fast sample preparation with controlled landing energy. The width of the beam-energy distribution is crucial, as it defines the collision energy distribution and limits the landing energy range.
Reported values for the full width at half maximum (FWHM) range from 2 to \SI{10}{\electronvolt} per charge \cite{krumbein_fast_2021,rauschenbach_electrospray_2006,walz_compact_2020,hamann_electrospray_2011}. The beam-energy width determines the minimal landing energy that can be used without deflecting a significant portion of the ion beam. Narrow beam-energy distributions enable controlled exploration of shallow conformation spaces \cite{anggara_exploring_2020}. Low landing energy is particularly important for highly charged protein complexes, as their absolute landing energy is proportional to their charge state. Likewise, the beam intensity determines the deposition time for a given deposition area and particle density. A dose of 1 pAh ($3.6 \times 10^{-9}$ coulomb, or 22 billion elementary charges) is the charge deposited by a 1 pA ion beam over one hour. In practice, \SIrange{5}{20}{pAh} are sufficient for imaging \cite{esser_mass-selective_2021,rinke_active_2014,longchamp_imaging_2017,ochner_low-energy_2021}. This charge allows deposition on a sample several \si{\mm^{2}} in size at a sub-monolayer coverage that enables imaging of isolated particles. Beam currents of more than \SI{20}{\pA} ensure typical deposition times of less than half an hour, so multiple deposition conditions can be tested in a day. However, precise mass-selection inherently reduces the current available for deposition, since all ions except the selected species are removed from the beam. Finally, an accurate current measurement at the level of \SI{1}{\pA} is needed to achieve reproducible coverage. For sample preparation of biological macromolecules, the structural integrity of fragile biomolecules has to be maintained throughout the entire ES-IBD process. Native MS retains covalent and most non-covalent interactions within a protein complex \cite{bakhtiari_protein_2019,sharon_mass_2007,tamara_high-resolution_2021} and can be integrated into ES-IBD. Nevertheless, it remains unclear to what extent ionisation, the transfer from liquid via the gas phase into vacuum, and soft landing affect non-covalent interactions and hence the conformation and structure of the protein complexes. Currently, the barrier to widespread use of ES-IBD is still high and there is no commercial instrument available. Academic instrument developers have designed preparative MS mainly for the deposition of small and medium-sized molecules \cite{franchetti_soft_1977,heiz_chemical_1997,miller_soft-landing_1997,laskin_soft-landing_2008,hamann_electrospray_2011,rauschenbach_mass_2016,su_multiplexing_2021}, and only a few of these instruments can handle native protein complexes \cite{longchamp_imaging_2017,ochner_low-energy_2021,mikhailov_mass-selective_2014}. To be universally useful for molecular ion-beam deposition, ES-IBD instruments need to be good mass spectrometers and have a high beam current, in addition to the features needed for beam control and deposition. Commercial analytical instruments are typically excellent mass spectrometers, but have insufficient beam intensity for ES-IBD and lack the flexibility in design and software to integrate deposition as an additional workflow. As a minimum requirement, a native ES-IBD/MS must handle large, low-mass-to-charge-ratio protein ions with a molecular weight of up to a megadalton.
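The dose figures above follow from elementary arithmetic; a minimal sketch (ours, with illustrative numbers only):
\begin{verbatim}
e = 1.602e-19                # elementary charge in coulomb

q_1pAh = 1e-12 * 3600        # charge of 1 pAh in coulomb
print(q_1pAh)                # 3.6e-09 C
print(q_1pAh / e)            # ~2.2e10, i.e. 22 billion charges

# time to accumulate a typical 15 pAh dose at a 20 pA beam current
print(15 / 20, "h")          # 0.75 h
\end{verbatim}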
Whilst some home-built or converted machines can handle such ions \cite{benesch_separating_2010,longchamp_imaging_2017,walz_compact_2020,westphall_3d_2021,ochner_low-energy_2021}, their mass filter, collisional activation or beam control is severely restricted in comparison to commercial instruments. Here, we show how to convert a proven commercial analytical native MS into a native ES-IBD platform. It has an intense, well-controlled ion beam, which we characterise with current and energy measurements. Three different methods are used to demonstrate that the platform is suitable for near-native deposition: proteins observed in SPM images show globular features when samples are prepared using native ES-IBD, in contrast to denaturing, conventional MS conditions. Using TEM, we demonstrate the importance of landing-energy control to preserve near-native structural features. Finally, we show that a non-covalent enzymatic complex retains activity after ES-IBD. \section{Results and Discussion} \subsection{Instrument setup and modification} \begin{figure}[h!] \includegraphics[width=\linewidth]{Desmond_scheme.pdf} \caption{Schematic view of the Q Exactive UHMR mass spectrometer modified for deposition. Custom landing stage to deposit two microscopy samples and measure energy on the left. UHMR with improved source for better transmission on the right.} \label{fgr:instrument} \end{figure} We have converted a Q Exactive UHMR instrument (Thermo Fisher Scientific, Bremen, Germany) into a preparative mass spectrometer by adding a custom-built landing stage downstream of the Higher Energy Collisional Dissociation (HCD) cell. Fig.~\ref{fgr:instrument} shows a scheme of the instrument. The added stage contains electrostatic lenses to focus and steer the beam onto a sample holder, containing two sample positions and a retarding-grid energy detector (the scheme shows only a single sample in the sample holder). A sample transfer rod moves the samples in and out of the deposition chamber; that process takes two minutes, including pumping and venting. To monitor the beam intensity, the ion current is measured at the landing stage and on apertures throughout the instrument, which was modified to add this capability (yellow elements in Fig.~\ref{fgr:instrument}). In addition, we have increased the S-exit-lens diameter from 1.4 mm to 2.5 mm and added a custom cone gas adapter to increase transmission efficiency and thus achieve shorter deposition times (see Methods). \subsection{Deposition workflow} First, we load up to two samples, typically TEM grids or highly oriented pyrolytic graphite (HOPG) substrates, into the sample holder and insert it into the deposition stage. We create an ion beam, check its composition with the Orbitrap mass analyser, and set the quadrupole mass filter to select the species required for deposition. To optimise the beam intensity for deposition, we switch to beam mode. In this mode, the C-trap and the HCD cell guide the ions in a continuous beam, instead of intermittently pulsing the beam into the Orbitrap mass analyser. All direct current (DC) potentials within the Q Exactive UHMR instrument were kept at default values, which minimise activation during transmission from the source to the deposition stage (see Fig.~\ref{fgr:UHMR_landing_potentials_SI}a). This usually means that potential gradients are as low as possible, especially in regions where collisions with the background gas occur. Next, the beam is steered onto the energy detector.
In front of the collector plate that is used to measure the ion current, the detector has a metal grid to apply retarding voltages. Ions with a total energy below their potential energy at the grid cannot reach the detector plate. Hence, we record the ion current at the detector plate as a function of the grid potential to obtain the beam energy. The difference between the beam energy and the retarding sample potential determines the landing energy. We typically use landing energies of \SIrange{2}{100}{\electronvolt} per charge, depending on the specific application. For deposition, we finally steer the beam onto the sample and start integrating the detected sample current to determine when the desired coverage is reached. During deposition, the beam composition is checked periodically using the mass analyser. \subsection{Beam-energy distribution} The total energy of the ion beam and its distribution are pivotal parameters for the ES-IBD process because they define the collision energy with the surface. The total energy distribution is determined by the potential along the beam path and the interactions of the ions with the background gas. Hence, it can be influenced by the local pressure, which is a function of pumping speed and the shape of the vacuum vessel, and by the applied radio-frequency (RF) and DC voltages. In our instrument, the total energy is measured via the retarding-grid detector integrated in the sample holder (see Fig.~\ref{fgr:instrument} and Methods). Ions moving through the upstream part of the instrument experience a decreasing pressure, from \SI{0.01}{mbar} in the HCD cell to high vacuum in the landing stage. While keeping all other conditions constant, we can obtain intense beams with different sets of voltages applied to the electrodes of the ion optics along the beam path. We investigated the influence of two distinct sets of potentials on the beam-energy distribution, one with higher and one with lower potential gradients (see Fig.~\ref{fgr:UHMR_landing_potentials_SI}b). For this investigation, we used ion beams of denatured and of native bovine serum albumin (BSA). Denatured BSA yields a wide range of charge states between +44 and +15 (\SIrange{1600}{4500}{\Th}, Fig.~\ref{fgr:MS_BSA_SI}a). The native BSA beam contains the monomer as well as undefined higher-order aggregates; their mass-to-charge ratios range from 3900 Th (+17, monomer) to 10200 Th (aggregates, Fig.~\ref{fgr:MS_BSA_SI}b). After the C-trap, the ions pass through the HCD cell and the electrostatic lenses and finally reach the energy detector. At the detector, we measure the beam's intensity and total energy ($E_\mathrm{tot}$), \begin{equation} E_\mathrm{tot} = E_\mathrm{kin} + E_\mathrm{pot}. \end{equation} The ion's kinetic energy, $E_\mathrm{kin}$, depends only on its mass and velocity. Its potential energy, $E_\mathrm{pot}$, depends on charge state and position in the electric potential landscape. The reference for $E_\mathrm{pot}$ and $E_\mathrm{tot}$ is electrical ground by convention. Hence, an ion with a negative $E_\mathrm{tot}$ moving towards a grounded electrode would not reach it, because once all kinetic energy is converted, $E_\mathrm{pot} = E_\mathrm{tot} < 0$. Fig.~\ref{fgr:energy} shows beam-energy distributions measured under different conditions. They are represented as Gaussian fits to the first derivative of the beam current, $I$, with respect to the grid bias, $U_\mathrm{grid}$. Based on the above conventions, the grid potential $U_\mathrm{grid}$ corresponds to the total energy per charge.
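Reducing such a retarding-potential curve to the quoted numbers is straightforward; the following is a minimal sketch (ours), assuming numpy/scipy and measured arrays \texttt{u\_grid} (in V) and \texttt{current} (in pA):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian(u, a, mu, sigma):
    return a * np.exp(-0.5 * ((u - mu) / sigma) ** 2)

def beam_energy(u_grid, current):
    # Differentiate the retarding curve; the magnitude is used so the
    # sketch is insensitive to the sign convention of the ion polarity.
    didu = np.abs(np.gradient(current, u_grid))
    p0 = [didu.max(), u_grid[np.argmax(didu)], 1.0]
    (a, mu, sigma), _ = curve_fit(gaussian, u_grid, didu, p0=p0)
    fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)
    return mu, fwhm    # E_tot and FWHM in eV per charge
\end{verbatim}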
Clearly, the state of the ion, folded or unfolded, as well as the chosen potential landscape influence the mean energy and energy width, given in the following as $E(\Delta E)$. When the lower DC gradient for focusing within the electrostatic lens was applied, the denatured BSA $E_\mathrm{tot}$ was \SI{-9.8(11)e-1}{\electronvolt} per charge, i.e. $-9.8(1.1)$~eV per charge. It is lower by \SI{4}{\electronvolt} per charge and wider by \SI{2.4}{\electronvolt} per charge when choosing the higher gradient instead. Native BSA's $E_\mathrm{tot}$ follows a similar trend, albeit with a higher $E_\mathrm{tot}$ mean of \SI{-7.9}{\electronvolt} per charge for the lower gradient and \SI{-10.2}{\electronvolt} per charge for the higher gradient. \begin{figure} \includegraphics[width=0.8\linewidth]{beam_energy_profile.pdf} \caption{Beam-energy distribution measured for denatured and native BSA ion beams for two different potential gradient settings as shown in Fig.~\ref{fgr:UHMR_landing_potentials_SI}b. Dots: ion current measured as a function of the retarding grid potential, $I(U_\mathrm{bias})$. Lines: Gaussian fits of the first derivative $\mathrm{d}I/\mathrm{d}U_\mathrm{bias}$, corresponding to the total beam energy per charge ($E_\mathrm{tot}$) in eV per charge. The FWHM is given in parentheses.} \label{fgr:energy} \end{figure} The interplay between local pressure and ion acceleration in the electrostatic lens determines $E_\mathrm{tot}$. Ions thermalise in the HCD cell to a total energy of \SI{-5}{\electronvolt} per charge, which is defined by the axial DC potential. From there they enter the electrostatic lenses. Although the pressure rapidly decreases, the ions' mean free path is significantly shorter than the distance between the HCD exit lens and the next aperture, and hence energetic collisions between ions and background gas will occur. The ions gain kinetic energy ($E_\mathrm{kin}$) between two collisions proportional to the DC gradient (electric field, see Fig.~\ref{fgr:UHMR_landing_potentials_SI}b) along the flight path in the landing stage. The relative loss of kinetic energy per collision depends mainly on the masses of the collision partners, with the absolute loss per collision higher at higher $E_\mathrm{kin}$. The randomness of the impact angle between gas and ion causes a distribution in energy loss, which is wider for high $E_\mathrm{kin}$. Thus, a high potential gradient causes a large decrease in $E_\mathrm{tot}$ and widens $\Delta E_\mathrm{tot}$, the width of the distribution (see SI). Two factors explain the lower $E_\mathrm{tot}$ for the denatured protein. First, the number of collisions in the electrostatic lens increases with the unfolded protein's larger collision cross section \cite{douglas_collisional_1992}. Second, the denatured protein ions' higher charge states raise the overall $E_\mathrm{kin}$ (for the same value of energy per charge), which leads to a higher energy loss in collisions compared to the low-charge-state native ion. In summary, when transferring an ion beam from high-pressure RF optics into high vacuum, the magnitude and distribution of $E_\mathrm{tot}$ are functions of the DC gradient, background pressure, ion charge, and collision cross section (CCS). For a given type of ion, efficient pumping and a weak DC gradient ensure a narrow distribution of total beam energy, enabling all ions to land on a substrate downstream with a similar collision energy.
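This picture can be made semi-quantitative with a toy one-dimensional Monte-Carlo model (ours, for intuition only, with arbitrary parameters): between collisions the field converts potential into kinetic energy, leaving $E_\mathrm{tot}$ unchanged, while each collision removes a random fraction of the momentary kinetic energy and thereby lowers $E_\mathrm{tot}$. A stronger gradient yields both a larger mean shift and a wider spread, in qualitative agreement with Fig.~\ref{fgr:energy}.
\begin{verbatim}
import random

def final_total_energy(e_tot0, gain, n_coll, loss_max):
    # e_tot0: initial total energy (eV/charge); gain: kinetic energy
    # picked up per free path (eV/charge); n_coll: collisions in the
    # lens region; loss_max: max fractional kinetic loss per collision.
    e_kin, e_tot = 0.0, e_tot0
    for _ in range(n_coll):
        e_kin += gain                               # field: potential -> kinetic
        loss = e_kin * random.uniform(0, loss_max)  # random impact angle
        e_kin -= loss
        e_tot -= loss                               # gas carries the loss away
    return e_tot

for gain in (0.5, 2.0):                             # weak vs strong gradient
    runs = [final_total_energy(-5.0, gain, 40, 0.1) for _ in range(10000)]
    m = sum(runs) / len(runs)
    sd = (sum((x - m) ** 2 for x in runs) / len(runs)) ** 0.5
    print(f"gain {gain}: mean {m:.1f} eV/charge, spread {sd:.2f} eV/charge")
\end{verbatim}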
Here, using low gradients, the $E_\mathrm{tot}$ distribution (FWHM $\leq \SI{1.2}{\electronvolt}$ per charge) is sharper than previously reported literature values (FWHM $\geq \SI{2.2}{\electronvolt}$ per charge) \cite{krumbein_fast_2021,rauschenbach_electrospray_2006,walz_navigate_2021,hamann_electrospray_2011}, pointing to gentle conditions in which gas-phase activation is minimal. Since the lower-gradient conditions also achieved high transmission and good beam focus, we retained them for all other experiments presented here. \subsection{Transmission} High transmission is crucial for deposition experiments, since the particle flux directly determines the deposition time for a given coverage and sample surface area. Using a typical concentration of \SI{3}{\micro\mole\per\liter} and assuming a \SI{1}{\ul\per\hour} nano-electrospray flow rate with \SI{100}{\percent} ionisation efficiency, a \SI{1.2}{\nA} emission current of native BSA ($z=15$) would be generated (see SI for details). However, under these conditions we initially measured only \SI{13}{\pA} at the sample position in the Q Exactive UHMR instrument with an unmodified source region. An initial measurement indicated a \SI{1}{\nA} current in the first vacuum chamber (Fig.~\ref{fgr:transmission}a). This may include ionised solvent and contaminants. There was also a sharp drop in current between the S-exit lens and the inter-flatapole lens. \begin{figure} \includegraphics[width=\linewidth]{transmission.pdf} \caption{Transmission properties. \textbf{a} Ion current across the instrument before and after increasing the S-exit-lens diameter, measured at different ion optics. \textbf{b} Typical ion currents at the energy detector for different S-lens diameters (values equivalent to sample currents). Protein ion currents increase with aperture size. RhoB currents do not follow the trend due to a defocusing effect. Currents on the preceding optical element are shown in light colours. \textbf{c} Native BSA current on the energy detector decreases with the narrowing width of the mass-filter window. } \label{fgr:transmission} \end{figure} To improve the transmission performance, we enlarged the inner diameter of the S-exit lens stepwise from \SI{1.4}{mm} to \SI{2.0}{mm} and finally to \SI{2.5}{mm}. With the \SI{2.5}{mm} opening, the ion current at the sample for large, native proteins doubled to \SI{25}{\pA}, and for medium-sized, native proteins the current grew more than ten-fold to \SI{170}{\pA} (Fig.~\ref{fgr:transmission}b). There was no measurable effect for Rhodamine B (RhoB), a relatively small ion with an \mz{} of 443 Th. All currents reported here are routinely reached, with fluctuations of up to \SI{80}{\percent} due to emitter performance. The overall transmission is further affected by mass-filtering, where a narrow \mz-window not only suppresses contamination, but can also reduce the flux of the desired analyte molecules. Fig.~\ref{fgr:transmission}c illustrates how the width of the mass-filter window affects the native BSA current: removing higher-order agglomerates has little effect on the sample current (bottom to middle panel). In this case, it was possible to filter a single charge state whilst retaining a third of the total current. In contrast to the protein ion currents, the RhoB current does not change with increasing S-lens diameters. This behaviour is likely caused by a beam profile that differs from that of the heavy protein ions. Thanks to its low \mz, RhoB experiences a stronger effective potential than high-\mz{} protein ions within the S-lens and can thus remain closer to the optical axis, reducing losses at the transfer apertures.
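Returning to the transmission budget: the theoretical emission current quoted above follows directly from the spray parameters (the same estimate is given in the SI). A minimal numerical sketch, assuming full ionisation and the most abundant charge state:
\begin{verbatim}
# Theoretical nano-ESI emission current: I = c * V * z * F / t
F = 96485.0   # Faraday constant, C/mol
c = 3e-6      # concentration, mol/L
V = 1e-6      # sprayed volume per hour, L
z = 15        # charge state of native BSA
I = c * V * z * F / 3600.0                      # ~1.2e-9 A
print(f"emission current: {I*1e9:.1f} nA")
print(f"initial transmission: {13e-12/I:.1%}")  # ~1% at 13 pA
\end{verbatim}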
The modifications to increase the ion current are vital for depositing larger molecules. They make it possible to test several deposition conditions in a single experiment day in applications where particle densities of \SI{3000}{\per\micro\m\squared} or more are needed for efficient cryo-EM or SPM. For our applications, this is usually achieved with a deposited charge of \SI{15}{\pA h}. The modifications lead to a deposition time of approximately \SI{0.5}{\hour} for large native protein complexes. Whilst necessary for preparative MS, our modifications significantly increase the gas flow into the injection flatapole collision cell. The pressure in the flatapole rises as a consequence and could decrease the in-source-trapping effectiveness. \subsection{Ion-Beam Shape and Control} \label{ssec:spotsizetext} \begin{figure} \includegraphics[width=.9\textwidth]{BeamSpotSizePanel.pdf} \caption{ Ion-beam shape analysis: \textbf{a} Ion-beam image of the sample holder front plate. \textbf{b} Sample holder section view: different voltages influence focusing. \textbf{c} Ion-beam shape from deconvolution of \textbf{a}. \textbf{d} Data points and Gaussian fit of the ion-beam intensity distribution. \textbf{e} HOPG sample used for AFM measurements and protein density distribution in the screened area. \textbf{f} Measured protein distribution on \textbf{e} (black dots), Gaussian fit (blue), and single-monolayer model (orange). \textbf{g} Amorphous carbon grid used for TEM measurements and protein density distribution in particles per \textmu m$^2$. \textbf{h} Gaussian fit (blue) of the density distribution in \textbf{g}. Individual AFM and TEM micrographs are shown in Fig.~\ref{fgr:spotsize_SI} in the SI. } \label{fgr:spotsize} \end{figure} The ability to create a narrowly focused beam is essential to reduce the time needed to achieve the optimal particle density for SPM or TEM. We used three different methods to assess the ion-beam profile under typical experimental conditions. First, we took an ion-beam image of the front plate of our sample holder (see Fig.~\ref{fgr:spotsize}a). For this, we scanned the beam with the deflection elements in the electrostatic lenses and recorded the current on the front plate. The resulting current image is a convolution of the front plate geometry and the beam shape. Deconvolution revealed a Gaussian-like beam profile (shown in Fig.~\ref{fgr:spotsize}c). A Gaussian fit gives a FWHM of \SI{2.7}{mm}, only slightly larger than the \SI{2}{mm} diameter of the preceding aperture, which the beam typically passes without losses. The observed widening between the last aperture and the front plate is a consequence of the beam-energy distribution and the DC gradient in this section. A weak DC gradient moves the ions slowly in the axial direction and gives them more time to expand radially. The beam profile obtained in this way is the profile at the front plate, whereas the samples are located a few millimetres behind it and can be biased at a different potential. The beam profile is different on the sample, because the potential gradient between front plate and sample can focus the beam (Fig.~\ref{fgr:spotsize}b). We used AFM to determine the protein density distribution after ion-beam deposition on HOPG, and TEM imaging after deposition on a TEM grid. We typically use \SI{5}{mm} wide HOPG chips (see Fig.~\ref{fgr:spotsize}e) as substrates for AFM imaging. For the example given here, we deposited \SI{12.5}{\pA h} of GroEL.
Multiple AFM images were taken on the graphite sample, distributed along the length and width of the sample. We found that, for the specific DC potentials used in this experiment, most of the surface area was empty and the proteins were localised in a small spot near the centre. Surprisingly, we observed a transition from a clean, empty surface to a coverage of more than a monolayer within \SI{250}{\um}. We estimate the total number of GroEL particles as \SI{4.2E9}{}, from the deposited charge and the average charge state of +67. Because AFM cannot distinguish between single and multiple monolayer coverage, we can only roughly approximate the particle distribution. We fitted our data to two alternative models. A Gaussian fit combines the total particle number with the particle density in the sub-monolayer coverage area. It suggests a deposition spot FWHM of just \SI{350}{\um} and a coverage of up to six monolayers at the centre. However, it fails to reproduce the sharp increase in density at the spot's boundary. Alternatively, we assume a monolayer density in the centre (ca. 5000 particles per \textmu m$^2$) with a sharp drop to zero at \SI{0.5}{mm} from the spot centre (orange curve in Fig.~\ref{fgr:spotsize}f). This model overestimates the density at the spot boundary. The real distribution is likely found between these two estimates. As changing position on the sample can be tedious in AFM, other methods with a wider field of view or a faster change of position are more appropriate to analyse particle distributions. Thus, as a third approach, we deposited an apo/holo-ferritin mixture on a TEM grid covered with a \SI{3}{nm} amorphous carbon film (see Fig.~\ref{fgr:spotsize}g) and acquired micrographs at room temperature. The density of holoferritin iron cores was quantified on different grid squares. The resulting distribution is shown in Fig.~\ref{fgr:spotsize}h, together with a Gaussian fit. A clear decrease of the protein density from the centred maximum to the edges of the grid is observed. The fit gives a FWHM of \SI{1}{mm} and a total particle count of \SI{2.9E9}{}. We can compare this number to the estimate from the total accumulated deposition current of \SI{20}{\pA h}. Using the most abundant apoferritin charge state of +50, this corresponds to \SI{9.0E9}{} particles. We attribute the deviation partially to the ambiguity of the charge state, due to the continuous mass-to-charge distribution of ferritin, caused by the randomness of the mass of the iron cores. Hence, the charge-state distribution cannot be measured with ensemble MS techniques. This makes the calculation of the number of landed particles less accurate. In addition, apoferritin, which accounts for 40\% of the total ion-beam intensity, was not detected due to radiation damage. The different approaches to measuring the deposition spot size provide comparable results and show that the ion beam can be focused to reduce the preparation time of high-density protein samples. Differences in the spot size can be explained by the use of two different proteins, different DC potentials, and different sample geometries. The AFM sample is thicker, and thus closer to the front plate. This changes the local electric fields and leads to a different focus. We have observed that the deposition spot size can be tuned most effectively using the DC potential between front plate and sample. The beam can also be defocused to create a more homogeneous distribution across the entire sample.
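Both particle-number estimates above follow from the same charge-to-count conversion. A minimal sketch reproducing them, assuming the deposited charges and most abundant charge states quoted in the text:
\begin{verbatim}
E = 1.602e-19  # elementary charge, C

def particles(charge_pAh, z):
    """Landed-particle count from deposited charge and charge state."""
    return charge_pAh * 1e-12 * 3600.0 / (z * E)

print(f"GroEL:    {particles(12.5, 67):.1e}")  # ~4.2e9
print(f"ferritin: {particles(20.0, 50):.1e}")  # ~9.0e9
\end{verbatim}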
Generally, either full monolayer coverage or a few isolated particles can be achieved to optimise the sample for various imaging applications. The size and shape of the deposition spot measured here are consistent with other observations. Secondary ion mass spectrometry together with infrared reflection absorption spectroscopy showed similar distributions of below- and above-monolayer coverage\cite{laskin_soft-landing_2008}. Most importantly, the strong influence of the fields directly at the sample suggests that more effective focusing could be achieved with dedicated ion optics installed at this location. \subsection{Control of conformation after landing by mass filtering and solution composition} \begin{figure} \includegraphics[width=\linewidth]{Histogram_spectra_BSA.pdf} \caption{Native (red) and denatured (yellow) mass spectra and height histograms. Spray solution composition: native \SI{200}{\milli\mole\per\liter} \ce{NH4Ac}, denatured 73:24:3 \ce{MeOH}:\ce{H2O}:\ce{HCOOH}. \textbf{a} Mass spectra for native (filter window 2000\ldots5000~\mz) and denatured (1250\ldots1700~\mz) BSA. \textbf{b} Resulting height distribution measured with AFM after soft-landing on HOPG. N native = 47, N denatured = 60.} \label{fgr:QMS-height} \end{figure} \sisetup{separate-uncertainty = true} It is established, for example by ion mobility spectrometry, that the three-dimensional (3D) conformation of proteins can be retained to a large degree in native ESI.\cite{shelimov_protein_1997} To study if such a native-like conformation can be retained in our instrument, we soft-landed BSA on HOPG using different solutions and instrument settings. Fig.~\ref{fgr:QMS-height} shows two mass spectra of BSA. When using a solvent containing \SI{73}{\percent} \ce{MeOH}, \SI{3}{\percent} \ce{HCOOH} (formic acid), and \SI{24}{\percent} water and a conventional ESI source, high charge states were observed, indicating that the protein is denatured and unfolded. We selected the charge states +40 to +53 with the mass filter for deposition. For a \SI{200}{\milli\mole\per\liter} \ce{NH4Ac} solution nano-sprayed at \SI{1.2}{kV}, much lower charge states between +14 and +17 were observed, which indicates folded BSA. We selected only the BSA monomer for deposition. After deposition, AFM images are taken and quantitatively analysed (see Methods) to extract the height distribution, which allows an approximation of the shape of the adsorbed proteins. The height distribution is \SI{1.8\pm0.3}{\nm} for denatured BSA, and \SI{4.7\pm0.4}{\nm} for native BSA, given as mean $\pm$ standard deviation. Adsorbates originating from highly charged, denatured protein ions appear much flatter than their low-charged native counterparts. This difference in height is consistent with proteins in completely unfolded and globular conformations, respectively. However, it is not possible to directly image the conformation of individual soft-landed proteins in ambient AFM. Firstly, the individual BSA molecules have undergone diffusion-limited aggregation\cite{zhang_atomistic_1997} on step edges and terraces. Hence, the individual proteins cannot be identified unambiguously (Fig.~\ref{fgr:native_HOPG_SI} and \ref{fgr:denatured_HOPG_SI}). Secondly, the radius of the AFM tip is too large to resolve the lateral shape of the aggregates. Instead, a convolution of the tip shape and the adsorbate shape is measured, but the height is reproduced with great accuracy ($<1$\,\r{A}).
This result proves that the ionisation conditions, notably source and solvent, control the conformation of the soft-landed protein on HOPG. The CCS describes the ion conformation in the gas phase prior to the landing event. The CCS of BSA measured in \ce{N2} for charge states +40 to +53 is \SIrange{134}{144}{\nm^2} \cite{elliott_simultaneous_2017}; for native BSA (+14 \ldots +17) it is \SI{45}{\nm^2} \cite{bush_collision_2010}. Our measured heights are in good agreement with these values because high-CCS, extended denatured conformations yield flatter agglomerates than native, compact ones. Therefore, protein height measurements after soft-landing can reveal pre-landing gas-phase conformations on mass spectrometers without IMS capability. This is consistent with previous observations that conformations are retained, on the level of the general shape, after soft-landing on a relatively inert surface like graphite.\cite{siuzdak_mass_1996,rauschenbach_electrospray_2006,mikhailov_mass-selective_2014} \sisetup{separate-uncertainty = false} \subsection{Mass-selective preparation of cryo-EM protein samples} For large, folded protein assemblies, cryo-EM has become one of the leading methods for structural characterisation at atomic resolution.\cite{kuhlbrandt_resolution_2014,bai_how_2015} Negative-stain EM, on the other hand, is commonly used to screen sample quality before the preparation of cryo-EM samples. Native ES-IBD has the potential to complement and accelerate established cryo-EM sample preparation workflows by selective sample preparation and the direct correlation of cryo-EM density maps with complementary information about native interactions and small ligands from mass spectrometry. Our ion-beam deposition instrument can cover TEM grids with mass-selected protein assemblies, with unprecedented landing-energy control, for imaging in negative-stain EM and cryo-EM. Native gas-phase protein ions are generated via native electrospray ionisation, then mass-selected, and deposited on TEM grids at room temperature. Grids are retrieved via the vacuum load-lock, transferred under ambient conditions, and either stained using uranyl acetate or manually frozen in liquid nitrogen to create cryo-EM-compatible samples while circumventing vitrification. Fig.~\ref{fgr:TEM} shows negative-stain and cryo-EM micrographs from native ES-IBD samples of apo/holo-ferritin (479 kDa) and GroEL (803 kDa). 3D models from the PDB (blue) and two-dimensional (2D) classes (green) obtained from single-particle analysis in RELION 3.1 are shown as insets. \begin{figure} \includegraphics[width=.9\textwidth]{TEM_micrographs-crop} \caption{Negative-stain and cryo-EM micrographs of apo/holo-ferritin and GroEL after gas-phase purification and gentle deposition on TEM grids. \textbf{a} Apo/holo-ferritin, landing energy of \SI{5}{\electronvolt} per charge, \SI{30}{nm} amorphous carbon film, stained with uranyl acetate. \textbf{b} Apo/holo-ferritin, landing energy of \SI{2}{\electronvolt} per charge, \SI{3}{nm} amorphous carbon film, plunge-frozen in liquid nitrogen. \textbf{c, d} GroEL, landing energies of \SI{2}{\electronvolt} and \SI{100}{\electronvolt} per charge, respectively, \SI{3}{nm} amorphous carbon film, plunge-frozen in liquid nitrogen. The insets show 3D models from the PDB (blue), rendered with ChimeraX\cite{pettersen_ucsf_2021} using PDB entries 7A6A for apoferritin and 5W0S for GroEL, and 2D classes of native ES-IBD samples (green) obtained using RELION 3.1. The number of particles in the 2D classes is given in the insets.
} \label{fgr:TEM} \end{figure} In the micrograph of a negative-stain sample of an apo/holo-ferritin mixture, Fig.~\ref{fgr:TEM}a, individual proteins with and without iron cores can be identified. The edges of the protein shell in the 2D classes are less defined than for a control sample made by conventional liquid deposition (shown in Fig.~\ref{fgr:TEM_SI}). The apoferritin 2D class indicates structural heterogeneity, likely due to a deformation of the hollow protein shell, while the holoferritin is stabilised by the presence of the iron core in its centre. A similar workflow has recently achieved significantly higher quality for stained samples of GroEL, by landing in a glycerol matrix before negative staining, even without precise landing-energy control.\cite{westphall_3d_2021} This highlights that landing, interaction with the solid substrate, and vacuum exposure can influence the structure of protein complexes, and that a high level of control is needed to minimise deviations from native structures. Combining ES-IBD of protein complexes with negative-stain TEM, with or without a liquid matrix, has great potential for screening applications. However, we have focused on cryo-EM sample preparation because negative staining ultimately limits access to high resolution and to information on internal structure. A micrograph of a native ES-IBD cryo-EM sample of the same apo/holo-ferritin mixture is shown in Fig.~\ref{fgr:TEM}b. The particles have a significantly higher contrast compared to conventional cryo-EM micrographs, due to the use of a \SI{3}{nm} thin amorphous carbon film and the absence of ice. The ferritin protein shells are clearly visible around the iron cores and demonstrate conservation of the protein complex topology. A slight deformation of the apoferritin is still observed, but it is smaller than for the stained sample, and the 2D classes show sharp rather than diffuse edges. This result indicates that the deformation observed in Fig.~\ref{fgr:TEM}a is not only due to the deposition on dry samples at room temperature, but also due to the influence of negative staining. We suspect that the exposure to the air-water interface in the staining step limits the sample quality in this workflow. Finally, we compare ES-IBD samples of GroEL prepared with landing energies of 2 and \SI{100}{\electronvolt} per charge, imaged by cryo-EM and shown in Fig.~\ref{fgr:TEM}c and Fig.~\ref{fgr:TEM}d, respectively. Top and side projections of GroEL can be identified unambiguously in the sample prepared at the lower landing energy. The features of the characteristic barrel shape, including the central cavity and the heptameric symmetry in the top view, are already apparent in the single-particle images. Particle dimensions indicate no lateral deviation from literature values. However, further detailed substructure, as observed in samples prepared by plunge-freezing, is not visible, which is attributed to small random changes in secondary and tertiary structure, which blur the image classes (see \citeauthor{esser_mass-selective_2021} for a detailed discussion). In the sample prepared using a landing energy of \SI{100}{\electronvolt} per charge, Fig.~\ref{fgr:TEM}d, individual particles are still clearly visible, but they are up to 30\,\% larger in diameter, and the distinctive structural features have disappeared. Identification of side and top views is no longer unambiguous. This clearly shows plastic deformation of the GroEL complex due to the energetic impact on the surface, as all other conditions were kept identical.
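The landing energy is set per charge, so the absolute impact energy scales with the charge state. A minimal sketch of this scaling, assuming the +67 average GroEL charge state quoted in the spot-size analysis above (values are illustrative):
\begin{verbatim}
z = 67  # average GroEL charge state (from the AFM spot-size analysis)
for e_per_charge in (2, 100):
    print(f"{e_per_charge:>3} eV/charge -> {z * e_per_charge} eV total")
# 2 eV/charge -> 134 eV; 100 eV/charge -> 6700 eV impact energy
\end{verbatim}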
Our workflow enables systematic investigation of the landing-energy dependence of this deformation to infer mechanical properties of proteins and protein assemblies. \subsection{Retention of enzymatic activity} The difference in structural detail observed between the plunge-frozen cryo-EM samples and the ES-IBD samples suggests a level of structural change. To study to what degree this structural change can affect the biological function of proteins, we tested whether the non-covalent protein complex ADH retains enzymatic activity after deposition. So far, this has only been shown for recalcitrant single-chain proteins with no prosthetic groups, such as trypsin \cite{ouyang_preparing_2003,volny_preparative_2005}. We adapted a photometric assay to quantify ADH activity by NADH production after landing on a surface. \begin{figure} \includegraphics[width=\linewidth]{ms_activity.pdf} \caption{\textbf{a} Mass spectra of the deposited, mass-selected ADH tetramer. Inset: colour change from yellow to orange indicates an active ADH in the lower well. The black objects on the well's walls are the submerged ADH-coated conductive tapes. \textbf{b} Production of NADH by ADH after ES-IBD. The broken lines stagnating at the offset level are background controls, so the NADH production is specific to ADH activity. The absorbance measurement causes two separate artificial saturation levels due to different calibrations.} \label{fgr:ADH_assay} \end{figure} We deposited \SI{27}{\ng} (\SI{128}{\pA h}) of ADH on conductive carbon tape for repetition A, and \SI{22}{\ng} (\SI{102}{\pA h}) for repetition B. For each experiment, two samples were made. Assuming a \SI{2.5}{\mm} diameter deposition spot, this corresponds to two monolayers on average. Fig.~\ref{fgr:ADH_assay} shows the production of NADH by the samples together with background controls (submerged conductive carbon tapes). The ADH activity is proportional to the slope of the curves in Fig.~\ref{fgr:ADH_assay}b. It was \SI{1.2}{\milli\U} (A) and \SI{1.9}{\milli\U} (B). Minimal (A) or no (B) background activity was recorded in the corresponding time frame. The recovery, based on the ADH data-sheet activity (\SI{300}{\milli\U\per\ug}), was \SI{14}{\percent} (A) and \SI{29}{\percent} (B). When the activity of the spray solution is taken as a reference (A: \SI{88}{\milli\U\per\ug}, B: \SI{138}{\milli\U\per\ug}), we find activities of \SI{48}{\percent} (A) and \SI{65}{\percent} (B) for soft-landed ADH. The positive-control activity was lower than the spray-solution activity (A: \SI{56}{\milli\U\per\ug}, B: \SI{117}{\milli\U\per\ug}). We measured no activity for a \SI{27}{\ng} (\SI{128}{\pA h}) conductive carbon tape after three days of storage in vacuum (Fig.~\ref{fgr:storage_SI}). (For further details on attempted ADH extraction, refer to the SI.) These results offer compelling evidence that a large, non-covalent protein complex can survive the entire ES-IBD workflow, including ionisation, dehydration, transfer into high vacuum, soft-landing, and re-solvation. It is difficult to quantify the exact proportion of intact enzyme. Instead of the numerical value, the order of magnitude of the activity is relevant. A number of experimental uncertainties cause this: when reconstituting the commercially obtained, crystalline ADH, it is not known which proportion of the enzyme refolds incorrectly and remains inactive. We measured the spray-solution concentration photometrically using a calculated attenuation coefficient.
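The recovery values themselves involve no such uncertainty; they reduce to a one-line calculation. A minimal sketch, assuming the deposited masses and the data-sheet specific activity quoted above:
\begin{verbatim}
def recovery(measured_mU, mass_ng, specific_mU_per_ug=300.0):
    """Fraction of deposited ADH that is still active."""
    expected_mU = specific_mU_per_ug * mass_ng / 1000.0
    return measured_mU / expected_mU

print(f"A: {recovery(1.2, 27):.1%}")  # ~14.8%, quoted as 14%
print(f"B: {recovery(1.9, 22):.1%}")  # ~28.8%, quoted as 29%
\end{verbatim}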
Surprisingly, a much higher proportion of the deposited ADH than expected from these references was found to be active. Thus, we used an extrapolation and later a non-linear calibration (see Methods). Additionally, the conductive carbon tape could have blocked a small part of the plate-reader beam path inside the well and increased the absorbance. To mitigate these errors, the deposited ADH quantity should be reduced to a third to remain in the linear range, and the reading frequency should be increased. The loss of all activity after three days of storage in vacuum at room temperature might be a consequence of degradation, surface interaction, or desolvation. Further experiments are required to investigate if the soft-landed, reconstituted ADH was the intact homo-tetramer. TEM images of ADH, soft-landed under comparable conditions, indicate no fragmentation or change in quaternary structure.\cite{esser_mass-selective_2021} \section{Conclusions} While the existing ES-IBD prototype instruments show some of the desirable features, a high-resolution native mass spectrometer with ES-IBD capability, including high beam intensity, ion-beam monitoring and control, and an adjustable, low, and narrow deposition energy, is currently not commercially available. This work details the conversion of a high-performance serial Orbitrap mass spectrometer, designed for the very high mass range typical for native protein complexes, into an instrument for molecular ion-beam deposition. Beyond additional ion optics and a deposition stage, this requires a complete understanding of the instrument's beam handling in order to align the new components with the duty cycle of the original instrument. Also, obtaining sufficient intensity is a major achievement, for which small modifications to the existing ion optics were needed in addition to the implementation of ion-current monitoring. Finally, intuitive beam-guiding and monitoring software is most helpful in characterising the performance and obtaining reliable, reproducible deposition results. The focus of this instrument modification is the deposition and imaging of native proteins in order to add chemical selectivity to the protein structure determination process. This deposition and imaging workflow is in development. Currently, electron density maps from samples prepared with ES-IBD lack the necessary resolution to determine if the protein complex structure has been completely preserved.\cite{westphall_3d_2021, esser_mass-selective_2021} An alternative approach to check the integrity of the deposited protein is an enzymatic assay. It indicated that the activity of the non-covalent protein complex ADH was retained post-deposition. The instrument developed here shows that a commercial platform can indeed be modified to perform deposition experiments reliably and under full control, while the excellent performance of the mass spectrometer is retained. We have not yet utilised the full capabilities for beam modification, such as ion activation or high-resolution selection of a fragment ion, but in principle these capabilities are available. Generally, the pulsed operation of the Orbitrap instrument allows for a variety of operation modes, in which deposition can be integrated as part of the duty cycle. \section{Methods} \subsection{Mass-filtered electrospray-ion-beam-deposition machine design} We converted a Thermo Scientific\textsuperscript{TM} Q Exactive UHMR into a preparative mass spectrometer (Fig.~\ref{fgr:instrument}).
The electrometer at the end of the HCD cell was removed to make space for a custom deposition stage. Analytical tandem MS still works unaffected in the modified UHMR. The deposition stage contains a 2$\times$8-element electrostatic lens to focus the ion beam. Steering lenses deflect the beam laterally to any position on the sample holder. A \SI{2}{mm} diameter aperture separates the two lens stacks. The first lens stack is pumped via the Q Exactive UHMR quadrupole. A \SI{67}{\liter\per\s} turbo pump (HiPace 80, Pfeiffer Vacuum GmbH, Asslar, EU) pumps the second part. A CF 40 gate valve (series 01, VAT Vakuumventile AG, Haag, Switzerland) decouples the deposition stage from the analytical mass spectrometer. After the gate valve, an immersion lens shields the ion path from the electric potential of the grounded vacuum chamber. Hence, beams with a negative total energy ($E_\mathrm{tot}$) relative to ground can pass. The sample holder has two sample positions for EM grids or AFM samples and an energy detector to measure the beam $E_\mathrm{tot}$. A custom sample-transfer stick moves the sample holder from a load lock to high vacuum (HV). RBD 9103 HV floating picoammeters (RBD Instruments Inc., Bend, USA) measure the ion current on the aperture, the sample holder front plate, the samples, and the energy detector. An ECH 244 crate with two EBS 180 $\pm$\SI{500}{V} bipolar power-supply inserts (ISEG Spezialelektronik GmbH, Radeberg, EU) controls all DC voltages of the deposition stage. Home-written control software for the picoammeters and power supplies facilitates the ES-IBD workflow. It supports rapid 2D ion-beam imaging, $E_\mathrm{tot}$ beam measurement, and automatic beam-focus optimisation. To use sweep gas with the nano-ESI source, we milled a \SI{20}{\mm} bore in the cone gas adaptor. The S-lens diameter was increased from \SI{1.4}{\mm} to \SI{2.0}{\mm} and later to \SI{2.5}{\mm} to improve ion transmission. Consequently, the gas throughput at the source turbo pump (Splitflow 310, Pfeiffer Vacuum GmbH, Asslar, EU) rose from approximately \SI{2.7}{\milli\bar\liter\per\s} to \SI{5}{\milli\bar\liter\per\s}. We separated the fore-pump system to protect the Splitflow 310. The S-lens chamber remained pumped by the factory-fitted Sogevac SV65BiFc fore pump (Atlas Copco, Stockholm, EU), and the Splitflow 310 was connected to an Edwards XDS 35i (Atlas Copco, Stockholm, EU) fore pump. \subsection{Deposition workflow} The first step is to load two targets, EM grids or highly oriented pyrolytic graphite (HOPG) chips for AFM, into the sample holder. The transfer rod moves them from the ambient load lock to the high-vacuum deposition chamber. Whilst the pressure therein decreases, we prepare the ion beam.\newline For native proteins, we use gold-coated \SI{1.2}{mm} glass capillary emitters. We select the minimum possible pressure to push the spray solution to the tip. This maximises the emitter lifetime. We start the instrument in the normal analytical configuration to check if the emitter is working. We set the mass-filter window, then switch to beam mode. Both samples are kept at a high, repulsive potential to avoid uncontrolled deposition. In beam mode, the C-trap and the HCD cell guide the ions, without pulsing, into the landing stage. All DC potentials within the Q Exactive UHMR instrument are at the default values to guarantee activation-free transmission from the source to the deposition stage. In contrast, analytical native MS typically uses strong gradients, often in pulsed modes, to desolvate or dissociate protein complexes\cite{hernandez_determining_2007}.
The HCD gas flow is set to 7 to thermalise the ion beam there.\newline To optimise the current, we change the emitter distance, the backing-gas pressure, and the cone-gas flow. If the current is sufficient for deposition, we switch to analytical mode and acquire mass spectra of the ion beam. The instrument is then set to beam mode again and the beam is steered onto the energy detector. The detector has a metal grid in front of the collector plate used to measure the current. If the electric potential on the metal grid is higher than the total beam energy per charge, the ions cannot pass. Hence, we record the detector collector-plate current as a function of the grid potential to obtain the beam energy. Then, we select the retarding potential on the sample. The difference between the beam energy and the retarding sample potential determines the landing energy, typically \SI{5}{\electronvolt} per charge. We deflect the beam onto the sample and start the sample-current integration. Once the charge reaches the defined value, the repulsive potential is re-applied. The beam composition is checked periodically using the mass analyser, including every time we replace the nano-spray emitter. The deposition procedure for TEM imaging has already been described \cite{esser_mass-selective_2021}. \subsection{Energy width} We used a native and a denatured BSA beam. For the preparations, see below. All DC voltages within the Q Exactive UHMR instrument were at the default values. For both beams, we applied a weak or strong DC gradient in the landing-stage optics. This focused them through the electrostatic lens onto the energy detector. Fig.~\ref{fgr:UHMR_landing_potentials_SI}b shows the different voltages applied to produce a weak or strong gradient. The voltage on the detector metal grid was swept in 40 voltage steps around the expected beam-energy value. For every voltage step, we recorded the average of 60 detector current measurements. This damps random or short-term periodic current fluctuations. The negative derivative of the current with respect to the voltage was fitted with a Gaussian distribution. The fit gives the mean beam energy and its FWHM. \subsection{Transmission} To measure the ion current within the Q Exactive UHMR instrument, we added breakout cables. To this end, we separated the transfer-capillary voltage supply from the S-exit lens. Breakout cables were connected to the S-exit lens, the inter-flatapole lens, the inner Turner-Kruger (TK) lens, and the HCD exit lens. A modified cone gas cap adaptor supplies the transfer-capillary voltage. Each breakout cable connects an RBD 9103 picoammeter to a DC ion optic and the corresponding power supply on the Q Exactive UHMR DC supply board. \newline For the current measurement in the Q Exactive UHMR, we set the DC optic (e.g. the S-exit lens) to an attractive potential and the axis DC of the following RF ion optic (e.g. the injection flatapole) to a repulsive potential. This ensures the entire beam is collected on the DC optic in question. All voltages are listed in Table~\ref{tab:SI_pot_current} in the SI. In the deposition stage, we instead deflected the beam onto the aperture or the energy detector. \newline In the next step, we moved the emitter sideways, away from the transfer capillary, to block the ion beam at a preceding element. The current offset was recorded and the emitter moved back into position. Then, we recorded the current. All values in Fig.~\ref{fgr:transmission} are offset-corrected. \newline We used the heated ESI source for the Rhodamine B and denatured BSA solutions.
The nano-ESI source was used for native ferritin and native BSA. \subsection{Ion beam shape analysis and control} 1. On the front plate: We obtained a 2D image of the front plate with a denatured BSA beam. We chose denatured BSA, as it reproducibly provides an intense and stable ion beam, which allows us to collect high-quality images. To obtain a scanned image, we deflected the beam horizontally and vertically with the steering lenses whilst recording the current on the front plate. A 41$\times$65 pixel scan was obtained in \SI{34}{min}. The image dimensions were then converted from volts to millimetres by calibration with the actual front plate size. The image represents a convolution of the sharp front plate geometry, a function taking only the values 0 and 1, and the ion-beam profile, assumed to have a Gaussian shape. We used a Python script to deconvolve. It employs a binary filter to create a sharp version of the image and then applies the convolution theorem to obtain the beam profile. Finally, we used a low-pass filter to remove high-frequency components originating from the non-periodic image boundary. \newline 2. On a HOPG AFM sample: We deposited \SI{12.5}{\pA h} of GroEL and used a NanoScope MultiMode AFM for imaging. GroEL was prepared as described in the subsection ``Spray solution preparation''. We followed the standard deposition workflow, except for the front-plate voltage: it was at \SI{-10}{\V}, as close as possible to the beam energy of \SI{-7\pm1.6}{\electronvolt} per charge, to minimise the deposition spot size. We acquired multiple $5\times5$~\si{\um^{2}} images on a raster around the deposition spot to further assess the protein distribution. We used the dimensions of the cantilever to raster across the surface and reconstruct a density map. We manually counted the number of aggregates in each image. 3. On a TEM sample: \SI{20}{\pA h} of ferritin was deposited on an amorphous carbon TEM grid (AGS160-4, Agar Scientific, Stansted, Great Britain). The front plate was at \SI{-20}{\V}, in contrast to the standard-workflow focus. We used a mixture of apoferritin and holoferritin to obtain high contrast; under the given conditions, only the holoferritin iron cores are visible. TEM images were recorded using an FEI Talos 200c at room temperature. A Python script was used to count the number of particles on the TEM images. By measuring the current on the sample for mass-selected apoferritin and holoferritin, the ratio between them was determined as 40:60, and the particle counts were corrected accordingly. The density was determined on multiple grid squares as the average of the particle counts of three images, divided by the image area. The coordinates of the individual grid squares were obtained according to the grid-square size on a 400-mesh TEM grid. \subsection{Spray solution preparation} We purchased rhodamine B (R6626-25G), bovine serum albumin (BSA, A0281-1G), equine spleen ferritin (F4503-25MG), GroEL (chaperonin 60, C7688-1MG), and baker's yeast alcohol dehydrogenase (A7011-15KU) from Sigma Aldrich (Darmstadt, EU). The ferritin and GroEL preparation has already been described \cite{esser_mass-selective_2021}. We dissolved rhodamine B in 80:20 \ce{H2O}:iPrOH to \SI{1E-4}{\mole\per\liter}. We made a denatured \SI{4E-6}{\mole\per\liter} BSA solution for AFM deposition in 73:23:3 MeOH:\ce{H2O}:HCOOH. For all other denatured BSA measurements, we used a \SI{3E-6}{\mole\per\liter} 100:100:1 ACN:\ce{H2O}:HCOOH solution.
We desalted native BSA and ADH twice with size-exclusion chromatography columns (P6, 7326222, Biorad, Hercules, USA). These were equilibrated with \SI{0.2}{\mole\per\liter} ammonium acetate (A2706-100ML, Sigma Aldrich). The resulting concentrations were \SIrange{2e-6}{5e-6}{\mole\per\liter}. For all preparations, deionised water with $\rho \geq \SI{18.2}{\mega\ohm\centi\metre}$, filtered through a \SI{0.22}{\um} filter, was used. All other solvents were MS grade from changing suppliers. \subsection{AFM analysis} Prior to deposition, each highly oriented pyrolytic graphite chip (HOPG, MikroMasch, Sofia, EU) was cut into 5$\times$\SI{5}{\mm} pieces and glued with leit-silver (09937, Sigma Aldrich) onto a stainless-steel AFM support. We used a MultiMode AFM (asmicro, Indianapolis, USA) with a Scout 350 silicon tip (Nunano, Bristol, Great Britain) in tapping mode at room temperature. The AFM images were further processed with Gwyddion. We used the graphite step edges for height calibration. We selected the highest point of each protrusion as the height measurement. \subsection{TEM} Ferritin (F4503-25MG) and GroEL (chaperonin 60, C7688-1MG) samples were purchased from Sigma Aldrich. Sample preparation was carried out using a standard native MS workflow, including exchange of the buffer to volatile ammonium acetate, as described before \cite{esser_mass-selective_2021}. All samples were imaged using a Talos Arctica 200 kV (Thermo Fisher Scientific), and images were processed using RELION 3.1, as described in \citeauthor{esser_mass-selective_2021}. For staining, \SI{30}{nm} amorphous carbon TEM grids (AGS160-4H, Agar Scientific) were plasma-cleaned before deposition. After deposition, dry grids were placed on \SI{25}{\ul} of 2\,\% uranyl acetate, blotted, and left to dry. A control sample was prepared by applying \SI{4}{\ul} of \SI{10}{\micro\mole\per\liter} ferritin in PBS to the grid for \SI{2}{min}, followed by blotting, washing, and staining as described above. \subsection{Retention of enzymatic activity} The workflow we developed combines ES-IBD with an adapted photometric alcohol dehydrogenase detection kit (ab102533, Abcam, Cambridge, Great Britain). Principle: the ADH-catalysed oxidation of propan-2-ol yields NADH and propanone: \ce{NAD+ + Propan-2-ol <--> NADH + Propanone}. NADH reacts with a colorimetric probe to form a bright yellow complex analysed at $\lambda = \SI{450}{nm}$. Whilst the manufacturer does not specify the exact mechanism of the kit, it is most likely based on the WST-8 to WST-8-formazan reaction \cite{chamchoy_application_2019}. Preparation: All micro-centrifuge tubes and pipette tips were standard polypropylene (PP). All solutions were shielded from direct light and kept on ice, except where mentioned. Each kit was reconstituted according to the manual \cite{noauthor_ab102533_2014}, divided into four aliquots, and refrozen at \SI{-20}{\celsius}. On the day of the deposition, we thawed one kit aliquot and the desalted ADH spray solution. We prepared two positive-control ADH solutions from crystalline ADH in the supplied buffer to theoretical in-well activities of \SI{4E-10}{\mole\per\minute} and \SI{4E-9}{\mole\per\minute}. Reaction-mix and background-control solutions were prepared as in the manual and kept at room temperature. All solutions were prepared for a \SI{150}{\ul} total volume in the well. This is made up of \SI{50}{\ul} of active solution (buffer for the blank, buffer for extraction, or positive control) and \SI{100}{\ul} of either reaction mix (with the substrate propan-2-ol) or background-control mix (no substrate).
We measured the absorbance of the ADH spray solution and determined the concentration with a calculated absorbance coefficient of \SI{195440}{\liter\per\mole\per\centi\m}. Based on this concentration, we prepared spray-solution positive controls with the same theoretical in-well activity as the other two positive controls. Deposition: We cut a conductive carbon double-sided tape (EM-Tec CT6, 15-000406, Labtech, Heathfield, Great Britain) in half. We removed two-thirds of the protective film on the back and glued it to a stainless-steel AFM support. The entire protective film on the top side was removed and the target installed in the sample holder. We prepared two targets per repetition, one for the sample and one for the background control. To minimise contamination, we immediately installed the sample holder in the deposition vacuum chamber. The deposition followed the standard procedure. The nano-spray needle was protected from direct light. We deposited two tapes with \SI{27}{\ng} (\SI{128}{\pA h}) of ADH for both repetitions (A) and (C). For repetition (B), we deposited two tapes with \SI{22}{\ng} (\SI{102}{\pA h}). The mass was determined based on the total deposited charge, the most abundant charge state, and the molecular weight of ADH. In repetition (B), the deposited amount was lower due to a low sample current. The landing energy was \SI{5}{\electronvolt} per charge. Submersion and measurement: The entire kit, except for the reaction-mix and background-control solutions, was reverse-pipetted into a Corning 3881 96-well non-binding-surface half-area well plate (Corning Inc., Corning, USA). We did this in parallel with the deposition of the second ADH target. This minimises both the time the deposited ADH targets spend in high vacuum and the time they are exposed to the atmosphere. The repetition (C) targets were left for three days in the high-vacuum deposition chamber. Then, we put the two deposited tapes in a well with their empty side facing the wall and the centre optical path free. The wells were already filled with \SI{50}{\ul} of assay buffer to avoid gluing the tapes to the well's wall. We added reaction mix or background-control mix and closed the plate with a transparent lid. A FLUOstar Omega plate reader (BMG LABTECH GmbH, Ortenberg, EU) incubated the sample at \SI{37}{\celsius} and read the absorbance at \SI{450}{\nm} every \SI{3}{min} for \SI{2}{h}. Data analysis: The initial slope of the NADH production (repetition (A): minutes 3 to 15, (B): minutes 0 to 6) was used for the activity calculation. We subtracted the background activity only if it was positive. Due to the high proportion of active ADH after deposition, we had to extrapolate the linear calibration for repetition (A) in the absorbance range from 1.6 to 2.5 (\SI{18}{nmol} NADH). To attenuate the resulting errors, we extended a non-linear calibration for repetition (B) to 2.7 (\SI{15}{nmol} NADH). \pagebreak \begin{acknowledgement} We want to thank the Nanoscale Science Department at the Max Planck Institute for Solid State Research, in particular Artur Küster, for the CAD construction and manufacturing of the deposition stage. We acknowledge support from Thermo Fisher Scientific, who provided the UHMR mass spectrometer within the framework of a technology alliance partnership. TE acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 883387. Competing interests: M.R.S., K.L.F., and A.A.M.
are employees of Thermo Fisher Scientific, the company that commercializes Orbitrap-based mass analyzers. \end{acknowledgement} \clearpage \section{SI} \begin{suppinfo} \setcounter{page}{1} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\theHfigure}{Supplement.\thefigure} \subsection{Beam energy} \begin{figure} \includegraphics[width=.9\textwidth]{UHMR_landing_potentials.pdf} \caption{DC potentials applied within \textbf{a} the mass spectrometer and \textbf{b} the custom landing stage for higher and lower DC gradients.} \label{fgr:UHMR_landing_potentials_SI} \end{figure} The ion beam is thermalised in the HCD cell at approximately $10^{-2}$~\si{mbar}. The total ion-beam energy ($E_\mathrm{tot}$) is close to the effective potential therein (\SI{-5}{\electronvolt} per charge). When the ions leave the HCD cell, they are accelerated in the electrostatic lens. In there, the pressure decreases from the high $10^{-3}$~\si{mbar} range (HCD side) to $10^{-6}$~\si{mbar} (deposition-chamber side). As the electrostatic lens (\SI{60}{mm} for the high-pressure part) is longer than the BSA mean free path (\SI{0.1}{mm} for a native $\mathrm{BSA}^{+14}$ ion at \SI{7e-3}{mbar}), collisions with the background gas occur. For a hard-sphere collision, the kinetic energy $E'$ of an ion after the collision is \cite{douglas_collisional_1992, douglas_mechanism_1982, cooks_collision_1978}: \begin{equation} \label{eq:energy} \frac{E'}{E} = \frac{m_1^2 + m_2^2}{M^2} + \frac{2m_1 m_2}{M^2} \cdot \cos(\theta_{cm}) \end{equation} where $\theta_{cm}$ is the scattering angle in centre-of-mass coordinates, $m_1$ the ion mass, $m_2$ the gas molecule mass, $M = m_1 + m_2$, and $E$ the pre-collision ion kinetic energy. As $m_1 \gg m_2$, the second term is close to zero. Thus, equation \ref{eq:energy} predicts that $E'$ is a fraction of $E$ depending mostly on the ion and gas masses. Fig.~\ref{fgr:energy_loss_SI} compares this effect for heavy and light ions. The decrease in ion kinetic energy $E - E'$ is larger for higher $E$. $E$ in the lab frame is proportional to the potential within the electrostatic lens. The maximum $E$ is \SI{135}{\electronvolt} per charge (strong gradient). As only \SI{8.8}{\electronvolt} per charge (denatured) and \SI{5.2}{\electronvolt} per charge (native) are dissipated in the electrostatic lens (Fig.~\ref{fgr:energy}), few high-energy collisions occur. For each collision, $E'$ is close to the initial kinetic energy. Therefore, $E_\mathrm{tot}$ is lower after passing through stronger gradient conditions. \begin{figure} \includegraphics[width=.9\textwidth]{energy_loss.pdf} \caption{Fraction of kinetic energy retained per collision in \ce{N2} for BSA (mass \SI{66500}{\atomicmassunit}) and iodine (mass \SI{127}{\atomicmassunit}) as a function of the scattering angle. The minimum for BSA is 0.99832 at \SI{180}{\degree}.} \label{fgr:energy_loss_SI} \end{figure} A similar argument applies to the distribution width: arbitrary variations of the impact angle between a gas molecule and an ion cause variations in the scattering angle. The second term in equation \ref{eq:energy} varies accordingly and again $E'$ is a fraction of $E$, meaning that the absolute variations in ion kinetic energy $E - E'$ are larger for higher $E$. \begin{figure} \includegraphics[width=.9\textwidth]{MS_BSA_SI} \caption{Mass spectra of \textbf{a} denatured BSA and \textbf{b} native BSA. Both mass spectra are acquired with the non-activating conditions used for deposition.} \label{fgr:MS_BSA_SI} \end{figure}
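As a numerical check of equation \ref{eq:energy}, the following minimal sketch evaluates the retained energy fraction at head-on impact for the two masses shown in Fig.~\ref{fgr:energy_loss_SI}:
\begin{verbatim}
import math

def retained_fraction(m_ion, m_gas, theta_deg):
    """E'/E after a hard-sphere collision (SI equation)."""
    M = m_ion + m_gas
    return ((m_ion**2 + m_gas**2) / M**2
            + 2 * m_ion * m_gas / M**2 * math.cos(math.radians(theta_deg)))

print(retained_fraction(66500, 28, 180))  # BSA in N2:    ~0.99832
print(retained_fraction(127, 28, 180))    # iodine in N2: ~0.408
\end{verbatim}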
\newpage \subsection{Transmission} Native BSA emission current for the +15 charge state and a nano-ESI flow rate of \SI{1}{\ul\per\hour}: \begin{equation} I = \frac{cVzF}{t} = \frac{\SI{3e-6}{\mole\per\liter} \cdot \SI{1e-6}{\liter} \cdot 15 \cdot \SI{96485}{\coulomb\per\mole}}{\SI{3600}{s}} = \SI{1.2}{nA} \end{equation} with the concentration $c$, volume $V$, number of charges $z$, Faraday constant $F$, and time $t$. \begin{table} \centering \begin{tabular}{llll} \textbf{Measured at} & \textbf{$\Phi$ Optic (V)} & \textbf{Repulsive Optic (RO)} & \textbf{$\Phi$ RO (V)} \\ \hline \hline Emitter & $\approx$1200 & n/a & n/a \\ Transfer capillary & 21 & n/a & n/a \\ S-exit lens & -100 & Injection flatapole DC & 50 \\ Inter-flatapole lens & -50 & Bent flatapole & 50 \\ Inner TK lens & 0 & Outer TK & 60 \\ Aperture & -60 & steer beam & n/a \\ Current detector & -20 & steer beam & n/a \end{tabular} \caption{Potentials applied to the ion optics when measuring the current. The emitter potential is for the nano-spray setup.} \label{tab:SI_pot_current} \end{table} \newpage \subsection{Ion beam size} SIMION simulations are consistent with the observations in Fig.~\ref{fgr:spotsize}. However, since the angular velocity distribution of the beam upstream of the sample holder cannot be determined experimentally, a quantitative comparison of simulated and observed beam profiles is currently not meaningful. \begin{figure} \includegraphics[width=.7\textwidth]{spot_size_SI-crop} \caption{Exemplary AFM and TEM data for the beam-shape characterisation. \textbf{a)} and \textbf{b)} show low (250 particles per \textmu m$^2$) and high (1500 particles per \textmu m$^2$) density areas on the TEM grid. \textbf{c)} shows an area in the centre of the deposition spot with more than monolayer coverage. Therefore, the line profile \textbf{d)} shows no well-defined baseline. \textbf{e)} shows a less dense area with 4 particles per \textmu m$^2$. The corresponding line profile \textbf{f)} shows a clear baseline and allows the particle heights to be determined. } \label{fgr:spotsize_SI} \end{figure} \newpage \subsection{Mass filtering and solution composition} \begin{figure} \includegraphics[width=0.5\textwidth]{20190719_bsa_native_sample_end009.jpg} \caption{Native BSA on HOPG imaged with ambient AFM.} \label{fgr:native_HOPG_SI} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{20190919_bd9a036flat.jpg} \caption{Denatured BSA on HOPG imaged with ambient AFM.} \label{fgr:denatured_HOPG_SI} \end{figure} \newpage \subsection{TEM} \begin{figure} \includegraphics[width=.6\textwidth]{TEM_micrographs_SI-crop} \caption{Negative-stain apo/holo-ferritin control sample. The 2D classes show the same characteristic features as observed for the native ES-IBD samples, though they are better defined and there is less deformation in the control, despite the lower number of particles. } \label{fgr:TEM_SI} \end{figure} \newpage \subsection{ADH activity} \textbf{Method for extraction:} Before we established the successful protocol, we tried to extract the deposited ADH. All steps were the same as described for the submersion, except: we deposited twice \SI{27}{\ng} (\SI{128}{\pA h}) and twice \SI{37}{\ng} (\SI{175}{\pA h}) on four amorphous carbon EM grids (AGS160-4H, Agar Scientific Ltd, Stansted, Great Britain). Each \SI{27}{\ng} grid was left for two days after deposition under ambient conditions.
Then, we put each grid in \SI{50}{\ul} of assay buffer in a PP micro-centrifuge vial. One \SI{27}{\ng} grid was vortexed for \SI{15}{min}, the other sonicated for the same time. Then, we transferred the extract into the 96-well plate. The two \SI{37}{\ng} grids were transferred immediately after deposition into a well with \SI{50}{\ul} of assay buffer. We moved them around for \SI{6}{min} with tweezers to wash the ADH off. Then, we removed the grids. \textbf{Results:} Neither sonication-, vortex-, nor wash-off-extracted ADH from EM grids was active (Fig.~\ref{fgr:extraction_SI}). Submersion of the EM grid was not possible due to high background activity. We attribute this to a redox reaction between the kit's components and the grid's copper support, which turned dull. Submerged conductive carbon tape was inert and was used for all further work (Fig.~\ref{fgr:submersion_blanks_SI}). To check if the adapted assay protocol was working, we used the nano-electrospray source to deposit two tapes at atmospheric pressure. The sample activity was \SI{2.1}{\milli\U}; no background activity was present (Fig.~\ref{fgr:ESI_depo_SI}). Although the charge can be measured, a recovery calculation is not possible due to the unknown composition of the ion-droplet plume. \begin{figure} \includegraphics[width=.85\textwidth]{extraction.pdf} \caption{Assay-buffer extraction of \SI{128}{\pA h} of deposited ADH on an EM grid with either vortexing or sonication yields blank activity. The same applies to washing off \SI{175}{\pA h} of deposited ADH on EM grids. We tried to wash the ADH off by moving the deposited grid around with tweezers in assay buffer. } \label{fgr:extraction_SI} \end{figure} \begin{figure} \includegraphics[width=.85\textwidth]{submersion_blanks.pdf} \caption{Submersion of an EM grid in the assay reaction mix causes a strong increase in absorbance; conductive tape does not.} \label{fgr:submersion_blanks_SI} \end{figure} \begin{figure} \includegraphics[width=0.48\linewidth]{20211001_15h13n_NADH.pdf} \caption{Production of NADH by ADH after electrospray deposition at atmospheric pressure. The broken lines stagnating at the offset level are background controls, so the NADH production is specific to ADH activity.} \label{fgr:ESI_depo_SI} \end{figure} \begin{figure} \includegraphics[width=.85\textwidth]{vacuum_storage.pdf} \caption{A \SI{128}{\pA h} deposition (repetition C) retains no activity after three days of storage in vacuum. The corresponding control was inactive as well, but showed a high absorbance due to the conductive tape having moved into the beam path (see Fig.~\ref{fgr:blockage_SI}). } \label{fgr:storage_SI} \end{figure} \begin{figure} \includegraphics[width=.85\textwidth]{vacuum_storage_optical_path_blocked.jpg} \caption{Left: the three-day-storage background control (repetition C) moved into the plate-reader optical path. Note that the liquid is still yellow, meaning no reaction has occurred during incubation. Right: a 2D scan of the well bottom confirms that the absorbance is still at the blank level after incubation.} \label{fgr:blockage_SI} \end{figure} \end{suppinfo} \newpage
\section{Introduction} \qquad In recent years, there has been a great deal of interest in non-commutative (NC) spaces in connection with string theory. Common to many of these studies is that the non-commutativity stems from the D-brane physics in the presence of a B-field \cite{SW}. Similar NC structures have been applied to Calabi-Yau compactifications. The underlying idea in this context is to express the non-commutativity in terms of discrete isometries of orbifolds. This was successfully done in \cite{BL} for the quintic, and it has been extended to K3 surfaces \cite{KL, BMR} and higher-dimensional orbifolds \cite{BS1}. The NC aspect of such hypersurfaces is also important for the stringy resolution of singularities, as it offers an alternative to the standard resolutions obtained by deformations of the complex or K\"{a}hler structures of the Calabi-Yau manifolds. An objective of the present paper is to develop a new and essentially non-geometric approach to NC Calabi-Yau manifolds, based on ideas from quantum mechanics. This also offers a new take on the moduli space of resolved singularities. In this paper, a moduli space is meant to denote a space spanned by the degrees of freedom in the system. Our focus is on $ADE$ geometries, which we represent by certain holomorphic, partial differential operators. Such an operator acts on the space of holomorphic functions $\Psi$ on $\mathbb{C}^{3}$, thereby defining a wave equation. The spectrum of wave functions solving this equation is accordingly interpreted as the moduli space of the NC elevation of the associated $ADE$ geometry. We consider in some detail the case $A_1$, and we find that it is linked to the Whittaker differential equation. Our wave-functional approach may be applied to more general geometries than the $ADE$ spaces. It thus offers a whole new description of NC elevations of ordinary geometries. A more general exposition may be found in \cite{work}, while a more conventional approach to NC Calabi-Yau manifolds may be found in \cite{ss,Be}. The present paper is organized as follows: In Section 2, we outline how our NC elevations of ordinary geometries mimic the quantization of classical mechanics in the hamiltonian formalism. Section 3 concerns the quantization of the $ADE$ geometries, while the wave-equation representations of the resulting NC geometries are discussed in Section 4. Section 5 contains some concluding remarks. \section{Basic correspondence} Our construction of NC $ADE$ geometries as elevations of ordinary, commutative $ADE$ geometries is based on an extension of the relation between classical and quantum mechanics. We are thus exploring a similarity between commutative $ADE$ geometries and classical mechanics on the one hand, and NC $ADE$ geometries and quantum mechanics on the other hand. The basic ideas are outlined in the following. \subsection{Ordinary $ADE$ geometries and classical mechanics} A hypersurface in the three-dimensional complex space $\mathbb{C}^3$ generated by $(z^1,z^2,z^3)=(x,y,z)$ may be described by an algebraic equation of the form \begin{equation} V(x,y,z)=\epsilon. \label{Veps} \end{equation} The explicit (polynomial) potentials $V(x,y,z)$ of our interest will be specified below. The parameter $\epsilon$ is independent of the complex coordinates $(x,y,z)$, and may be seen as parameterizing the family or orbit of hypersurfaces characterized by a given potential $V$. It is observed that a point $(x,y,z)\in\mathbb{C}^3$ can lie on at most one hypersurface in a given orbit.
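To make the orbit structure concrete, consider a simple quadratic potential as an illustration (our choice here; the $ADE$ potentials and their normalizations used below are specified later): \[ V(x,y,z)=x^2+y^2+z^2, \qquad V(x,y,z)=\epsilon . \] Each point of $\mathbb{C}^3$ then determines the unique value $\epsilon=V(x,y,z)$, and hence the unique member of the orbit passing through that point.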
Since $\epsilon$ is constant, a point $(x_0,y_0,z_0)$ on the hypersurface (\ref{Veps}) is a singular point if the gradient $\nabla V=(\partial_xV,\partial_yV,\partial_zV)$ vanishes at that point: \begin{equation} \nabla V(x_0,y_0,z_0)=(0,0,0). \label{nablaV} \end{equation} It should be evident that the above extends to hypersurfaces in the higher-dimensional complex spaces $\mathbb{C}^n$. The $ADE$ geometries of our interest are all singular at exactly one point. The associated potentials will be chosen such that the $ADE$ geometries correspond to $\epsilon=0$ in their respective orbits, and such that the singularities appear at the origin $(x_0,y_0,z_0)=(0,0,0)$. The hamiltonian description of classical mechanics is quite analogous. One may be interested in characterizing the configurations corresponding to a fixed energy $E$. This amounts to solving the equation \begin{equation} \mathcal{H}(q_1,\ldots,q_n,p_1,\ldots,p_n)=E \label{HE} \end{equation} where $\mathcal{H}$ denotes the hamiltonian. The solutions define a hypersurface in the $2n$-dimensional phase space. Up to some well-known signs, Hamilton's equations express the time derivatives of the canonical coordinates $q_j,p_j$, $j=1,\ldots,n$, in terms of the gradient of the hamiltonian. A singular point of the fixed-energy hypersurface thus corresponds to the simultaneous vanishing of all these time derivatives. With this analogy we are thus considering a similarity between the orbit of hypersurfaces based on $V$ and the physical system described by $\mathcal{H}$. The individual hypersurfaces characterized by $\epsilon$ play the roles of specific energy levels given by $E$. Singularities appear where the associated gradients vanish. There are of course important differences and further similarities between these two scenarios. Here, though, we will not be concerned with them. Rather, our objective is to explore the consequences of mimicking the quantization of classical mechanics in the realm of hypersurfaces of the form (\ref{Veps}). \subsection{NC $ADE$ geometries and quantum mechanics} We will consider a combination of the Heisenberg and Schr\"odinger pictures of quantization. As part of our construction we thus replace the complex coordinates $z^j$ by holomorphic operators $Z^j$ in analogy with the promotion of the canonical variables to quantum operators. We also borrow the idea of promoting the classical hamiltonian to a differential operator acting on a space of wave functions where the eigenfunctions correspond to the stationary states, whereas the eigenvalues represent the allowed energy levels. In our case, the potential $V$ is replaced by a differential operator whose eigenfunctions will be certain holomorphic functions, while the eigenvalues will label the NC geometries we may represent in this picture. The geometric analogue of the Schr\"odinger wave equation which we will discuss reads \begin{equation} V(X,Y,Z)\Psi=\epsilon\Psi. \label{VPsi} \end{equation} We naturally require that this reduces to (\ref{Veps}) in the classical (commutative) limit. We may decompose the `quantum potential' $V(X,Y,Z)$ as \begin{equation} V(X,Y,Z)=H+V(x,y,z) \label{VHV} \end{equation} where the holomorphic (partial) differential operator $H$ vanishes in the classical limit: \begin{equation} H\rightarrow0. \label{class} \end{equation} The `geometric hamiltonian' $H$ is constructed by replacing the coordinates $z^j$ by a differential-operator realization of the NC coordinates $Z^j$ subject to a normal-ordering procedure to be discussed below. 
This also justifies the use of the same symbol $V$ to denote the quantum potential as in the classical case. Keeping the decomposition (\ref{VHV}) and the classical limit (\ref{class}) in mind, the partial differential operators $Z^j$ may be viewed as NC perturbations of the original commutative coordinates $z^j$ where the perturbative terms vanish in the classical limit. We will derive and study differential equations of the form \begin{equation} H(x,y,z;\partial_x,\partial_y,\partial_z)\Psi(x,y,z)= \left(\epsilon-V(x,y,z)\right)\Psi(x,y,z), \label{HPsiVPsi} \end{equation} where the $ADE$ geometries correspond to $\epsilon=0$. Deriving these equations essentially amounts to devising an appropriate normal ordering of the NC coordinates. This is discussed in the following section. Working out the corresponding quantum moduli space amounts to finding the spectrum of eigenfunctions $\Psi$ in (\ref{HPsiVPsi}). This is a highly non-trivial task as it requires solving complicated partial differential equations. The general solution is beyond the scope of the present work, though we will present the solution in the case of $A_1$. In brief, our construction offers a novel and essentially non-geometric representation of NC $ADE$ geometries as holomorphic wave equations on $\mathbb{C}^{3}$. The associated moduli spaces of the NC $ADE$ geometries are given in terms of the spectrum of holomorphic waves solving these differential equations. It is well known that a singular $ADE$ geometry may be deformed by adding a polynomial term $f(x,y,z)$ to the defining potential $V(x,y,z)$, where either $f(0,0,0)\neq0$ or $\nabla f(0,0,0)\neq0$. The corresponding NC elevation is represented by a wave equation of the form \begin{equation} H(x,y,z;\partial_x,\partial_y,\partial_z)\Psi(x,y,z)= -(V(x,y,z)+f(x,y,z))\Psi(x,y,z). \end{equation} NC elevations of such deformations will not be considered further here. \section{Quantization of $ADE$ geometries} For the sake of simplicity we will limit our analysis to complex K3 surfaces, an important example of compact Calabi-Yau manifolds. These surfaces play a crucial role in the study of type II superstrings and in the geometric engineering of quantum field theories embedded in superstring theory \cite{KKV,KMV,BFS,BS2}. A K3 surface can have singularities corresponding to contracting two-spheres. The intersection matrix of these two-spheres is then given by the Cartan matrix of a Lie algebra, and one may naturally distinguish between three types of singularities \cite{ABS}, namely (\textbf{a}) \textsl{ordinary singularities} classified by the ordinary $ADE$ Lie algebras, (\textbf{b}) \textsl{affine singularities} classified by the affine $\widehat{ADE}$ Kac-Moody algebras, and (\textbf{c}) \textsl{indefinite singularities} classified by the indefinite Lie algebras. Here we focus on ordinary $ADE$ singularities, while a similar analysis is possible for the affine extensions. Near such an ordinary singularity, a K3 surface may be viewed as an ALE space defined by an orbifold structure $\mathbb{C}^{2}/G$, where $G$ is a discrete group depending on the $ADE$ singularity in question. These orbifolds can be expressed as hypersurfaces in $\mathbb{C}^{3}$ (\ref{Veps}) where $\epsilon=0$, \begin{equation} V(x,y,z)=0, \label{ADE0} \end{equation} and with potentials given by \cite{HOV}\footnote{We have chosen to represent the $A$-series by $x^2+y^2+z^n$ instead of $uv+z^n$.
The two representations are related by the invertible transformation $(x,y)\rightarrow(u,v)=(x+iy,x-iy)$. As will become clear, the choice $(x,y)$ renders the quantization straightforward.} \begin{eqnarray} A_{n-1}:&&\ \ V_{A_{n-1}}(x,y,z):=x^2+y^2+z^{n}, \notag \\ D_{n}:&&\ \ V_{D_{n}}(x,y,z):=x^{2}+y^{2}z+z^{n-1}, \notag \\ E_{6}:&&\ \ V_{E_{6}}(x,y,z):=x^{2}+y^{3}+z^{4}, \nonumber \\ E_{7}:&&\ \ V_{E_{7}}(x,y,z):=x^{2}+y^{3}+yz^{3}, \notag \\ E_{8}:&&\ \ V_{E_{8}}(x,y,z):=x^{2}+y^{3}+z^{5}. \label{ADE} \end{eqnarray} The indices indicate the ranks of the Lie algebras. As already mentioned, these hypersurfaces have singularities at the origin of $\mathbb{C}^{3}$. It is also well known that the singularity of any one of these hypersurfaces may be resolved in two ways, either by deforming the complex structure of the surface (changing the shape), or by varying its K\"{a}hler structure (changing the size). Here we are not interested in such deformations or resolutions, but rather in a NC elevation of the $ADE$ spaces defined by the polynomial constraint equations (\ref{ADE0},\ref{ADE}). Following the previous section, we will introduce a quantization procedure in which these NC $ADE$ spaces are constructed by imposing polynomial constraints similar to (\ref{ADE0},\ref{ADE}) on a NC generalization of $\mathbb{C}^{3}$. It is noted that one may also consider K3 surfaces with singularities described by the $BCFG$ Lie algebras. The potentials defining these complex surfaces are in general multiple-valued functions \cite{BFS,BS2}, unlike the $ADE$ cases in (\ref{ADE}), and will not be discussed here. We now turn to the construction of the NC embedding space. The ordinary $n$-dimensional complex space $\mathbb{C}^{n}$ is parameterized by the $n$ complex variables $z^{j}$, $j=1,\dots ,n$. We will parameterize its NC counterpart $\mathbb{C}_{\Theta}^{n}$ by $Z^{j}$, $j=1,\dots ,n$, satisfying \begin{equation} \lbrack Z^{j},Z^{k}]=2\Theta^{jk}, \label{ZZ} \end{equation} where $\Theta^{jk}$ is an anti-symmetric complex tensor. We wish to add some comments on this non commutativity. The first comment concerns the fact that all anti-symmetric matrices of odd dimension are singular, i.e., not invertible. A real hamiltonian system, on the other hand, is always even-dimensional since each position variable is accompanied by a conjugate momentum variable. The singular property of $\Theta$ thus restricts the way the operators $Z^{j}$ may be expressed in terms of `phase-space' variables, see (\ref{ZZP}). The second comment concerns the complex nature of the structure constants $\Theta^{jk}$. They can be viewed as a complexification of the real Seiberg-Witten parameters known to be related to the NS-NS B-field in the description of real Moyal space \cite{SW}. In type IIB superstring theory this complexity may have its origin in terms of a complexified K\"ahler form $\mathcal{J}=J_{K}+iB_{NS}$ or in terms of a complex combination of the two B-fields (RR and NS-NS), i.e., $B=B_{NS}+iB_{R}$. The third comment is that the parameters $\Theta^{jk}$ play a role similar to (the normalized) Planck's constant $\hbar$ appearing in the Heisenberg commutation relations of non-relativistic quantum mechanics: \begin{equation} \lbrack \mathcal{P},\mathcal{X}]=-i\hbar. \end{equation} Here $\mathcal{X}$ and $\mathcal{P}$ are the usual position and momentum operators, respectively. As is well known, they admit a representation in which $\mathcal{X}=x$ and $\mathcal{P}=-i\hbar \partial_{x}$. 
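This correspondence is easy to verify mechanically, and the same computer-algebra pattern is reused for the NC coordinates below. The following minimal sympy sketch is our own verification aid, not part of the construction; it applies both operator orderings to a generic test function: \begin{verbatim}
# Symbolic check of [P, X] = -i*hbar in the representation X = x,
# P = -i*hbar*d/dx, applied to a generic test function f(x).
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

opX = lambda h: x*h                        # position operator
opP = lambda h: -sp.I*hbar*sp.diff(h, x)   # momentum operator

print(sp.simplify(sp.expand(opP(opX(f)) - opX(opP(f)) + sp.I*hbar*f)))  # 0
\end{verbatim}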
Motivated by this, we wish to realize the NC coordinates $Z^j$ (satisfying (\ref{ZZ})) in terms of a linear combination of $2n$ `phase-space' variables, $\mathcal{Z}^j$ and $\mathcal{P}_j$, as follows \begin{equation} Z^j=\mathcal{Z}^j+\sum_{k=1}^{n}\Theta^{jk}\mathcal{P}_k \label{ZZP} \end{equation} where $\mathcal{P}_j$ and $\mathcal{Z}^k$ are quantum operators satisfying \begin{equation} [\mathcal{P}_j,\mathcal{Z}^k]=\delta_j^k, \qquad [\mathcal{Z}^j,\mathcal{Z}^k]=[\mathcal{P}_j,\mathcal{P}_k]=0. \label{PZdelta} \end{equation} Representing the variables $(\mathcal{Z}^j,\mathcal{P}_j)$ as $(z^j,\frac{\partial }{\partial z^j})$, we may represent the NC coordinates, $Z^j$, as first-order differential operators: \begin{equation} Z^j=z^j+\sum_{k=1}^{n}\Theta^{jk}\partial_{k},\qquad \ \ \ \ \ \partial_k=\frac{\partial}{\partial z^k}. \label{Zdiff} \end{equation} In this representation, the NC coordinates are thus seen to act as non-trivial, holomorphic, partial differential operators on the space of holomorphic functions $\Psi(z^1,\ldots,z^n)$ on $\mathbb{C}^{n}$, and we are one step closer to the geometric analogue of the Schr\"odinger equation discussed above. For invertible $\Theta^{jk}$, in which case the dimension $n$ must be even, one may introduce the `gauge potential' $A_j=\sum_{k=1}^n\Theta_{jk}z^{k}$. The realization (\ref{Zdiff}) may then be re-expressed in terms of the `covariant' derivative $D_j=\partial_j+A_j$ as \begin{equation} D_j=\sum_{k=1}^n\Theta_{jk}Z^{k}. \end{equation} This indicates that the NC elevation in these cases behaves like switching on an external constant magnetic field $B^{i}=\varepsilon^{ijk}\partial_{j}A_{k}=-\varepsilon^{ijk}\Theta_{jk}$. Since our main interest is the case $n=3$, we will not elaborate on this observation. Referring to the notation in (\ref{ADE}), we will parameterize $\mathbb{C}_{\Theta}^{3}$ by the NC coordinates $(X,Y,Z)$ satisfying \begin{equation} \lbrack X,Y]=2\alpha ,\ \ \ \ \ \ \ [Y,Z]=2\beta ,\ \ \ \ \ \ \ [Z,X]=2\gamma \label{abc} \end{equation} where $\alpha,\beta,\gamma$ are (commutative) structure constants. It is noted that this algebra is equivalent to a central extension of the direct sum of three $u(1)$s. That is, the $u(1)$s are originally generated by (the commuting variables) $X,Y,Z$ while the central element is denoted $I$. To complete the interpretation of (\ref{abc}) as this central extension, the structure constants appearing on the right-hand sides of the commutators should all be multiplied by $I$. Following (\ref{ZZP}) and (\ref{Zdiff}), the representations of $(X,Y,Z)$ of our interest now read \begin{eqnarray} X &=&\mathcal{X}+\alpha \mathcal{P}_{y}-\gamma \mathcal{P}_{z}\ =\ x+\alpha \partial _{y}-\gamma \partial _{z}, \notag \\ Y &=&\mathcal{Y}+\beta \mathcal{P}_{z}-\alpha \mathcal{P}_{x}\ =\ y+\beta \partial _{z}-\alpha \partial _{x}, \notag \\ Z &=&\mathcal{Z}+\gamma \mathcal{P}_{x}-\beta \mathcal{P}_{y}\ =\ z+\gamma \partial _{x}-\beta \partial _{y}. \label{XYZ} \end{eqnarray} It is emphasized that these operators act on the local coordinates as $[X,x]=0$, $[X,y]=\alpha $, $[X,z]=-\gamma$ etc. It is also noted that one may consider various degrees of non commutativity corresponding to \begin{eqnarray} \alpha &\neq &0,\qquad \beta =\gamma =0,\qquad \text{or a cyclic permutation,} \notag \\ \alpha \beta &\neq &0,\qquad \gamma =0,\qquad \qquad \text{or a cyclic permutation,} \nonumber \\ \alpha \beta \gamma &\neq &0.
\label{ca} \end{eqnarray} The remaining case where $\alpha=\beta=\gamma=0$ merely corresponds to classical geometry. Obviously, the possibility $\alpha \beta \gamma \neq 0$ has the highest degree of non commutativity. Our next objective is to define the NC elevation of the potentials $V(x,y,z)$. As in other `quantization' schemes, the naive substitution \begin{equation} (x,y,z)\rightarrow (X,Y,Z) \label{xyz} \end{equation} is ambiguous due to the simple fact that $xy=yx$ while $XY\neq YX$ if $\alpha\ne0$, for example, and one is faced with an ordering problem. According to (\ref{ADE}), we need to treat $y^{2}z$ and $yz^{3}$, as these monomials appear in the $D_n$ and $E_7$ potentials, respectively. To this end, and to put it into a more general context, we introduce the homogeneous polynomials \begin{equation} M_{m}(u,v)=\sum_{j=0}^{m}a_{j}u^{j}vu^{m-j},\ \ \ \ \ \ \ \ \ \ \ \sum_{j=0}^{m}a_{j}=1 \label{M} \end{equation} of degree $m+1$ where $m$ is a non-negative integer. The arguments, $u$ and $v$, may be NC variables, and $M_{m}(u,v)$ is seen to reduce to the monomial $u^{m}v$ if $[u,v]=0$. Now, in our case we are thus interested in \begin{eqnarray} M_{m}(X,Y) &=&\sum_{j=0}^{m}a_{j}X^{j}YX^{m-j} \notag \\ &=&\sum_{j=0}^{m}a_{j}(\mathcal{X}+\alpha \mathcal{P}_{y} -\gamma \mathcal{P}_{z})^{j}(\mathcal{Y}+\beta \mathcal{P}_{z}-\alpha \mathcal{P}_{x})(\mathcal{X} +\alpha \mathcal{P}_{y}-\gamma \mathcal{P}_{z})^{m-j}, \label{MXY} \end{eqnarray} and we find that it may be written in the following form: \\[0.2cm] \textbf{Lemma} \begin{eqnarray} M_{m}(X,Y) &=&\mathcal{Y}X^{m}+X^{m}(\beta \mathcal{P}_{z}-\alpha \mathcal{P}_{x})+\alpha \sum_{j=0}^{m}(2j-m)a_{j}X^{m-1}, \notag \\ M_{m}(X,X) &=&X^{m+1} . \label{lemma} \end{eqnarray} Up to commutative (hence trivial) re-arrangements (within $X^{s}$), the ordering of the right-hand side has phase-space coordinates to the left of the phase-space momenta. We will refer to this ordering as \emph{normal ordering}. Our proposal for a `natural' quantization procedure that elevates an ordinary hypersurface to a NC hypersurface now goes as follows. Let the classical hypersurface be defined by the vanishing of a polynomial, as in the case of the $ADE$ manifolds (\ref{ADE0},\ref{ADE}). Since our prime goal is to construct NC elevations of these $ADE$ spaces, we will restrict ourselves to the situation where each monomial summand is of the form $x^{m}y$ or $z^{s}$, or similar monomials in $\{x,y,z\}$ obtained by replacing $x$, $y$ or $z$ by one of the other coordinates. Each of these monomials is then replaced by the most general homogeneous polynomial in the corresponding NC coordinates $\{X,Y,Z\}$ (as in the first line of (\ref{MXY}), for example), subject to the requirement that its normal-ordered form be itself a homogeneous polynomial in the phase-space variables of the NC coordinates. It is of course also required that the NC polynomial is properly normalized so that it reduces to the original polynomial in the classical limit where $(X,Y,Z)\rightarrow (x,y,z)$. For the class of polynomials $M_{m}(X,Y)$, this means that the right-hand side of (\ref{lemma}) must be homogeneous in the phase-space variables, which is ensured provided \begin{equation} \sum_{j=0}^{m}(2j-m)a_{j}=0. \label{sum0} \end{equation} The symmetrized polynomial, where $a_{0}=\ldots =a_{m}=\frac{1}{m+1}$, is seen to satisfy this condition, showing that a solution exists for all $m$.
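For small $m$ the lemma can be checked mechanically in the representation (\ref{XYZ}). The following sympy sketch is a verification aid of our own, with the operators written as actions on a generic function; it confirms $[X,Y]=2\alpha$ and the $m=1$ normal-ordered form with $a_0=a_1=1/2$: \begin{verbatim}
# Check [X, Y] = 2*alpha and the m = 1 case of the lemma, with
# X = x + alpha*d_y - gamma*d_z and Y = y + beta*d_z - alpha*d_x.
import sympy as sp

x, y, z, al, be, ga = sp.symbols('x y z alpha beta gamma')
f = sp.Function('f')(x, y, z)

opX = lambda h: x*h + al*sp.diff(h, y) - ga*sp.diff(h, z)
opY = lambda h: y*h + be*sp.diff(h, z) - al*sp.diff(h, x)
opB = lambda h: be*sp.diff(h, z) - al*sp.diff(h, x)   # momentum part of Y

print(sp.simplify(sp.expand(opX(opY(f)) - opY(opX(f)) - 2*al*f)))      # 0
sym = (opX(opY(f)) + opY(opX(f)))/2                  # symmetrized M_1(X, Y)
print(sp.simplify(sp.expand(sym - (y*opX(f) + opX(opB(f))))))          # 0
\end{verbatim}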
It may appear surprising, though, that the thus defined set of `quantizations' of a given classical polynomial consists of \emph{one} polynomial only. This follows straightforwardly from the lemma with (\ref{sum0}) imposed, since the first part of the right-hand side of (\ref{lemma}) is \emph{independent} of $\{a_{0},\ldots ,a_{m}\}$, and from the fact that the quantization of $z^{s}$ is trivial. It also indicates that our quantization procedure for a complex hypersurface like the $ADE$ spaces (\ref{ADE0},\ref{ADE}) results in a \emph{unique} NC hypersurface. In brief, the quantization procedure replaces uniquely the classical (commutative) monomials of the form $x^{m}y$ or $z^{s}$ by homogeneous polynomials of degree $m+1$ or $s$, respectively, in the phase-space variables associated to the NC coordinates $\{X,Y,Z\}$. Let us illustrate the uniqueness from the point of view of the relations following from the commutative nature of the structure constants (\ref{abc}). Recall that $\sum_{j=0}^{m}a_{j}=1$ ensures that the NC polynomial reduces to its commutative origin in the classical limit $(X,Y,Z)\rightarrow (x,y,z)$, while $\sum_{j=0}^{m}(2j-m)a_{j}=0$ ensures homogeneity of the NC counterpart of a classical monomial. For $m=1$ there is only one solution to these two constraints: $a_{0}=a_{1}=1/2$. For $m=2$ there is the one-parameter family of solutions \begin{equation} a_{0}=a,\ \ \ \ \ \ \ a_{1}=1-2a,\ \ \ \ \ \ \ a_{2}=a, \label{k2} \end{equation} but \begin{equation} ZY^{2}-2YZY+Y^{2}Z=[Z,Y]Y-Y[Z,Y]=0 \end{equation} according to the aforementioned commutative nature of the structure constants. Likewise for $m=3$, where \begin{equation} a_{0}=b,\ \ \ \ \ a_{1}=\frac{1}{2}-2b+c,\ \ \ \ \ \ \ a_{2}=\frac{1}{2} +b-2c,\ \ \ \ \ \ \ a_{3}=c \label{k3} \end{equation} is the general solution, we have \begin{equation} YZ^{3}-2ZYZ^{2}+Z^{2}YZ=(YZ^{2}-2ZYZ+Z^{2}Y)Z=0, \end{equation} for example. This demonstrates again that our quantization procedure results in a unique NC hypersurface which we may then choose to represent in its symmetrized form (corresponding to $a=1/3$ for $m=2$, and $b=c=1/4$ for $m=3$): \begin{eqnarray} A_{n-1}:&&\ \ V_{A_{n-1}}(X,Y,Z):=X^2+Y^2 +Z^{n}, \notag \\ D_{n}:&&\ \ V_{D_n}(X,Y,Z):= X^{2}+\frac{1}{3}\left( ZY^{2}+YZY+Y^{2}Z\right)+Z^{n-1}, \notag \\ E_{6}:&&\ \ V_{E_6}(X,Y,Z):=X^{2}+Y^{3}+Z^{4}, \nonumber \\ E_{7}:&&\ \ V_{E_7}(X,Y,Z):=X^{2}+Y^{3}+\frac{1}{4}\left( YZ^{3}+ZYZ^{2}+Z^{2}YZ+Z^{3}Y\right), \notag \\ E_{8}:&&\ \ V_{E_8}(X,Y,Z):=X^{2}+Y^{3}+Z^{5}. \label{nc} \end{eqnarray} Upon replacing the operators $X,Y$ and $Z$ by their differential-operator representations given in (\ref{XYZ}), it is straightforward to write down the corresponding holomorphic wave equations (\ref{VPsi}). This is discussed below. It is also stressed that, by construction, the NC nature of the quantized $ADE$ spaces is inherited from the ambient space $\mathbb{C}_\Theta^3$. An immediate way of seeing this is that the non-commutativity in either case is governed by the same set of structure constants $\alpha,\beta,\gamma$.
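The vanishing combinations above are operator identities and can be confirmed symbolically; e.g., for the $m=2$ family (\ref{k2}), the following sketch (again a verification aid of our own, in the representation (\ref{XYZ})) checks that the ambiguity annihilates a generic function: \begin{verbatim}
# Symbolic check that Z*Y^2 - 2*Y*Z*Y + Y^2*Z acts as the zero operator,
# confirming that the one-parameter family (k2) defines a single NC polynomial.
import sympy as sp

x, y, z, al, be, ga = sp.symbols('x y z alpha beta gamma')
f = sp.Function('f')(x, y, z)

opY = lambda h: y*h + be*sp.diff(h, z) - al*sp.diff(h, x)
opZ = lambda h: z*h + ga*sp.diff(h, x) - be*sp.diff(h, y)

expr = opZ(opY(opY(f))) - 2*opY(opZ(opY(f))) + opY(opY(opZ(f)))
print(sp.simplify(sp.expand(expr)))   # 0
\end{verbatim}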
\section{Wave-equation representation} Here we list the differential-operator representations of the NC $ADE$ potentials outlined above: \begin{eqnarray} V_{A_{n-1}}(X,Y,Z)&=&\sum_{j=0}^2\sum_{k=0}^j \left(\begin{array}{cc}2\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\left(\alpha^{j-k}\gamma^k x^{2-j}\partial_y^{j-k}\partial_z^k +\beta^{j-k}\alpha^k y^{2-j}\partial_z^{j-k}\partial_x^k\right) \nonumber\\ &+&\sum_{j=0}^n\sum_{k=0}^j \left(\begin{array}{cc}n\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\gamma^{j-k}\beta^k z^{n-j}\partial_x^{j-k}\partial_y^k, \nonumber \\ V_{D_n}(X,Y,Z)&=&\sum_{j=0}^2\sum_{k=0}^j \left(\begin{array}{cc}2\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\left( \alpha^{j-k}\gamma^kx^{2-j}\partial_y^{j-k}\partial_z^k \right.\nonumber\\ &&+ \left. \beta^{j-k}\alpha^ky^{2-j}\left(z+\gamma\partial_x-\beta\partial_y\right) \partial_z^{j-k}\partial_x^k \right)\nonumber\\ &+&\sum_{j=0}^{n-1}\sum_{k=0}^j \left(\begin{array}{cc}n-1\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\gamma^{j-k}\beta^k z^{n-1-j}\partial_x^{j-k}\partial_y^k, \nonumber \\ V_{E_6}(X,Y,Z)&=&\sum_{j=0}^2\sum_{k=0}^j \left(\begin{array}{cc}2\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\alpha^{j-k}\gamma^k x^{2-j}\partial_y^{j-k}\partial_z^k \nonumber\\ &+&\sum_{j=0}^3\sum_{k=0}^j \left(\begin{array}{cc}3\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\beta^{j-k}\alpha^k y^{3-j}\partial_z^{j-k}\partial_x^k \nonumber\\ &+&\sum_{j=0}^4\sum_{k=0}^j \left(\begin{array}{cc}4\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\gamma^{j-k}\beta^k z^{4-j}\partial_x^{j-k}\partial_y^k, \nonumber \\ V_{E_7}(X,Y,Z)&=&\sum_{j=0}^2\sum_{k=0}^j \left(\begin{array}{cc}2\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\alpha^{j-k}\gamma^k x^{2-j}\partial_y^{j-k}\partial_z^k \nonumber\\ &+&\sum_{j=0}^3\sum_{k=0}^j \left(\begin{array}{cc}3\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\left( \beta^{j-k}\alpha^ky^{3-j}\partial_z^{j-k}\partial_x^k\right. \nonumber\\ &&+\left. \gamma^{j-k}\beta^kz^{3-j}\left(y+\beta\partial_z-\alpha\partial_x\right) \partial_x^{j-k}\partial_y^k\right), \nonumber \\ V_{E_8}(X,Y,Z)&=&\sum_{j=0}^2\sum_{k=0}^j \left(\begin{array}{cc}2\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\alpha^{j-k}\gamma^k x^{2-j}\partial_y^{j-k}\partial_z^k \nonumber\\ &+&\sum_{j=0}^3\sum_{k=0}^j \left(\begin{array}{cc}3\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\beta^{j-k}\alpha^k y^{3-j}\partial_z^{j-k}\partial_x^k \nonumber\\ &+&\sum_{j=0}^5\sum_{k=0}^j \left(\begin{array}{cc}5\\ j\end{array}\right) \left(\begin{array}{cc}j\\ k\end{array}\right) (-1)^k\gamma^{j-k}\beta^k z^{5-j}\partial_x^{j-k}\partial_y^k . \label{ncdiff} \end{eqnarray} The associated wave equations are defined by (\ref{VPsi}) for $\epsilon=0$. Below follows an analysis of the case $A_1$. \subsection{The case $A_1$} We consider \begin{equation} V_{A_1}(X,Y,Z)\Psi=\left(X^2+Y^2+Z^2\right)\Psi=\epsilon\Psi \label{A12} \end{equation} where $\epsilon=0$ corresponds to the NC $A_1$ geometry. 
In terms of differential operators we have \begin{eqnarray} X^2+Y^2+Z^2&=&x^2+y^2+z^2+2(\gamma z-\alpha y)\partial_x +2(\alpha x-\beta z)\partial_y+2(\beta y-\gamma x)\partial_z \nonumber\\ &+&(\alpha^2+\gamma^2)\partial_x^2+(\alpha^2+\beta^2)\partial_y^2 +(\beta^2+\gamma^2)\partial_z^2\nonumber\\ &-&2\beta\gamma\partial_x\partial_y-2\alpha\gamma\partial_y\partial_z -2\alpha\beta\partial_x\partial_z. \label{A12diff} \end{eqnarray} We also introduce the geometric angular momentum \begin{equation} L=(L_x,L_y,L_z)=r\times\nabla,\ \ \ \ \ r=(r_x,r_y,r_z)=(x,y,z),\ \ \ \ \ \nabla=(\partial_x,\partial_y,\partial_z) \label{Lrn} \end{equation} in terms of which the differential-operator representation may be written \begin{eqnarray} V_{A_1}(X,Y,Z)&=&V_{A_1}(x,y,z) +2\left(\alpha L_z+\beta L_x+\gamma L_y\right)\nonumber\\ &+&(\alpha^2+\beta^2+\gamma^2)\nabla^2 -(\alpha\partial_z+\beta\partial_x+\gamma\partial_y)^2 . \label{Ldif} \end{eqnarray} The differential operators involved here are seen to satisfy the following commutator \begin{equation} \left[\alpha L_z+\beta L_x+\gamma L_y,(\alpha^2+\beta^2+\gamma^2)\nabla^2 -(\alpha\partial_z+\beta\partial_x+\gamma\partial_y)^2\right]=0. \label{comm} \end{equation} It is therefore natural to look for an orthonormal coordinate system $(u,v,w)$ in terms of which $V_{A_1}(X,Y,Z)$ is independent of $L_w$. The two remaining coordinates are chosen from a `symmetrical' point of view, as follows \begin{eqnarray} x&=& \frac{\gamma-\alpha}{N_2}u+ \frac{\gamma(\gamma-\beta)+\alpha(\alpha-\beta)}{N_4}v+ \frac{\beta}{N}w,\nonumber\\ y&=&\frac{\alpha-\beta}{N_2}u+ \frac{\alpha(\alpha-\gamma)+\beta(\beta-\gamma)}{N_4}v+ \frac{\gamma}{N}w,\nonumber\\ z&=& \frac{\beta-\gamma}{N_2}u+ \frac{\beta(\beta-\alpha)+\gamma(\gamma-\alpha)}{N_4}v+ \frac{\alpha}{N}w, \label{xyzuvw} \end{eqnarray} where \begin{eqnarray} N_2^2&=&(\gamma-\alpha)^2+(\alpha-\beta)^2+(\beta-\gamma)^2,\nonumber\\ N_4^2&=&(\gamma(\gamma-\beta)+\alpha(\alpha-\beta))^2 +(\alpha(\alpha-\gamma)+\beta(\beta-\gamma))^2 +(\beta(\beta-\alpha)+\gamma(\gamma-\alpha))^2,\nonumber\\ N^2&=&\alpha^2+\beta^2+\gamma^2. \label{NNN} \end{eqnarray} These normalization constants are seen to be related according to \begin{equation} N_4^2=N_2^2N^2. \label{N2} \end{equation} After somewhat tedious computations one finds the following remarkable simplification \begin{equation} V_{A_1}(X,Y,Z)=u^2+v^2+w^2+N^2(\partial_u^2+\partial_v^2). \label{Vuvw} \end{equation} The simplicity of this expression is due to the change of coordinates (\ref{xyzuvw}), exploiting the symmetries of the original differential operator (\ref{A12diff}). In either form, the differential operator represents the NC elevation of the polynomial $V_{A_1}(x,y,z)$. It thus appears in the reduction of $\mathbb{C}_\Theta^3$ to the NC hypersurface defined by $V_{A_1}(X,Y,Z)=0$. Here we are interested in solving the differential equation (\ref{A12}) using (\ref{Vuvw}). The NC system may now be studied by representing the orthonormal coordinates $(u,v,w)$ in cylindrical coordinates \begin{equation} u=\rho\cos(\theta),\ \ \ \ \ \ \ v=\rho\sin(\theta),\ \ \ \ \ \ \ w=w,\ \ \ \ \ \ \ \partial_u^2+\partial_v^2 =\partial_\rho^2+\frac{1}{\rho}\partial_\rho +\frac{1}{\rho^2}\partial_\theta^2, \label{uvrhotheta} \end{equation} in which case the differential equation (\ref{A12}) reads \begin{equation} \left(\rho^2+w^2+N^2 \left(\partial_\rho^2+\frac{1}{\rho}\partial_\rho +\frac{1}{\rho^2}\partial_\theta^2\right)\right)\Psi(\rho,\theta;w) =\epsilon\Psi(\rho,\theta;w).
\label{diffN} \end{equation} Since this equation does not involve derivatives with respect to $w$, we consider the following simple separation of variables \begin{equation} \Psi(\rho,\theta;w)=R_w(\rho)\Upsilon(\theta) \label{sep} \end{equation} with corresponding differential equations \begin{eqnarray} \left(N^2\rho^2\frac{d^2}{d\rho^2}+ N^2\rho\frac{d}{d\rho}+\rho^4+(w^2-\epsilon)\rho^2 -\mu^2\right)R_{w,\mu}(\rho)&=&0,\nonumber\\ \left(\frac{d^2}{d\theta^2}+\frac{\mu^2}{N^2} \right)\Upsilon_\mu(\theta)&=&0. \label{sepdiff} \end{eqnarray} The angular equation is the well-known differential equation for the harmonic oscillator. It has two linearly independent solutions: \begin{equation} \Upsilon_\mu^+(\theta)=e^{i\mu\theta/N},\ \ \ \ \ \ \ \ \ \Upsilon_\mu^-(\theta)=e^{-i\mu\theta/N}. \label{har} \end{equation} After making the substitution \begin{equation} R(\rho)=\frac{1}{\rho}Q(i\rho^2/N) \label{RW} \end{equation} for the radial function, we find the differential equation \begin{equation} \left(\frac{d^2}{d(\frac{i\rho^2}{N})^2} +\left(-\frac{1}{4}+\frac{i(\epsilon-w^2)/(4N)}{i\rho^2/N}+\frac{\frac{1}{4} -\left(\frac{\mu}{2N}\right)^2}{\left(i\rho^2/N\right)^2}\right)\right)Q(i\rho^2/N)=0. \label{Wdiff} \end{equation} This is recognized as the Whittaker differential equation whose two linearly independent solutions may be represented by \begin{equation} M_{\frac{i(\epsilon-w^2)}{4N},\frac{\mu}{2N}}(i\rho^2/N)\ \ \ {\rm and}\ \ \ M_{\frac{i(\epsilon-w^2)}{4N},-\frac{\mu}{2N}}(i\rho^2/N) \end{equation} or in terms of Whittaker's function by \begin{equation} W_{\frac{i(\epsilon-w^2)}{4N},\frac{\mu}{2N}}(i\rho^2/N)\ \ \ {\rm and}\ \ \ W_{-\frac{i(\epsilon-w^2)}{4N},\frac{\mu}{2N}}(-i\rho^2/N). \label{solw} \end{equation} To see this, it is recalled \cite{GR} that the Whittaker differential equation is given by \begin{equation} \frac{d^2W(z)}{dz^2}+\left(-\frac{1}{4}+\frac{\lambda}{z}+ \frac{\frac{1}{4}-\kappa^2}{z^2}\right)W(z)=0, \label{W} \end{equation} and that it has the two linearly independent solutions \begin{eqnarray} M_{\lambda,\kappa}(z)&=&z^{\kappa+\frac{1}{2}}e^{-z/2} \Phi(\kappa-\lambda+\frac{1}{2},2\kappa+1;z),\nonumber\\ M_{\lambda,-\kappa}(z)&=&z^{-\kappa+\frac{1}{2}}e^{-z/2} \Phi(-\kappa-\lambda+\frac{1}{2},-2\kappa+1;z). \label{WM} \end{eqnarray} Here $\Phi(\nu,\tau;z)$ denotes the confluent hypergeometric function sometimes written ${}_1F_1(\nu;\tau;z)$. Whittaker's functions provide solutions suitable for integer $2\kappa$, and are defined by \begin{equation} W_{\lambda,\kappa}(z)=\frac{\Gamma(-2\kappa)}{\Gamma(\frac{1}{2} -\kappa-\lambda)}M_{\lambda,\kappa}(z) +\frac{\Gamma(2\kappa)}{\Gamma(\frac{1}{2} +\kappa-\lambda)}M_{\lambda,-\kappa}(z). \label{WW} \end{equation} Two linearly independent solutions to (\ref{W}) of this kind are given by $W_{\lambda,\kappa}(z)$ and $W_{-\lambda,\kappa}(-z)$. Since the spectrum of solutions to the differential equation (\ref{A12}) is given in terms of the harmonic oscillator and solutions to the Whittaker differential equation, the involved parameters are not constrained by quantization conditions in the usual sense. Rather, the quantization conditions manifest themselves in the {\em form} of the spectrum which in this case comprises a combination of (\ref{har}) and (\ref{solw}). Now that we have the solution to (\ref{A12}) for all $\epsilon$, it is natural to study the limit $\epsilon\rightarrow0$. In the notation of (\ref{W}), the only dependence on $\epsilon$ is through $\lambda$.
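This identification can also be cross-checked numerically. With sample parameter values of our own choosing, the built-in Whittaker function of mpmath can be substituted into both (\ref{W}) and the radial equation (\ref{sepdiff}); a minimal sketch: \begin{verbatim}
# Numerical check that Q(z) = M_{lambda,kappa}(z), with
# lambda = i(eps - w^2)/(4N) and kappa = mu/(2N), solves the Whittaker
# equation (W), and that R(rho) = Q(i*rho^2/N)/rho solves (sepdiff).
import mpmath as mp

mp.mp.dps = 30
N, mu, w, eps = mp.mpf(1), mp.mpf('0.6'), mp.mpf('0.5'), mp.mpf(0)
lam = 1j*(eps - w**2)/(4*N)
kap = mu/(2*N)

Q = lambda zz: mp.whitm(lam, kap, zz)
R = lambda rho: Q(1j*rho**2/N)/rho

rho0 = mp.mpf('1.3'); z0 = 1j*rho0**2/N
res_W = mp.diff(Q, z0, 2) + (-mp.mpf(1)/4 + lam/z0
        + (mp.mpf(1)/4 - kap**2)/z0**2)*Q(z0)
res_R = (N**2*rho0**2*mp.diff(R, rho0, 2) + N**2*rho0*mp.diff(R, rho0, 1)
        + (rho0**4 + (w**2 - eps)*rho0**2 - mu**2)*R(rho0))
print(abs(res_W), abs(res_R))   # both ~ 0 to working precision
\end{verbatim}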
The Whittaker differential equation and its solution are well defined for all $\lambda$, so we may conclude that the differential equation (\ref{Wdiff}) is well defined for all $\epsilon$. The original, classical $A_1$ geometry, on the other hand, is singular and corresponds to the aforementioned limit: \begin{equation} V_{A_1}(x,y,z)=\epsilon\rightarrow0. \label{Ve0} \end{equation} A merit of our quantization procedure is thus that the singularity of the classical geometry has been resolved. This NC elevation therefore offers an alternative to the more conventional resolutions. Due to the interpretation that the differential operator (\ref{A12diff}) represents the NC elevation of the singular K3 surface $x^2+y^2+z^2=0$, we find it natural to attribute all the solutions to (\ref{A12}) to the `moduli space' of the associated NC geometry. That is, the spectrum of wave functions solving (\ref{A12}), or more generally (\ref{VPsi}), for $\epsilon=0$ is interpreted as the moduli space of the NC geometry. Since the latter is represented by a partial differential equation, we see that its possible boundary conditions correspond to constraints imposed on the moduli space. A detailed analysis of this link between boundary conditions and constraint equations is beyond the scope of the present work. In the limit of zero non commutativity, i.e., $\alpha=\beta=\gamma=0$, the change of coordinates (\ref{xyzuvw}) is singular. This is in accordance with the fact that the differential equation (\ref{A12}) merely reduces to its classical counterpart \begin{equation} (x^2+y^2+z^2)\Psi=\epsilon\Psi . \label{classA1} \end{equation} On the hypersurface (\ref{Veps}) for $V=V_{A_1}$, every complex function solves (\ref{classA1}). This means that the `classical moduli space' may be identified with the set of holomorphic functions. \section{Discussion} We have developed a new and essentially non-geometric approach to NC Calabi-Yau manifolds, based on ideas from quantum mechanics. Our focus has been on the singular $ADE$ geometries. The polynomial equations defining these classical $ADE$ geometries are replaced by differential equations in which the original singularities are (presumably) absent. The moduli space associated to such an NC geometry is then interpreted as the spectrum of solutions to the corresponding wave equation. We have analyzed in detail the NC elevation of the $A_1$ geometry and found that it is described in part by the Whittaker differential equation. We intend to discuss elsewhere the extension of this explicit study to the other $ADE$ geometries \cite{work}. Our approach is adaptable to a broad variety of geometries whose NC elevations may then be represented by differential wave equations. The extension from complex to real variables is straightforward, as is the extension to other dimensions than two complex ones. We anticipate that the NC elevations of singular geometries in general will be non singular as in the case of $A_1$ discussed above. This will be addressed elsewhere \cite{work} where we also intend to discuss the implementation of boundary conditions alluded to above. In order to put the analogy between our construction and quantum mechanics to a `physical test', one could examine the `dual' descriptions of the NC elevations. That is, on one hand we have introduced the NC elevations as `hypersurfaces' in an NC ambient space, while on the other hand we are representing them as differential operators resulting in some wave equations. 
In quantum mechanics, this corresponds to an operator description versus a description in terms of wave functions. It would therefore be of interest to try to extract information on an NC elevation based on both its dual descriptions. We believe that these complementary approaches deserve to be studied further. \begin{acknowledgement} Saidi would like to thank the Protars III program, D12/25/CNRST, for support. \end{acknowledgement}
\section{Introduction\label{sec:introduction}} Nucleation is a ubiquitous process in nature which has been the subject of extensive research throughout the last century. Nowadays, the best-known understanding of the process relies on Gibbs' work\cite{article:gibbs-seminal-1878-a,article:gibbs-seminal-1878-b, book:gibbs-1931} concerning the characterization of phase transformations. Mainly focused on transitions near equilibrium, Gibbs deduced a simple expression for the work required to form a spherical embryo (a so-called cluster) of the new phase within the old one, $W(r)$, with $r$ being the cluster radius. While these efforts set the thermodynamic ground for understanding nucleation phenomena, \citeauthor{article:volmer-weber-1926}\cite{article:volmer-weber-1926, book:volmer-1939} were the first to reveal the importance of the kinetics of nucleation. They proposed a rudimentary model to account for the chief characteristics of this phenomenon. A short time later, a more atomistic picture was proposed by \citet{article:farkas-1927}, who developed an idea of Szilard; it was elaborated further by \citet{article:becker-doring-1935}, resulting in the equation which now bears their names. Finally, \citeauthor{book:frenkel-1946}\cite{article:frenkel-1939,book:frenkel-1946} and \citet{article:zeldovich-1943} reached a similar result which also allows one to describe non-steady-state kinetics. \citet{article:turnbull-fisher-1949} generalized this formalism in order to describe solid nucleation from a liquid phase, an approach that was readily extended to include nucleation in solids. The nucleation rate expressions derived from all these developments have an Arrhenius-like structure\cite{book:kashchiev-2000,book:kelton-2010} but they differ in the exact expression for the pre-exponential factor. The combination of these ideas comprises a remarkably robust theory which is commonly called {Classical Nucleation Theory} (CNT). Besides being a versatile tool, CNT is intuitively appealing and clearly summarizes the basic rules underlying phase transformations. However, while CNT has shown an extraordinary ability to predict the functional dependence of the nucleation rate on the thermodynamic variables involved, it has exhibited a severe shortcoming when it comes to quantitatively explaining experimental data. \cite{article:viisanen-strey-reiss-1993,article:viisanen-strey-1994,article:hruby-viisanen-strey-1996} This flaw has usually been blamed either on a poorly refined expression of the work of cluster formation, or on the heuristic modelling of cluster formation based on macroscopic growth laws, or on the simplicity of the cluster properties assumed by the capillary approach. There have been several attempts to extend and refine CNT, e.g.
generalizing the kinetic model\cite{article:shizgal-barret-1989,article:kashchiev-1969b} to consider wider cluster transitions than those initially assumed by the pioneers of nucleation,\cite{article:becker-doring-1935, article:kaichew-stranski-1934,article:zeldovich-1943, article:frenkel-1939,book:frenkel-1946, article:tunitskii-1941} providing more accurate expressions of the free energy barrier by using classical Density Functional Theory\cite{inbook:kelton-1991,article:lutsko-2011-b} (DFT), refining the capillary model,\cite{article:prestipino-2012} or selecting a different order parameter instead of the cluster size to characterize the nucleation pathway.\cite{article:lechner-2011} Recently, a new approach to nucleation has been formulated\cite{article:lutsko-2011-c,article:lutsko-2012-dtn, article:lutsko-2012-a1} based on fluctuating hydrodynamics.\cite{book:landau-lifshitz-fluidMechanics-1959} We will refer to this as Mesoscopic Nucleation Theory (MeNT). This new framework provides a self-consistent justification and extension of more heuristic equilibrium approaches based solely on the free energy. The MeNT provides a general stochastic differential equation (SDE) for the evolution of an arbitrary number of order parameters characterizing the number density field. When the simplest case is considered, that is, a single order parameter, a straightforward connection with CNT is found.\cite{article:lutsko-duran-2013} Such a reformulation of CNT, hereafter called dynamical CNT (dCNT), sheds light on the weaknesses of the classical derivation and can be used to construct a more realistic theory in which clusters have finite interfacial width. The present work aims to continue this development so as to extend dCNT to the case of confined systems. In the last few years there has been a veritable explosion of interest in nucleation due to the development of new techniques, such as microfluidics, that bring us the opportunity to probe the very small and the very fast. Besides, nucleation in confined environments is important for biological processes such as bone formation,\cite{article:meldrum-2013bone,article:delgado-lopez-2013apatite} \emph{in vivo} protein crystallization,\cite{article:bechtel-1976parasporal,article:koopmann-2012invivo} or cavitation in lipid bilayers,\cite{article:renn-2013cavitation} to name but a few. However, CNT is based on assumptions that are violated for small systems. For example, when the nucleation of a dense droplet from a weak solution is considered, it is assumed that clusters do not consume enough material during nucleation to have a noticeable effect on the properties of the mother phase, but this can only be true for large systems. The main goal of this work is therefore to extend dCNT to take into consideration the conservation of mass required for a finite volume, with the aim of further developing the classical theory. Following a procedure similar to that presented in a previous work,\cite{article:lutsko-duran-2013} a nucleation rate equation is readily obtained. It turns out that the confinement has a strong effect on the energy barrier and, thus, on the nucleation rate. On the one hand, in contrast to infinite systems, the cluster of new phase can only grow to a certain maximal size, so that a complete phase transition is not possible. Nevertheless, for sufficiently large and supersaturated systems, this maximal cluster is indeed the stable, equilibrium state.
In contrast, if the system is too small, the maximal cluster size is less than the critical size and no transition takes place. In other words, nucleation is found to be inhibited as a consequence of the size of the container where the experiment is being carried out. On the other hand, the nucleation rate is affected for a certain range of volumes when compared with the CNT prediction calculated for infinite systems.\cite{article:lutsko-duran-2013} Indeed, the ratio of the two shows a maximum for system sizes close to that which inhibits nucleation. Moreover, considerable corrections arise when a more realistic model for clusters is taken into consideration. In section \ref{sec:theory} the order-parameter dynamics derived from fluctuating hydrodynamics is modified so that the finite volume of the system is taken into account. It is shown that the confinement does not affect the structure of the SDE derived in Ref. \onlinecite{article:lutsko-2012-dtn}. The use of this SDE with a modified version of the capillary model that accounts for the finite mass in the system under study is presented in section \ref{sec:capillaryModel}. In that section, we give expressions for the attachment rate, the stationary cluster-size distribution, the nucleation rate and the growth rate of super-critical clusters. Section \ref{sec:extendedModels} focuses on the improvement of those results by considering clusters with a finite interfacial width. Three models are proposed: in the first the inner density and the interfacial width are the same as in the case of infinite systems, in the second the inner density is chosen so as to minimize the free energy of the stable cluster, and lastly, in the third model, both the interior density and the interfacial width are determined so as to minimize the free energy of the stable cluster. While the first two models yield results similar to each other and to the capillary approach, the last one gives rise to large deviations from the other models. These comparisons are presented in section \ref{sec:resultsAndComparisons}. Finally, our results are summarized in section \ref{sec:conclusions}. \section{Theory\label{sec:theory}} The approach we follow in this work, based on Ref. \onlinecite{article:lutsko-2012-dtn}, requires that a spherical cluster be characterized by its density as a function of distance from its center, $\rho(r;\mathbf{x}(t))$, where $\mathbf{x}$ represents a set of one or more parameters describing the cluster: e.g. its radius, interior density, etc.
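As a concrete illustration of such a parametrization (the profile shape, names and numbers below are illustrative assumptions of ours, not the models introduced later), a capillary-type profile in a closed spherical container can be coded in a few lines, with mass conservation fixing the ambient density: \begin{verbatim}
# A one-parameter (cluster radius X) capillary-type profile in a closed
# spherical container; the ambient density is fixed by mass conservation.
import numpy as np

R_T, rho_in, N_tot = 20.0, 1.0, 500.0  # container radius, interior density, total mass

def rho(r, X):
    """Density profile rho(r; X); the ambient value enforces fixed total mass."""
    v_cl = 4.0*np.pi*X**3/3.0
    rho_out = (N_tot - rho_in*v_cl)/(4.0*np.pi*R_T**3/3.0 - v_cl)
    return np.where(r < X, rho_in, rho_out)

def cumulative_mass(r, X, n=4000):
    """m(r; X) = 4*pi*int_0^r rho(r'; X) r'^2 dr', by the trapezoidal rule."""
    rp = np.linspace(0.0, r, n)
    fr = rho(rp, X)*rp**2
    return 4.0*np.pi*np.sum((fr[1:] + fr[:-1])/2.0*np.diff(rp))

print(cumulative_mass(R_T, 3.0))   # ~ N_tot for any cluster radius X
\end{verbatim}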
As indicated, these parameters can change in time according to the following SDE, \cite{article:lutsko-2011-c,article:lutsko-2012-dtn, article:lutsko-2012-a1} which will be the basis for this study, \begin{align} \frac{dm(r;\mathbf{x}(t))}{dt}=&\,D4\pi r^2\rho(r;\mathbf{x}(t))\left.\frac{\partial}{\partial r} \frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho(r;\mathbf{x}(t))} \nonumber\\ &-\sqrt{D8\pi r^2\rho(r;\mathbf{x}(t))}\,\xi(r;t)\label{eq:cumulative-mass-SDE}, \end{align} where $m(r;\mathbf{x}(t))$ stands for the mass inside a sphere of radius $r$, \begin{equation} m(r;\mathbf{x}(t))=4\pi\int_{0}^{r} \rho(r';\mathbf{x}(t))\,r'^2dr' \label{eq:definition-cumulative-mass} \end{equation} where $D$ is the diffusion constant, $F[\rho]$ is the Helmholtz free energy, $\beta=1/k_BT$ with $k_B$ the Boltzmann constant and $T$ the absolute temperature, and where $\xi(r;t)$ is a fluctuating force that fulfils \begin{equation} \langle\xi(r;t)\xi(r';t')\rangle=\delta(r-r')\delta(t-t'). \end{equation} Note that square brackets in equation (\ref{eq:cumulative-mass-SDE}) have been used to indicate a functional dependence. Finally, it has been shown that equation (\ref{eq:cumulative-mass-SDE}) is It\^{o}-Stratonovich equivalent (see appendix A of Ref. \onlinecite{article:lutsko-2012-dtn}), so that either interpretation may be used. The next step consists of deriving the dynamics of the parameter vector, $\mathbf{x}(t)$, in confined volumes, which will open the door to reduced descriptions, specifically to a single-order-parameter description. \subsection{Order-parameter dynamics in confined systems \label{subsec:orderParameterDynamicsInConfinedSystems}} The use of a finite number of scalar parameters (so-called order parameters) to describe the density can be a crude simplification, but it is also a very useful way to obtain an approximate representation of the whole problem. Such a reduced description of the real density profile is commonly used in the classical picture, where it is customary to hypothesize that density fluctuations are well characterized by a single order parameter, namely the size of the cluster. While in CNT the order-parameter dynamics is formulated based on heuristic reasoning, MeNT allows us to derive the dynamical equations from a formal point of view, including the case of more than one order parameter. Here, we briefly review the arguments leading to the equations for the order parameter in order to note the effect of imposing a finite volume. From equation (\ref{eq:cumulative-mass-SDE}) the time-evolution equation governing the order-parameter dynamics is given by, \begin{align} \frac{\partial m(r;\mathbf{x}(t))}{\partial x_i}\frac{d x_i}{dt}=&\,D4\pi r^2\rho(r;\mathbf{x}(t))\left.\frac{\partial}{\partial r} \frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho(r;\mathbf{x}(t))} \nonumber\\ &-\sqrt{D8\pi r^2\rho(r;\mathbf{x}(t))}\,\xi(r;t)\label{eq:xi-cumulative-mass-SDE}. \end{align} Now, let us assume that the container is a sphere of radius $R_T$. The line of reasoning presented in section III.B of Ref. \onlinecite{article:lutsko-2012-dtn} remains valid, although we have to take care to impose the right integration limits in order to account for the confinement.
Thus, the latter equation can be transformed into equation (\ref{eq:Wj-xi-SDE}) by multiplying by a function $W_j(r;\mathbf{x}(t))$ and integrating up to $R_T$, \begin{align} g_{ij}(\mathbf{x})\frac{d x_i}{dt}=&\,D\int_0^{R_T} W_j(r;\mathbf{x}(t))\rho(r;\mathbf{x}(t))\times\nonumber\\ &\times\left(\frac{\partial}{\partial r} \frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right)_{\rho(r;\mathbf{x}(t))} 4\pi r^2dr\nonumber\\ &-\int_0^{R_T} W_j(r;\mathbf{x}(t)) \sqrt{D8\pi r^2\rho(r;\mathbf{x}(t))}\,\xi(r;t)dr\label{eq:Wj-xi-SDE}, \end{align} with \begin{equation} g_{ij}(\mathbf{x}(t))=\int_0^{R_T}W_j(r;\mathbf{x}(t))\frac{\partial m(r;\mathbf{x}(t))}{\partial x_i}dr. \label{eq:definition-general-gij} \end{equation} It was shown that if the diffusion matrix $\mathcal{D}_{ij}(\mathbf{x})$ associated with equation (\ref{eq:Wj-xi-SDE}) and the matrix $g_{ij}(\mathbf{x})$ are assumed proportional, the function $W_i$ must be, modulo a multiplicative constant, \begin{equation} W_i(r;\mathbf{x}(t))=\frac{1}{4\pi r^2\rho(r;\mathbf{x}(t))}\frac{\partial m(r;\mathbf{x}(t))}{\partial x_i} \label{eq:definition-Wi} \end{equation} so that $\mathcal{D}_{ij}(\mathbf{x})=2Dg_{ij}(\mathbf{x})$ and, eventually, \begin{equation} g_{ij}(\mathbf{x})=\int_0^{R_T}\frac{1}{4\pi r^2\rho(r;\mathbf{x}(t))}\frac{\partial m(r;\mathbf{x}(t))}{\partial x_i}\frac{\partial m(r;\mathbf{x}(t))}{\partial x_j}dr, \end{equation} which is also called ``the metric''\cite{article:lutsko-2011-c,article:lutsko-2012-dtn,article:lutsko-duran-2013}. The inverse of this matrix will be seen below to be interpretable as the matrix of state-dependent kinetic coefficients. By using the definition of $W_i(\mathbf{x})$ (Eq. \ref{eq:definition-Wi}), the driving-force part of equation (\ref{eq:Wj-xi-SDE}) becomes, \begin{align} \int_0^{R_T}& W_j(r;\mathbf{x}(t))\rho(r;\mathbf{x}(t))\left.\frac{\partial}{\partial r} \frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho(r)} 4\pi r^2dr\nonumber\\ =&\left[\frac{\partial m(r;\mathbf{x}(t))}{\partial x_j}\left.\frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho(r;\mathbf{x}(t))}\right]_0^{R_T}\nonumber\\ &-\int_{r<R_T}\frac{\partial \rho(r;\mathbf{x}(t))}{\partial x_j}\left.\frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho(r;\mathbf{x}(t))}d\mathbf{r}.\label{eq:thermo-driving-force-proof-1} \end{align} The first term gives a zero contribution at $r=0$, and at $r=R_T$ the contribution will be, \begin{equation} \frac{\partial m(r;\mathbf{x}(t))}{\partial x_j}\left.\frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho(R_T;\mathbf{x}(t))}=\frac{\partial N}{\partial x_j}\,\mu(\rho(R_T;\mathbf{x}(t))), \end{equation} which vanishes in closed systems for which the total number of particles, $N=m(R_T;\mathbf{x})$, is constant regardless of the values of the order parameters. The second term in equation (\ref{eq:thermo-driving-force-proof-1}) can be simplified by using the functional chain rule, \begin{equation} \int_{r<R_T}\frac{\partial \rho(r;\mathbf{x}(t))}{\partial x_j}\left.\frac{\delta\beta F[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho(r;\mathbf{x}(t))}d\mathbf{r} = \frac{\partial\beta F(\mathbf{x})}{\partial x_j}, \end{equation} where $F(\mathbf{x})$ is shorthand for $F[\rho]$ evaluated at $\rho(r;\mathbf{x})$. The latter equations allow us to rewrite the driving-force term of the SDE (\ref{eq:Wj-xi-SDE}) in a simpler form that involves only partial derivatives.
The noise term is similarly simplified following Ref. \onlinecite{article:lutsko-2012-dtn} to get \begin{equation} \frac{dx_i}{dt}=-Dg_{ij}^{-1}(\mathbf{x})\frac{\partial\beta F(\mathbf{x})}{\partial x_i}+2DA_i(\mathbf{x})-\sqrt{2D}q_{ji}^{-1}(\mathbf{x})\xi(t) \label{eq:order-parameter-sde-simplified} \end{equation} with $q_{il}(\mathbf{x})q_{jl}(\mathbf{x})=g_{ij}(\mathbf{x})$ and \begin{align} A_{i}\left( \mathbf{x}\right) =&q_{ik}^{-1}\left( \mathbf{x}\right) \frac{\partial q_{jk}^{-1}\left( \mathbf{x}\right) }{\partial x_{j}}- \frac{1}{2}g_{il}^{-1}\left( \mathbf{x}\right) \frac{\partial g_{jm}^{-1} \left( \mathbf{x}\right) }{\partial x_{l}}g_{mj}\left( \mathbf{x}\right)\nonumber\\ &+\frac{1}{2}\left( g_{il}^{-1}\left( \mathbf{x}\right) g_{jm}^{-1}\left(\mathbf{x}\right) -g_{ij}^{-1}\left( \mathbf{x}\right) g_{lm}^{-1}\left(\mathbf{x}\right) \right)\nonumber\\ &\times\int_{0}^{R_T}\frac{1}{4\pi r^{2}\rho^{2}\left( r;\mathbf{x}\right) } \left(\frac{\partial \rho \left( r;\mathbf{x}\right) }{\partial x_{l}}\right.\nonumber\\ &\times\left.\frac{\partial m \left( r;\mathbf{x}\right) }{\partial x_{j}}\frac{\partial m \left(r;\mathbf{x}\right) }{\partial x_{m}}\right)dr \label{eq:Ai-definition}. \end{align} This has exactly the same structure as its counterpart for open systems, except that the free energy occurring here is the Helmholtz free energy while for open systems it is, naturally enough, the grand potential. Hence, the confinement does not alter the structure of the dynamics equations, as expected, but it will play an important role when it comes to deriving the exact expressions for the cluster density profile, the free energy and the cumulative mass. This framework will be applied to make contact with the classical picture, but now considering a finite mass and volume. The following sections are intended to modify the capillary and extended models discussed by \citet{article:lutsko-duran-2013} by enforcing the mass conservation law, \begin{equation} N=4\pi \int_0^{R_T} \rho(r;\mathbf{x}(t))\ r^2dr, \label{eq:mass-conservation-law} \end{equation} where $N$ represents the total number of particles, also referred to as the ``total mass'', which is strictly constant for a closed system. To this end we will particularize the general order-parameter dynamics to a single order-parameter description, i.e. a one-dimensional parametrization will be considered. In contrast to CNT, the chosen parameter may equally well be the cluster size in number of molecules or in radius, or even an abstract variable chosen to simplify the resulting SDE. Hereinafter we will also specialize to the case that the new phase is denser than the old phase (e.g. nucleation of liquid from gas), although the opposite possibility (e.g. nucleation of gas from liquid) is very similar. \subsubsection{One-dimensional parametrization} For the simplest case of a single order parameter, \begin{equation} \rho(r;t)\rightarrow \rho(r;X(t)), \label{eq:general-1-dimensional-rho} \end{equation} it was shown\cite{article:lutsko-2011-c,article:lutsko-2012-dtn,article:lutsko-2012-a1} that equation (\ref{eq:order-parameter-sde-simplified}) becomes, \begin{align} \frac{dX}{dt}=&-Dg^{-1}(X)\frac{\partial \beta F(X)}{\partial X}-D\frac{1}{2}g^{-2}(X)\frac{\partial g(X)}{\partial X}\nonumber\\ &+\sqrt{2D\,g^{-1}(X)}\xi(t), \label{eq:dynamics-dCNT} \end{align} which constitutes the starting point of the dCNT.
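Equation (\ref{eq:dynamics-dCNT}) is straightforward to integrate numerically. As an illustration, the following Euler-Maruyama sketch takes the equation at face value in the It\^{o} sense; the barrier and metric are toy stand-ins of our own choosing, not the capillary-model expressions derived in the following sections: \begin{verbatim}
# Euler-Maruyama integration of the one-parameter SDE (dynamics-dCNT),
# in the Ito convention, with an illustrative barrier and metric.
import numpy as np

D = 1.0
betaF = lambda X: -0.5*X + 1.0*X**(2.0/3.0)   # toy CNT-like barrier
g     = lambda X: 2.0*X + 1.0                 # toy metric, positive for X >= 0

def d(fun, X, h=1e-6):                        # central finite difference
    return (fun(X + h) - fun(X - h))/(2.0*h)

rng = np.random.default_rng(1)
X, dt = 0.5, 1e-4
for _ in range(100000):
    drift = -D/g(X)*d(betaF, X) - 0.5*D/g(X)**2*d(g, X)
    X = X + drift*dt + np.sqrt(2.0*D*dt/g(X))*rng.standard_normal()
    X = max(X, 1e-3)                          # crude reflecting floor
print(X)   # one realization; super-critical X signals a nucleation event
\end{verbatim}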
The metric in this reduced description is a one-dimensional function of $X$, whose definition according to equation (\ref{eq:definition-general-gij}) becomes, \begin{equation} g(X)=\int_0^{R_T} \frac{1}{4\pi r^2\rho(r;X)}\left(\frac{\partial m(r;X)}{\partial X}\right)^2 dr. \label{eq:definition-1dgeneral-gij} \end{equation} As for the cumulative mass, the definition (\ref{eq:definition-cumulative-mass}) remains unchanged but now $\mathbf{x}(t)=X(t)$. With this, equation (\ref{eq:order-parameter-sde-simplified}) is easily transformed into a Fokker-Planck equation (FPE) determining the time evolution of the probability density function (PDF) of the random variable $X$,\cite{book:risken-1996a,book:gardiner-2004, article:lutsko-2012-dtn,article:lutsko-duran-2013} \begin{align} \frac{\partial P(X,t)}{\partial t}=&-\frac{\partial \mathfrak{J}(X,t)}{\partial X}, \label{eq:fpe-general-x} \end{align} with \begin{align} \mathfrak{J}(X,t)=&\, -D\left(\nonumber g^{-1}(X)\frac{\partial \beta F(X)}{\partial X}\right.\nonumber\\ &+\left.g^{-1/2}(X)\frac{\partial }{\partial X}g^{-1/2}(X)\right)P(X,t)\label{eq:fpe-general-v1-x}\\ =&-D\left(\nonumber g^{-1}(X)\frac{\partial \left(\beta F(X)-\ln g^{1/2}(X)\right)}{\partial X}\right.\nonumber\\ &+\left.g^{-1}(X)\frac{\partial }{\partial X}\right)P(X,t)\label{eq:fpe-general-v2-x} \end{align} being the probability flux, which has been written in two ways to show the similarity with the Zeldovich-Frenkel equation\cite{article:zeldovich-1943,article:frenkel-1939,book:frenkel-1946} of CNT. Indeed, the FPE determined by equations (\ref{eq:fpe-general-x}, \ref{eq:fpe-general-v2-x}) is formally equivalent to the Zeldovich-Frenkel equation when $X$ is the number of molecules inside a cluster, with $Dg^{-1}(X)$ playing the role of the monomer-attachment rate and the free energy shifted by a logarithmic term in $g(X)$. It has been shown\cite{article:lutsko-duran-2013} that the logarithmic term ensures the general covariance of the dCNT. This means that when different equivalent choices of the parameter $X(t)$ are possible (e.g. the mass or radius of the cluster), the stochastic dynamics will be independent of which parameter is used, a nontrivial property that does not occur naturally in the context of CNT. While the general solution of equation (\ref{eq:fpe-general-x}) is a difficult problem, a simple case admitting a solution is that of a stationary system with constant flux, $\mathfrak{J}_s$, so that, \begin{align} \mathfrak{J_s}=&-D\left(\nonumber g^{-1}(X)\frac{\partial \beta F(X)}{\partial X}\right.\nonumber\\ &+\left.g^{-1/2}(X)\frac{\partial }{\partial X}g^{-1/2}(X)\right)P(X), \end{align} from which we readily obtain the steady-state solution, \begin{align} P_{s}(X)=&A\,g^{1/2}(X)e^{-\beta F(X)}\nonumber\\ &-\frac{\mathfrak{J}_s}{D}g^{1/2}(X)e^{-\beta F(X)}\int^X g^{1/2}(Y)e^{\beta F(Y)} dY, \label{eq:steady-state-pdf} \end{align} where $A$ is a normalization constant and which is manifestly invariant under transformations of variables.\cite{article:lutsko-duran-2013} If we consider that such a stationary non-zero flux is ensured by removing clusters once they reach a given size $X_+$, the steady-state distribution must satisfy $P_s(X_+)=0$. When this condition is imposed, equation (\ref{eq:steady-state-pdf}) becomes, \begin{align} P_{s}(X)=&\ \frac{\mathfrak{J}_s}{D}g^{1/2}(X)e^{-\beta F(X)}\int_X^{X_+} g^{1/2}(X')e^{\beta F(X')} dX'.
\label{eq:steady-state-final-pdf} \end{align} For an undersaturated solution, equilibrium, of course, can be identified with a particular value of the stationary flux, namely $\mathfrak{J}_s=0$. Thus, when the system is in an equilibrium state (i.e., undersaturated) the PDF will be, \begin{align} P_{eq}(X)&= A g^{1/2}(X)\exp\left(-\beta F(X)\right)\nonumber\\ &=A\exp\left\{-\beta \left(F(X)-\frac{1}{2}k_BT\ln g(X)\right)\right\}. \label{eq:PDF-equilibrium} \end{align} \subsubsection{Canonical form: the natural order parameter} Thus far, our concern has been to use the mathematical tools of the theory of stochastic processes in order to make contact with CNT, which led us to an equation formally equivalent to the Zeldovich-Frenkel equation. However, any single-variable SDE with multiplicative noise (as in the present case) can always be transformed into a simpler one with additive noise via the change of variable, \cite{book:risken-1996a,book:gardiner-2004,article:lutsko-duran-2013} \begin{equation} dY = \sqrt{g(X)}\,dX, \label{eq:canonical-transformation} \end{equation} with an arbitrary boundary condition that for the sake of simplicity will be taken to be $Y(0)=0$. Such a ``canonical variable'' is the most natural order parameter to choose in the case of a one-dimensional parametrization of $\rho(r;t)$, since equation (\ref{eq:dynamics-dCNT}) is thereby simplified, \begin{equation} \frac{dY}{dt}= -D\frac{\partial \beta \widetilde{F}(Y)}{\partial Y}+\sqrt{2D}\,\xi(t), \label{eq:canonical-sde} \end{equation} where $\widetilde{F}(Y)=F(X(Y))$. As can be observed, such an equation is It\^{o}-Stratonovich equivalent. The same goes for the FPE (\ref{eq:fpe-general-x}), which becomes,\cite{article:lutsko-duran-2013} \begin{equation} \frac{\partial \widetilde{P}(Y,t)}{\partial t}=D\frac{\partial }{\partial Y} \left(\frac{\partial\beta \widetilde{F}(Y)}{\partial Y}+\frac{\partial}{\partial Y}\right)\widetilde{P}(Y,t) \label{eq:canonical-fpe} \end{equation} with $\widetilde{P}(Y,t)dY=P(X,t)dX$. These equations will be very useful when it comes to obtaining the nucleation rate, since they notably simplify the calculations involved in the derivation. \subsubsection{Nucleation rate and mean first-passage time \label{subsub:nucleationRateAndMFPT}} In the previous study for infinite systems the nucleation rate was derived from classical arguments, yielding an expression that essentially corroborated the well-known relationship between the nucleation rate and the mean first-passage time (MFPT),\cite{book:barrat-hansen-2003} hereafter denoted as $\tau$ and accompanied by a subscript to specify the corresponding approach, \begin{align} J_{\text{CNT}} \equiv&\,\frac{\rho_{av}}{2\tau_{\text{CNT}}}=\frac{D\rho_{av}}{\int_{X_1}^{X_{+}}g_\infty(X')e^{\beta\Delta\Omega(X')}dX'}\notag\\ \sim& \, \rho_{av}\, Dg_\infty^{-1}(\Delta N_*)\,\sqrt{\frac{1}{2\pi}\left|\beta\Delta \Omega''_*\right|} \exp\left(-\beta\Delta\Omega_*\right), \label{eq:tau-cnt-approx} \end{align} where the subscript $\infty$ indicates that the metric used here is that of an infinite system, $\Omega=F-\mu N$ is the grand canonical potential, $X_1$ is the value of the order parameter $X$ for which the number of molecules inside the cluster, $\Delta N$, is set to be 1, $X_+$ can be any value beyond the critical size to enforce the stationary flux, and where \begin{align} \beta\Delta\Omega_*'\equiv& \beta\Delta\Omega ' (X_*)=0,\nonumber\\ \beta\Delta\Omega_*''\equiv& \beta\Delta\Omega'' (X_*).
\end{align} Indeed, the MFPT can be directly identified as the time required for the phase transition to start, since one super-critical cluster in the whole system is enough to trigger the transition. Adapting the argument that led to (\ref{eq:tau-cnt-approx}), one can derive the escape rate for confined systems, \begin{align} j_{nc}&\equiv\frac{1}{\tau_{nc}}= \frac{2D}{\int_{X_1}^{X_{+}}g(X')e^{\beta\Delta F(X')}dX'}\nonumber\\ &\sim 2D\, g^{-1}(\Delta N_*)\,\sqrt{\frac{1}{2\pi}\left|\beta\Delta F''_*\right|} \exp\left(-\beta\Delta F_*\right). \label{eq:escape-rate-nc} \end{align} Note that in the following we will not distinguish between the escape rate and the nucleation rate, as they are essentially the same. This will later be compared to the classical estimate (Eq. \ref{eq:tau-cnt-approx}). Such a ratio will give us a first idea of the effect of mass conservation. Given that we are restricting attention to the evolution of a single cluster which is not perturbed by any other clusters within the system, it seems natural to focus on the escape rates. In our particular case the MFPT is given by, \begin{align} \tau=\,\frac{1}{2D}&\int_{0}^{X_+}dx\, P_0(x)\int_{x}^{X_{+}}dx'\,g^{1/2}(x')e^{\beta {F}(x')}\times\notag\\ &\times\int_0^{x'} dx''\,g^{1/2}(x'')e^{-\beta F(x'')}. \label{eq:MFPT-definition} \end{align} Considering the initial PDF $P_0(x)=\delta(x)$, the latter equation becomes, \begin{equation} \tau = \frac{1}{2D}\int_0^{X_+}dx\,g^{1/2}(x)e^{-\beta{F}(x)} \int_{x}^{X_+}dx'\,g^{{1/2}}(x')e^{\beta{F}(x')} \label{eq:escape-rate-definition}. \end{equation} It is not generally possible to evaluate this expression analytically; however, a good approximation of its value can be obtained with the aid of the canonical variable, assuming the free energy admits the expansion $ \beta\widetilde{ F}(Y)=\beta\widetilde{F}(Y(0))+\widetilde{F}_0\,Y^\alpha+\dots\ $ for some $\alpha >0$, so that it can be approximated as \begin{align} \beta\Delta\widetilde{F}(Y)\sim&\ \widetilde{F}_0\,Y^\alpha \end{align} for small values of $Y$. Hence, by using the same method explained in appendix A of Ref. \onlinecite{article:lutsko-duran-2013}, the escape rate becomes, \begin{align} j=&\ \frac{2D}{\int_0^{X_+}dx\,g^{1/2}(x)e^{-\beta{F}(x)} \int_{x}^{X_+}dx'\,g^{{1/2}}(x')e^{\beta{F}(x')}}\notag\\ \sim&\ 2D \frac{\alpha\,\widetilde{F}_0^{1/\alpha}\left(2\pi|\beta\widetilde{F}''(Y_*)|^{-1}\right)^{-1/2}}{\left(\Gamma\left(\frac{1}{\alpha}\right)-\Gamma_i\left(\frac{1}{\alpha},\widetilde{F}_0Y^{\alpha}(X_+)\right)\right)}{e}^{-\beta\Delta F_*}\nonumber\\ \sim&\ 2D\,\frac{\alpha\,\widetilde{F}_0^{1/\alpha}}{\Gamma\left(\frac{1}{\alpha}\right)}\,\sqrt{\frac{1}{2\pi}|\beta F''(X_*)|g^{-1}(X_*)}\,e^{-\beta\Delta F_*}. \label{eq:escape-rate-general} \end{align} Note that the tilde has been used to highlight that the expression of the free energy is written in terms of the canonical variable. Indeed, this equation can also be deduced from the dCNT derivation by fixing the total number of clusters to be 1 in the nucleation rate equation. Moreover, this approximation also yields an approximate expression for the stationary distribution, \begin{align} P_{s}\sim \frac{\alpha\,\widetilde{F}_0^{1/\alpha}}{\Gamma\left(\frac{1}{\alpha}\right)}g^{1/2}(X)\exp\left(-\beta\Delta F(X)\right).
\label{eq:stationary-pdf-approximated} \end{align} \section{Parametrized profiles\label{sec:parametrizedProfiles}} This section is devoted to particularizing the expressions derived above to some specific parametrizations. We will start with the capillary model, a crude model where even the smallest clusters have zero interfacial width. Despite being the simplest description of a density fluctuation, the capillary approach results in a robust theory that captures the most relevant aspects of the nucleation process. Thereafter we will test the effect of considering a finite cluster width under the same circumstances. When the capillary model is endowed with a surface we call the resulting approach the ``extended'' model. Our concern is the nucleation of a dense liquid droplet from a weak solution at a given temperature, $T$, with a finite number of particles (or total mass) $N$ and total volume $V_T$. The average density of the initial vapor is then given by $\rho_{av}=N/V_T$. In order to express the vapor density in a simpler way we will refer this quantity to the coexistence vapor density of an infinite system at the same temperature, which will be denoted as $\rho_{v}^{\text{coex}}$. The liquid density at coexistence, $\rho_{l}^{\text{coex}}$, is then determined by the conditions, \begin{align} \omega(\rho_{v}^{\text{coex}}) &= \omega(\rho_{l}^{\text{coex}})\nonumber\\ \omega'(\rho_{v}^{\text{coex}}) &= \omega'(\rho_{l}^{\text{coex}}), \label{eq:coexistence-conditions} \end{align} with $\omega(\rho)=f(\rho)-\mu\rho$ being the free energy per unit volume and $f(\rho)$ the Helmholtz free energy per unit volume. The ratio $\rho_{av}/\rho_{v}^{\text{coex}}$ plays a role similar to that of the supersaturation in ideal systems, and it will thus be referred to as the \emph{effective supersaturation}, $S_e$. Accordingly, the initial density will be specified in terms of the effective supersaturation via the convention $\rho_{av}=\rho_{v}^{\text{coex}}\,S_e$. \subsection{Modified capillary model\label{sec:capillaryModel}} The capillary model used in CNT assumes that clusters have no interfacial width and that they emerge with the same properties as the bulk new phase. In short, that approach can be mathematically expressed as, \begin{equation} \rho(r;R,\rho_0)=\begin{cases} \rho_0,\quad r\leq R\\ \rho_{ext},\quad r>R \end{cases} \label{eq:capillary-model-general} \end{equation} where $R$ is the radius of the cluster, $\rho_0$ is the density inside the cluster and $\rho_{ext}$ is the value of the density outside the cluster. In the case of infinite systems, $\rho_{ext}=\rho_{av}$ and $\rho_0$ is the bulk-liquid density, which fulfils $\omega'(\rho_{av}) = \omega'(\rho_{l})$. In contrast, a finite system closed to matter exchange cannot reach this global thermodynamic equilibrium but, rather, a stable state, which will not fulfil the equilibrium condition just mentioned. This is because the density of the vapor outside the cluster must drop as the size and density of the cluster grow, so as to maintain a fixed number of molecules. Under these circumstances it seems natural to select $\rho_0$ as the stable-state density, $\rho_{st}$, which yields a minimum of the Helmholtz free energy of the system, $F(\rho(r;R))$ (Eq. \ref{eq:stable-cluster-MCM}). It still remains to express the surrounding density, $\rho_{ext}$, as a function of the cluster size.
Depending on the total size of the system, the probability that several fluctuations coexist at the same time will be negligible or not. In the case in which only one density fluctuation lives in the system at a time (an assumption always made within CNT), the result of applying the mass conservation law (Eq. \ref{eq:mass-conservation-law}) gives \begin{align} \rho_{ext}(R,\rho_0)=\frac{\rho_{av}-\delta^3(R)\rho_0}{1-\delta^3(R)}, \label{eq:rho-ext-capillaryModel} \end{align} with $\delta(R)=R/R_T$. This equation explicitly shows that clusters will perturb the surrounding density whenever the system is small enough. From a straightforward calculation one observes that $\rho_{ext}(R)\rightarrow\rho_{av}$ as $R_T\rightarrow\infty$, which is in accordance with the classical description. Combining equations (\ref{eq:capillary-model-general}) and (\ref{eq:rho-ext-capillaryModel}), along with $\rho_{0}=\rho_{st}$, we obtain the modified capillary model (MCM), \begin{equation} \rho(r;R,\rho_0)=\rho_0\,\Theta(R-r)+\rho_{ext}(R,\rho_0)\,\Theta(r-R), \label{eq:modified-capillaryModel} \end{equation} with $\Theta(x)$ being the Heaviside step function. The Helmholtz free energy of the system containing a fluctuation, $\beta F(\rho_0,R)$, will have two contributions. The first one is due to the cluster itself, and it is postulated to have the common volume-plus-surface structure. The second one is due to the remaining volume, with density $\rho_{ext}(R,\rho_0)$. Computing the difference between this energy and that corresponding to the system with no fluctuation, $\beta F(\rho_{av})$, one gets the work of cluster formation, \begin{align} \Delta \beta F(\rho_0,R) =& \beta F(\rho_0,R)-\beta F(\rho_{av})\nonumber\\ =&\frac{4\pi}{3}R^{3}\left(\beta f(\rho_{0})-\beta f(\rho_{ext}(R,\rho_{0}))\right)\nonumber\\ &+4\pi R^{2}\beta \gamma\nonumber\\ &+\left(\beta f(\rho_{ext}(R,\rho_{0}))-\beta f(\rho_{av})\right) V_T \label{eq:W-MCM}, \end{align} where $\gamma$ is the phenomenological surface tension. The last term will play a key role in nucleation, since it is related to the presence (or not) of a global minimum beyond the critical size. \subsubsection{Critical and stable cluster\label{subsub:criticalAndStableCluster-MCM}} To characterize the critical and stable clusters, we look for the stationary points of the free energy with respect to the cluster density and radius, \begin{align} \left(\frac{\partial\beta F(\rho,R)}{\partial R},\frac{\partial\beta F(\rho,R)}{\partial \rho}\right)_{\substack{R=R_{st}\\ \rho=\rho_{st}}}=\mathbf{0} \label{eq:stable-cluster-MCM} \end{align} where $R_{st}$ is the radius of the stable cluster. Use of equation (\ref{eq:W-MCM}) then gives \begin{equation} 4\pi\,R_{\ast }\left[ \begin{array}{l} \left( \beta f(\rho_{st})-\beta f(\rho_{ext}(R_{\ast},\rho_{st}))\right)\\ -\left(1-\delta_*^{3}\right)\beta f^{\prime}(\rho_{ext}(R_{\ast},\rho_{st})) \frac{(\rho_{st}-\rho_{av})}{1-2\delta_*^3+\delta_*^6} \end{array}\right]=-8\pi\beta \gamma \label{eq:critical-cluster-MCM}. \end{equation} Taking the limit $R_T\rightarrow\infty$, one readily gets \begin{equation} R_{\ast }=\frac{-2\beta \gamma}{\left(\beta f(\rho_l) -\beta f(\rho_{av})\right)- \beta f^{\prime}(\rho_{av})(\rho_l-\rho_{av})}, \label{eq:critical-radius-MCM} \end{equation} which has the same structure as the equation for open systems,\cite{book:kashchiev-2000,book:kelton-2010} \begin{equation} R^{\text{CNT}}_\ast =\frac{- 2\beta\gamma}{\beta\omega(\rho_l)-\beta\omega(\rho_{av})}.
\end{equation} These results clearly show agreement with those predicted by CNT, while extending them to situations where the confinement can play a prominent role, e.g. inhibiting nucleation at an average density for which nucleation would proceed in larger systems. \subsubsection{Cumulative mass and metric\label{subsub:cumulativeMassAndMetric-MCM}} The modified capillary model for the density profile gives the cumulative mass distribution (\ref{eq:definition-cumulative-mass}), \begin{align} m(r;R) =&\ 4\pi \int_0^{r} \rho(r;R)\ r'^2dr'\nonumber\\ =&\Theta(R-r)\frac{4\pi}{3}R^3\rho_{0}\nonumber\\ &+\Theta(r-R)\frac{4\pi}{3}\left(R^3\rho_0 + (r^3-R^3)\rho_{ext}(R)\right) \label{eq:mass-MCM} \end{align} with $\rho_0=\rho_{st}$. Using this, the metric can be obtained by employing equation (\ref{eq:mass-MCM}) in (\ref{eq:definition-1dgeneral-gij}), with the result that \begin{align} g(R)=g_{1,0,0}(R)+g_{0,1,0}(R)+g_{0,0,1}(R) \label{eq:metric-MCM-v1} \end{align} with \begin{align} g_{1,0,0}(R)=&\frac{4\pi R^{3}}{\rho_{ext}(R)} \left(\rho_{ext}(R)-\rho_{0}\right)^{2}\left(1-\delta(R)\right) \label{eq:metric-cap-comps}\\ g_{0,1,0}(R) =&-\frac{4\pi R^{2}}{3\rho_{ext}(R)} \left(\rho_{ext}(R)-\rho_{0}\right)\nonumber\\ &\times \frac{(R_{T}-R)^{2}(R_{T}+2R)}{R_{T}} \left(\frac{\partial\rho_{ext}(R)}{\partial R}\right) \nonumber \\ g_{0,0,1}(R) =&\frac{4\pi R_{T}^{3}}{45} \left(1+3\delta(R) +6\delta^{2}(R)+5\delta^{3}(R)\right)\nonumber\\ &\times\frac{(R_{T}-R)^{3}}{R_{T}} \left(\frac{\partial\rho_{ext}(R)}{\partial R}\right)^{2}\nonumber \end{align} It is easy to check that $g_{0,1,0}(R)$ and $g_{0,0,1}(R)$ scale as $1/R_T$ when $R\ll R_T$, so they represent small corrections to the first term $g_{1,0,0}(R)$. Thus, we recover the metric derived for infinite systems\cite{article:lutsko-duran-2013} in the limit $R_T\rightarrow\infty$. That fact leads us to rewrite equation (\ref{eq:metric-MCM-v1}) as, \begin{align} g(R)=&\ g_{1,0,0}(R)\left(1+\frac{g_{0,1,0}(R)+g_{0,0,1}(R)}{g_{1,0,0}(R)}\right)\nonumber\\ =&\ g_{1,0,0}(R)\ \chi(R). \label{eq:metric-MCM-v2} \end{align} As we discussed in section \ref{sec:theory}, the inverse of the metric plays a role similar to that of the monomer attachment rate in the Zeldovich-Frenkel equation when the order parameter is the number of particles. To test this fact we perform the change of variable \begin{equation} \Delta N = \frac{4\pi}{3}R^3(\rho_0-\rho_{ext}(R)). \label{eq:deltaN-MCM} \end{equation} The metric is easily translated to the new variable, \begin{align} g^{-1}(\Delta N)=&\left(\frac{d\Delta N}{dR}\right)^2g^{-1}(R(\Delta N))\nonumber\\ =&\ \zeta(\Delta N) 4\pi R(\Delta N) \rho_{ext}(R(\Delta N)) \end{align} with \begin{equation} \zeta (\Delta N)=\frac{\left(1-\frac{\rho_{0}-\rho_{av}}{\rho_{0}-\rho_{ext}(R(\Delta N))} \delta^{3}(\Delta N)\right)^{2}}{% 1-\delta (\Delta N)}\chi^{-1}(R(\Delta N)). \label{eq:sticking-coeff} \end{equation} It turns out that $f(\Delta N)=Dg^{-1}(\Delta N)$ has essentially the same structure as the usual result for the monomer attachment rate within the context of diffusion-limited nucleation: indeed, the former converges to the latter as $R_T\rightarrow\infty$. Note that here $\zeta(\Delta N)$ would be the counterpart of the phenomenological sticking coefficient.
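To make this last statement explicit (an elementary check, using only the limits already noted above): as $R_T\rightarrow\infty$ one has $\delta(\Delta N)\rightarrow 0$, $\rho_{ext}(R(\Delta N))\rightarrow\rho_{av}$ and $\chi(R)\rightarrow 1$, so that $\zeta(\Delta N)\rightarrow 1$ and \begin{equation*} f(\Delta N)=Dg^{-1}(\Delta N)\;\longrightarrow\; 4\pi D\,R(\Delta N)\,\rho_{av}, \end{equation*} which is the classical Smoluchowski attachment rate for diffusion-limited growth of a spherical cluster of radius $R$. The confinement thus enters the attachment kinetics entirely through the effective sticking coefficient $\zeta(\Delta N)$.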
\subsubsection{Expansion of the canonical variable \label{subsub:canonicalVariable-MCM}} With the aid of the expression of the metric we can look for the canonical variable defined by (\ref{eq:canonical-transformation}), \begin{align} Y(R')=\int_0^{R'}\sqrt{\frac{4\pi\,R^3(\rho_{ext}(R)-\rho_0)^2}{\rho_{ext}(R)} (1-\delta(R))\chi(R) } dR. \label{eq:canonical-Y-MCM} \end{align} However, the canonical variable is not an elementary function of $R$ due to the complexity of the integrand. Fortunately, the practical interest in this variable lies in obtaining a leading-order approximation of the work of cluster formation and of the number of particles inside a cluster in the small-cluster regime. Under such circumstances one can consider $\delta(R)\sim 0$ as a good approximation and, therefore, $g(R)\sim g_{1,0,0}(R)$. Thus, \begin{align} Y(R) \sim &\ \frac{2}{5} \left(\frac{4\pi(\rho_{0}-\rho_{av})}{\rho_{av}}\right)^{1/2} R^{5/2}\label{eq:can-var} \\ R(Y) \sim &\ \left(\frac{5}{2}\left( \frac{4\pi(\rho_{0}-\rho_{av})}{\rho_{av}} \right)^{-1/2}\right)^{2/5}Y^{2/5}\nonumber \end{align} so that, \begin{align} \Delta\beta F(Y) \sim &\ 4\pi \beta\gamma R^{2}(Y) \nonumber\\ \sim&\ 4\pi\beta\gamma\left(\frac{5}{2} \left(\frac{\rho_{av}}{4\pi(\rho_{0}-\rho_{av})}\right)^{1/2} \right)^{4/5} Y^{4/5} \label{eq:F-expansion-MCM} \end{align} and, \begin{align} \Delta N \sim &\frac{4\pi}{3}\left(\rho_{0}-\rho_{av}\right) \left(\frac{5}{2}\left(\frac{\rho_{av}}{4\pi(\rho_{0}-\rho_{av})} \right)^{1/2}\right)^{6/5}Y^{6/5} \label{eq:DeltaN-expansion-MCM} \end{align} These expressions are used in the calculation of the nucleation rate (Eq. \ref{eq:escape-rate-general}) in order to simplify the integrals involved, with $\alpha=\frac{4}{5}$ and \begin{equation} \widetilde{F}_0=4\pi\beta\gamma\left(\frac{5}{2} \left(\frac{\rho_{av}}{4\pi(\rho_{0}-\rho_{av})}\right)^{1/2} \right)^{4/5}. \end{equation} \subsubsection{The stochastic differential equation\label{subsub:SDE-MCM}} The SDE now becomes \begin{align} \frac{dR}{dt}=&-Dg^{-1}(R)\frac{\partial}{\partial R}\left(\Delta\beta F(R)+\frac{1}{2}\ln g(R)\right)\nonumber\\ &+\sqrt{2D\,g^{-1}(R)}\,\xi(t). \label{eq:sde-R-MCM} \end{align} When the cluster and system are large enough the SDE converges to that derived by \citeauthor{article:lutsko-duran-2013},\cite{article:lutsko-duran-2013} which yields the classical result $R\sim t^{1/2}$ when the higher-order terms in $R^{-1}$ are neglected.\cite{book:saito-1998} In contrast, in confined systems the result is very different when the cluster is large compared to the total volume. In that situation, the mass conservation law does not allow the cluster to grow indefinitely, as it does in CNT and dCNT. Indeed, clusters will not be able to grow beyond the stable size determined by equation (\ref{eq:stable-cluster-MCM}). Accordingly, the modified capillary model is able to reproduce the slowdown of the growth rate of post-critical clusters expected in a confined system, unlike the classical theory. \subsection{Extended model\label{sec:extendedModels}} \subsubsection{The profile and the metric\label{subsub:profileAndMetric-EM}} One of the most obvious deficiencies of the capillary model is the zero-thickness interface assumed for clusters, even for the smallest ones, where most of the molecules will lie on the cluster surface.
To circumvent such a limitation, piecewise-linear profiles (PLP) have been used in previous works, \cite{article:lutsko-2011-b,article:lutsko-2012-dtn,article:lutsko-duran-2013} thus allowing a smooth transition from the inner to the outer density value. We use the same idea to extend the MCM profile as \begin{equation} \rho(r)=\begin{cases} \hfill{}\rho_0, & r<R-w\\ \hfill{}\rho_0-(\rho_0-\rho_{ext}(R))\frac{r-(R-w)}{w},&R-w<r<R\\ \hfill{}\rho_{ext}(R),&R<r \end{cases} \label{eq:extendedModel-general} \end{equation} where the density out of the cluster is determined by the mass conservation law, \begin{align} \rho_{ext}(R)=&\frac{\rho_{av}-\left(\delta^3(R)-\psi(R;w)\right)\rho_0}{% 1-\left(\delta^3(R)-\psi(R;w)\right)}\nonumber \\ \psi(R;w)=&\frac{4\pi}{w V_T} \left( \begin{array}{l} \frac{R^4-(\max(R-w,0))^4}{4}\\ \qquad+\frac{(w-R)\left(R^3-(\max(R-w,0))^3\right)}{3} \end{array} \right) \label{eq:rho-ext-EM} \end{align} so that $m(R_T)/V_T=\rho_{av}$. The parameters $\rho_0$ and $w$ have to be fixed according to some reasonable physical criterion. In order to be consistent with the previous section, the inner density will be set to minimize the free energy of the stable cluster. Following the same reasoning, it seems natural to choose the width parameter according to the same rule. To this end we need to construct the free energy model for the PLP (Eqs. \ref{eq:extendedModel-general} and \ref{eq:rho-ext-EM}) and solve the three-dimensional root-finding problem, \begin{align} \left( \frac{\partial \beta F(\mathbf{X})}{\partial X_j} \right)_{\mathbf{X}=\{R_{st},\rho_{st},w_{st}\}}=0. \label{eq:stable-cluster-EM} \end{align} These equations do not permit an exact solution and so will be solved numerically. \subsubsection{Free energy model\label{subsub:freeEnegyModel-EM}} The aim of this work is ultimately to make contact with the calculations already performed for infinite systems. Thus, the model for the free energy in the PLP approach will be constructed from a simple squared-gradient functional,\cite{article:lutsko-2011-c} \begin{equation} F[\rho]=\int\left(f(\rho(\mathbf{r}))+\frac{1}{2}K \left(\nabla \rho(\mathbf{r})\right)^2\right)d\mathbf{r} \label{eq:squared-gradient} \end{equation} where $K$ is the squared-gradient coefficient that will be estimated by using the results of Ref. \onlinecite{article:lutsko-2011-c}, and the Helmholtz free energy per unit volume can be calculated based on a pair potential using thermodynamic perturbation theory or liquid-state integral equation methods. Substituting the PLP into equation (\ref{eq:squared-gradient}) yields, \begin{widetext} \begin{align} \beta\Delta F(R;w)=&\frac{4\pi}{3}\left(\max(R-w,0)\right)^3\beta\Delta f(\rho_0) +\left(1-\delta^3(R)\right)\,\beta\Delta f(\rho_{ext}(R))\,V_T \notag \\ &+\int_{\max(R-w,0)}^{R}4\pi\,r^2\,\beta\Delta f\left(\rho_0-(\rho_0-\rho_{ext}(R))\frac{r-R+w}{w}\right)dr \notag \\ &+\frac{\beta K}{2}\frac{4\pi}{3}\left(R^3-\max(R-w,0)^3\right)\left(\frac{% \rho_0-\rho_{ext}(R)}{w}\right)^2 \label{eq:W-EM-squaredGradient} \end{align} \end{widetext} This equation has exactly the same structure as that derived for infinite systems, except for the second term, which accounts for the confinement. \section{Results and comparisons\label{sec:resultsAndComparisons}} The theory previously presented was evaluated by considering a model of globular proteins, as was previously done in the case of infinite systems.
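All the ingredients of the extended model are now explicit, so the work of formation can be evaluated numerically for any given free energy density. The following minimal sketch displays the structure of Eqs. (\ref{eq:rho-ext-EM}) and (\ref{eq:W-EM-squaredGradient}); the free energy density \texttt{beta\_f} and all parameter names are user-supplied placeholders introduced for this illustration, and the sketch is not the code actually used for the results below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def psi(R, w, V_T):
    # psi(R; w) from eq. (rho-ext-EM)
    Rm = max(R - w, 0.0)
    return (4*np.pi/(w*V_T)) * ((R**4 - Rm**4)/4.0
                                + (w - R)*(R**3 - Rm**3)/3.0)

def rho_ext(R, w, rho0, rho_av, R_T):
    # outer density fixed by mass conservation, eq. (rho-ext-EM)
    V_T = 4*np.pi/3 * R_T**3
    a = (R/R_T)**3 - psi(R, w, V_T)
    return (rho_av - a*rho0) / (1.0 - a)

def beta_DF(R, w, rho0, rho_av, R_T, beta_f, beta_K):
    # work of cluster formation, eq. (W-EM-squaredGradient)
    V_T = 4*np.pi/3 * R_T**3
    Rm = max(R - w, 0.0)
    re = rho_ext(R, w, rho0, rho_av, R_T)
    Df = lambda rho: beta_f(rho) - beta_f(rho_av)    # beta*Delta f
    ramp = lambda r: 4*np.pi*r**2 * Df(rho0 - (rho0 - re)*(r - R + w)/w)
    bulk = 4*np.pi/3 * Rm**3 * Df(rho0) + (1 - (R/R_T)**3)*Df(re)*V_T
    grad = 0.5*beta_K * (4*np.pi/3)*(R**3 - Rm**3) * ((rho0 - re)/w)**2
    return bulk + quad(ramp, Rm, R)[0] + grad
\end{verbatim}
Minimizing \texttt{beta\_DF} over $(R,\rho_0,w)$ then yields the stable cluster of Eq. (\ref{eq:stable-cluster-EM}).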
The solvent was treated implicitly by considering Brownian dynamics of the solute molecules, which interact via an effective pair potential that we take to be the ten Wolde-Frenkel potential,\cite{article:tenwolde-frenkel-1997} \begin{align} v(r)=\begin{cases} \infty, & r\leq\sigma\\ \frac{4\epsilon}{\alpha^2}\left(\frac{1}{\left(\left(\frac{r}{\sigma}\right)^2-1\right)^{6}}- \alpha\,\frac{1}{\left(\left(\frac{r}{\sigma}\right)^2-1\right)^{3}}\right), & r>\sigma\\ \end{cases} \end{align} with $\alpha=50$; the potential is then cut off at $r_c=2.5\,\sigma$ and shifted so that $v(r_c)=0$. In order to compare the results obtained with the present theory with those reached for infinite systems, we fixed the temperature at $k_BT=0.375\,\epsilon$. The free energy density $f(\rho )$ was computed using thermodynamic perturbation theory. The squared-gradient coefficient was calculated making use of the results in Ref. \onlinecite{article:lutsko-2011-c}, i.e. \begin{align} \beta K\simeq -\frac{2\pi }{45}d^{5}\beta v(d)+\int_{d}^{\infty }\left( 2d^{2}-5r^{2}\right) \beta v(r)r^{2}dr \label{eq:K-calculation-app} \end{align} with $d$ being the effective hard-sphere diameter. Under these conditions, it was shown that the squared-gradient coefficient is $\beta K=1.80322\,\sigma^5$. Finally, the CNT value for the surface tension was computed by using the following expression for a planar interface,\cite{article:lutsko-duran-2013} \begin{equation} \gamma_{\text{CNT}}=(\rho_0^\text{coex}-\rho_{av}^\text{coex}) \sqrt{2K\overline{\omega}_0^{\text{coex}}}, \end{equation} with \begin{equation} \overline{\omega}_0^\text{coex}=\frac{1}{(\rho_0^{\text{coex}}-\rho_{av}^{\text{coex}})}\int_{\rho_{av}^{\text{coex}}}^{\rho_0^{\text{coex}}}(\omega(x)-\omega(\rho_{av}^{\text{coex}}))dx. \end{equation} \subsection{Work of cluster formation} The energy barrier for cluster formation is a key quantity in nucleation theories, as is the comparison of its value for different average densities, or supersaturation values under CNT conditions. In order to make contact with the results obtained for infinite systems, we use as independent variable the effective supersaturation, $S_e$, which is the average density divided by the coexistence density (for infinite systems) at the given temperature. We evaluated the free energy models proposed in section \ref{sec:parametrizedProfiles} for effective supersaturations from $S_e=1.125$ to $S_e=2.5$, thus covering a wide range of critical sizes, from very large to very small. \begin{figure}[t] \centering{} \includegraphics[width=0.49\textwidth]{figures/F_plot_1125.pdf} \caption{\label{fig:F-1125} The free energy of cluster formation as a function of the number of molecules inside the cluster at $S_e=1.125$ in a confined system of total volume $V_T=4.5\times 10^6\sigma^{3}$, using the modified capillary model with $\gamma$ being that calculated from infinite systems, and the extended model, for which we tested different combinations of the characteristic parameters $\rho_0$ and $w$.
This graph shows that the liquid phase is not stable, so that nucleation will not proceed.} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=0.35\textwidth]{figures/F_inset_1175.pdf} \includegraphics[width=0.35\textwidth ]{figures/F_plot_1175.pdf}\\ \vspace{0.15cm} \noindent{}\includegraphics[width=0.35\textwidth ]{figures/F_inset_15.pdf} \includegraphics[width=0.35\textwidth ]{figures/F_plot_15.pdf}\\ \vspace{0.15cm} \noindent{}\includegraphics[width=0.35\textwidth ]{figures/F_inset_25.pdf} \includegraphics[width=0.35\textwidth ]{figures/F_plot_25.pdf} \caption{The Helmholtz free energy as a function of cluster size, $\Delta N$, at $S_e=1.175$, $1.5$ and $2.5$ for different cluster models. The left column shows zooms of the figures in the right column around the critical size at each supersaturation. The right column shows the existence of a stable size beyond which the energy of formation grows steeply. The total volume is again $V_T=4.5\times10^{6}\sigma^{3}$.} \label{fig:F-all} \end{center} \end{figure*} The work of cluster formation was evaluated by using equations (\ref{eq:W-MCM}) and (\ref{eq:W-EM-squaredGradient}) for the modified capillary and extended models, respectively. Concerning the MCM, we fixed the surface tension to equal the CNT value calculated in the previous study for infinite systems,\cite{article:lutsko-duran-2013} i.e. $\gamma=\gamma_{\text{CNT}}$. However, the inner density $\rho_0$ was adjusted so as to minimize the free energy of the stable cluster, yielding $\rho_{{st}^{(cap)}}$, unlike the classical capillary model, where the inner density is set to be that of the new phase, $\rho_l$. As for the extended model based on the PLP, we studied several ways of choosing the characteristic parameters, so that we can more easily separate the effects of the confinement, the interior density and the surface width. Thus, we tested three different combinations of the values $\rho_0$ and $w$: \begin{enumerate} \item[a)] Set the density $\rho_0=\rho_l$ and $w=w_0$ (from dCNT), \item[b)] Set the density $\rho_0=\rho_{{st}^{(cap)}}$ and $w=w_0$, \item[c)] Look for the pair $(\rho_{{st}^{(plp)}},w_{{st}^{(plp)}})$ that minimizes the free energy (Eq. \ref{eq:W-EM-squaredGradient}) of the stable cluster. \end{enumerate} Figures \ref{fig:F-1125} and \ref{fig:F-all} show the free energy landscapes at $S_e=1.125$ and at $S_e=1.175$, $1.5$, $2.5$, respectively. The supersaturation values were divided into these subsets to highlight the fact that nucleation is inhibited in the first case while it still occurs in the others, as is obvious from these figures. On the one hand, in both cases we can observe the most important effect of considering confinement, which is the emergence of a local (stable or metastable) minimum beyond the critical size as a result of the finite mass. This is a new property, which has no counterpart for an infinite system. Depending on the total amount of material, such a minimum will be metastable (Fig. \ref{fig:F-1125}) or stable (Fig. \ref{fig:F-all}). On account of this fact a new effect arises, namely the control of the nucleation rate, and of nucleation itself, through the total volume. Indeed, with the volume previously specified, at $S_e=1.125$ no nucleation event will occur, given that the liquid (supposedly the new phase) is no longer stable.
On the other hand, it is clear from those figures that the capillary model with a fixed $\gamma$ produces results close to those obtained with the extended models, at least up to $S_e=1.5$. In addition, we observe that the interface width plays a key role in the finite-width models, lowering the energy of both the critical and the stable cluster, since $\rho_{st^{(cap)}}\simeq\rho_{st^{(plp)}}\simeq \rho_l$ (see Table \ref{tab:j}). Indeed, what we found is that the width value which minimizes the stable-cluster energy is about twice the value $w_{0}$. These results are also very similar to those for the infinite case, as can be seen from the left column of Fig. \ref{fig:F-all}. Finally, in view of these results an interesting conclusion can be drawn in terms of experimental setups: control of the total volume makes it possible to modulate the stability of a given phase. This is an interesting result for crystallization experiments in small volumes (e.g. microfluidics), since it would imply that the effective solubility curve could be controlled at will. \begin{figure*} \begin{center} \includegraphics[width=0.325\textwidth]{figures/P_st_1175.pdf} \includegraphics[width=0.325\textwidth]{figures/P_st_15.pdf} \includegraphics[width=0.325\textwidth ]{figures/P_st_25.pdf} \caption{The stationary size distribution for the supersaturation values under which nucleation can proceed, $S_e=1.175$ (left panel), $S_e=1.5$ (center panel) and $S_e=2.5$ (right panel), with $V_T=4.5\times10^{6}\sigma^{3}$ and $R_+=1.5R_*$.} \label{fig:Pst} \end{center} \end{figure*} \subsection{The stationary distribution} A straightforward connection with experimental measurements can be made via the PDF, which is essentially the quantity obtained by techniques like \emph{dynamic light scattering} (DLS).\cite{book:berne-2000-dls} Thus, the stationary PDF offers us another way to test the theories presented above. In addition, this quantity is required, both in its exact (Eq. \ref{eq:steady-state-final-pdf}) and approximate (Eq. \ref{eq:stationary-pdf-approximated}) forms, to determine the nucleation rate, and so we need to test its validity. In order to do that, we have to compute the PDF for the different models in terms of a common variable, since $R$ does not mean the same thing in the two models, as we already pointed out. Thus, the calculations will be performed using the equimolar radius, $R_E$, which requires the transformation, \begin{align} \overline{P}(R_E)=P(R)\frac{dR}{dR_E}, \end{align} with $R_E$ being equivalent to $R$ for the MCM or being given by equation (\ref{eq:equimolar-R-EM}) for the PLP. The stationary size distributions are displayed in figure \ref{fig:Pst}, showing good agreement with the results for the infinite case. The shape of the PDF is faithfully reproduced by the approximate equation (Eq. \ref{eq:PDF-equilibrium}), at least for the lower effective supersaturations (left and center panels). However, the normalization is not equally well estimated, which is a result of the rapid change of the free energy with the cluster size for small clusters. Secondly, while for the MCM the approximation still remains a good estimate at the highest density ($S_e=2.5$), a significant error arises for the extended models. The worst result is obtained for the extended model with a minimized stable cluster, due to the fact that the system is in the pseudospinodal region,\cite{article:xu-ting-kusaka-wang-2014-reviewNucleation} i.e.
$\beta\Delta F_*\sim 1$, so that the assumption that small sizes govern the integral is quite crude. Indeed, for these density values one would expect cluster-cluster interactions to play a key role, thus violating the hypotheses underlying these calculations, as was noticed for infinite systems. Notwithstanding, we conclude that the capillary model exhibits a surprising ability to capture the main properties of nucleation even for finite systems. \subsection{Nucleation rates} We end by comparing the nucleation (escape) rates in the different models previously introduced, as shown in Table \ref{tab:j}. It is apparent that for the lower densities the nucleation rates are much lower for the extended models with $w$ taken from dCNT calculations than for the capillary model, which is essentially due to the higher energy barrier associated with both of them. On the other hand, one observes the opposite situation when the extended model with a minimized stable cluster is considered, since the energy barrier is lower than that of the MCM (see Fig. \ref{fig:F-all}). For the other cases the capillary approximation yields results similar to the extended models and to the CNT predictions. Next, we consider the variation of the nucleation rate as a function of volume. For the sake of simplicity, since results of similar shape are obtained for each density, we selected $S_e=1.5$. This calculation is shown in Fig. \ref{fig:j_VT_15}. A surprising effect is observed near the zero-rate zone: the nucleation rate exhibits a maximum for very small volumes and after that relaxes quickly to a steady value, which is nearly the one presented in Table \ref{tab:j}. This is the result of a competition between two effects. On the one hand, the inner density of the cluster decreases with increasing total radius, so that the bulk free energy increases. On the other hand, the free energy associated with the zone outside the cluster decreases when the total volume increases. It is this competition that causes a minimum in the free-energy barrier and, hence, the maximum in the nucleation rate. Before the maximum, the nucleation rate passes from zero to non-zero values in a very narrow region. From this result we can draw the conclusion that confined systems are well approximated by the infinite-system predictions, unless the volume under consideration is very close to the minimum volume for nucleation. \begin{figure}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{figures/j_vs_VT_S_15_cap_EM_EM2.pdf} \caption{Nucleation rates as functions of the total volume at $S_e=1.5$. The extended models considered here are: 1) with $\rho_0=\rho_{st^{(cap)}}$ and $w=w_0$, and 2) with $\rho_0=\rho_{st^{(plp)}}$ and $w=w_{st^{(plp)}}$, i.e. those values which minimize the stable-cluster energy. It is observed how the confinement takes effect in a very narrow region, resulting in a maximum nucleation rate before inhibiting the process.} \label{fig:j_VT_15} \end{center} \end{figure} \section{Conclusions\label{sec:conclusions}} In this work, a recent reformulation of classical nucleation theory\cite{article:lutsko-duran-2013} has been extended to consider finite systems. The motivation for making such an effort arises from the explosion of interest in the nucleation process driven by new techniques, such as microfluidics, where the hypotheses made by CNT are probably far from reality.
Given that the dynamical reformulation of CNT was founded on a more fundamental framework, it was relatively easy to modify its derivation to take into account the mass-conservation law along with a finite volume and to go beyond the initial scope of CNT. With this goal attained, general expressions for both the stationary distribution function and the nucleation rate were obtained. These were ultimately used with two different parametrized density profiles: a modified version of the capillary model that accounts for mass conservation, and a piecewise-linear profile. The results obtained thereby allow a direct comparison with those for infinite systems. The main conclusion we can draw from this study is that the nucleation rate can be somewhat enhanced in a confined system. However, in practice confinement affects a very narrow range of volumes, which is also why CNT produces good estimates. Surprisingly, the different profiles proposed here gave similar results, with the main difference between them lying in the free energy barrier, as is also the case for infinite systems. That said, it seems to us that the most natural way to further develop dCNT would be to allow the inner density to vary freely within the capillary model, which strikes a good balance between simplicity and accuracy. The nucleation rates were calculated in terms of the mean first-passage time,\cite{article:hanggi-talkner-borkovec-1990,article:wedekind-strey-reguera-2007,article:lundrigan-2009} which has been a widely used approach in this field. These calculations involved ingredients similar to those required to compute the stationary distribution function. The latter was evaluated numerically (Eq. \ref{eq:steady-state-final-pdf}) and by using its approximate version (Eq. \ref{eq:stationary-pdf-approximated}). Good agreement between the exact and approximate expressions was found for low and intermediate densities, while for higher values the approximation became less accurate. Accordingly, the same behavior can be observed in the nucleation rates in Table \ref{tab:j}. However, the fact that high densities yield worse approximations is not a key problem, since in such a regime the hypothesis of non-interacting clusters is unlikely to remain valid. Finally, the volume of the system under study was varied over a wide range to study its effect on the nucleation rate. It was found that the finite-volume effect is only noticeable for a narrow range of volumes and that it rapidly vanishes as the volume grows, so that CNT and dCNT are accurate above this threshold. \begin{turnpage} \begin{table*} \caption{Properties of the capillary and extended cluster models as a function of the effective supersaturation ratio $S_e$, with $V_T=4.5\times10^{6}\,\sigma^{3}$. The absorbing wall was set at $R_+=1.5\,R_*$.
}\label{tab:j} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} \hline\hline && \multicolumn{6}{|c}{Modified Capillary Model} \\\hline $S_e$ & ${j_{nc}}e^{-\Delta\beta F_*}$ & $R_{*E}$ & $\Delta N_*$ & $\Delta\beta F_*$ & $\frac{j}{ {j_{nc}}}$& $\frac{j_{app}}{ {j_{nc}}}$ & $\rho_0$\\ \hline 1.175 & 0.695 & 7.94 & 1.22$\times 10^3$ & 80.8 & 0.198 & 0.191 & 0.66503 \\ 1.5 & 0.770 & 3.3 & 84.4 & 14.3 & 0.623 & 0.772 & 0.66438 \\ 2 & 1.532 & 2.15 & 21.9 & 6.06 & 0.548 & 0.834 & 0.66412 \\ 2.5 & 2.758 & 1.84 & 12.8 & 4.44 & 0.393 & 0.672 & 0.66401 \\ \hline\hline \end{tabular} \vspace{0.5cm} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline\hline && \multicolumn{5}{|c|}{Extended model 0: $\rho_0=\rho_l$, $w=w_0$} & \multicolumn{5}{|c|}{Extended model 1: $\rho_0=\rho_{st^{(cap)}}$, $w=w_0$} & \multicolumn{5}{|c}{Extended model 2: $\rho_0=\rho_{st^{(plp)}}$, $w=w_{st^{(plp)}}$} \\ \hline $S_e$ & $ {j_{nc}}e^{-\Delta\beta F_*}$ & $R_{*E}$ & $\Delta N_*$ & $\Delta\beta F_*$ & $\frac{j}{ {j_{nc}}}$& $\frac{j_{app}}{ {j_{nc}}}$ & $R_{*E}$ & $\Delta N_*$ & $\Delta\beta F_*$ & $\frac{j}{ {j_{nc}}}$& $\frac{j_{app}}{ {j_{nc}}}$ & $R_{*E}$ & $\Delta N_*$ & $\Delta\beta F_*$ & $\frac{j}{ {j_{nc}}}$& $\frac{j_{app}}{ {j_{nc}}}$\\ \hline 1.175 & 0.695 & 8.03 & 933.25& 81.9922 & 0.0041 & 0.0028 & 8.02 &926.95 &81.5787 & 0.0063 & 0.0043 & 7.83 &787.32 &75.2840 & 2.9234 &1.7871\\ 1.5 & 0.770 & 3.39 & 51.68 & 14.9410 & 0.0313 & 0.0272 & 3.39 &51.37 &14.7634 & 0.0380 & 0.0332 & 3.09 &25.79 &10.6570 & 1.6432 & 1.2196\\ 2 & 1.532 & 2.16 & 10.57 & 5.8644 & 0.0903 & 0.1045 & 2.16 &10.52 &5.7515 & 0.1036 & 0.1205 & 1.82 &2.97 &2.8714 & 1.0116 &1.0912\\ 2.5 & 2.758 &1.73 & 4.62 & 3.4804 & 0.1605 & 0.2349 & 1.73 &4.60 &3.3993 & 0.1799 & 0.2648 & 1.39 &0.94 &1.2245 & 1.0186 &1.0338\\ \hline\hline \end{tabular} \vspace{0.5cm} \begin{tabular}{c|c|c|c|c|c|c|c} \hline\hline &&& \multicolumn{1}{|c|}{Extended model 0} & \multicolumn{2}{|c|}{Extended model 1} & \multicolumn{2}{|c}{Extended model 2} \\ \hline $S_e$ &$\rho_l$ & $ {j_{nc}}/j_{\text{cnt}}$ &$\frac{j}{j_{\text{cnt}}}$ &$\rho_0$ & $\frac{j}{j_{\text{cnt}}}$ &$\rho_0$ & $\frac{j}{j_{\text{cnt}}}$\\ \hline 1.175 &0.665& 0.47 & 0.0235 & 0.6650 & 0.0358 & 0.6636 & 16.73 \\ 1.5 &0.665& 1.15 & 0.3409 & 0.6644 & 0.4146 & 0.6635 & 17.91 \\ 2 &0.665& 1.63 & 1.0449 & 0.6641 & 1.1988 & 0.6634 & 11.71 \\ 2.5 &0.665& 2.57 & 2.3405 & 0.6640 & 2.6244& 0.6634 & 14.86 \\ \hline\hline \end{tabular} \end{center} \end{table*} \end{turnpage} \begin{acknowledgments} The work of J.F.L. is supported in part by the European Space Agency under Contract No. ESA AO-2004-070 and by FNRS Belgium under Contract No. C-Net NR/FVH 972. M.A.D. acknowledges support from the Spanish Ministry of Science and Innovation (MICINN), FPI grant BES-2010-038422 (project AYA2009-10655). \end{acknowledgments} \bibliographystyle{unsrtnat}
2,869,038,154,210
arxiv
\section{Introduction}\label{s:intro} In this note we propose an answer to the following question: Assume that $M$ is a smooth manifold, $\mathfrak{g}$ an $L_\infty$-algebra and $\alpha$ a flat connection on $M$ with values in $\mathfrak{g}$, i.e.,\ a Maurer--Cartan element of the $L_\infty$-algebra $\mathfrak{g} \hat{\otimes} \Omega(M)$; what are the holonomies associated to the flat connection $\alpha$? Our answer differs from those that have appeared in the literature, such as \cite{Picken,SatiSchreiberStasheff,SchreiberWaldorf,Tradleral,Yekutieli}, where various notions of two-dimensional parallel transport are considered. In order to motivate our answer, let us first discuss the case where $\mathfrak{g}=\textrm{End} V$ is the Lie algebra of endomorphisms of a finite-dimensional vector space. In this case, $\alpha$ is just a flat connection on the trivial vector bundle $V$, and by solving the differential equation for parallel transport, one obtains the holonomy $\hat{\mathsf{hol}}(\sigma)\in \textrm{End} V\subset \mathbb{U}(\textrm{End} V)$ associated to a path $\sigma\colon I \rightarrow M$. One can view this whole assignment as an element $\hat{\mathsf{hol}}$ of $\textrm{End} V \otimes C^{\bullet}(M)$, the differential graded algebra of $\textrm{End} V$-valued smooth singular cochains on $M$. The flatness of $\alpha $ implies the homotopy invariance of the holonomy. This corresponds to the fact that $\hat{\mathsf{hol}}$ is a Maurer--Cartan element. Indeed, an element $\beta \in \textrm{End} V \otimes C^1(M)$ is a Maurer--Cartan element precisely if it is homotopy invariant in the sense that for any two-dimensional simplex one has \begin{center} \begin{tikzpicture}[scale=1] \coordinate (A) at (-1,-0.5); \coordinate (B) at (0,-0.5); \coordinate (C) at (0,0.5); \coordinate (X) at (0,0.5); \coordinate (Y) at (-0.1,0.5); \coordinate (Z) at (-0.1,-0.5); \coordinate (W) at (0,-0.5); \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.4,0.5); \coordinate (z) at (-0.4,-0.5); \coordinate (w) at (-0.5,-0.5); \matrix[column sep=0.8cm,row sep=0.5cm] { \draw(A) -- (B) -- (C) -- cycle;\draw[ultra thick](A)--(B);\draw[ultra thick](C)--(B); &node[$-$]& \draw(A) -- (B) -- (C) -- cycle;\draw[ultra thick](A)--(C);& node{$=$}&&node{$0$}node[$.$]\\ }; \end{tikzpicture} \end{center} Here the bold edges represent holonomies associated to the corresponding paths, and concatenation of paths corresponds to multiplication in the algebra $\textrm{End} V$. Observe that a Maurer--Cartan element of $ \textrm{End} V \otimes C^{\bullet}(M)$ corresponds naturally to a morphism of differential graded coalgebras $C_\bullet(M)\rightarrow \mathsf{B}(\textrm{End} V)$. Using the explicit iterated integral formulas for the parallel transport, one can show that this morphism factors through the bar coalgebra of the (completed) universal enveloping algebra of $\textrm{End} V$: \begin{align*} \xymatrix{ C_{\bullet}(M) \ar[r]^{{\mathsf{hol}_\alpha}\,\,} \ar[rd]&\mathsf{B} \hat{\mathbb{U}}(\textrm{End} V)\ar[d]^p\\ & \mathsf{B}(\textrm{End} V). 
} \end{align*} This construction works for any filtered Lie algebra $\mathfrak{g}$, and we conclude that the holonomies of a flat connection with values in $\mathfrak{g}$ can be interpreted as a morphism of differential graded coalgebras $\mathsf{hol}_{\alpha}\colon C_{\bullet}(M)\rightarrow \mathsf{B} \hat{\mathbb{U}}(\mathfrak{g}),$ where $\mathsf{B}\hat{\mathbb{U}}(\mathfrak{g})$ denotes the bar construction of the completion of the universal enveloping algebra $\mathbb{U}(\mathfrak{g})$. The case where the $L_\infty$-algebra $\mathfrak{g}$ is the graded Lie algebra of endomorphisms of a graded vector space $V$ corresponds to holonomies of flat $\mathbb{Z}$-graded connections. This has been studied recently by Igusa \cite{I}, Block and Smith \cite{BS}, and Arias Abad and Sch\"atz \cite{AS}, and ultimately relies on Gugenheim's \cite{G} $\mathsf{A}_{\infty}$-version of de Rham's theorem. In turn, Gugenheim's construction is based on Chen's theory of iterated integrals \cite{Chen}. We extend this approach to flat connections with values in $L_\infty$-algebras. The holonomy of $\alpha$ is a morphism of differential graded coalgebras $\mathsf{hol}^\infty_\alpha\colon C_\bullet(M) \rightarrow \mathsf{B} \hat{\mathbb{U}}_\infty(\mathfrak{g})$.\footnote{Throughout the introduction, we gloss over the technical issue that one has to work with the completed bar complex $\hat{\mathsf{B}}\hat{\mathbb{U}}_{\infty}(\mathfrak{g})$ of $\hat{\mathbb{U}}_{\infty}(\mathfrak{g})$, which is not a differential graded coalgebra, because its ``comultiplication'' does not map into the tensor product, but into the completion. See Appendix~\ref{section:twisting_cochains} for details.} We first need to explain what the universal enveloping algebra $\mathbb{U}_{\infty}(\mathfrak{g})$ of an $L_\infty$-algebra $\mathfrak{g}$ is. Several proposals for a definition of the enveloping algebra of an $L_\infty$-algebra exist in the literature, e.g.,\ \cite{Rossi,B, LM}. Following \cite{B}, we use the idea of defining the enveloping algebra via the strictification $\mathbb{S}(\mathfrak{g})$ of the $L_\infty$-algebra $\mathfrak{g}$. The differential graded Lie algebra $\mathbb{S}(\mathfrak{g})$ is naturally quasi-isomorphic to $\mathfrak{g}$, and we define the enveloping algebra of $\mathfrak{g}$ to be that of its strictification. Our main result is as follows: \begin{theoremnn} Suppose that $\alpha$ is a flat connection on $M$ with values in a filtered $L_\infty$-algebra $\mathfrak{g}$. Then there is a natural homomorphism of differential graded coalgebras \[\mathsf{hol}^\infty_\alpha\colon C_\bullet(M) \rightarrow \mathsf{B}\hat {\mathbb{U}}_\infty(\mathfrak{g}).\] \end{theoremnn} In order for this notion of holonomy to be reasonable, it should be consistent with the standard definition in the case of Lie algebras. Indeed, in the case where $\mathfrak{g}$ is a Lie algebra, the usual parallel transport provides a holonomy map: \[{\mathsf{hol}}\colon C_\bullet(M)\rightarrow \mathsf{B} \hat{\mathbb{U}}(\mathfrak{g}).\] On the other hand, there is a natural map of differential graded coalgebras \[\mathsf{B} \hat{\mathbb{U}}(\rho)\colon \mathsf{B} \hat{\mathbb{U}}_\infty (\mathfrak{g})\rightarrow \mathsf{B} \hat{\mathbb{U}}(\mathfrak{g}),\] and the following diagram commutes: \begin{align*} \xymatrix{ C_\bullet(M) \ar[r]^{\mathsf{hol}^\infty_\alpha} \ar[rd]_{{\mathsf{hol}}_\alpha}& \mathsf{B} \hat{\mathbb{U}}_\infty (\mathfrak{g}) \ar[d]^{\mathsf{B} \hat{\mathbb{U}}(\rho)}\\ &\mathsf{B} \hat{\mathbb{U}}(\mathfrak{g}). 
} \end{align*} The notion of holonomy on which Theorem~\ref{main theorem} is based admits a rather visual description. Given any filtered differential graded algebra $(A,\partial)$, a morphism of differential graded coalgebras $\phi\colon C_\bullet(M)\rightarrow \mathsf{B} \hat{A}$ corresponds to a Maurer--Cartan element $\overline{\phi}$ in the algebra $A \hat{\otimes} C^{\bullet}(M)$, which is an element in the vector space $\mathsf{Hom}(C_\bullet(M), A)$. Thus, $\phi$ can be interpreted as a rule that assigns to each simplex in $M$ an element of the algebra $\hat{A}$, which we think of as being the holonomy associated to that simplex. Since the algebra $A\hat{\otimes} C^{\bullet}(M)$ is bigraded, the condition for $\overline{\phi}$ to be Maurer--Cartan decomposes into a sequence of equations. In degree~0, the condition is that $\phi$ assigns to every point $p \in M$ a Maurer--Cartan element of $\hat{A}$. This implies that if we set $\partial_p:=\partial+[\phi(p),\_],$ then $\partial_p \circ \partial_p=0$. Let us denote the complex $(A,\partial_p)$ by $A_p$. Given a simplex $\sigma: \Delta_k \to M$, we denote the commutator between the operation of multiplying by $\phi(\sigma)$ and of applying the differentials associated to the first and last vertex of $\sigma$ by $[\partial, \phi(\sigma)]$, i.e., $[\partial, \phi(\sigma)]:= \partial_{v_k} \circ \phi(\sigma)-(-1)^{1+|\sigma|} \phi(\sigma)\circ \partial_{v_0}$. The Maurer--Cartan equation in degree $1$ is \begin{center} \begin{tikzpicture}[scale=1] \coordinate (A) at (-0.5,0); \coordinate (B) at (0.5,0); \coordinate (X) at (0,0.5); \coordinate (Y) at (-0.1,0.5); \coordinate (Z) at (-0.1,-0.5); \coordinate (W) at (0,-0.5); \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.4,0.5); \coordinate (z) at (-0.4,-0.5); \coordinate (w) at (-0.5,-0.5); \matrix[column sep=0.8cm,row sep=0.5cm] { \draw(X) -- (Y) -- (Z)--(W); &node{$\partial ,$} & \draw[ultra thick](A) -- (B);\fill[ultra thick](A); &\draw(x) -- (y) -- (z)--(w); & node{$=$}&&node{$0\quad ,$}\\ }; \end{tikzpicture} \end{center} \noindent which says that multiplication by the holonomy associated to a path is an isomorphism between the complexes $A_{v_0}$ and $A_{v_1}$. The equation in degree~2 reads \begin{center} \begin{tikzpicture}[scale=1] \coordinate (A) at (-1,-0.5); \coordinate (B) at (0,-0.5); \coordinate (C) at (0,0.5); \coordinate (X) at (0,0.5); \coordinate (Y) at (-0.1,0.5); \coordinate (Z) at (-0.1,-0.5); \coordinate (W) at (0,-0.5); \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.4,0.5); \coordinate (z) at (-0.4,-0.5); \coordinate (w) at (-0.5,-0.5); \matrix[column sep=0.8cm,row sep=0.5cm] { \draw(X) -- (Y) -- (Z)--(W); & node{$\partial ,$} &\draw(A) -- (B) -- (C)--cycle; \draw[shade](A)--(B)--(C);\draw(A)--(C);&\draw(x) -- (y) -- (z)--(w); & node{$=$}&\draw(A) -- (B) -- (C) -- cycle;\draw[ultra thick](A)--(B);\draw[ultra thick](C)--(B); &node{$-$}& \draw(A) -- (B) -- (C) -- cycle;\draw[ultra thick](A)--(C);& node{$,$}\\ }; \end{tikzpicture} \end{center} \noindent requiring that the two isomorphisms between the complexes $A_{v_0}$ and $A_{v_2}$ are homotopic, with a specified homotopy given by the holonomy associated to the triangle. 
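In formulas, writing $\phi(v_iv_j)$ for the holonomy of the edge of a simplex running from the vertex $v_i$ to $v_j$, and $\phi(v_0v_1v_2)$ for the holonomy of a triangle (a notation we introduce only for this remark), the two pictures above read \begin{align*} [\partial,\phi(v_0v_1)]&=0,\\ [\partial,\phi(v_0v_1v_2)]&=\phi(v_1v_2)\circ \phi(v_0v_1)-\phi(v_0v_2), \end{align*} so that each edge holonomy is a chain map $A_{v_0}\rightarrow A_{v_1}$, and each triangle holonomy is a chain homotopy between the two compositions $A_{v_0}\rightarrow A_{v_2}$; the signs depend on the conventions fixed by the commutator above.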
Similarly, for the tetrahedron one obtains \begin{tikzpicture}[scale=1] \coordinate (A) at (-0.8,-0.3); \coordinate (B) at (0,-0.5); \coordinate (D) at (-0.1,0.5); \coordinate (C) at (0.5,-0.1); \coordinate (X) at (0,0.5); \coordinate (Y) at (-0.1,0.5); \coordinate (Z) at (-0.1,-0.5); \coordinate (W) at (0,-0.5); \coordinate (x) at (-0.5,0.5); \coordinate (y) at (-0.4,0.5); \coordinate (z) at (-0.4,-0.5); \coordinate (w) at (-0.5,-0.5); \matrix[column sep=0.8cm,row sep=0.5cm] { \draw(X) -- (Y) -- (Z)--(W); & node{$\partial ,$} &\draw(A) -- (B) -- (C) -- (D) -- cycle; \draw[shade](A)--(B)--(D); \draw[shade](B)--(C)--(D);\draw(B)--(D); \draw[dotted](A)--(C);\draw[dotted](A)--(C); \draw(A)--(D); &\draw(x) -- (y) -- (z)--(w); node{$=$} \\ && \draw(A) -- (B) -- (C) -- (D) -- cycle; \draw[shade=gray](B)--(C)--(D);\draw[thin](B)--(D); \draw[dotted](A)--(C);\draw[dotted](A)--(C); \draw[thick](A)--(B); &node{$-$}& \draw(A) -- (B) -- (C) -- (D) -- cycle; \draw[shade=gray](A)--(B)--(D);\draw(B)--(D); \draw[dotted](A)--(C); \draw[dotted](A)--(C); \draw(A)--(D);\draw[thick](D)--(C); & node{$+$}& \draw(A) -- (B) -- (C) -- (D) -- cycle; \draw[shade=gray](A)--(B)--(C);\draw(B)--(D); \draw[dotted](A)--(C);\draw[dotted](A)--(C);\draw(A)--(D); & node{$-$}& \draw(A) -- (B) -- (C) -- (D) -- cycle; \shade(A)--(C)--(D);\draw(B)--(D); \draw(A)--(D); \draw[dotted](A)--(C);\draw(C)--(D); & node{$.$}\\ }; \end{tikzpicture} Our main motivation to develop this version of parallel transport is the appearance of certain flat connections on configuration spaces $\mathsf{Conf}_d(n)$ of $n$ points in Euclidean space $\mathbb{R}^d$. In dimension $d=2$ these connections were introduced and studied by \v{S}evera and Willwacher in \cite{SW}. There, the flat connections mentioned above yield a homotopy between the formality maps for the little disks operad of Kontsevich \cite{K} and Tamarkin \cite{T}, respectively, provided that in the second one the Alekseev--Torossian associator is used. In Section 5, we discuss these connections on configuration spaces. We first explain a link between rational homotopy theory and the theory of flat connections with values in $L_\infty$-algebras. We then describe Kontsevich's model $^*\mathsf{Graphs}_d(n)$ of $\mathsf{Conf}_d(n)$ and the corresponding flat connections $\mathsf{SW}_d(n)$, extending the construction of \v{S}evera and Willwacher to higher dimensions. Finally, we demonstrate how to use this machinery to construct actions of the $\infty$-groupoid of $\mathsf{Conf}_d(n)$ on representations of quadratic differential graded Lie algebras, generalizing the holonomy representations of the braid groups. \subsection*{Acknowledgements}We thank Alberto Cattaneo, Ya\"el Fr\'egier, Pavol \v{S}evera, and Thomas Willwacher for several helpful conversations related to this project. Moreover, we thank Carlo Rossi for making available his unpublished work with J. Alm \cite{Rossi}. We would also like to thank Utrecht University (C.A.A.) and the University of Zurich (F.S.) for their hospitality. Finally, we are grateful to James D. Stasheff, the editor and the referees for their careful revisions and useful comments. \section{The universal enveloping algebra} \subsection{Basic definitions}\label{subsection:basic_definitions} In order to fix notations and conventions, we review the definitions of some functors and collect relevant facts. We essentially follow \cite{F}. \begin{definition} Let $V$ be a graded vector space. 
The suspension of $V$, denoted $\mathsf{s} V$, is the graded vector space $(\mathsf{s} V)^k:=V^{k+1}$. The desuspension of $V$, denoted $\mathsf{u} V$, is the graded vector space $(\mathsf{u} V)^k:=V^{k-1}$. \end{definition} \begin{definition} We will make use of the following categories: \begin{itemize} \item The category $\mathsf{DGA}_{(a)}$ of (augmented) differential graded algebras \item The category $\mathsf{DGC}_{(a)}$ of (co-augmented) differential graded coalgebras \item The category $\mathsf{DGCC}_{(a)}$ of (co-augmented) cocommutative differential graded coalgebras \item The category $\mathsf{DGLA}$ of differential graded Lie algebras \item The category $\mathsf{L}_{\infty}$ of $L_\infty$-algebras \end{itemize} For the relevant definitions please see \cite{LM,Getzler}. \end{definition} \begin{remark} We will assume that differential graded algebras and differential graded coalgebras are unital and co-unital, respectively. \end{remark} \begin{definition} The symmetric coalgebra $\mathsf{S}(V)$ of a graded vector space $V$ is the subspace of elements in the tensor coalgebra $\mbox{T\hspace{-.47em}T} V$ that are invariant under the action by $\Sigma_{\bullet}$, i.e.,\ the collection of actions of $\Sigma_n$ on $\mbox{T\hspace{-.47em}T}^n V$ defined by $$ \Sigma_n \times \mbox{T\hspace{-.47em}T}^n V \to \mbox{T\hspace{-.47em}T}^n V, \quad \sigma\bullet (x_1\otimes \cdots \otimes x_n) := (-1)^{|\sigma|} x_{\sigma(1)}\otimes \cdots \otimes x_{\sigma(n)},$$ for $x_1,\dots,x_n \in V$ homogeneous. Here $(-1)^{|\sigma|}$ refers to the Koszul sign, which is the character of the representation of $\Sigma_n$ on $\mbox{T\hspace{-.47em}T}^n V$ determined by $$\big( \cdots \otimes x_k \otimes x_{k+1} \otimes \cdots \mapsto \cdots \otimes x_{k+1} \otimes x_{k} \otimes \cdots \big) \quad \mapsto \quad (-1)^{|x_k||x_{k+1}|}.$$ There is a natural projection $p: \mbox{T\hspace{-.47em}T} V \to \mathsf{S}(V)$ given by $$p(x_1\otimes \cdots \otimes x_n) := \frac{1}{n!} \sum_{\sigma \in \Sigma_n}(-1)^{|\sigma|}x_{\sigma(1)}\otimes \cdots \otimes x_{\sigma(n)}.$$ The coproduct $\Delta: \mbox{T\hspace{-.47em}T} V \to \mbox{T\hspace{-.47em}T} V\otimes \mbox{T\hspace{-.47em}T} V $, defined via $$ \Delta(x_1\otimes \cdots \otimes x_n) := \sum_{k=0}^{n} (x_1\otimes \cdots \otimes x_k)\otimes (x_{k+1}\otimes \cdots \otimes x_n),$$ restricts to a graded commutative coproduct on $\mathsf{S}(V)$, which we also denote by $\Delta$. \end{definition} \begin{definition} The Chevalley--Eilenberg functor $ \mathsf{CE}: \mathsf{L}_{\infty} \to \mathsf{DGCC}_a$ is defined as follows: \begin{enumerate} \item To an $L_{\infty}$-algebra $\mathfrak{g}$, the functor $\mathsf{CE}$ associates the co-augmented differential graded cocommutative coalgebra $(\mathsf{CE}(\mathfrak{g}),\delta_g,\Delta)$, where: \begin{enumerate} \item $\mathsf{CE}(\mathfrak{g})$ is the symmetric coalgebra $\mathsf{S}(\mathsf{s}\mathfrak{g})$ of the suspension $\mathsf{s}\mathfrak{g}$ of $\mathfrak{g}$. The co-unit and co-augmentation are given by the identification $\mathsf{S}^0(\mathsf{s} \mathfrak{g})\cong \mathbb{R}$. \item The differential $\delta_g$ on $\mathsf{CE}(\mathfrak{g})$ is obtained from the $L_\infty$-structure on $\mathfrak{g}$ via the identification $\mathrm{Coder}(\mathsf{S}(\mathsf{s} \mathfrak{g}))\cong \mathsf{Hom}(\mathsf{S}(\mathsf{s} \mathfrak{g}),\mathsf{s} \mathfrak{g})$. 
\end{enumerate} \item A morphism of $L_\infty$-algebras $f:\mathfrak{g} \to \mathfrak{h}$ is a morphism of differential graded coalgebras $\mathsf{CE}(f): \mathsf{CE}(\mathfrak{g})\to \mathsf{CE}(\mathfrak{h})$. \end{enumerate} \end{definition} \begin{definition} The universal enveloping algebra functor $\mathbb{U}\colon \mathsf{DGLA}\hspace{-1pt} \rightarrow\hspace{-1pt} \mathsf{DGA}$ is defined as follows: \begin{enumerate} \item To a differential graded Lie algebra $(\mathfrak{g},d,[\cdot,\cdot])$, the functor $\mathbb{U}$ associates the differential graded algebra $(\mathbb{U}(\mathfrak{g}),d_\mathbb{U})$, where \begin{enumerate} \item $\mathbb{U}(\mathfrak{g})$ is the quotient of the tensor algebra $\mbox{T\hspace{-.47em}T} \mathfrak{g}$ by the two-sided ideal generated by elements of the form $x\otimes y - (-1)^{|x||y|}y\otimes x - [x,y]$, with $x,y \in \mathfrak{g}$ homogeneous. \item The differential $d_\mathbb{U}$ on $\mathbb{U}(\mathfrak{g})$ is inherited from $d_\mbox{T\hspace{-.47em}T}: \mbox{T\hspace{-.47em}T} \mathfrak{g} \to \mbox{T\hspace{-.47em}T} \mathfrak{g}$, where \begin{eqnarray*} d_\mbox{T\hspace{-.47em}T}(x_1\otimes \cdots \otimes x_n) &:=& \sum_{i=1}^n(-1)^{|x_1|+\cdots + |x_{i-1}|}\, x_1\otimes \cdots \otimes x_{i-1}\otimes dx_i\otimes x_{i+1}\otimes \cdots \otimes x_n. \end{eqnarray*} \end{enumerate} \item To a morphism $f:\mathfrak{g}\to \mathfrak{h}$ of differential graded Lie algebras, the functor $\mathbb{U}$ associates the morphism $ \mathbb{U}(f): \mathbb{U}(\mathfrak{g}) \to \mathbb{U}(\mathfrak{h})$ induced by $$ \mbox{T\hspace{-.47em}T}(f): \mbox{T\hspace{-.47em}T} \mathfrak{g} \to \mbox{T\hspace{-.47em}T} \mathfrak{h}, \quad \mbox{T\hspace{-.47em}T}(f)(x_1\otimes \cdots\otimes x_n) := f(x_1)\otimes \cdots \otimes f(x_n).$$ \end{enumerate} The (anti)symmetrization functor $\Sigma: \mathsf{DGA} \to \mathsf{DGLA}$ maps $(A,d,\cdot)$ to the differential graded Lie algebra $\Sigma A$, whose underlying complex is $(A,d)$ and whose Lie bracket is defined by setting $ [x,y] := x\cdot y - (-1)^{|x||y|} y \cdot x$. The functor $\Sigma: \mathsf{DGA} \to \mathsf{DGLA}$ is right adjoint to $\mathbb{U}: \mathsf{DGLA} \to \mathsf{DGA}$ and $\mathbb{U}$ preserves quasi-isomorphisms. \end{definition} \begin{definition} Let $(C,d,\Delta)$ be a co-augmented differential graded coalgebra. The {\em reduced coproduct} $\overline{\Delta}$ is defined on the kernel $\overline{C}$ of the co-unit map via $$\overline{\Delta}(x) := \Delta(x) - x\otimes 1 - 1\otimes x.$$ \end{definition} \begin{definition}\label{def:cobar} The cobar functor $\Omega: \mathsf{DGC}_a \to \mathsf{DGA}_a$ is defined as follows: \begin{enumerate} \item To a co-augmented differential graded coalgebra $(C,d,\Delta)$, the functor $\Omega$ associates the augmented differential graded algebra $(\Omega(C),\delta,\cdot)$, where: \begin{enumerate} \item The underlying augmented graded algebra is the tensor algebra $\mbox{T\hspace{-.47em}T}(\mathsf{u} \overline{C})$ of the desuspension $\mathsf{u} \overline{C}$. \item The differential $\delta$ of $\Omega(C)$ is determined by $\delta(\mathsf{u} x) := \mathsf{u} dx + \partial(\mathsf{u} x),$ where $\partial(\mathsf{u} x) = -\sum_i (-1)^{|x_i|} \mathsf{u} x_i\otimes \mathsf{u} y_i$ if $\overline{\Delta}(x) = \sum_i x_i\otimes y_i$. \end{enumerate} \item To a morphism $f: C\to D$ of co-augmented differential graded coalgebras, the functor $\Omega$ associates the morphism $\Omega(f): \Omega(C)\to \Omega(D)$ induced by $\mbox{T\hspace{-.47em}T}(\mathsf{u} f)$.
\end{enumerate} \end{definition} \begin{definition} The bar functor $\mathsf{B}: \mathsf{DGA}_a \to \mathsf{DGC}_a$ is defined as follows: \begin{enumerate} \item To an augmented differential graded algebra $(A,d,\cdot)$, the functor $\mathsf{B}$ associates the co-augmented differential graded coalgebra $(\mathsf{B}(A),\delta,\Delta)$, where: \begin{enumerate} \item The underlying co-augmented graded coalgebra is the tensor coalgebra $\mbox{T\hspace{-.47em}T}(\mathsf{s} \underline{A})$ of the suspension $\mathsf{s} \underline{A}$ of the augmentation ideal $\underline{A}$. \item The differential $\delta$ of $\mathsf{B}(A)$ is the coderivation given by \begin{eqnarray*} \delta(\mathsf{s} x_1 \otimes \dots \otimes \mathsf{s} x_k) &:=& -\sum_{i=1}^k (-1)^{n_i}(\mathsf{s} x_1 \otimes \dots \otimes \mathsf{s} dx_i \otimes \dots \otimes \mathsf{s} x_k)\\ &&+ \sum_{i=2}^k (-1)^{n_i} \mathsf{s} x_1 \otimes \dots \otimes \mathsf{s} (x_{i-1} x_i) \otimes\dots \otimes \mathsf{s} x_k, \end{eqnarray*} where $n_i:=|\mathsf{s} x_1|+\dots +|\mathsf{s} x_{i-1}|$ on homogeneous elements of $\underline{A}$. \end{enumerate} \item To a morphism $f: A\to A'$ of augmented differential graded algebras, the functor $\mathsf{B}$ associates the morphism $\mathsf{B} f: \mathsf{B} A \to \mathsf{B} A'$ induced by $\mbox{T\hspace{-.47em}T}(\mathsf{s} f)$. \end{enumerate} \end{definition} \begin{remark} In applications the bar complex is not sufficient and it has to be replaced by the completed bar complex; see Appendix~\ref{section:twisting_cochains} for details. \end{remark} \begin{definition} The Lie functor $\mathsf{L}: \mathsf{DGCC}_a \to \mathsf{DGLA}$ is defined as follows: \begin{enumerate} \item To a co-augmented differential graded cocommutative coalgebra $(C,d,\Delta)$, the functor $\mathsf{L}$ associates the differential graded Lie algebra $(\mathsf{L}(C),\delta,[\cdot,\cdot])$, where: \begin{enumerate} \item The underlying graded Lie algebra is the free graded Lie algebra on the desuspension $\mathsf{u} \overline{C}$ of $\overline{C}$. \item The differential $\delta$ on $\mathsf{L}(C)$ is the Lie derivation determined by \begin{align*} \delta(\mathsf{u} x) := \mathsf{u} dx + \partial(\mathsf{u} x) \in \mathsf{L}(C) \subset \mbox{T\hspace{-.47em}T}(\mathsf{u}\overline{C}), \end{align*} on homogeneous elements of $\overline{C}$, where $\partial(\mathsf{u} x) = -\sum_i (-1)^{|x_i|} \mathsf{u} x_i\otimes \mathsf{u} y_i$ if $\overline{\Delta}(x) = \sum_i x_i\otimes y_i$. Note that the cocommutativity of the coproduct guarantees that the right-hand side belongs to $\mathsf{L}(C)$. \end{enumerate} \end{enumerate} \end{definition} The following theorem will be essential for our construction. \begin{theorem}[Quillen \cite{Quillen}, Hinich \cite{Hinich}]\label{Quillen} The functor $\mathsf{L}\colon \mathsf{DGCC}_a \rightarrow \mathsf{DGLA}$ is left adjoint to $\mathsf{CE}\colon \mathsf{DGLA} \rightarrow \mathsf{DGCC}_a$. Moreover, the adjunction maps $X\to \mathsf{CE}(\mathsf{L}(X))$ and $\mathsf{L}(\mathsf{CE}(\mathfrak{g})) \to \mathfrak{g}$ are quasi-isomorphisms. \end{theorem} \begin{remark} The above theorem works under the hidden assumption that we restrict to the subcategory of \emph{connected} differential graded cocommutative coalgebras; see Appendix B of \cite{Quillen}. All the coalgebras to which we will apply the theorem are of this kind.\footnote{In contrast, the coalgebra $C_\bullet(M)$ is not connected.
This is what forces us to introduce the completed bar complex; see Appendix~\ref{section:twisting_cochains}.} \end{remark} \begin{definition} The strictification functor $\mathbb{S}: \mathsf{L}_{\infty} \to \mathsf{DGLA}$ is $\mathbb{S}:= \mathsf{L}\circ \mathsf{CE}$. \end{definition} \begin{corollary}\label{corollary:g_to_ST(g)} Let $\mathfrak{g}$ be an $L_\infty$-algebra. Then the unit of the adjunction between $\mathsf{L}$ and $\mathsf{CE}$, applied to $\mathsf{CE}(\mathfrak{g})$, gives a map \[ \eta \in \mathsf{Hom}_{\mathsf{DGCC}_a}(\mathsf{CE}(\mathfrak{g}), \mathsf{CE}(\mathbb{S}(\mathfrak{g})))\cong \mathsf{Hom}_{\mathsf{L}_{\infty}}(\mathfrak{g},\mathbb{S}(\mathfrak{g})), \] which is a quasi-isomorphism of $L_\infty$-algebras. \end{corollary} \begin{remark}\label{remark:ST(g)_to_g} In case $\mathfrak{g}$ is a differential graded Lie algebra, there is also a morphism $ \rho: \mathbb{S}(\mathfrak{g}) \to \mathfrak{g}$ of differential graded Lie algebras, obtained from $\mathrm{id}_{\mathsf{CE}(\mathfrak{g})}$ via the adjunction \[ \mathsf{Hom}_{\mathsf{DGLA}}(\mathbb{S}(\mathfrak{g}),\mathfrak{g}) = \mathsf{Hom}_{\mathsf{DGLA}}(\mathsf{L}(\mathsf{CE}(\mathfrak{g})),\mathfrak{g}) \cong \mathsf{Hom}_{\mathsf{DGCC}_a}(\mathsf{CE}(\mathfrak{g}),\mathsf{CE}(\mathfrak{g})).\] Moreover, $\rho \circ \eta = \mathsf{id}$ holds, hence $\rho$ is a quasi-isomorphism. \end{remark} \begin{remark} Let $\iota$ denote the inclusion functor $\mathsf{DGCC}_a \rightarrow \mathsf{DGC}_a$. One can check that the functors $\Omega \circ \iota$ and $\mathbb{U} \circ \mathsf{L}$ are isomorphic. We sum up this subsection in the diagram \begin{align*} \xymatrix{ \mathsf{L}_{\infty} \ar[rr]^{\mathsf{CE}} \ar[rrrd]_{\mathbb{S}}& & \mathsf{DGCC}_a \ar[r]^{\iota} \ar[dr]^{\mathsf{L}} & \mathsf{DGC}_a \ar[r]^{\Omega}& \mathsf{DGA}_a\\ & & & \mathsf{DGLA}. \ar[ur]_{\mathbb{U}}& } \end{align*} Observe that the triangle on the left commutes, while the triangle on the right commutes up to a natural isomorphism. \end{remark} \subsection{The enveloping algebra}\label{subsection:universal_enveloping} Following \cite{B}, we now define the universal enveloping algebra of an $L_\infty$-algebra. The idea is to use the strictification functor. \begin{definition} The universal enveloping functor $\mathbb{U}_{\infty}\colon \mathsf{L}_{\infty} \rightarrow \mathsf{DGA}$ is given by\linebreak $\mathbb{U}_{\infty}:=\mathbb{U} \circ \mathbb{S}$. We call $\mathbb{U}_{\infty}(\mathfrak{g})$ the universal enveloping algebra of $\mathfrak{g}$. \end{definition} The universal enveloping algebra $\mathbb{U}_{\infty}(\mathfrak{g})$ of a differential graded Lie algebra $\mathfrak{g}$, seen as an $L_\infty$-algebra, is not the same as the usual enveloping algebra $\mathbb{U}(\mathfrak{g})$ of $\mathfrak{g}$. However, these two algebras are naturally quasi-isomorphic: \begin{proposition}\label{Halperin} Let $\mathfrak{g}$ be a differential graded Lie algebra. The map $\mathbb{U} (\rho)\colon$\linebreak $\mathbb{U}_{\infty}(\mathfrak{g}) \rightarrow \mathbb{U}(\mathfrak{g})$ induced from $\rho: \mathbb{S}(\mathfrak{g}) \to \mathfrak{g}$ is a quasi-isomorphism of differential graded algebras. \end{proposition} \begin{proof} This is immediate from the fact that $\rho$ is a quasi-isomorphism and that the functor $\mathbb{U}$ preserves quasi-isomorphisms.
\end{proof} As in the usual case of differential graded Lie algebras, the functor $\mathbb{U}_{\infty}$ can be characterized as a left adjoint to a forgetful functor: \begin{proposition}\label{prop:U_left_to_Sigma} The functor $\mathbb{U}_{\infty}\colon \mathsf{L}_{\infty} \rightarrow \mathsf{DGA}$ is left adjoint to the forgetful functor $\Sigma_\infty:=\iota \circ \Sigma\colon \mathsf{DGA} \rightarrow \mathsf{L}_{\infty},$ where $\iota\colon \mathsf{DGLA} \rightarrow \mathsf{L}_{\infty}$ is the inclusion functor. \end{proposition} \begin{proof} This is a formal consequence of the adjunctions discussed above: \begin{eqnarray*} \mathsf{Hom}_{\mathsf{DGA}}(\mathbb{U}_{\infty}(\mathfrak{g}), A)&\cong& \mathsf{Hom}_{\mathsf{DGA}}(\mathbb{U}(\mathbb{S}(\mathfrak{g})), A) \cong \mathsf{Hom}_{\mathsf{DGLA}}(\mathbb{S}(\mathfrak{g}), \Sigma(A)) \\ &\cong& \mathsf{Hom}_{\mathsf{DGCC}_a}( \mathsf{CE}(\mathfrak{g}),\mathsf{CE}(\Sigma(A))) = \mathsf{Hom}_{\mathsf{L}_{\infty}}(\mathfrak{g}, \Sigma_\infty(A)). \end{eqnarray*} \end{proof} The proof of the following lemma will be omitted for brevity. \begin{lemma} The functor $\mathbb{U}_{\infty}\colon \mathsf{L}_{\infty} \rightarrow \mathsf{DGA}$ preserves quasi-isomorphisms. \end{lemma} \section{Complete $L_\infty$-algebras} \subsection{Generalities about complete $L_\infty$-algebras}\label{subsection:filtered} The computation of holonomies involves infinite sums. For this reason, we have to consider $L_\infty$-algebras in which such infinite sums can be given a meaning. \begin{definition} An ideal of an $L_\infty$-algebra $\mathfrak{g}$ is a graded subspace $I\subset \mathfrak{g}$ such that \[[x_1,\dots, x_k] \in I \quad \textrm{if one of the $x_i$ belongs to $I$}.\] A filtration $F$ on $\mathfrak{g}$ is a decreasing sequence of ideals $F_1(\mathfrak{g})=\mathfrak{g} \supseteq F_2(\mathfrak{g}) \supseteq F_3(\mathfrak{g})\supseteq \cdots$ such that: \begin{enumerate} \item $\bigcap_k F_k(\mathfrak{g})=0$. \item If $x_{i} \in F_{l_i}(\mathfrak{g})$, then $[x_1,\dots,x_k]\in F_{l_1+\dots +l_k}(\mathfrak{g})$. \end{enumerate} \end{definition} \begin{definition} A filtered $L_\infty$-algebra is an $L_\infty$-algebra together with a filtration. If $\mathfrak{g}, \mathfrak{h}$ are filtered $L_\infty$-algebras, a filtered morphism is a morphism $\phi$ such that if $x_i \in F_{l_i}(\mathfrak{g})$ then $\phi_k(x_1,\dots ,x_k) \in F_{l_1+\dots +l_k}(\mathfrak{h})$. \end{definition} \begin{remark} \hspace{0cm} \begin{enumerate} \item If $I$ is an ideal of $\mathfrak{g}$, then the quotient space $\mathfrak{g}/I$ inherits the structure of an $L_\infty$-algebra. \item Given a filtered $L_\infty$-algebra $\mathfrak{g}$, there is a diagram \[0 \leftarrow \mathfrak{g}/F_2(\mathfrak{g}) \leftarrow \mathfrak{g}/F_3(\mathfrak{g}) \leftarrow \cdots . \] The completion of $\mathfrak{g}$, denoted $\hat{\mathfrak{g}}$, is the limit \[\hat{\mathfrak{g}} := \varprojlim \mathfrak{g}/ F_k(\mathfrak{g}).\] The natural map $\iota\colon \mathfrak{g} \rightarrow \hat{\mathfrak{g}}$ given by $x \mapsto (\overline{x},\overline{x},\overline{x},\dots)$ is an injection in view of the first property of the definition of a filtration. \end{enumerate} \end{remark} For the sake of brevity, we will omit the proof of the following lemma. \begin{lemma}\label{functoriality of completion} The completion $\mathfrak{g} \mapsto \hat{\mathfrak{g}}$ defines a functor on the category of filtered $L_\infty$-algebras and filtered morphisms.
Moreover, for a filtered morphism $\phi\colon \mathfrak{g} \rightarrow \mathfrak{h}$, the following holds: $\iota \circ \phi=\hat{\phi}\circ \iota$. \end{lemma} \begin{definition}\label{definition:completeness} A filtered $L_\infty$-algebra $\mathfrak{g}$ is complete if the canonical injection $\mathfrak{g} \to \hat{\mathfrak{g}}$ is an isomorphism. \end{definition} \begin{remark} \hspace{0cm} \begin{enumerate} \item A filtered $L_\infty$-algebra $\mathfrak{g}$ has the structure of a topological vector space in which the sequence $F_k(\mathfrak{g})$ is a local basis of neighbourhoods of $0\in \mathfrak{g}$. This topology is Hausdorff, since it is induced by the metric $d(x,y):= \inf\{\frac{1}{k}: x-y \in F_k(\mathfrak{g})\}$. In particular, any sequence of elements in a filtered $L_\infty$-algebra $\mathfrak{g}$ has at most one limit. In case $\mathfrak{g}$ is complete in the sense of Definition~\ref{definition:completeness}, it is also complete as a topological vector space. \item Following \cite{Getzler}, we observe that there is a natural decreasing sequence of ideals on any $L_\infty$-algebra $\mathfrak{g}$, defined recursively as follows: $F_1(\mathfrak{g}):=\mathfrak{g}$ and \[F_k(\mathfrak{g}):= \sum_{l_1+ \dots +l_i=k} [F_{l_1}(\mathfrak{g}),\dots ,F_{l_i}(\mathfrak{g})]. \] In \cite{Getzler} this filtration is called the lower central filtration of $\mathfrak{g}$. Since it might fail to be a filtration in our sense, because the intersection of the $F_k(\mathfrak{g})$ might not be zero, we refer to the collection $F_k(\mathfrak{g})$ as the lower central series of $\mathfrak{g}$. Given any filtration $F'$ on $\mathfrak{g}$, it is clear that $ F_k(\mathfrak{g})\subseteq F'_k(\mathfrak{g}),$ and therefore \[\bigcap_k F_k(\mathfrak{g})\subseteq \bigcap_kF'_k(\mathfrak{g}) =0.\] Thus, if $\mathfrak{g}$ admits a filtration at all, then the lower central series is a filtration, and it is the minimal one. \end{enumerate} \end{remark} \begin{definition}\label{definition:MC} A Maurer--Cartan element of a complete $L_\infty$-algebra $\mathfrak{g}$ is an element $\alpha \in \mathfrak{g}^1$ such that ${\displaystyle \sum_{k\geq 1}} \frac{1}{k!}[\underbrace{\alpha \otimes \dots \otimes \alpha}_{k \text{ times}}]=0. $ We denote by $\mathsf{MC}(\mathfrak{g})$ the set of all Maurer--Cartan elements of $\mathfrak{g}$. \end{definition} \begin{lemma} Let $\phi\colon \mathfrak{g} \rightarrow \mathfrak{h}$ be a filtered morphism between complete $L_\infty$-algebras. There is a map of sets $\phi_*\colon \mathfrak{g} \rightarrow \mathfrak{h}$, given by the formula $\phi_*(\alpha):= \sum_{k\geq 1} \phi_k(\alpha^{\otimes k})$. This map is continuous at zero and preserves Maurer--Cartan elements. \end{lemma} \begin{proof} Since $\phi$ is a filtered morphism, we know that $\phi_k(\alpha^{\otimes k})\in F_k(\mathfrak{h})$ and therefore the sum converges. It is clear that if $\alpha \in F_k(\mathfrak{g})$ then $\phi_*(\alpha)\in F_k(\mathfrak{h})$, so that the map is continuous at zero.
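More precisely, the partial sums $\sum_{k=1}^{N} \phi_k(\alpha^{\otimes k})$ form a Cauchy sequence, since the difference of two consecutive partial sums lies in $F_{N+1}(\mathfrak{h})$; by completeness of $\mathfrak{h}$, the limit $\phi_*(\alpha)$ therefore exists.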
Let us now prove that $\phi_*(\alpha)$ is a Maurer--Cartan element whenever $\alpha$ is one: \begin{eqnarray*} \sum_{k\geq 1} \frac{1}{k!}[\phi_*(\alpha) \otimes \dots \otimes \phi_*(\alpha)]&=&\sum_{k\geq 1}\frac{1}{k!}\sum_{l_1,\dots, l_k\geq 1} [\phi_{l_1}(\alpha^{\otimes l_1}) \otimes \dots \otimes \phi_{l_k}(\alpha^{\otimes l_{k}})]\\ &=&\sum_{p\geq 1}\sum_{k\geq 1}\frac{1}{k!}\sum_{l_1+\dots +l_k=p} [\phi_{l_1}(\alpha^{\otimes l_1}) \otimes \dots \otimes \phi_{l_k}(\alpha^{\otimes l_{k}})]\\ &=&\phi_*\left(\sum_{k\geq 1}\frac{1}{k!}[\alpha \otimes \dots \otimes \alpha]\right)=\phi_*(0)=0. \end{eqnarray*} \end{proof} \begin{remark}\label{remark:MC_dga} Similarly, for $A$ a differential graded algebra, one defines the set of Maurer--Cartan elements to be $ \mathsf{MC}(A) := \{ \alpha \in A^1: d\alpha + \alpha \cdot \alpha = 0\}$. \end{remark} \subsection{Compatibility with various functors} The proof of the following lemma will be omitted for brevity. \begin{lemma}\label{operations on filtrations} Suppose that $V$ is a filtered graded vector space. Then we have the following: \begin{itemize} \item The reduced tensor algebra $\overline{\mbox{T\hspace{-.47em}T}}V$ is a filtered algebra with filtration: \[F_k(\overline{\mbox{T\hspace{-.47em}T}} V)= \sum_{l_1+\dots +l_r\geq k} F_{l_1}(V)\otimes \cdots \otimes F_{l_r}(V).\] \item The vector space $\overline{\mathsf{S}}(V)$ is also a filtered graded vector space with filtration: \[F_k(\overline{\mathsf{S}}(V)):= \langle \{x_1 \otimes \dots \otimes x_r \in \overline{\mathsf{S}}(V):\ \exists\ l_1+\dots +l_r\geq k \text{ with } x_i \in F_{l_i}(V)\} \rangle\] \item The free graded Lie algebra $\mathsf{L}(V)$ is a filtered Lie algebra with filtration: \[F_k(\mathsf{L}(V)):= \langle \{ P(x_1, \dots x_r) \in \mathsf{L}(V):\ \exists\ l_1+\dots +l_r\geq k \text{ with } x_i \in F_{l_i}(V)\} \rangle.\] Here $P(x_1, \dots,x_r)$ denotes a Lie monomial of length $r$ on $x_1, \dots ,x_r$ in which all the $x_i$ appear. \end{itemize} \end{lemma} We now prove that the strictification of $L_\infty$-algebras is compatible with filtrations. \begin{lemma}\label{lemma:filtration_strictification} Let $\mathfrak{g}$ be a filtered $L_\infty$-algebra. Then the differential graded Lie algebra $\mathbb{S}(\mathfrak{g})$ has an induced filtration and the natural morphism $\eta\colon \mathfrak{g} \rightarrow \mathbb{S}(\mathfrak{g})$ is a filtered morphism. \end{lemma} \begin{proof} Recall that the Lie algebra $\mathbb{S}(\mathfrak{g})$ is the free Lie algebra on the vector space $V=\mathsf{u} \overline{\mathsf{S}}(\mathsf{s} \mathfrak{g})$. In view of Lemma~\ref{operations on filtrations}, we know that there is a filtration on $\mathbb{S}(\mathfrak{g})$ seen as a Lie algebra. We need to prove that this filtration is compatible with the differential, i.e.,\ that $\delta (F_k(\mathbb{S}(\mathfrak{g})))\subset F_k(\mathbb{S}(\mathfrak{g})). $ Since $\delta$ is a derivation with respect to the Lie bracket, it suffices to prove the claim for elements of $V$. The differential $\delta$ is the sum of two coboundary operators: one induced from that of $\mathfrak{g}$ and one induced from the coproduct. The claim is clearly true for the first differential.
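Indeed, the differential induced from that of $\mathfrak{g}$ is built from the structure maps of $\mathfrak{g}$, which are compatible with the filtration by the very definition of a filtered $L_\infty$-algebra.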
Let us prove it for the differential that comes from the coproduct, given by \[\mathsf{u} (\mathsf{s} {x}_1\otimes \dots \otimes \mathsf{s} {x}_{n})\mapsto -\hspace{-2pt} \sum_i (-1)^{|x_1|+ \dots +|x_i|+i}\mathsf{u} (\mathsf{s} {x}_1\otimes \dots \otimes \mathsf{s} x_i) \otimes \mathsf{u}(\mathsf{s} x_{i+1}\otimes \dots \otimes \mathsf{s} {x}_{n}).\] Since the right-hand side is the sum of Lie monomials on the same elements, we conclude that if the left-hand side belongs to $F_k(\mathbb{S}(\mathfrak{g}))$, so does the right-hand side. So far, we have seen that the differential graded Lie algebra $\mathbb{S}(\mathfrak{g})$ inherits a filtration; it remains to show that the map $\eta\colon \mathfrak{g} \rightarrow \mathbb{S}(\mathfrak{g})$ is a filtered map. The components of this map are given by the formula \begin{eqnarray*} \eta_k(x_1\otimes \dots \otimes x_k)&=&\pm\, \mathsf{s}\mathsf{u}(\mathsf{s} x_1 \otimes \dots \otimes \mathsf{s} x_k) \\ &&+ \sum_{k_1+k_2=k}\sum_{\sigma \in \mathrm{Sh}(k_1,k_2)} \pm\, \mathsf{s}\mathsf{u}(\mathsf{s} x_{\sigma(1)}\otimes \cdots \otimes \mathsf{s} x_{\sigma(k_1)})\otimes \mathsf{s} \mathsf{u} (\mathsf{s} x_{\sigma(k_1+1)}\otimes \cdots \otimes \mathsf{s} x_{\sigma(k)}) \\ && + \cdots, \end{eqnarray*} where $\mathrm{Sh}(k_1,k_2)$ denotes the set of $(k_1,k_2)$-shuffles. Therefore, if $x_i \in F_{l_i}(\mathfrak{g})$ then $\eta_k(x_1\otimes \dots \otimes x_k)\in F_{l_1 + \dots + l_k}(\mathbb{S}(\mathfrak{g})),$ and we conclude that $\eta$ is a filtered map. \end{proof} \begin{definition} A filtration of an augmented differential graded algebra $A$ is a filtration of its augmentation ideal. A filtered augmented differential graded algebra $A$ is an augmented differential graded algebra with a filtration. \end{definition} \begin{lemma}\label{lemma:filtration_enveloping} The universal enveloping functor $\mathbb{U}: \mathsf{DGLA} \to \mathsf{DGA}$ extends to a functor from the category of filtered differential graded Lie algebras to the category of filtered differential graded algebras as follows: \begin{enumerate} \item For $\mathfrak{g}$ a filtered differential graded Lie algebra, the augmentation ideal of $\mathbb{U}(\mathfrak{g})$ carries the filtration inherited from $\mbox{T\hspace{-.47em}T}\mathfrak{g}$. \item For $f: \mathfrak{g}\to \mathfrak{h}$ a filtered morphism of differential graded Lie algebras, the morphism $\mathbb{U}(f): \mathbb{U}(\mathfrak{g})\to \mathbb{U}(\mathfrak{h})$ is a filtered morphism. \end{enumerate} \end{lemma} \begin{proof} This follows from the definitions and the fact that the expression $x\otimes y - (-1)^{|x||y|}y\otimes x -[x,y]$ lies in $F_{k+l}(\overline{\mbox{T\hspace{-.47em}T}}\mathfrak{g})$ for $x\in F_k(\mathfrak{g})$ and $y\in F_l(\mathfrak{g})$. \end{proof} \begin{corollary} The universal enveloping algebra $\mathbb{U}_{\infty}(\mathfrak{g})$ of a filtered $L_\infty$-algebra $\mathfrak{g}$ is naturally a filtered augmented differential graded algebra. \end{corollary} \begin{proof} This is a direct consequence of Lemmas~\ref{lemma:filtration_strictification} and~\ref{lemma:filtration_enveloping}. \end{proof} \begin{remark} \hspace{0cm} \begin{enumerate} \item Recall from \cite{Getzler} that if $\mathfrak{g}$ is an $L_\infty$-algebra and $A$ is a differential graded commutative algebra then the tensor product $\mathfrak{g} \otimes A$ is an $L_\infty$-algebra with brackets: \begin{equation*} \begin{cases} [x \otimes a]=[x]\otimes a +(-1)^{|x|+1}x\otimes da,\\ [x_1 \otimes a_1,\dots , x_k \otimes a_k]=(-1)^{\sum_{i<j}|a_i|(|x_j|+1)}[x_1,\dots,x_k]\otimes a_1\dots a_k, \quad k\neq 1.
\end{cases}\hspace*{-2.3pt} \end{equation*} Observe that $-\otimes A$ extends to a functor: given a morphism $\gamma$ of $L_\infty$-algebras, one defines $ \gamma\otimes \mathrm{id}_A: \mathfrak{g}\otimes A \to \mathfrak{h}\otimes A$ to be given by the structure maps \begin{eqnarray*} &&(\gamma\otimes \mathrm{id}_A)_k((\mathsf{s} x_1 \otimes a_1)\otimes \cdots \otimes (\mathsf{s} x_k\otimes a_k)) :=\\ && \quad (-1)^{\sum_{i<j}|a_i|(|x_j|+1)} \gamma_k(\mathsf{s} x_1\otimes \cdots \otimes \mathsf{s} x_k) \otimes (a_1 \cdots a_k), \end{eqnarray*} where we see an element $\mathsf{s} x\otimes a$ in $\mathsf{s} (\mathfrak{g} \otimes A)$ via the map $$ \mathsf{s}(\mathfrak{g} \otimes A) \cong \mathsf{s} \mathfrak{g} \otimes A, \quad \mathsf{s} (x\otimes a) \mapsto \mathsf{s} x \otimes a.$$ \item If $\mathfrak{g}$ is filtered, $\mathfrak{g} \otimes A$ is a filtered $L_{\infty}$-algebra with filtration $F_k(\mathfrak{g} \otimes A):=F_k(\mathfrak{g})\otimes A$. Moreover, if $\gamma: \mathfrak{g} \to \mathfrak{h}$ is a morphism of filtered $L_\infty$-algebras, so is $\gamma\otimes \mathrm{id}_A$. The operation $-\otimes A$ is functorial and so---see Lemma~\ref{functoriality of completion}---we have a commutative diagram: \[ \xymatrix{ \mathfrak{g} \otimes A \ar[r]^{\iota \otimes A} \ar[d]_\iota& \hat{\mathfrak{g}}\otimes A\ar[d]^\iota\\ \widehat{(\mathfrak{g} \otimes A)}\ar[r]^{\widehat{\iota \otimes A}}& \widehat{ (\hat{\mathfrak{g}}\otimes A)}. } \] \item Similar statements apply if one replaces $\mathfrak{g}$ by a (filtered) differential graded algebra and drops the commutativity of $A$. \end{enumerate} \end{remark} The following lemma is straightforward to check: \begin{lemma}\label{lemma:denseness} \hspace{0cm} \begin{itemize} \item Let $V$ be a filtered graded vector space. Then $\mbox{T\hspace{-.47em}T} V$ is dense in $\mbox{T\hspace{-.47em}T} \hat{V}$. \item Let $\mathfrak{g}$ be a filtered differential graded Lie algebra. Then $\mathbb{U}(\mathfrak{g})$ is dense in $\mathbb{U}(\hat{\mathfrak{g}})$. \item Let $\mathfrak{g}$ be a filtered $L_\infty$-algebra. Then $\mathbb{S}(\mathfrak{g})$ is dense in $\mathbb{S}(\hat{\mathfrak{g}})$. Moreover, if $A$ is a commutative differential graded algebra, then $\mathfrak{g}\otimes A$ is dense in $\hat{\mathfrak{g}}\otimes A$. \end{itemize} \end{lemma} \begin{corollary}\label{corollary:denseness} \hspace{0cm} \begin{itemize} \item Let $V$ be a filtered graded vector space. Then $\widehat{\mbox{T\hspace{-.47em}T} V}$ is naturally isomorphic to $\widehat{\mbox{T\hspace{-.47em}T} \hat{V}}$. \item Let $\mathfrak{g}$ be a filtered differential graded Lie algebra. Then $\widehat{\mathbb{U}(\mathfrak{g})}$ is naturally isomorphic to $\widehat{\mathbb{U}(\hat{\mathfrak{g}})}$. \item Let $\mathfrak{g}$ be a filtered $L_\infty$-algebra. Then $\widehat{\mathbb{S}(\mathfrak{g})}$ is naturally isomorphic to $\widehat{\mathbb{S}(\hat{\mathfrak{g}})}$. Moreover, if $A$ is a commutative differential graded algebra, then $\widehat{\mathfrak{g}\otimes A}$ is naturally isomorphic to $\widehat{\hat{\mathfrak{g}}\otimes A}$. \end{itemize} \end{corollary} \begin{proof} This follows from Lemma~\ref{lemma:denseness} and the fact that all the maps $\mbox{T\hspace{-.47em}T} V \to \mbox{T\hspace{-.47em}T} \hat{V}$, $\mathbb{U}(\mathfrak{g}) \to \mathbb{U}(\hat{\mathfrak{g}})$, $\mathbb{S}(\mathfrak{g})\to \mathbb{S}(\hat{\mathfrak{g}})$, and $\mathfrak{g}\otimes A \to \hat{\mathfrak{g}}\otimes A$ are inclusions.
This is obvious except for $\mathbb{U}(\mathfrak{g}) \to \mathbb{U}(\hat{\mathfrak{g}})$. We are done if we can prove that for any graded Lie subalgebra $i: \mathfrak{h}\to \mathfrak{g}$, the induced map $\mathbb{U}(i): \mathbb{U}(\mathfrak{h}) \to \mathbb{U}(\mathfrak{g})$ is injective. But this is the case if $$\mathrm{gr}\mathbb{U}(i): \mathrm{gr}\mathbb{U}(\mathfrak{h}) \to \mathrm{gr}\mathbb{U}(\mathfrak{g})$$ is injective. Here $\mathrm{gr}$ denotes the functor that maps a filtered vector space to its associated graded and $\mathbb{U}(\mathfrak{g})$ is seen as a filtered vector space with the filtration whose members $\mathfrak{F}_k\mathbb{U}(\mathfrak{g})$ are the images of $\mbox{T\hspace{-.47em}T}^{\le k}(\mathfrak{g})$ under the quotient map.\footnote{Strictly speaking, this kind of filtration is opposite to the way we defined them.} However, $\mathrm{gr}\mathbb{U}(\mathfrak{g})$ is canonically isomorphic to $\mathsf{S}(\mathfrak{g})$, the graded symmetric algebra of $\mathfrak{g}$, and $\mathrm{gr}\mathbb{U}(i)$ corresponds to $\mathsf{S}(i)$. It is clear that $\mathsf{S}(i)$ is injective. \end{proof} \begin{definition} Given\hspace{-0.5pt} a filtered\hspace{-0.5pt} $L_\infty$-algebra\hspace{-0.5pt} $\mathfrak{g}$\hspace{-0.5pt} and\hspace{-0.5pt} a commutative differential graded algebra $A$, we denote the completion of the $L_\infty$-algebra $\mathfrak{g}\otimes A$ by $\mathfrak{g} \hat{\otimes} A$. Given a filtered $L_\infty$-algebra $\mathfrak{g}$, we denote the completion of the universal enveloping algebra $\mathbb{U}_{\infty}(\mathfrak{g})$ by $\hat{\mathbb{U}}_{\infty}(\mathfrak{g})$. \end{definition} \begin{definition} Let $V$ be a graded vector space and $W$ be a filtered vector space. The graded vector space $\mathsf{Hom}(V,W)$ carries a filtration defined by $$ \phi \in F_k\mathsf{Hom}(V,W) \quad :\Leftrightarrow \quad \mathrm{im}(\phi) \subset F_kW.$$ \end{definition} \begin{lemma} Let $V$ be a graded vector space and $W$ a complete vector space. Then $\mathsf{Hom}(V,W)$ is complete. \end{lemma} \begin{proof} Given a Cauchy sequence $\phi_i$ in $\mathsf{Hom}(V,W)$, we define $\phi: V\to W$ via $$\phi(x):= \lim_{i \to \infty} \phi_i(x).$$ By definition of the filtration on $\mathsf{Hom}(V,W)$, the sequence $\phi_i(x)$ is Cauchy, and since $W$ is complete, $\phi(x)$ is well-defined. Since the vector space operations on $W$ are continuous with respect to the topology induced by the filtration, the pointwise limit $\phi$ is again linear. \end{proof} \begin{remark} Given a graded vector space $V$ and a filtered graded vector space $W$, there is a natural inclusion of filtered graded vector spaces $W\otimes V^* \to \mathsf{Hom}(V,W)$. The above lemma implies that the completion $\widehat{W\otimes V^*}$ can be naturally identified with a subspace of $\mathsf{Hom}(V,\hat{W})$. \end{remark} \section{Parallel transport} \subsection{$\mathsf{A}_{\infty}$ de Rham Theorem} \label{subsection:deRham} We briefly describe an $\mathsf{A}_{\infty}$ version of de Rham's theorem that is due to Gugenheim \cite{G}. It is the key ingredient in the definition of higher holonomies in the next subsection. The construction involves a family of maps from cubes to simplices introduced by Chen \cite{Chen}. We use the maps given by Igusa in \cite{I}. Let us now recall Gugenheim's morphism from \cite{G}, following the conventions of \cite{AS}, where the interested reader can find more details. Let $M$ be a smooth manifold, and denote by $\mathsf{P} M$ the path space of $M$.
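Here and in what follows, $\mathsf{P} M$ denotes the space of (piecewise) smooth paths $\gamma\colon [0,1]\rightarrow M$; we do not dwell on the precise smooth structure on $\mathsf{P} M$ and refer to \cite{Chen,G,AS} for the conventions concerning differential forms on path spaces.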
The first ingredient for the $\mathsf{A}_{\infty}$ de Rham theorem is Chen's map \begin{align*} \mathsf{C}: \overline{\mathsf{B} \Omega(M)} =\bigoplus_{k\ge 1} \left(\mathsf{s} \Omega(M) \right)^{\otimes k} \to \Omega(\mathsf{P} M). \end{align*} It is a linear map of degree $0$, constructed as follows: We denote the evaluation map \begin{eqnarray*} \mathsf{P} M \times \Delta_k &\rightarrow& M^k,\\ (\gamma,(t_1,\dots,t_k))&\mapsto &(\gamma(t_1),\dots, \gamma(t_k)) \end{eqnarray*} by $\mathrm{ev}$, the natural projection $\mathsf{P} M \times {\Delta}_k \rightarrow \mathsf{P} M$ by $\pi$, and the projection $M^{k}\rightarrow M$ onto the $i$-th factor by $p_i$. Chen's map is \begin{align*} \mathsf{C}(\mathsf{s} a_1\otimes \cdots \otimes \mathsf{s} a_k):= (-1)^{\sum_{i=1}^k[a_i](k-i)}\pi_{*}(\mathrm{ev})^{*}(p_1^*a_1\wedge \cdots \wedge p_k^*a_k), \end{align*} where $[a_i]$ is the degree of $\mathsf{s} a_i\in \mathsf{s} \Omega(M)$ and $\pi_*$ denotes integration along the fiber $\Delta_k$ of $\pi$. The next step in the construction of the $\mathsf{A}_{\infty}$ de Rham theorem is a special sequence of maps from the cubes to the simplices. We follow a construction due to Igusa \cite{I} and make use of the following definition of the $k$-simplex \begin{align*} \Delta_k:=\{(t_1,\dots,t_k)\in \mathbb{R}^{k}: 1 \ge t_1 \ge t_2 \ge \cdots \ge t_k \ge 0\} \subset \mathbb{R}^{k}. \end{align*} \begin{definition}[Igusa] For each $k\geq 1$, the map \[\Theta_{(k)}\colon I^{k-1} \rightarrow \mathsf{P} \Delta_k, \] is defined to be the composition \begin{align*} \xymatrix{ I^{k-1} \ar[r]^{\lambda_{(k)}}& \mathsf{P} I^k\ar[r]^{\mathsf{P} \pi_k}& \mathsf{P} \Delta_k. } \end{align*} Here $\pi_k\colon I^k \rightarrow \Delta_k$ is given by $\pi_k(x_1,\dots,x_k):=(t_1,\dots,t_k)$, with components \[t_i:= \max\{x_i,\dots, x_k\}.\] The map $\lambda_{(k)}\colon I^{k-1}\rightarrow \mathsf{P} I^k$ is defined by sending a point $(x_1,\dots,x_{k-1})$ to the path which goes backwards through the following $k+1$ points: \[0 \leftarrow x_1 e_1\leftarrow \dots \leftarrow ( x_1e_1+\dots + x_{k-1}e_{k-1})\leftarrow (x_1e_1+\dots + x_{k-1}e_{k-1}+e_k), \] where $(e_1,\dots, e_k)$ denotes the standard basis of $\mathbb{R}^{k}$. In other words, for $j=0,\dots, k$ we set \[\lambda_{(k)}(x_1,\dots, x_{k-1})\Big(\frac{k-j}{k}\Big)= x_1 e_1+ \dots +x_{j} e_{j}, \] where $x_k=1$, and interpolate linearly. By convention, $\Theta_{(0)}$ is the map from a point to a point. We denote the map adjoint to $\Theta_{(k)}$ by $\Theta_k\colon I^k \rightarrow \Delta_k$. \end{definition} \begin{definition} The map $\mathsf{S}: \Omega(\mathsf{P} M) \to \mathsf{s} C^{\bullet}(M)$ is the composition of the map \begin{eqnarray*} \Omega(\mathsf{P} M) &\to& C^{\bullet}(M),\\ \alpha &\mapsto& \left( \sigma \mapsto \int_{I ^{k-1}}(\Theta_{(k)})^{*}\mathsf{P} \sigma^{*}\alpha\right), \end{eqnarray*} where $\sigma$ ranges over smooth $k$-simplices $\sigma\colon \Delta_k \to M$ and $\mathsf{P}\sigma\colon \mathsf{P}\Delta_k \to \mathsf{P} M$ denotes the induced map on path spaces, with the suspension $C^{\bullet}(M)\to \mathsf{s} C^{\bullet}(M)$. \end{definition} \begin{definition} Given a smooth manifold $M$ and an integer $n\geq 1$, we define the map $ \psi_n: \left(\mathsf{s} \Omega(M)\right)^{\otimes n} \to \mathsf{s} C^{\bullet}(M)$ as follows: \begin{enumerate} \item For $n=1$, we set: $\left(\psi_1(\mathsf{s} a)\right) (\sigma:\Delta_k \to M):= (-1)^{k} \left(\int_{\Delta_k}\sigma^{*}a \right)$. \item For $n>1$, we set $\psi_n(\mathsf{s} a_1 \otimes \cdots \otimes \mathsf{s} a_n) := (\mathsf{S}\circ \mathsf{C})(\mathsf{s} a_1\otimes \cdots \otimes \mathsf{s} a_n)$. \end{enumerate} \end{definition} \begin{remark} Observe that $\psi_1(\mathsf{s} a)$ coincides with $(\mathsf{S}\circ \mathsf{C})(\mathsf{s} a)$, except for the case when $a$ is of degree $0$, i.e.,\ a function.
In that case, $(\mathsf{S} \circ \mathsf{C})(\mathsf{s} a) = 0$, while \begin{align*} \left(\psi_1(\mathsf{s} a)\right)(\sigma: \{*\} \to M) := a(\sigma(0)). \end{align*} \end{remark} \begin{theorem}[Gugenheim]\label{theorem:A_infty_quasi-isomorphism} The sequence of maps $\psi_n\colon \left(\mathsf{s} \Omega(M)\right)^{\otimes n} \to \mathsf{s} C^\bullet(M)$\linebreak defines an $A_\infty$-morphism from $(\Omega(M),-d,\wedge)$ to the differential graded algebra of smooth singular cochains $(C^{\bullet}(M),\delta,\cup)$. Moreover, this morphism is a quasi-isomorphism and the construction is natural with respect to pullbacks along smooth maps. \end{theorem} \subsection{Holonomies} Using the constructions given above, it is now a simple task to define holonomies for connections with values in $L_\infty$-algebras. \begin{lemma}\label{lemma:coefficients} Let $\mathfrak{g}$ be an $L_\infty$-algebra and $A$ a commutative differential graded algebra. Then there is a natural map of differential graded algebras \[\tau\colon \mathbb{U}_{\infty}(\mathfrak{g} \otimes A)\rightarrow \mathbb{U}_{\infty}(\mathfrak{g})\otimes A.\] This map is given on generators of the free algebra $\mathbb{U}_{\infty}(\mathfrak{g} \otimes A)$ by the formula \begin{eqnarray*} &&\mathsf{u} \Big(\mathsf{s} (x_1 \otimes a_1)\otimes \dots \otimes \mathsf{s}(x_k \otimes a_k)\Big)\mapsto \\ && (-1)^{ \sum_{i<j}|a_i|(|x_j|+1)} \mathsf{u} \Big((\mathsf{s} x_1 \otimes \dots \otimes \mathsf{s} x_k) \otimes (a_1 \dots a_k) \Big). \end{eqnarray*} Moreover, if $\mathfrak{g}$ is filtered then $\tau$ is a filtered map. \end{lemma} \begin{proof} First recall that there is a natural morphism of $L_{\infty}$-algebras $\eta$ from $\mathfrak{g}$ to its strictification $\mathbb{S}(\mathfrak{g})$. The adjunction property of $\mathbb{U}_{\infty}$ yields a morphism $$\gamma \in \mathsf{Hom}_{\mathsf{DGLA}}(\mathbb{S}(\mathfrak{g}),\Sigma(\mathbb{U}_{\infty}(\mathfrak{g}))) \cong \mathsf{Hom}_{\mathsf{DGA}}(\mathbb{U}_{\infty}(\mathfrak{g}),\mathbb{U}_{\infty}(\mathfrak{g})) $$ corresponding to the identity of $\mathbb{U}_{\infty}(\mathfrak{g})$. The composition of $\eta$ and $\gamma$ is an $L_\infty$-morphism from $\mathfrak{g}$ to $\Sigma(\mathbb{U}_{\infty}(\mathfrak{g}))$. Tensoring with $\mathrm{id}_A$ yields an $L_\infty$-morphism $$ (\gamma\circ \eta)\otimes \mathrm{id}_A: \mathfrak{g}\otimes A \to \Sigma(\mathbb{U}_{\infty}(\mathfrak{g}))\otimes A.$$ Using the adjunction properties, as well as the natural isomorphism $\Sigma(C\otimes A) \cong \Sigma(C)\otimes A$ for $C$ any differential graded algebra, one obtains natural isomorphisms \begin{eqnarray*} \mathsf{Hom}_{\mathsf{L}_{\infty}}(\mathfrak{g}\otimes A, \Sigma(\mathbb{U}_{\infty}(\mathfrak{g}))\otimes A) &\cong & \mathsf{Hom}_{\mathsf{L}_{\infty}}(\mathfrak{g}\otimes A, \Sigma(\mathbb{U}_{\infty}(\mathfrak{g})\otimes A)) \\ &=& \mathsf{Hom}_{\mathsf{DGCC}_a}(\mathsf{CE}(\mathfrak{g}\otimes A),\mathsf{CE}(\Sigma(\mathbb{U}_{\infty}(\mathfrak{g})\otimes A))) \\ &=& \mathsf{Hom}_{\mathsf{DGLA}}(\mathbb{S}(\mathfrak{g}\otimes A),\Sigma(\mathbb{U}_{\infty}(\mathfrak{g})\otimes A))\\ &=& \mathsf{Hom}_{\mathsf{DGA}}(\mathbb{U}_{\infty}(\mathfrak{g}\otimes A),\mathbb{U}_{\infty}(\mathfrak{g})\otimes A). \end{eqnarray*} We define $\tau$ to be the image of $(\gamma\circ \eta)\otimes \mathrm{id}_A$ under this sequence of natural isomorphisms. \end{proof} \begin{definition} Let $M$ be a smooth manifold and $\mathfrak{g}$ an $L_\infty$-algebra.
A connection on $M$ with values in $\mathfrak{g}$ is a degree~$1$ element $\alpha$ in $\mathfrak{g} \hat{\otimes} \Omega(M)$. \end{definition} \begin{definition} A connection $\alpha$ on $M$ with values in a filtered $L_\infty$-algebra $\mathfrak{g}$ is called flat if $\alpha \in \mathsf{MC}\big(\mathfrak{g} \hat{\otimes} \Omega(M)\big)$. \end{definition} \begin{definition} Suppose that $\alpha$ is a connection on $M$ with values in a filtered $L_{\infty}$-algebra $\mathfrak{g}$. The holonomy $\mathsf{hol}^{\infty}_{\alpha} \in \mathbb{U}_\infty(\mathfrak{g})\hat{\otimes} C^\bullet(M)$ of $\alpha$ is the image of $\alpha$ under the composition $$ \xymatrix{ \mathfrak{g}\hat{\otimes} \Omega(M) \ar[r]^(0.45){(\widehat{\eta\otimes \mathrm{id}})_*} & \mathbb{S}(\mathfrak{g})\hat{\otimes} \Omega(M) \ar[r]^(0.4){\hat{\iota}} & \hat{\mathbb{U}}(\mathbb{S}(\mathfrak{g})\otimes \Omega(M)) \ar[r]^(0.7){\hat{\tau}}& \cdots}$$ \vspace{-0.7cm} $$ \xymatrix{ \cdots \ar[r] & \mathbb{U}(\mathbb{S}(\mathfrak{g}))\hat{\otimes} \Omega(M) \ar[r]^(0.48){(\widehat{\mathrm{id}\otimes \psi})_*} & \mathbb{U}(\mathbb{S}(\mathfrak{g}))\hat{\otimes} C^{\bullet}(M). } $$ By definition, the last space equals $\mathbb{U}_{\infty}(\mathfrak{g})\hat{\otimes} C^{\bullet}(M)$. The maps above are as follows: \begin{itemize} \item $\eta$ is the map from $\mathfrak{g}$ to its strictification $\mathbb{S}(\mathfrak{g})$. \item $\iota$ is the inclusion of a differential graded Lie algebra into its universal enveloping algebra. \item $\tau$ is the map defined in Lemma~\ref{lemma:coefficients}. \item $\psi$ is Gugenheim's $\mathsf{A}_\infty$ quasi-isomorphism between $\Omega(M)$ and $C^{\bullet}(M)$. \end{itemize} \end{definition} \begin{proposition} Suppose that $\alpha$ is a flat connection on $M$ with values in a filtered $L_\infty$-algebra $\mathfrak{g}$. Then $\mathsf{hol}^{\infty}_{\alpha}$ is a Maurer--Cartan element of $\mathbb{U}_{\infty}(\mathfrak{g})\hat{\otimes} C^{\bullet}(M)$. \end{proposition} \begin{proof} All of the maps involved in the definition of $\mathsf{hol}^{\infty}_\alpha$ preserve Maurer--Cartan elements. \end{proof} Recall that there is a natural inclusion $\mathbb{U}_{\infty}(\mathfrak{g})\otimes C^{\bullet}(M) \hookrightarrow \mathsf{Hom}(C_{\bullet}(M),\mathbb{U}_{\infty}(\mathfrak{g}))$ of filtered differential graded algebras. Completing yields a map $$ \mathbb{U}_{\infty}(\mathfrak{g})\hat{\otimes} C^{\bullet}(M) \to \mathsf{Hom}(C_{\bullet}(M),\hat{\mathbb{U}}_{\infty}(\mathfrak{g})),$$ which allows us to view $\mathsf{hol}^{\infty}_\alpha$ as a map from $C_\bullet(M)$ to $\hat{\mathbb{U}}_\infty(\mathfrak{g})$. It is not hard to see that the image of this map lies in the kernel $K$ of the augmentation map $\hat{\mathbb{U}}_{\infty}(\mathfrak{g})\to \mathbb{R}$. Hence if $\alpha$ is flat, this map corresponds to a twisting cochain on $C_{\bullet}(M)$ with values in $K$; see Appendix~\ref{section:twisting_cochains}. Such a twisting cochain is equivalent to a morphism of differential graded coalgebras from $C_{\bullet}(M)$ to $\hat{\mathsf{B}}\hat{\mathbb{U}}_{\infty}(\mathfrak{g})$, where $\hat{\mathsf{B}}\hat{\mathbb{U}}_{\infty}(\mathfrak{g})$ denotes the completed bar complex of $\hat{\mathbb{U}}_{\infty}(\mathfrak{g})$. Hence a flat connection $\alpha$ on $M$ with values in a filtered $L_\infty$-algebra $\mathfrak{g}$ gives rise to a morphism of differential graded coalgebras $C_{\bullet}(M) \to \hat{\mathsf{B}}\hat{\mathbb{U}}_{\infty}(\mathfrak{g})$.
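Explicitly, and up to the sign and completion conventions spelled out in Appendix~\ref{section:twisting_cochains}, the morphism of coalgebras associated to a twisting cochain $\tau\colon C_{\bullet}(M)\to K$ is given by $$ c \;\mapsto\; \sum_{k\geq 0} (\mathsf{s} \tau)^{\otimes k}\big(\Delta^{(k)}(c)\big), $$ where $\Delta^{(k)}\colon C_{\bullet}(M) \to C_{\bullet}(M)^{\otimes k}$ denotes the $(k-1)$-fold iterated coproduct.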
We have proved our main result: \begin{theorem}\label{main theorem} Suppose that $\alpha$ is a flat connection on $M$ with values in a filtered $L_\infty$-algebra $\mathfrak{g}$. Then there is a natural homomorphism of differential graded coalgebras \[\mathsf{hol}^{\infty}_\alpha\colon C_\bullet(M) \rightarrow \hat{\mathsf{B}}\hat {\mathbb{U}}_\infty(\mathfrak{g}).\] \end{theorem} For a flat connection $\alpha$ with values in a filtered differential graded Lie algebra $\mathfrak{g}$, one could also define the holonomy $\mathsf{hol}_{\alpha}$ as the image of $\alpha$ under $$ \xymatrix{ \mathfrak{g}\otimes \Omega(M) \ar[r]^{\hat{\iota}} & \hat{\mathbb{U}}(\mathfrak{g}\otimes \Omega(M)) \ar[r]^{\hat{\tau}}& \mathbb{U}(\mathfrak{g})\hat{\otimes} \Omega(M) \ar[r]^(0.48){(\widehat{\mathrm{id}\otimes \psi})_*} & \mathbb{U}(\mathfrak{g})\hat{\otimes} C^{\bullet}(M). } $$ Hence, if $\alpha$ is flat, one obtains a morphism of differential graded coalgebras $$ \mathsf{hol}_{\alpha}: C_{\bullet}(M) \to \hat{\mathsf{B}}\hat{\mathbb{U}}(\mathfrak{g}).$$ \begin{proposition} Let $\alpha$ be a flat connection on $M$ with values in a filtered differential graded Lie algebra $\mathfrak{g}$. Then the following diagram is commutative: $$ \xymatrix{ C_\bullet(M) \ar[r]^{\mathsf{hol}^{\infty}_\alpha} \ar[rd]_{\mathsf{hol}_\alpha}& \hat{\mathsf{B}} \hat{\mathbb{U}}_\infty (\mathfrak{g}) \ar[d]^{\hat{\mathsf{B}} \hat{\mathbb{U}}(\rho)}\\ &\hat{\mathsf{B}} \hat{\mathbb{U}}(\mathfrak{g}). } $$ \end{proposition} \begin{proof} \hspace*{3pt}This follows from the fact that the Maurer--Cartan elements $\mathsf{hol}^{\infty}_{\alpha} \in\linebreak \mathbb{U}_{\infty}(\mathfrak{g})\hat{\otimes}C^{\bullet}(M)$ and $\mathsf{hol}_{\alpha} \in \mathbb{U}(\mathfrak{g}) \hat{\otimes} C^{\bullet}(M)$ are related by the map $\mathbb{U}(\rho)\hat{\otimes}\mathrm{id}$. To establish this, let $\mathfrak{g}$ be an arbitrary filtered differential graded Lie algebra and $\phi: A\to B$ a morphism of commutative differential graded algebras. Then the diagrams \begin{align*} \xymatrix{ \mathfrak{g}^1 \ar[r]^{\hat{\eta}_*} \ar[rrdd]_{\hat{i}}& \mathbb{S}(\mathfrak{g}) \ar[r]^{\hat{i}} & \mathbb{U}_{\infty}(\mathfrak{g})\ar[dd]^{\mathbb{U}(\rho)}\\ &&\\ && \mathbb{U}(\mathfrak{g}) } \end{align*} and \begin{align*} \xymatrix{ \mathbb{U}_{\infty}(\mathfrak{g}\otimes A) \ar[r]^{\tau} \ar[d]^{\mathbb{U}(\rho)}& \mathbb{U}_{\infty}(\mathfrak{g})\otimes A \ar[r]^{\mathrm{id}\otimes \phi} \ar[d]^{\mathbb{U}(\rho)\otimes \mathrm{id}}& \mathbb{U}_{\infty}(\mathfrak{g})\otimes B \ar[d]^{\mathbb{U}(\rho)\otimes \mathrm{id}}\\ \mathbb{U}(\mathfrak{g}\otimes A) \ar[r]^{\tau} & \mathbb{U}(\mathfrak{g})\otimes A \ar[r]^{\mathrm{id}\otimes \phi}& \mathbb{U}(\mathfrak{g})\otimes B } \end{align*} are commutative. \end{proof} We saw that in the case of differential graded Lie algebras, the two possible notions of holonomy $\mathsf{hol}$ and $\mathsf{hol}^\infty$ are related by the quasi-isomorphism $\mathbb{U}(\rho)\colon \mathbb{U}_\infty(\mathfrak{g})\rightarrow \mathbb{U}(\mathfrak{g})$. The following lemma shows that, furthermore, both definitions are consistent with the usual notion of holonomy in the case that $\mathfrak{g}$ is a Lie algebra. \begin{lemma} Let $\alpha$ be a connection on $M$ with values in a filtered Lie algebra $\mathfrak{g}$. Then $\mathsf{hol}_{\alpha} \in \mathbb{U}(\mathfrak{g})\hat{\otimes} C^{\bullet}(M)$ yields the usual parallel transport of $\alpha$.
\end{lemma} \begin{proof} By degree reasons, $\alpha$ is an element of $\mathfrak{g}\hat{\otimes} \Omega^1(M)$ and $\mathsf{hol}_{\alpha}$ is an element of $\mathbb{U}(\mathfrak{g}) \hat{\otimes} C^1(M)$. Let $\gamma: [0,1] \to M$ be a path in $M$. The pullback of $\alpha$ along $\gamma$ gives an element of $\mathfrak{g}\hat{\otimes} \Omega^1([0,1])$, which can be written as $$\gamma^*\alpha = \sum_{i=1}^{\infty}\xi^i\otimes a_i(t) dt,$$ where $\mathrm{deg}(\xi^{i}) \to \infty$. Here the degree of an element in a filtered graded vector space $V$ is the integer $k$ such that the element is contained in $F_kV$, but not in $F_{k+1}V$. We consider $\mathsf{hol}_{\alpha} \in \mathbb{U}(\mathfrak{g})\hat{\otimes} C^1(M)$ as a map from $C_1(M)$ to $\hat{\mathbb{U}}(\mathfrak{g})$. By definition, the evaluation of this map on the path $\gamma$ yields $$ \sum_{k\ge 1} \sum_{i_1\ge 1,\dots,i_k\ge 1} (\xi^{i_1} \cdots \xi^{i_k}) \left(\int_{1\ge t_1\ge \cdots \ge t_k \ge 0} a_{i_1}(1-t_1)\cdots a_{i_k}(1-t_k) dt_1\cdots dt_k\right).$$ Up to a shift by $1 \in \mathbb{U}(\mathfrak{g})$, this is the unique solution to the ordinary differential equation $$ H_0 = 1, \qquad \frac{d}{dt}H_t = \left(\sum_{i=1}^{\infty}\xi^{i}\otimes a_i(1-t) \right)\cdot H_t$$ in $\mathbb{U}(\mathfrak{g})\hat{\otimes} \mathcal{C}^{\infty}([0,1])$. Hence, $\mathsf{hol}_{\alpha}(\gamma)$ encodes parallel transport of $\alpha$ along $\gamma$ (with reversed orientation). \end{proof} The holonomies defined in Theorem~\ref{main theorem} satisfy the following naturality conditions: \begin{lemma} Suppose that $\alpha$ is a flat connection on $M$ with values in the filtered $L_\infty$-algebra $\mathfrak{g}$. \begin{enumerate} \item If $f\colon N \rightarrow M$ is a smooth map, then $\mathsf{hol}^{\infty}_{f^*(\alpha)}=\mathsf{hol}^\infty_\alpha \circ f_*$. \item If $\gamma\colon \mathfrak{g} \rightarrow \mathfrak{h}$ is a filtered morphism, then $\mathsf{hol}^\infty_{\gamma_*(\alpha)}=\hat{\mathsf{B}} \hat{\mathbb{U}}_\infty(\gamma) \circ \mathsf{hol}^\infty_\alpha .$ \end{enumerate} \end{lemma} \begin{proof} The first claim follows directly from the naturality of Gugenheim's $A_\infty$-morph\-ism with respect to pullbacks along smooth maps. The second claim is clear since the whole construction is functorial with respect to the coefficient system $\mathfrak{g}$. \end{proof} \section{Flat connections on configuration spaces} So far, we have constructed an extension of Igusa's higher holonomies \cite{I} to the framework of flat connections with values in $L_\infty$-algebras. In this section, we explain how rational homotopy theory provides a vast amount of such connections. We then turn to a specific family of examples, the configuration spaces $\mathsf{Conf}_d(n)$ of $n$ points in $\mathbb{R}^d$ ($d \ge 2$). In \cite{K}, Kontsevich constructed explicit models for these spaces and used them to establish formality of the chains of the little $d$-disks operad. We consider the corresponding flat connections, extending considerations of \v{S}evera and Willwacher \cite{SW} to the higher-dimensional situation. Finally, we explain how one can use these flat connections to construct representations of the $\infty$-groupoid of $\mathsf{Conf}_d(n)$, generalizing the holonomy representations of braid groups.
\subsection{Flat connections and rational homotopy theory}\label{subsection:rational_homotopy_theory} A Sullivan minimal model of a manifold $M$ is a differential graded algebra $(A_M,d)$ that is homotopy equivalent to $\Omega(M)$ and is isomorphic, as a graded algebra, to the free graded commutative algebra $\wedge V$ on a graded vector space $V$. For more details on the definition, we refer the reader to \cite{Sullivan,F}. For simplicity, we will assume that the homogeneous components of $V$ are finite dimensional. Such a model exists, for instance, if $M$ has vanishing first cohomology and finite Betti numbers. As was observed in \cite{Getzler}, the information of a Sullivan model can be encoded by a flat connection on $M$ that takes values in an $L_\infty$-algebra: Let $\mathfrak{g}$ be the graded vector space with $\mathfrak{g}^k = (V^{-k+1})^*$; i.e.,\ $\mathfrak{g}$ is the desuspension of the graded dual $V^*$ of $V$. Observe that since $V$ is concentrated in strictly positive degrees, $\mathfrak{g}$ is concentrated in non-positive degrees. Recall that $\mathsf{S}(\mathsf{s} \mathfrak{g})$ denotes the symmetric coalgebra on $\mathsf{s} \mathfrak{g}$, the suspension of $\mathfrak{g}$. We equip $\mathfrak{g}$ with structure maps $\mu_n: \mathsf{S}^n(\mathsf{s} \mathfrak{g}) \to \mathsf{s} \mathfrak{g}$ of degree $+1$ given by $$ \mathsf{S}(\mathsf{s} \mathfrak{g}) = \mathsf{S}(V^*) \hookrightarrow (\mathsf{S}(V))^* \rightarrow (\mathsf{u} V)^* \cong \mathsf{s}(\mathsf{s} \mathfrak{g}).$$ Here the arrow in the middle that goes from $(\mathsf{S}(V))^*$ to $(\mathsf{u} V)^*$ is the map dual to the restriction of the differential $d$ of $\wedge V$ to $V$. The fact that $d$ squares to zero implies that the maps $(\mu_n)_{n \ge 1}$ equip $\mathfrak{g}$ with the structure of an $L_\infty$-algebra. The next step is to encode the quasi-isomorphism $\varphi\colon \wedge V \rightarrow \Omega(M)$ that is part of the Sullivan model. Since $\wedge V$ is free as a commutative graded algebra, $\varphi$ is determined by its restriction to $V$. If we choose a homogeneous basis $(v_i)_{i\in I}$ of $V$, we obtain an element $$\alpha_\varphi := \sum_{i} \varphi(v_i)\otimes v_i^*$$ of $\Omega(M)\otimes V^*$, where $(v_i^*)_{i\in I}$ denotes the dual basis. We now consider $\alpha_\varphi$ as an element of $\Omega(M)\otimes \mathfrak{g}$. As such, $\alpha_\varphi$ has degree $+1$, and the fact that $\varphi$ is a morphism of commutative differential graded algebras implies that $\alpha_\varphi$ is a Maurer--Cartan element of the $L_\infty$-algebra $\Omega(M)\otimes \mathfrak{g}$. It is clear that one can reconstruct the Sullivan model $(\wedge V,d)$ from $\mathfrak{g}$ and $\alpha_\varphi$. To sum up our discussion, we record the following: \begin{lemma}\label{lemma:Sullivan_models} Every finite type Sullivan model of a manifold $M$ corresponds in a natural way to a flat connection on $M$ with values in an $L_\infty$-algebra. \end{lemma} Let $\alpha_\varphi$ be the flat connection on $M$ associated to a Sullivan model $\varphi: \wedge V \to \Omega(M)$. In order for the holonomy map $\mathsf{hol}^{\infty}_{\alpha_\varphi}$ from Theorem~\ref{main theorem} to be well defined, we need the series which define it to converge. In Theorem~\ref{main theorem}, this is guaranteed by the assumption that $\mathfrak{g}$ is filtered. For flat connections associated to Sullivan models, we will circumvent this problem by assuming that $M$ is simply connected. This allows us to assume that $V$ is concentrated in degrees strictly larger than $+1$, which in turn implies that $\mathfrak{g}$ is concentrated in strictly negative degrees.
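Indeed, the elements of degree $+1$ in $\mathfrak{g} \hat{\otimes} \Omega(M)$ lie in the completion of $\bigoplus_{p\geq 0} \mathfrak{g}^{1-p}\otimes \Omega^{p}(M)$, and the summands with $p \leq 1$ vanish because $\mathfrak{g}^{0}=\mathfrak{g}^{1}=0$.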
Consequently, the components of $\alpha_\varphi$ of form degree $0$ and $1$ are zero and no divergent sums appear in the definition of the holonomy map $\mathsf{hol}^{\infty}_{\alpha_\varphi}$. \begin{theorem} Let $\varphi\colon \wedge V\to \Omega(M)$ be a Sullivan model of a manifold $M$, and assume that $\wedge V$ is of finite type and $V^1=0$. Then the holonomy map associated to the flat connection $\alpha_\varphi$ on $M$ with values in $\mathfrak{g}$ yields a morphism of differential graded coalgebras $\mathsf{hol}^{\infty}_{\alpha_\varphi}: C_\bullet(M) \to \mathsf{B}\mathbb{U}_{\infty}(\mathfrak{g}). $ \end{theorem} \begin{remark} If one composes $\mathsf{hol}^{\infty}_{\alpha_\varphi}$ with the projection map $\mathsf{B}\mathbb{U}_{\infty}(\mathfrak{g}) \cong \mathsf{B}\Omega \mathsf{CE}(\mathfrak{g}) \to \mathsf{CE}(\mathfrak{g}), $ one obtains essentially the dual to $$ \xymatrix{ \wedge V \ar[r]^\varphi & \Omega(M) \ar[r]^{\int} & C^{\bullet}(M), } $$ where the last map is the usual integration map. Hence, under mild assumptions (e.g.,\ compactness of $M$) the holonomy map $\mathsf{hol}^{\infty}_{\alpha_\varphi}$ will be a quasi-isomorphism of differential graded coalgebras. Since the adjunction morphism $ \mathsf{CE}(\mathfrak{g}) \to \mathsf{B}\Omega \mathsf{CE}(\mathfrak{g})$ is a quasi-isomorphism, we obtain that $\mathsf{CE}(\mathfrak{g})$ and $C_\bullet(M)$ are quasi-isomorphic dg coalgebras. Notice that the Baues--Lemaire conjecture \cite{BL}, which was proven by Majewski \cite{Majewski,Majewski_book}, asserts that the strictification $\mathbb{S}(\mathfrak{g})$ of $\mathfrak{g}$ is quasi-isomorphic to Quillen's Lie algebra model $L_M$ of $M$ (\cite{Quillen}). Hence $\mathsf{CE}(\mathfrak{g})$ is quasi-isomorphic---as a differential graded coalgebra---to Quillen's coalgebra model $\mathsf{CE}(L_M)$. \end{remark} \subsection{Flat connections on configuration spaces}\label{subsection:configuration_spaces} We now turn to a family of specific examples, the configuration spaces of $n$ (numbered) points in $\mathbb{R}^d$, i.e., $$ \mathsf{Conf}_d(n):= \{(x_1,\dots,x_n) \in (\mathbb{R}^d)^n: \quad x_i\neq x_j \textrm{ for } i\neq j\}.$$ It turns out to be convenient to consider a natural compactification of $\mathsf{Conf}_d(n)$ to a semi-algebraic manifold with corners, the Fulton--MacPherson space $\mathsf{FM}_d(n)$. To obtain these compactifications, one first mods out the action of $\mathbb{R}^d \rtimes \mathbb{R}_{>0}$ by translations and scalings on $\mathsf{Conf}_d(n)$ and then embeds the quotient into $$ (S^{d-1})^{n \choose 2} \times ([0,\infty])^{n \choose 3}$$ via all relative angles and cross-ratios. The closure of this embedding naturally admits the structure of a semi-algebraic manifold with corners. We refer the reader to \cite{Lambrechts,Sinha} for the details of this construction.
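For example, for $n=2$ the quotient by translations and scalings is already compact: a configuration $(x_1,x_2)$ of two points is determined, up to translation and scaling, by the unit vector $\frac{x_2-x_1}{|x_2-x_1|}$, so that $\mathsf{FM}_d(2)=S^{d-1}$.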
The cohomology ring of $\mathsf{Conf}_2(n)$ was determined by Arnold \cite{Arnold} and in higher dimensions by Cohen \cite{Cohen}: $H^*(\mathsf{Conf}_d(n))$ is the graded commutative algebra with a set of generators $(\omega_{ij})_{1\le i\neq j \le n}$ of degree $(d-1)$ and the following relations: $$\omega_{ij} = (-1)^{d}\omega_{ji}, \quad \omega_{ij}\omega_{jk} + \omega_{jk}\omega_{ki} + \omega_{ki}\omega_{ij} = 0.$$ \subsubsection{Kontsevich's models for configuration spaces} In \cite{K}, Kontsevich constructed a family of graph complexes $^*\mathsf{Graphs}_d(n)$, together with integration maps $$I:\, ^*\mathsf{Graphs}_d(n) \to \Omega(\mathsf{FM}_d(n)),$$ which are quasi-isomorphisms of commutative differential graded algebras. In dimension $d>2$, the commutative differential graded algebras $^*\mathsf{Graphs}_d(n)$, together with the integration map $I$, define Sullivan models for $\mathsf{FM}_d(n)$.\footnote{For dimension equal to $2$, the problem is that $^*\mathsf{Graphs}_2(n)$ is not concentrated in positive degrees. Moreover, $I$ does not take values in smooth differential forms, but in piecewise semialgebraic forms; see \cite{real_homotopy_type} and \cite{Lambrechts} for the technical details.} Let us recall the definition of $^*\mathsf{Graphs}_d(n)$, following \cite{K} and \cite{Lambrechts}: \begin{definition} An admissible graph with parameters $(n,m,k)$, where $ n \geq 1, m \geq 0$, is a finite graph $\Gamma$ such that: \begin{enumerate} \item $\Gamma$ has no simple loops. \item $\Gamma$ contains $n$ \emph{external} vertices, numbered from $1$ to $n$, and $m$ \emph{internal} vertices, numbered from $1$ to $m$. \item $\Gamma$ contains $k$ edges, numbered from $1$ to $k$. \item Any vertex in $\Gamma$ can be connected by a path to an external vertex. \item All internal vertices have valency at least $3$. \item The edges of $\Gamma$ are oriented. \end{enumerate} For $n=0$, there is just one graph with parameters $(0,0,0)$, the empty graph ${\emptyset}$. \end{definition} \begin{definition} For every $n\geq 0$ and $d\geq 2$ define $^*\mathsf{Graphs}_d(n)$ to be the $\mathbb{Z}$-graded vector space over $\mathbb{R}$ generated by equivalence classes of isomorphism classes of admissible graphs with parameters $(n,m,k)$. The equivalence relation is generated by the following three conditions: \begin{itemize} \item $\Gamma \equiv (-1)^{(d-1)} \Gamma'$, if $\Gamma$ differs from $\Gamma'$ by a transposition in the labelling of the edges. \item $\Gamma \equiv (-1)^{d} \Gamma'$, if $\Gamma$ differs from $\Gamma'$ by a transposition in the numbering of the internal vertices. \item $\Gamma \equiv (-1)^{d} \Gamma'$, if $\Gamma'$ is obtained from $\Gamma$ by reversing the orientation of one of the edges. \end{itemize} We define the degree of a class $[\Gamma]$ with parameters $(n,m,k)$ to be \[|[\Gamma]|:=(d-1)k-dm.\] For instance, the graph consisting of the $n$ external vertices and a single edge joining the external vertices $i$ and $j$ has degree $d-1$; under the integration map recalled below, it corresponds to a form representing the generator $\omega_{ij}$. Thus $^*\mathsf{Graphs}_d(n)$ is the direct sum of homogeneous components: \[^*\mathsf{Graphs}_d(n)=\bigoplus_{i\in \mathbb{Z}}\, ^*\mathsf{Graphs}_{d}(n)^i.\] \end{definition} \begin{remark} In view of the equivalence relation, we may assume that, for even $d$, graphs have no multiple edges, are unoriented, and internal vertices are not ordered. Similarly, for odd $d$, one may assume that the edges are not ordered. \end{remark} \begin{definition} The graded vector spaces $^*\mathsf{Graphs}_d(n)$ have a natural structure of commutative dg algebras.
The product $\Gamma_1\bullet \Gamma_2$ of $\Gamma_1$ and $\Gamma_2$ is their disjoint union, with the corresponding external vertices identified. The ordering of the edges is such that the ordering within each of the graphs is preserved and $e_1<e_2$ if $e_i $ belongs to $\Gamma_i$. Similarly, the numbering of the internal vertices is characterized by the fact that the order within each of the graphs is preserved and vertices in $\Gamma_1$ have labels smaller than those in $\Gamma_2$. The differential $\partial$ is given by the sum over all graphs obtained by contracting one of the edges. For more precise details on the signs of the differential, please see \cite{Lambrechts}. \end{definition} \begin{proposition}[\cite{K, Lambrechts}] The operations $\bullet$ and $\partial$ give $^*\mathsf{Graphs}_d(n)$ the structure of a commutative differential graded algebra. \end{proposition} To a graph $\Gamma$ in $^*\mathsf{Graphs}_d(n)$, one can associate a differential form $\omega_\Gamma \in \Omega(\mathsf{FM}_d(n))$ given by the formula \[\omega_\Gamma:=\pi_* \Big( \bigwedge_{e \textrm{ edge of } \Gamma}(\pi_e)^* \mathsf{Vol}_{d-1} \Big), \qquad \text{where:}\] \begin{itemize} \item The map $\pi\colon \mathsf{FM}_d(n+m)\rightarrow \mathsf{FM}_d(n)$ is the natural projection that forgets the last $m$ points of the configuration. \item For each edge $e$ of $\Gamma$, $\pi_e\colon \mathsf{FM}_d(n+m)\rightarrow \mathsf{FM}_d(2)=S^{d-1}$ is the map that sends a configuration of $m+n$ points to the two points that are joined by $e$. \item $\mathsf{Vol}_{d-1}$ is the rotation-invariant volume form of the $(d-1)$-dimensional sphere, normalized so that its volume is~1. \end{itemize} \begin{theorem}[\cite{K,Lambrechts}]\label{proposition_above} The formula $\Gamma \mapsto \omega_\Gamma$ defines a quasi-isomorphism of differential graded algebras: $I\colon ^*\mathsf{Graphs}_d(n)\rightarrow \Omega(\mathsf{FM}_d(n))$. \end{theorem} \subsubsection{The \v{S}evera--Willwacher connections} We next introduce flat connections on the compactified configuration spaces $\mathsf{FM}_d(n)$. In the case $d=2$, these connections were introduced by \v{S}evera and Willwacher \cite{SW}. \begin{definition} We say that an admissible graph $\Gamma$ is internally connected if it is non-empty and connected after all the external vertices are removed. We denote by $\mathsf{CG}_d(n)$ the graded vector space spanned by equivalence classes of internally connected graphs with $n$ external vertices, and introduce a grading by $\overline{\Gamma} :=1+dm-(d-1)k$. \end{definition} \begin{remark} As explained in Subsection~\ref{subsection:rational_homotopy_theory}, Kontsevich's model $^*\mathsf{Graphs}_d(n)$ of the compactified configuration space $\mathsf{FM}_d(n)$ corresponds to a certain flat connection with values in an $L_\infty$-algebra. Since $^*\mathsf{Graphs}_d(n)$ is the free commutative algebra on the space of internally connected graphs, the graded vector space underlying this $L_\infty$-algebra is the space of internally connected graphs $\mathsf{CG}_d(n)$.
The general machinery from Subsection~\ref{subsection:rational_homotopy_theory} leads to the following definition\slash result: \end{remark} \begin{definition} The \v{S}evera--Willwacher connection $\mathsf{SW}_d(n)$ on $\mathsf{FM}_d(n)$ with values in the $L_\infty$-algebra $\mathsf{CG}_d(n)$ is given by \[\sum_\Gamma I(\Gamma)\otimes \Gamma \quad \in \quad \Omega(\mathsf{FM}_d(n))\hat{\otimes} \mathsf{CG}_d(n),\] where the sum runs over a set of graphs whose equivalence classes form a basis of the graded vector space $\mathsf{CG}_d(n)$. \end{definition} \begin{proposition} The \v{S}evera--Willwacher connections are flat. \end{proposition} \begin{remark} We remark that Kontsevich's model $^*\mathsf{Graphs}_d(n)$ for $\mathsf{FM}_d(n)$ is concentrated in degrees $>1$ and finite-dimensional in each degree if $d>3$. However, the $L_\infty$-algebras $\mathsf{CG}_d(n)$ admit filtrations in the sense of Subsection~\ref{subsection:filtered} for all $d$, and hence our methods are applicable also in the cases $d=2$ and $d=3$. We refer the reader to the forthcoming \cite{AS2} for details. \end{remark} \begin{remark} Applying Theorem~\ref{main theorem} to the flat connections $\mathsf{SW}_d(n)$ yields holonomy maps $$\mathsf{hol}^{\infty}_{\mathsf{SW}_d(n)}: C_\bullet(\mathsf{FM}_d(n)) \to \hat{\mathsf{B}} \mathbb{U}_{\infty}(\mathsf{CG}_d(n)) \cong \hat{\mathsf{B}} \Omega \mathsf{CE}(\mathsf{CG}_d(n)).$$ The compositions of $\mathsf{hol}^{\infty}_{\mathsf{SW}_d(n)}$ with the projection to $\mathsf{CE}(\mathsf{CG}_d(n))$ (which is a chain map but not a morphism of coalgebras) are Kontsevich's formality maps $$C_\bullet(\mathsf{FM}_d(n)) \to \mathsf{CE}(\mathsf{CG}_d(n))$$ from \cite{K}. Kontsevich proved that these maps are quasi-isomorphisms and that they assemble into a morphism of operads from $(C_\bullet(\mathsf{FM}_d(n)))_{n\ge 1}$ to $(\mathsf{CE}(\mathsf{CG}_d(n)))_{n\ge 1}$. It is not hard to verify that the latter operad of differential graded coalgebras is quasi-isomorphic to its cohomology, which can be identified with the homology operad of the compactified configuration spaces $(\mathsf{FM}_d(n))_{n\ge 1}$. This way, Kontsevich established the formality of the chains on the little $d$-disks operad. The holonomy maps $$\mathsf{hol}^{\infty}_{\mathsf{SW}_d(n)}: C_\bullet(\mathsf{FM}_d(n)) \to \hat{\mathsf{B}} \mathbb{U}_{\infty}(\mathsf{CG}_d(n)) \cong \hat{\mathsf{B}} \Omega \mathsf{CE}(\mathsf{CG}_d(n))$$ that we constructed are extensions of Kontsevich's formality map to a collection of quasi-isomorphisms of differential graded coalgebras. Therefore, it should be possible to use them to obtain a formality proof that is compatible with the comultiplication on chains. We hope to report on this in the forthcoming \cite{AS2}. \end{remark} \subsection{Drinfeld--Kohno construction in higher dimensions} If $\mathfrak{g}$ is a complex semisimple Lie algebra and $V$ a representation of $\mathfrak{g}$, then the braid group $B_n$ acts on $V^{\otimes n}$. This action comes from the following construction due to Drinfeld and Kohno: For each $n\geq 2$ there is a Lie algebra $\mathsf{t}_2(n)$, called the Drinfeld--Kohno Lie algebra, and natural flat connections on the configuration spaces $\mathsf{Conf}_2(n)$ with values in $\mathsf{t}_2(n)$, the Knizhnik--Zamolodchikov connections \cite{KZ}.
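Concretely, and up to a normalization convention, the Knizhnik--Zamolodchikov connection on $\mathsf{Conf}_2(n)\subset \mathbb{C}^n$ is the $\mathsf{t}_2(n)$-valued one-form $$\omega_{\mathsf{KZ}}=\sum_{1\le i<j\le n} t_{ij}\, d\log(z_i-z_j),$$ and its flatness, $d\omega_{\mathsf{KZ}}+\frac{1}{2}[\omega_{\mathsf{KZ}},\omega_{\mathsf{KZ}}]=0$, is equivalent to the defining relations of $\mathsf{t}_2(n)$ recalled below; the standard verification uses the Arnold relations among the forms $d\log(z_i-z_j)$.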
The Lie algebras $\mathsf{t}_2(n)$ have the property that for any quadratic Lie algebra $\mathfrak{g}$ and any representation $V$ of $\mathfrak{g}$, there is a morphism of Lie algebras: $\varphi_n\colon \mathsf{t}_2(n)\rightarrow \textrm{End}(V^{\otimes n})$. Pushing the flat connections along the morphism $\varphi_n$, one obtains flat connections on the trivial vector bundle with fiber $V^{\otimes n}$. The holonomy of the flat connection gives an action of the fundamental group of $\mathsf{Conf}_2(n)$, which is the pure braid group $P_n$. Since the connection is compatible with the action of the symmetric group, these actions extend to an action of the braid group $B_n$. We now explain how this construction can be generalized to higher dimensions. Our aim is to show how the compactified configuration spaces $\mathsf{FM}_d(n)$ act via higher holonomies on the category of representations of quadratic graded Lie algebras. \begin{definition} For each dimension $d\geq 2$ and each $n \geq 2$, the Drinfeld--Kohno Lie algebra is the graded Lie algebra $\mathsf{t}_d(n)$ generated by the symbols $t_{ij}=(-1)^dt_{ji}$ for $1\leq i, j \leq n, i\neq j$, of degree $2-d$, modulo the relations \begin{eqnarray*} {[t_{ij}, t_{kl}]} = 0 \quad & \textrm{if} & \quad \# \{i,j,k,l\}=4,\\ {[t_{ij}, t_{ik} + t_{jk}]}=0 \quad & \textrm{if} & \quad \# \{i,j,k\}=3. \end{eqnarray*} \end{definition} These graded Lie algebras $\mathsf{t}_d(n)$ are closely related to the $L_\infty$-algebras of internally connected graphs $\mathsf{CG}_d(n)$, which were defined in the previous subsection. In fact, $\mathsf{t}_d(n)$ is just the cohomology of $\mathsf{CG}_d(n)$: \begin{proposition}[Proposition 6 from \cite{W}]\label{cohomologyCG} The map $\phi\colon \mathsf{t}_d(n) \rightarrow H(\mathsf{CG}_d(n)),$\linebreak defined by sending $t_{ij}$ to the cohomology class of the graph that has only one edge going from the $i$th to the $j$th external vertex, is an isomorphism of graded Lie algebras. \end{proposition} The relation between $\mathsf{t}_d(n)$ and $\mathsf{CG}_d(n)$ is even stronger \cite{AS2}: \begin{proposition}[\cite{AS2}] The $L_\infty$-algebras $\mathsf{CG}_d(n)$ are formal; i.e.,\ there is an $L_\infty$ quasi-isomorphism between $\mathsf{CG}_d(n)$ and its cohomology $\mathsf{t}_d(n)$. \end{proposition} \begin{remark} Because the $L_\infty$-algebras $\mathsf{CG}_d(n)$ are formal, one can use homological perturbation theory to push forward the \v{S}evera--Willwacher connections $\mathsf{SW}_d(n)$ to flat connections $\widehat{\mathsf{SW}}_d(n)$ with values in the graded Lie algebras $\mathsf{t}_d(n)$. These induced connections are unique up to gauge equivalence. \v{S}evera and Willwacher showed in \cite{SW} that in two dimensions one recovers the Alekseev--Torossian connections, which were introduced in \cite{AT}. \end{remark} We now show that the graded Lie algebras $\mathsf{t}_d(n)$ naturally act on representations of (a graded version of) quadratic Lie algebras.
\begin{definition} A quadratic differential graded Lie algebra of degree $D$ is a finite-dimensional differential graded Lie algebra $\mathfrak{g}$ together with a non-degenerate graded symmetric bilinear form $ \kappa\colon \mathfrak{g} \otimes \mathfrak{g} \rightarrow \mathbb{R}[D],$ satisfying \[ \kappa ([\alpha, \beta], \gamma)= -(-1)^{|\alpha||\beta|} \kappa(\beta, [\alpha,\gamma]) \quad \textrm{and} \quad \kappa (d \alpha, \beta)= (-1)^{|\alpha|} \kappa(\alpha,d\beta).\] \end{definition} \begin{example} \hspace{0cm} \begin{enumerate} \item A complex semisimple Lie algebra $\mathfrak{g}$ endowed with the Killing form is a quadratic differential graded Lie algebra of degree $0$. \item Let $\mathfrak{g}$ be a quadratic Lie algebra and $M$ a closed oriented manifold of dimension $D$. Let $H^-(M)$ denote the graded algebra $ H^-(M)^k:= H^{-k}(M),$ and consider the pairing \[ \mu\colon H^-(M) \otimes H^-(M)\rightarrow \mathbb{R}[D],\] induced by the Poincar\'e pairing in cohomology. Then the vector space $\mathfrak{g} \otimes H^-(M)$ is a quadratic differential graded Lie algebra with bracket \[ [\alpha \otimes \eta , \beta \otimes \omega]:= (-1)^{|\eta||\beta|}[\alpha, \beta]\otimes \eta \omega\] and bilinear pairing $\kappa(\alpha \otimes \eta , \beta \otimes \omega):= (-1)^{|\eta||\beta|} \kappa(\alpha,\beta)\mu(\eta, \omega). $ \end{enumerate} \end{example} Let us now fix a quadratic differential graded Lie algebra $\mathfrak{g}$ of degree $D$. We denote by $\mathbb{U}(\mathfrak{g})$ the universal enveloping algebra of $\mathfrak{g}$. The bilinear form $\kappa$ defines an isomorphism $\kappa^{\sharp}\colon \mathfrak{g} \rightarrow \mathfrak{g}^*[D]$, which induces identifications $\mathfrak{g} \otimes \mathfrak{g}[-D] \cong \mathfrak{g} \otimes \mathfrak{g}^* \cong \textrm{End}(\mathfrak{g})$. We will denote by $\Omega$ the element of $(\mathfrak{g} \otimes \mathfrak{g})^{-D}\subset \mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g})$ that corresponds to $\mathrm{id} \in \textrm{End}(\mathfrak{g})$ under the identification above. Explicitly, one can choose a basis $(I_\mu)$ for $\mathfrak{g}$ with the property that each of the basis elements is homogeneous and the basis of $ \mathfrak{g}^*[D] $ induced by the isomorphism $\kappa^\sharp$ is dual to the basis $(I_\mu)$. Then $\Omega$ can be written as $\Omega= \sum_\mu I_\mu \otimes \tilde{I}_\mu,$ where $\tilde{I}_\mu$ is the unique basis element in $\mathfrak{g} ^{|I_\mu|-D}$ with the property that $\kappa(I_\mu, \tilde{I}_\mu)=1$. In case $D=4l$, there is a potential problem, since the bilinear form restricted to $\mathfrak{g}^{\frac{D}{2}}$ may not be positive definite. In this case, some of the elements $\tilde{I}_\mu$ may not be basis elements but negatives of basis elements instead. The Casimir element of $\mathfrak{g}$, denoted by $C$, is the image of $\Omega\in \mathfrak{g} \otimes \mathfrak{g} $ in the universal enveloping algebra. Since the bilinear form $\kappa$ is $\mathsf{ad}$-invariant, i.e.,\ \[\kappa(\mathsf{ad}(x)(y),z)+(-1)^{|x||y|} \kappa(y, \mathsf{ad}(x)(z))=0,\] the map $\kappa^\sharp\colon \mathfrak{g} \rightarrow \mathfrak{g}^*[D]$ is a morphism of representations of $\mathfrak{g}$. Since $\mathrm{id} \in \textrm{End}(\mathfrak{g})$ is an invariant element for the action of $\mathfrak{g}$, so is $\Omega$. We conclude that $C$ is a central element of $\mathbb{U}(\mathfrak{g})$.
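To make the construction of $\Omega$ and $C$ concrete, consider the first example above with $\mathfrak{g}=\mathfrak{sl}_2(\mathbb{C})$, concentrated in degree $0$ (so $D=0$), equipped with the trace form $\kappa(x,y)=\mathrm{tr}(xy)$ in the fundamental representation (a rescaling of the Killing form) and the standard basis $(e,f,h)$. Since $\kappa(e,f)=1$ and $\kappa(h,h)=2$, the dual basis is $(\tilde{e},\tilde{f},\tilde{h})=(f,e,\tfrac{1}{2}h)$, so that $$\Omega = e\otimes f + f\otimes e + \tfrac{1}{2}\, h\otimes h \qquad \textrm{and} \qquad C = ef + fe + \tfrac{1}{2}\,h^2,$$ the familiar Casimir element of $\mathfrak{sl}_2$.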
Also, the compatibility between the differential and the pairing in $\mathfrak{g}$ implies that $\kappa^\sharp\colon \mathfrak{g} \rightarrow \mathfrak{g}^*[D]$ is a morphism of chain complexes. Since $\mathrm{id} \in \textrm{End}(\mathfrak{g})$ is closed, we conclude that $d \Omega=0$. Recall that $\mathbb{U}(\mathfrak{g})$ admits a coproduct $\Delta\colon \mathbb{U}(\mathfrak{g}) \rightarrow \mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g})$, which is the unique algebra homomorphism with the property that $\Delta(x)= 1 \otimes x + x \otimes 1$ for all $x\in \mathfrak{g}$. The proof of the following lemma is immediate. \begin{lemma}\label{lemmaomega} We regard $\mathfrak{g}$ as a subspace of $\mathbb{U}(\mathfrak{g})$ via the obvious inclusion. Then \[\Omega=\frac{1}{2}(\Delta(C)-1 \otimes C - C\otimes 1).\] \end{lemma} Let $ \iota^{12}\colon \mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g}) \rightarrow \mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g})$ be the map $x\otimes y \mapsto x \otimes y \otimes 1$, and define $\iota^{23}, \iota^{13}$ analogously. Then, for $1\leq i <j \leq3$, we set $\Omega^{ij}:= \iota^{ij} (\Omega)$. \begin{lemma}\label{lemmadrinfeld} The following relation is satisfied: $\quad [\Omega^{12},\Omega^{23}+ \Omega^{13}]=0$. \end{lemma} \begin{proof} First, we observe that since $C$ is a central element in $\mathbb{U}(\mathfrak{g})$, $1 \otimes 1 \otimes C, 1 \otimes C \otimes 1, C \otimes 1 \otimes 1$ are central elements in $\mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g})$. In view of Lemma~\ref{lemmaomega}, we know that for each pair $1 \leq i <j \leq 3$: $ \Omega^{ij}=\frac{1}{2}\iota^{ij}(\Delta(C))+X^{ij},$ where $X^{ij}$ is central. Therefore, it suffices to prove that \[[\iota^{12}(\Delta(C)),\iota^{23}(\Delta(C))+ \iota^{13}(\Delta(C))]=0.\] In order to prove this, we compute \begin{eqnarray*} \iota^{23}(\Delta(C))&=&\iota^{23}(1 \otimes C + C \otimes 1 +2 \sum_\mu I_\mu \otimes \tilde{I}_\mu)\\ &=& 1\otimes 1 \otimes C +1\otimes C \otimes 1 +2 \sum_\mu1\otimes I_\mu \otimes \tilde{I}_\mu, \end{eqnarray*} and similarly \[\iota^{13}(\Delta(C))=1\otimes 1 \otimes C + C\otimes 1 \otimes 1 +2 \sum_\mu I_\mu \otimes 1\otimes \tilde{I}_\mu.\] Therefore, we obtain \begin{eqnarray*} \iota^{13}(\Delta(C))+\iota^{23}(\Delta(C))&=& 2 \sum_\mu \Delta(I_\mu)\otimes \tilde{I}_\mu + X, \end{eqnarray*} with $X$ central. Finally, we compute \begin{eqnarray*} \frac{1}{2}[\iota^{12}(\Delta(C)),\iota^{23}(\Delta(C))+ \iota^{13}(\Delta(C))]&=&[\Delta(C)\otimes 1,\sum_\mu \Delta(I_\mu)\otimes \tilde{I}_\mu]\\ &=&\sum_\mu [\Delta(C), \Delta (I_\mu )]\otimes \tilde{I}_\mu = 0. \end{eqnarray*} \end{proof} \begin{lemma}\label{lemmakey} Let $\mathfrak{g}$ be a quadratic differential graded Lie algebra of degree $D=d-2$. 
For each $n \geq 2$ there is a homomorphism of graded algebras \[\hat{\varphi}_n: \mathbb{U}(\mathsf{t}_{d}(n)) \rightarrow \mathbb{U}(\mathfrak{g})^{\otimes n},\] given by the formula $t_{ij}\mapsto \lambda^{ij}(\Omega)\in \mathbb{U}(\mathfrak{g})^{\otimes n}$, where $ \lambda^{ij}\colon \mathbb{U}(\mathfrak{g}) \otimes \mathbb{U}(\mathfrak{g}) \rightarrow \mathbb{U}(\mathfrak{g})^{\otimes n}$ is the morphism of algebras given by: \[x \otimes y \mapsto 1 \otimes \dots \otimes 1 \otimes \underbrace{x}_i \otimes 1 \otimes \dots \otimes 1 \otimes \underbrace{ y}_j \otimes 1 \otimes \dots \otimes 1.\] \end{lemma} \begin{proof} We need to prove that the $\hat{\varphi}_n(t_{ij})$ satisfy the defining relations of $\mathsf{t}_d(n)$. It is clear from the definition that $[\hat{\varphi}_n(t_{ij}), \hat{\varphi}_n(t_{kl})]=0$ if $ \#\{i,j,k,l\}=4$. It remains to prove that $ [\hat{\varphi}_n(t_{ij}), \hat{\varphi}_n(t_{ik})+ \hat{\varphi}_n(t_{jk})]=0$ if $\#\{ i,j,k\}=3$. Clearly, it is enough to consider the case $n=3$. Thus, it suffices to prove that $[\Omega^{12},\Omega^{23}+ \Omega^{13}]$ vanishes, which is precisely the claim of Lemma~\ref{lemmadrinfeld}. Since $d\Omega=0$, we conclude that the map $\hat{\varphi}_n$ is a chain map. \end{proof} \begin{corollary}\label{corollaryrep} Let $\mathfrak{g}$ be a quadratic differential graded Lie algebra of degree $D=d-2$ and $V_1, \dots , V_n$ be representations of $\mathfrak{g}$. Then there is a natural homomorphism of graded Lie algebras: $\varphi\colon \mathsf{t}_d(n)\rightarrow \textrm{End}(V_1 \otimes \dots \otimes V_n)$. \end{corollary} \begin{proof} Consider the composition \[ \mathbb{U}(\mathsf{t}_d(n)) \rightarrow \mathbb{U}(\mathfrak{g})^{\otimes n} \rightarrow \textrm{End}(V_1 ) \otimes \dots \otimes \textrm{End}(V_n) \cong \textrm{End}(V_1\otimes \dots \otimes V_n),\] where the first map is $\hat{\varphi}_n$ and the second map is the tensor product of the representations. This is an algebra map that, by the universal property of the enveloping algebra, corresponds to a morphism of Lie algebras $\varphi\colon \mathsf{t}_d(n)\rightarrow \textrm{End}(V_1 \otimes \dots \otimes V_n)$. \end{proof} Let $\mathfrak{g}$ be a quadratic differential graded Lie algebra of degree $D=d-2$ and $V_1, \dots , V_n$ be finite-dimensional representations of $\mathfrak{g}$. By Corollary~\ref{corollaryrep}, there is a morphism of Lie algebras: \[ \varphi_n\colon \mathsf{t}_d(n)\rightarrow \textrm{End}(V_1\otimes \dots \otimes V_n).\] Recall that pushing forward the \v{S}evera--Willwacher connection $\mathsf{SW}_d(n)$ to cohomology results in a flat connection $\widehat{\mathsf{SW}}_d(n)$ on $\mathsf{FM}_d(n)$ with values in $\mathsf{t}_d(n)$. Pushing forward further along the map $\varphi_n$ then yields a flat connection $\varphi_n(\widehat{\mathsf{SW}}_d(n))$ on the space $\mathsf{FM}_d(n)$ with values in $\textrm{End}(V_1\otimes \dots \otimes V_n)$. Thus, in this way, one obtains flat connections on the trivial graded vector bundle with fiber $V_1 \otimes \dots \otimes V_n$. \begin{corollary} The holonomies of the connections $\varphi_n(\widehat{\mathsf{SW}}_d(n))$ give an action of the $\infty$-groupoid\footnote{We adopt the convention that an $\infty$-groupoid is a Kan simplicial set. The $\infty$-groupoid of a space $X$ is the Kan simplicial set of chains $C_\bullet(X)$.} of the space $\mathsf{FM}_d(n)$ on the vector space $V_1 \otimes \dots \otimes V_n$.
\end{corollary} In the two-dimensional case, this action corresponds to the usual representations of braid groups on tensor products of representations of quadratic Lie algebras. In future work, we plan to generalize this construction to cyclic $L_\infty$-algebras. In fact, it seems plausible that this can be achieved directly on the level of the \v{S}evera--Willwacher connections, which would allow one to bypass the use of homological perturbation theory. Moreover, we expect the resulting construction to be closely related to Kontsevich's characteristic classes of cyclic $L_\infty$-algebras from \cite{Feynman}.
\section{Introduction} \label{s:Introduction} In technical applications and experiments, it can be assumed that a certain amount of non-condensable gas is present in vapor cavities. In general, gases are dissolved in liquids~\citep{Pollack:1991ue} and are released during pressure reduction by outgassing~\citep{Iben:2015bw,Freudigmann:2017cr} or cavitation~\citep{Franc:2004fu}. In experiments with cavitation bubbles, gases are produced when the bubbles are generated with lasers or sparks, through chemical reactions and recombination processes~\citep{Sato:2013fg,Akhatov:2001hy}. Gas inside a vapor bubble has a damping effect that can weaken the pressure wave and increase the rebound of the bubble. For spherical bubble collapses, the damping effect is evident in the incompressible Rayleigh-Plesset equation~\citep{plesset1949dynamics} \begin{equation} \rho_l (\ddot{R}R+ 3/2\,\dot{R}^2)= - \Delta p + p_{g}, \label{eq:rp} \end{equation} here written in inviscid form neglecting surface tension, with the density of the liquid $\rho_l$, the bubble radius $R$, its time derivative $\dot{R}$, the driving pressure difference $\Delta p=p_{\infty}-p_{sat}$, and the gas pressure $p_{g}=p_{g,0}\left(R_0/R\right)^{3\gamma}$. $p_{g,0}$, $R_0$, $\gamma$ denote the initial gas content, the initial bubble radius and the adiabatic index, respectively. The compressible Keller-Miksis equation~\citep{Keller1980:wm} additionally captures the rebound. Taking advantage of the fact that it can be treated to first order~\citep{Prosperetti:1987jt} and neglecting viscosity and surface tension, it simplifies to \begin{equation} \rho_l (\ddot{R}R (1-v)+ 3/2\, \dot{R}^2(1-v/3))= (- \Delta p+p_{g})(1+v)+R\,\dot{p_{g}}/c_l, \label{eq:km} \end{equation} with $v=\dot{R}/c_l$. $c_l$ is the speed of sound in the liquid phase. Both equations clearly show that the partial pressure of the gas inside the bubble decelerates the collapse and, in the compressible formulation, enhances the rebound. Further, both effects are more pronounced at lower driving pressure differences $\Delta p$. Analytical studies evaluating the effect of gas inside vapor bubbles were conducted by \citet{Fujikawa:1980jj} and \citet{Akhatov:2001hy}. They studied the dynamics of vapor bubbles containing gas and considered compressibility, non-equilibrium effects at phase transition, and conductive heat transfer. Later, \citet{Tinguely:2012wo} experimentally investigated the effect of the driving pressure on the energy partitioning into shock wave energy and rebound energy for spherical bubble collapses under microgravity. From the Keller-Miksis equation, they derived an analytical model that predicts the energy partitioning in terms of a single non-dimensional parameter, which also depends on the gas content in the bubble. While the effect of gas and driving pressure on the collapse of spherical bubbles has thus been investigated analytically and experimentally, for more complex configurations the effect has not yet been elucidated. In experimental studies, it is challenging to determine or control the initial gas content in the bubble. Additionally, the short time scale and the high intensity of the emitted pressure wave impose high requirements on the measurement equipment~\citep{Tinguely:2012wo}, and more accurate measurements have only recently become feasible \citep{Supponen:2017wl,supponen2019high,supponen2019detailed}.
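To illustrate how \cref{eq:km} encodes both the damping of the collapse and the rebound, the following minimal sketch integrates the first-order, inviscid Keller-Miksis equation numerically. It is an illustration only, not the flow solver used in this work; the gas is treated as isothermal ($\gamma=1$), and all parameter values are assumptions chosen for demonstration.
\begin{verbatim}
# Minimal sketch: first-order, inviscid Keller-Miksis equation with an
# isothermal gas content p_g0 (illustration only, not the flow solver
# used in this work; parameter values are assumptions).
import numpy as np
from scipy.integrate import solve_ivp

rho_l, c_l = 998.16, 1482.35      # liquid density [kg/m^3], speed of sound [m/s]
R0, dp, gamma = 400e-6, 1e5, 1.0  # initial radius [m], driving pressure [Pa]

def km_rhs(t, y, p_g0):
    R, Rdot = y
    v = Rdot / c_l
    p_g = p_g0 * (R0 / R)**(3.0 * gamma)     # gas pressure
    p_g_dot = -3.0 * gamma * p_g * Rdot / R  # time derivative of p_g
    rhs = (-dp + p_g) * (1.0 + v) + R * p_g_dot / c_l
    Rddot = (rhs - 1.5 * rho_l * Rdot**2 * (1.0 - v / 3.0)) \
            / (rho_l * R * (1.0 - v))
    return [Rdot, Rddot]

t_c = 0.915 * R0 * np.sqrt(rho_l / dp)       # Rayleigh collapse time
t_eval = np.linspace(0.0, 3.0 * t_c, 4000)
for p_g0 in (160.0, 1000.0):  # [Pa]; for p_g0 = 0 the bubble collapses
    # completely and no rebound occurs in this inviscid model
    sol = solve_ivp(km_rhs, (0.0, 3.0 * t_c), [R0, 0.0], args=(p_g0,),
                    rtol=1e-9, atol=1e-12, t_eval=t_eval)
    R = sol.y[0]
    i_min = np.argmin(R)                     # (sampled) first collapse
    eps_reb = (R[i_min:].max() / R0)**3      # normalized rebound energy
    print(f"p_g0={p_g0:6.1f} Pa  R_min/R0={R[i_min]/R0:.3f}  "
          f"eps_reb={eps_reb:.3f}")
\end{verbatim}
Increasing $p_{g,0}$ in such a sketch increases the minimum radius and the rebound radius and lowers the collapse velocity, in line with the damping behavior discussed above; the quantity $(R_{reb}/R_0)^3$ reappears as the normalized rebound energy in \cref{ss:val}.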
Three-dimensional, time-resolved numerical simulations, in which the gas content can be precisely controlled and the pressure signals monitored, are thus well suited for complementary and detailed studies of the effect of gas in complex configurations. In the last decade, compressible numerical simulations have become a complementary tool for studying collapse dynamics~\citep{Johnsen:2009cua, Lauer:2012jh}. Several numerical studies~\citep{Johnsen:2009cua, Beig:2018ga,Pishchalnikov:2018pp,Trummler:2020JFM} focused on the first collapse and considered gas bubbles neglecting phase transition. \citet{Pishchalnikov:2018pp} varied the gas content in an elliptical, wall-attached bubble and showed how this affects the collapse behavior and the pressure impact. To capture both the pressure waves emitted at collapse and the rebound, the modeling approach must account for both compressibility and phase transition. Previous works on bubble collapses considering both employed an equilibrium cavitation model in combination with a single-fluid approach, see e.g. \citet{Sezal:2009diss, ochiai2011numerical, Pohl:2015keb, Oerley:2016diss,Koukouvinis:2016ir} and more recently \citet{sagar2020dynamics,Trummler:2021if}. For vapor bubbles containing gas, a multi-component model considering a cavitating liquid and an additional gas component is necessary. \citet{Orley:2015kt} extended the barotropic equilibrium cavitation model by \citet{Schnerr:2008jja} and \citet{Schmidt:2009} by an additional non-condensable gas component. In this model, the mass fraction of gas is convected, and a coupled equation of state is employed for all components. So far, the multi-component model has been applied and validated for the injection of a cavitating liquid into a gaseous ambient. Many research groups have taken up the model and partly modified it. \citet{Orley:2016db} extended the model to employ different equations of state for the individual components; \citet{mithun2018numerical} added a volume-of-fluid method for interface capturing; \citet{brandao2020numerical} considered a finite-rate mass transfer for the cavitation process. In this work, we present an adaptation of the multi-component model of \citet{Orley:2015kt} and \citet{Trummler:2018AAS} that makes it applicable to vapor bubbles containing gas. Preliminary studies for this work were presented in \citet{Trummler:2018ww,Trummler:2019icmf}. In this paper, we first introduce the thermodynamic model and then apply it to spherical and aspherical bubble collapses. For the simulations of the aspherical collapses, we have chosen a driving pressure of 1 bar. As \cref{eq:rp,eq:km} show, $\Delta p$ governs the intensity of the emitted pressure wave, the rebound, and the influence of the gas. Under atmospheric conditions, a stronger rebound and a more pronounced damping effect of the gas occur than, for example, at 100 bar. Further, this choice is motivated by the fact that experiments of single bubble collapses are often conducted at atmospheric conditions, see e.g. \citet{Philipp:1998eg,dular2019high}, which ensures better comparability. An important parameter for aspherical collapses is the stand-off distance, which has a significant influence on the collapse dynamics and the erosion potential, as has been shown by experimental~\citep{Tomita:1986gy,Philipp:1998eg} and numerical studies~\citep{Lauer:2012jh,Trummler:2020JFM,Trummler:2021if}.
The sign of the stand-off distance alters the collapse behavior, and a smaller stand-off distance (in absolute value) increases the pressure impact on the wall. Therefore, we consider wall-attached bubbles with negative and positive stand-off distances. The paper is organized as follows. In \cref{s:Methods}, we describe the physical model and the numerical method. \Cref{s:spherical} presents simulation results of spherical bubble collapses with various gas contents and driving pressures, and the validation of the modeling approach with the analytical energy partitioning model by \citet{Tinguely:2012wo}. Then, in \cref{s:aspherical}, we present and analyze simulation results of collapsing wall-attached bubbles at different stand-off distances with and without gas. \Cref{s:ConclusionAndDiscussion} summarizes the paper. \section{Physical Model and Numerical Method} \label{s:Methods} \subsection{Governing Equations} \label{subsec:GovEq} We solve the fully compressible Navier-Stokes equations and an additional transport equation for the gas mass fraction \begin{equation} \partial_{t}\boldsymbol{U}+\nabla \cdot [ \boldsymbol{C}(\boldsymbol{U})+\boldsymbol{S}(\boldsymbol{U})]=0\,. \label{eq:NS} \end{equation} The state vector $\boldsymbol{U}=[\rho , \, \rho \boldsymbol{u},\,\rho\xi_g ]^T$ is composed of the conserved variables: the density $\rho$, the momentum $\rho \boldsymbol{u}$, and the gas density $\rho\xi_g$. Due to the assumed barotropic modeling ($p=p(\rho)$), the energy equation can be omitted. The convective fluxes $\boldsymbol{C}(\boldsymbol{U})$ and the flux contributions due to pressure and shear $\boldsymbol{S}(\boldsymbol{U})$ read \begin{equation} \boldsymbol{C}(\boldsymbol{U})= \boldsymbol{u} \begin{bmatrix} \rho\\ \rho\boldsymbol{u}\\ \rho\xi_g \end{bmatrix} \quad \mathrm{and} \quad \boldsymbol{S}(\boldsymbol{U})= \begin{bmatrix} 0\\ p \boldsymbol{I}-\boldsymbol{\tau}\\ 0\\ \end{bmatrix}, \label{eq:NS_Basis} \end{equation} with the velocity $\boldsymbol{u}$, the static pressure $p$, the unit tensor $\boldsymbol{I}$, and the viscous stress tensor $\boldsymbol{\tau}$ \begin{equation} \boldsymbol{\tau}=\mu(\nabla \boldsymbol{u}+(\nabla \boldsymbol{u})^{T}-\frac{2}{3}(\nabla \cdot\boldsymbol{u})\boldsymbol{I}), \label{eq:tau} \end{equation} where $\mu$ is the dynamic viscosity. \subsection{Thermodynamic Model} \label{subsec:ThermoModel} We adapt the multi-component homogeneous mixture model of \citet{Orley:2015kt} and \citet{Trummler:2018AAS} to make it applicable to vapor bubbles containing gas. In the employed modeling approach, the cavitating liquid ($lv$) and the non-condensable gas ($g$) are described by a substitute mixture fluid. This approach implies that within a computational cell all phases have the same velocity, temperature and pressure. The single fluid is described by the volume-averaged density inside a computational cell \begin{equation} \rho= \sum \beta_{\phi} \rho_{\phi}=\beta_{g}\rho_{g}+(1-\beta_{g})\rho_{lv}. \label{eq:rhomix} \end{equation} $\beta_{\phi}$ denotes the volume fraction and $\rho_{\phi}$ the density of each component $\phi= \{ lv,\, g\}$. The gas volume fraction $\beta_{g}$ can be obtained from the transported mass fraction $\xi_{g}$ by the following relation \begin{equation} \beta_{g}=\xi_{g}\frac{\rho}{\rho_{g}}. \label{eq:beta_g} \end{equation} For the mixture fluid, a coupled equation of state (EOS) is derived. To this end, the corresponding thermodynamic relations for each component are specified.
For the modeling of vapor bubbles containing gas, the pressure acting on the liquid-vapor mixture in the bubble has to be modified. Inside the bubble the pressure is composed of the partial pressures of vapor and gas as \begin{equation} p = p_{lv} + p_{g}. \end{equation} We calculate the pressure acting on the liquid-vapor mixture by \begin{equation} p_{lv} = p - p_{g} = (1-\beta_{g})\,p. \end{equation} The cavitating water is described with a barotropic EOS, derived by integration of the isentropic speed of sound \begin{equation} \rho_{lv}=\rho_{\mathrm{sat},l}+(p_{lv}-p_\mathrm{sat}) / c^2, \label{eq:rho_lv} \end{equation} where $\rho_{\mathrm{sat},l}$ is the saturation density of liquid water and $p_\mathrm{sat}$ the saturation pressure. Phase change is modeled assuming local thermodynamic equilibrium. For $p_{lv}>p_\mathrm{sat}$, there is purely liquid water and $c=1482.35\,\si{m/s}$. For $p_{lv}<p_\mathrm{sat}$, there is a liquid-vapor mixture with $c=0.1\,\si{m/s}$ as a typical value for an equilibrium isentrope, see e.g. \citet{Franc:2004fu}. The vapor volume fraction $\alpha$ is given by the density of the liquid-vapor mixture $\rho_{lv}$ as \begin{equation} \alpha=\frac{\rho_{\mathrm{sat},l}-\rho_{lv}}{\rho_{\mathrm{sat},l}-\rho_{\mathrm{sat},v}}. \end{equation} Note that $l$ refers to liquid and $v$ to vapor. For water at reference temperature $\mathrm{T}=\SI{293.15}{K}$, the corresponding values are $p_\mathrm{sat}=\SI{2340}{Pa}$, $\rho_{\mathrm{sat},l}=\SI{998.1618}{kg/m^3}$ and $\rho_{\mathrm{sat},v}= 17.2\cdot 10^{-3}\,\si{kg/m^3}$. The non-condensable gas phase is described with \begin{equation} \rho_{g}=\rho_{g,\mathrm{ref}}(p/p_\mathrm{ref})^{1/\gamma}, \label{eq:rho_g} \end{equation} where $\rho_{g,\mathrm{ref}}$ is the reference density at the reference pressure $p_\mathrm{ref}$. Here we used $p_\mathrm{ref}=10^5\,\si{Pa}$ and $\rho_{g,\mathrm{ref}}=1.188\,\si{kg/m^3}$. In the results presented, the gas is modeled as isothermal with $\gamma=1$. By inserting the thermodynamic relations for each component (\cref{eq:rho_lv}, \cref{eq:rho_g}) into \cref{eq:rhomix}, a coupled EOS $p=p(\rho, \xi_g)$ is derived, see \citet{Orley:2015kt}. Viscous effects are considered in our simulations using a linear blending of the volume fractions for the mixture viscosity. The following values for the viscosities are used: $\mu_{l} = 1.002\cdot 10^{-3}\,\si{\pascal\second}$, $\mu_{v} = 9.272\cdot 10^{-6}\,\si{\pascal\second}$ and $\mu_{g} = 1.837\cdot 10^{-5}\,\si{\pascal\second}$\,. \subsection{Numerical Method} The thermodynamic model is embedded in a density-based, fully compressible flow solver with a low-Mach-number-consistent flux function, see \citet{Schmidt:2015wa}. For the reconstruction at the cell faces, an upwind-biased scheme is used, where the velocity components are reconstructed with the up to third-order accurate limiter of \citet{Koren:1993} and the thermodynamic quantities $\rho$, $p$ with the second-order minmod slope limiter of~\citet{Roe:1986}. Time integration is performed with an explicit second-order, 4-step low-storage Runge-Kutta method~\citep{Schmidt:2015wa}. \section{Spherical collapses and validation of the modeling approach} \label{s:spherical} To validate the modeling approach, we simulate spherical collapses of vapor bubbles containing various amounts of gas. We analyze the collapse and rebound behavior and the intensity of the emitted pressure wave. In \cref{ss:val}, the model is compared with the energy partitioning model of \citet{Tinguely:2012wo}.
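Before turning to the set-up, the coupled EOS can be made concrete: given the conserved variables $\rho$ and $\xi_g$ in a cell, the pressure is the root of the mixture relation \cref{eq:rhomix} combined with \cref{eq:beta_g,eq:rho_lv,eq:rho_g}. The following root-finding sketch is an illustration only, assuming an isothermal gas ($\gamma=1$) and $\xi_g>0$; it is not necessarily how the flow solver evaluates the EOS internally.
\begin{verbatim}
# Minimal sketch: pointwise evaluation of the coupled EOS p(rho, xi_g)
# by root finding (illustration only; assumes an isothermal gas,
# gamma = 1, and xi_g > 0).
from scipy.optimize import brentq

p_sat, rho_sat_l = 2340.0, 998.1618  # saturation state of water [Pa], [kg/m^3]
p_ref, rho_g_ref = 1e5, 1.188        # gas reference state [Pa], [kg/m^3]

def rho_g(p):                        # gas EOS (isothermal, gamma = 1)
    return rho_g_ref * p / p_ref

def rho_lv(p_lv):                    # barotropic liquid-vapor EOS
    c = 1482.35 if p_lv > p_sat else 0.1   # speed of sound [m/s]
    return rho_sat_l + (p_lv - p_sat) / c**2

def residual(p, rho, xi_g):
    beta_g = xi_g * rho / rho_g(p)   # gas volume fraction
    p_lv = (1.0 - beta_g) * p        # pressure acting on the mixture
    # mixture density must match the conserved density rho:
    return beta_g * rho_g(p) + (1.0 - beta_g) * rho_lv(p_lv) - rho

def pressure(rho, xi_g):
    # lower bracket: beta_g = 1 (cell completely filled with gas)
    p_lo = xi_g * rho * p_ref / rho_g_ref
    return brentq(residual, p_lo, 1e9, args=(rho, xi_g))

print(pressure(rho=500.0, xi_g=1e-4))   # two-phase mixture with gas
\end{verbatim}
For example, for a cell with $\rho=500\,\si{kg/m^3}$ and $\xi_g=10^{-4}$, the computed pressure lies a few \si{\kilo\pascal} above $p_\mathrm{sat}$, reflecting the added partial pressure of the gas.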
\subsection{Set-up} We consider a bubble with an initial radius $R_{0}=400\,\si{\micro\metre}$. Note that previous investigations have shown that the normalized rebound~\citep{Akhatov:2001hy} and the energy partitioning~\citep{Tinguely:2012wo} are independent of the bubble size. The bubble is placed at the center of a box with an extent of $500\, R_{0}$ in each Cartesian direction. Taking advantage of symmetry, only an eighth of the bubble is simulated. The domain is discretized with an equidistant grid within a cubic sub-domain with an edge length of $1.25\; R_{0}$, and for the outer part a grid stretching is applied. Simulations are performed on different grid levels, defined by the number of cells over the initial radius $N_C/R_{0}$. If not stated otherwise, the results are for a grid resolution of $N_C/R_{0}=80$. The pressure field is initialized with a pressure jump at the pseudo phase boundary. A constant CFL number of 1.4 is used. \begin{figure}[!tb] \centering \subfigure[]{\includegraphics[height=4cm]{01_a_fig_setup.pdf}} \subfigure[]{\includegraphics[height=4cm]{01_b_grid_zoom.pdf}} \caption{Simulation set-up. (a) Planar sketch of the numerical set-up, (b) Grid in the near bubble region and initialized bubble.} \label{fig:setup_sph} \end{figure} For this investigation, the initial gas content in the bubble $p_{g,0}$ and the driving pressure difference $\Delta p$ are varied, covering different combinations of $\Delta p = [10^4\,\si{Pa},\, 10^5 \,\si{Pa}]$ and $p_{g,0} =[0\,\si{Pa},\;1000 \,\si{Pa}]$. During the simulations, pressure signals are recorded at certain radial positions from the bubble center. In this section (\cref{s:spherical}), time is normalized with the Rayleigh collapse time for spherical collapses~\citep{Rayleigh:1917} \begin{equation} \tau_c=0.915\cdot R_{0}\sqrt{\rho_l / \Delta p}. \label{eq:t_c} \end{equation} Following previous studies~\citep{Beig:2018ga,Trummler:2020JFM,Trummler:2021if}, the pressure is normalized in both sections (\cref{s:spherical,s:aspherical}) using \begin{equation} p^{\ast} = c_l \sqrt{\rho_l \Delta p}. \end{equation} \subsection{Results} \label{ss:results_spherical} \Cref{fig:co_dyn}~(a) depicts the bubble collapse and the rebound at different time instants for $\Delta p = 10\,\si{kPa}$ with $p_{g,0} = \{ 0 \;\si{Pa}, \,1000 \;\si{Pa}\}$. The left time series presents the bubble collapse without gas, showing the initial bubble, the situation shortly before the collapse, and the emitted shock wave after collapse. Analogously, the dynamics of a bubble with a high gas content is visualized in the right time series. In this case, a rebound is visible at $t = 1.44\;\tau_{c}$. In \cref{fig:co_dyn}~(b), the near bubble region is shown to visualize the rebound behavior. As can be seen at the last two time instants ($t/\tau_c=1.156$ and $t/\tau_c=1.532$), the rebound bubble is not completely spherical, which is due to a more accurate numerical reconstruction in the direction of the grid orientation. \begin{figure}[!tb] \centering \subfigure[]{\includegraphics[width=0.7\linewidth]{02_a_time_series.pdf}}\\ \subfigure[]{\includegraphics[width=0.7\linewidth]{02_b_bubble_rebound.pdf}} \caption{Time series of bubble collapse and rebound.
(a) Pressure field for $\Delta p = 10\,\si{kPa}$ with $p_{g,0} = 0\,\si{Pa}$ (left) and with $p_{g,0} = 1000\,\si{Pa}$ (right), (b) Near bubble region to visualize the rebound for $\Delta p = 10 \,\si{kPa}$ with $p_{g,0} = 1000 \,\si{Pa}$.} \label{fig:co_dyn} \end{figure} \Cref{fig:results}~(a) compares the temporal evolution of the normalized bubble radius $R/R_{0}$ for different gas contents. In configurations with gas, the bubbles rebound significantly. Besides the rebound, the non-condensable gas in the vapor bubble also affects the intensity of the emitted pressure wave. \Cref{fig:results}~(b) shows the pressure monitored at certain radial positions from the bubble center for different gas contents. The radial decay of the maximum pressure is evident, and the presence of gas reduces the maximum pressure. The damping effect of the gas is more distinct for probes closer to the bubble center. Additionally, the pressure signals reveal that the collapse times with and without gas closely match. \Cref{fig:results}~(c) compares the pressure maximum in the near bubble region. Again, the damping effect of the gas and the decay of the damping effect with increasing distance to the focus point are evident. \begin{figure*}[!tb] \centering \includegraphics[height=7.0cm]{03_r_t.pdf} \caption{Simulation results. (a) Temporal evolution of the bubble radius for different $p_{g,0}$ (see legend); (b) Pressure signals from the probes at radial positions $0.1\,R_0$ to $0.35\,R_0$ in steps of $0.05\,R_0$, with line color gradation corresponding to the probe position for the cases $p_{g,0} = 0\,\si{Pa}$ (gray ($0.1\,R_0$) to black ($0.35\,R_0$)) and $p_{g,0} = 1000\,\si{Pa}$ (light blue ($0.1\,R_0$) to blue ($0.35\,R_0$)); (c) Maximum pressure compared to that without gas. (Grid resolution 80 $N_C/R_0$).} \label{fig:results} \end{figure*} The grid resolution is known to affect the minimum bubble radius and the rebound~\citep{Beig:2018ga,schmidmayer19, Trummler:2018ww} as well as the intensity of the pressure peaks~\citep{Mihatsch:2015db,Schmidt:2014ev,Trummler:2018ww}. To assess the grid influence, we have conducted a grid study. \Cref{fig:grid}~(a) depicts the temporal evolution of the bubble radius for different grid resolutions. As expected, the rebound increases with increasing grid resolution and approaches the one predicted by the Keller-Miksis equation. \Cref{fig:grid}~(b) compares the maximum pressure of the configuration with gas ($p_{max}^{gas}$) to that without gas ($p_{max}$). At all grid resolutions, the gas has a damping effect on the maximum pressure, and a higher grid resolution results in stronger damping, since the focus point is better resolved and the transport of the emitted shock wave is less dissipative. In conclusion, both the rebound and the damping of the maximum wall pressure show a grid dependence, leading to a more pronounced gas effect at higher grid resolutions. However, as discussed and shown in \citet{Trummler:2018ww} and illustrated here in \cref{fig:grid}, the gas effect is already captured at the coarsest grid resolution of 20 $N_C/R_0$. At a grid resolution of 80 $N_C/R_0$ (\cref{fig:results}), the gas effect is clearly pronounced for the considered $p_{g,0}$. Based on our observations, we consider a grid resolution of 80 $N_C/R_0$ to be a good compromise between accuracy and computational cost. \begin{figure*}[!tb] \centering \subfigure{\includegraphics[height=7.0cm]{04_grid.pdf}} \caption{Grid effect on the rebound and the damping of the maximum pressure by the gas for $\Delta p=10^5\,\si{Pa}$ and $p_g=1000\,\si{Pa}$.
(a) Temporal evolution of the bubble radius, (b) Maximum pressure compared to that without gas. } \label{fig:grid} \end{figure*} \subsection{Validation with Energy Partitioning Model} \label{ss:val} \citet{Tinguely:2012wo} experimentally and theoretically investigated the effects of the driving pressure difference $\Delta p$ and the initial gas content $p_{g,0}$ on bubble dynamics and shock wave emission. They postulated that the initial energy of a bubble $E_{0}$ mainly partitions into rebound energy $E_{reb}$ and shock wave energy $E_{sw}$ \begin{equation} E_0 = E_{reb} + E_{sw}, \label{eq:e0} \end{equation} which, in terms of the normalized energies $\epsilon_{reb} =E_{reb}/E_0$ and $\epsilon_{sw} =E_{sw}/E_0$, reads \begin{equation} \epsilon_{reb}+\epsilon_{sw}=1. \label{eq:eps} \end{equation} The initial energy and the rebound energy are the potential energies at the corresponding time instants~\citep{obreschkow2006cavitation} \begin{equation} E_{0} = \frac{4 \pi }{3} R_0^3 \Delta p \quad \mathrm{and} \quad E_{reb} =\frac{4 \pi }{3} R_{reb}^3 \Delta p, \label{eq:Ereb} \end{equation} and thus the normalized rebound energy $\epsilon_{reb}$ is \begin{equation} \epsilon_{reb} =E_{reb}/E_0=\left(R_{reb}/R_{0}\right)^3 . \label{eq:eps_reb} \end{equation} The shock wave energy $E_{sw}$ at a distance $d$ from the focus point reads~\citep{Vogel:b15KVKtx} \begin{equation} E_{sw} =\frac{4 \pi d^2}{\rho_l c_l} \int p(t)^2 dt . \label{eq:e_sw} \end{equation} Based on the assumption that the pressure signals $p(t)$ have a universal shape that scales with the peak value $p_{max}$, one can estimate $E_{sw}\propto p_{max}^2$. Hence, the normalized shock wave energy $\epsilon_{sw}$ can be assessed by the relative damping of the peak values as \begin{equation} \epsilon_{sw} \approx (p_{max}/p_{max,no\,rebound})^2. \label{eq:eps_sw2} \end{equation} Alternatively, the normalized shock wave energy $\epsilon_{sw}$ can be approximated using \cref{eq:eps} with \begin{equation} \epsilon_{sw} \approx 1 - \epsilon_{reb}. \label{eq:eps_sw1} \end{equation} \citet{Tinguely:2012wo} derived a theoretical model using the inviscid Keller-Miksis equation (\cref{eq:km}) to predict the energy partitioning. Based on this model and experimental measurements, they were able to show that the energy fractions of rebound energy $\epsilon_{reb}$ and shock wave energy $\epsilon_{sw}$ depend on a single parameter \begin{equation} \psi = \frac{\Delta p \gamma^{\,6}}{{p_{g,0}}^{1/\gamma}(\rho_l c_l^2)^{1-1/\gamma}}\, . \label{eq:xi_energy} \end{equation} \Cref{fig:energies} plots the energy partitioning over $\psi$. The shock wave energy fraction increases with $\psi$, and thus increases with the driving pressure difference and decreases with the partial pressure of free gas. Conversely, the rebound is enhanced for a lower driving pressure difference and a higher gas content. Additionally, experimental data of \citet{Tinguely:2012wo}, including measurement error bars, and data of \citet{Fujikawa:1980jj} are shown in \cref{fig:energies}. We also included bubble rebound data obtained for varying driving pressures by \citet{Supponen:2018tv}. They used partially degassed water, and we have assumed $p_{g,0}=1.5\,\si{Pa}$ and $\gamma=1.4$. \begin{figure*}[!tb] \centering \includegraphics[width=0.65\linewidth]{05_km.pdf} \caption{Simulation results in comparison with the theoretical energy partitioning proposed by \citet{Tinguely:2012wo} and data from the literature. The solid curves are the results from the theoretical model.
Filled symbols refer to data from the literature and empty ones to simulation results, where the color corresponds to the energy. The experimentally obtained values by \citet{Tinguely:2012wo} ({\tiny $\blacksquare $}) are shown along with the measurement error bars. The data of \citet{Supponen:2018tv} ($*$) consists only of rebound data, and we have assumed $p_{g,0}=1.5\,\si{Pa}$ and $\gamma=1.4$. } \label{fig:energies} \end{figure*} For the comparison of the simulation results with the energy partitioning model, the normalized rebound energy $\epsilon_{reb}$ is obtained from the maximum radius of the bubble in the first rebound using \cref{eq:eps_reb}. For the normalized shock wave energy $\epsilon_{sw}$, the pressure signals recorded at the bubble center are numerically integrated and set in relation to the respective values without gas, where no rebound occurs. We have also evaluated the square of the ratios of the collapse pressures, see \cref{eq:eps_sw2}, and obtained comparable results. The evaluated energy partitioning from the simulation data is included in \cref{fig:energies}. At high $\psi$-values ($\psi \geq 200$), our simulation results agree very well with the theoretical model and the literature data, while at lower $\psi$-values the simulation results show a smaller rebound and a higher damping effect than predicted by the theoretical model, see $\psi =60$. Thus, we conclude that our model is well suited to study configurations corresponding to high $\psi$-values with $\psi \geq 200$. Further, the simulation data also show a clear $\psi$-equivalence, i.e., equal $\psi$-values lead to equal normalized rebound and shock wave energies (see upward-pointing and downward-pointing triangles in \cref{fig:energies}). This successful validation allows for the application of the model to more complex configurations, such as the collapse of a wall-attached bubble. \section{Aspherical collapse of a wall-attached bubble} \label{s:aspherical} \subsection{Set-up} \Cref{fig:setup_as} shows the investigated configurations with the two considered stand-off distances from the wall, $S/R_0=-0.25$ and $S/R_0=0.5$. Following previous numerical studies~\citep{Lauer:2012jh,Oerley:2016diss,Koukouvinis:2016ir, Trummler:2021if,Trummler:2020JFM}, we consider an initial bubble radius $R_{0}$ of $400\,\si{\micro\metre}$. For non-spherical bubble collapses, it has been shown that the jet characteristics~\citep{Supponen:2016jnb} and the energy partitioning into rebound and shock wave energy~\citep{Supponen:2017wl,Supponen:2018tv} are determined by a dimensionless anisotropy parameter. In case of anisotropy due to nearby walls, this parameter is independent of the bubble size and only a function of $S/R_0$. We initialize the pressure field with a jump at the bubble interface with a driving pressure difference of $\Delta p =10^5\,\si{Pa}$. We consider either pure vapor bubbles ($p_{g,0} = 0 \,\si{Pa}$) or vapor bubbles containing a non-condensable gas content of $p_{g,0} = 160\,\si{Pa}$. This value is chosen based on the following considerations. Using experimental data, \citet{Tinguely:2012wo} estimated the initial partial gas content of non-condensable gas inside laser-generated bubbles in water to be $7\pm3.5\,\si{Pa}$. Since we model the gas as isothermal and not adiabatic, we decided to use the isothermal $\psi$-equivalent of the lower limit of the estimated gas content of $p_{g,0}=3.5\,\si{Pa}$.
For an adiabatic index of $\gamma=1.4$, a driving pressure of 1 bar, and water, this value corresponds to $\psi = 630$. Thus, we consider the $\psi$-equivalent gas content for $\gamma=1$: in this case, \cref{eq:xi_energy} reduces to $\psi=\Delta p/p_{g,0}$, so that $p_{g,0}=10^5\,\si{Pa}/630\approx160\,\si{Pa}$. \begin{figure}[tb] \centering \includegraphics[width=0.4\columnwidth]{06_setup_asb_red_w_probe.pdf} \caption{Sketch of the investigated configurations $S/R_0=-0.25$ and $S/R_0=0.5$. The red dot marks the position where the pressure signals are monitored.} \label{fig:setup_as} \end{figure} Taking advantage of symmetry, only a quarter of the bubble is simulated. The bubble is placed in the center of a rectangular domain with an extension of $125\, R_{0}$ in the wall-normal direction and $250\, R_{0}$ in the wall-parallel directions. The domain is discretized with an equidistant grid within the near bubble region (80 $N_C$/$R_{0}$), and for the outer part a grid stretching is applied. The grid study presented in \cref{ss:results_spherical} demonstrated that this resolution is a good compromise between accuracy and required resources. In total, the grid has about 15 million cells. A constant CFL number of $1.4$ is used, which corresponds to a time step of $\Delta t \approx 1.5\,\si{\nano\second}$. To obtain dimensionless quantities, time is normalized with \begin{equation} t^{\ast}= R_{0}\sqrt{\rho_l / \Delta p}, \label{eq:t_ast} \end{equation} which is an estimate of the collapse time of a near-wall bubble collapse \citep{Plesset:1971hu}. The wall has a retarding effect on the collapse, and thus $t^{\ast}$ is longer than the Rayleigh collapse time for spherical collapses ($\tau_c=0.915\,t^{\ast}$, see also \cref{eq:t_c}). Velocity and pressure are normalized as \begin{equation} u^{\ast} = \sqrt{\frac{\Delta p}{\rho_l}}, \quad \text{and} \quad p^{\ast} = c_l \sqrt{\rho_l \Delta p}. \end{equation} Note that $p^{\ast}$ corresponds to a water hammer pressure induced by the velocity $u^{\ast}$, see also \citet{Trummler:2021if}. The employed expression for $p^{\ast}$ can be related to the scaling found by \citet{Supponen:2017wl} for the maximum pressure at non-spherical bubble collapses. At a fixed stand-off distance (and thus fixed anisotropy parameter), the maximum pressure measured at a distance $d$ from the focus point is \begin{equation} p_\mathrm{max}\propto c_l \sqrt{\rho_l \Delta p} (R_0/d)^{1.25}= p^{\ast}(R_0/d)^{1.25}. \label{eq:supponen_pmax} \end{equation} During the simulations, we monitor the integral vapor and gas volumes and the flow field at selected positions, and we evaluate the maximum pressure induced within the total simulation time. In the results, the pressure signals at the wall center are presented. \subsection{Results} In the following, simulation results of a collapsing bubble with a negative stand-off distance (\cref{sss:neg}) and with a positive one (\cref{sss:pos}) are presented and discussed. \begin{figure*} \centering {\includegraphics[width=0.8\linewidth]{07_m0100.pdf}} \caption{Collapsing wall-attached bubble with $S/R_0=-0.25$. Top panel: Sketch of the general collapse behavior. Middle panel: Time series showing pressure and velocity magnitude on the midplane and the isosurface/isoline of 10\% vapor [(i)-(v)], and a comparison of $p_{g,0}=0\,\si{Pa}$ and $p_{g,0}=160\,\si{Pa}$ with additional olive isolines of 10\% gas [(iii)-(iv)]. Note that the discontinuities in the isosurface are due to post-processing issues. Bottom panel: Temporal evolution of the bubble volume (left), and recorded pressure signals at the wall center (right).
Reproduced from \citet{Trummler:2021diss}. } \label{fig:m0100_ts} \end{figure*} \begin{figure*} \centering \subfigure[]{\includegraphics[trim={0 0 0 0},clip,height=6cm]{08_a_pmax_fin_m0100.pdf}} \hspace{0.4cm} \subfigure[]{\includegraphics[height=6cm]{08_b_pmax_m0100_w_UTS.pdf}} \caption{Maximum pressure induced by a collapsing bubble with $S/R_0=-0.25$. (a) $p_\mathrm{max}/p^{\ast}$ on the wall (orientation rings at $r/R_0=0.25,0.5,0.75,1$) and midplane (initial bubble boundary indicated), (b) extracted $p_{max}/p^{\ast}$ and ultimate tensile strength (UTS) of aluminum ($70\,\si{\mega\pascal}$). } \label{fig:m0100_pmax} \end{figure*} \subsubsection{Wall-attached bubble with negative stand-off distance} \label{sss:neg} The collapse behavior of a vapor bubble with $S/R_0=-0.25$ is visualized in \cref{fig:m0100_ts}. Additionally, the comparison of a vapor bubble and a vapor-gas bubble for two selected time steps and a schematic representation of the collapse behavior are shown. The corresponding temporal evolution of the bubble volume and the recorded pressure signals at the wall center are at the bottom of the figure. The wall-attached bubble is pinched circumferentially at its maximum expansion, resulting in a mushroom shape (\cref{fig:m0100_ts}~(ii)). Such behavior was also reported by \citet{Shima:1977df} and \citet{Lauer:2012jh}. Additionally, a circumferential pinching has also been observed for ellipsoidal bubbles \citep{Pishchalnikov:2018pp, Lechner:2019fp}. The radially inward directed flow reaches very high velocities, here exceeding $200\,\si{m/s}$ ($\approx 20 u^{\ast}$). Later, the collision of the waterfronts induces a high pressure peak, which can be seen in the pressure signals (\cref{fig:m0100_ts} bottom right). Shortly afterward, the remaining upper part (the ``mushroom head'') collapses, emitting a shock wave. When this wave reaches the wall, it induces another increase of the pressure signals (\cref{fig:m0100_ts} bottom right). Thus, the collision of the waterfronts, and not the collapse, is the central mechanism for the maximum wall pressure, which has also been observed for high driving pressures~\citep{Trummler:2021if}. Due to the conservation of momentum, the preceding radially inward directed flow at the pinching now causes a flow in the upward direction reaching more than $100\,\si{m/s}$ ($\approx 10 u^{\ast}$), see \cref{fig:m0100_ts}~(iv). The rebound takes place in the shear layer, resulting in a vapor torus (\cref{fig:m0100_ts}~(v)). If gas is present in the vapor bubble, the collapse is slightly decelerated, and a higher gas content occurs at the boundary where the vapor has already collapsed, see \cref{fig:m0100_ts}~(iii')(iv'). The gas decelerates the circumferential pinching and reduces its velocity by 3.25\%. The reduced velocity correlates with a damped maximum pressure at the collision (see \cref{fig:m0100_ts} bottom right). As expected, the rebound with gas is stronger, as visualized in \cref{fig:m0100_ts}~(iv'). \Cref{fig:m0100_pmax} shows the distribution of the maximum pressure on the midplane and the wall. The highest pressure occurs at the focus point of the collapse. The high pressure along the symmetry line and in the center of the wall is due to the collision of the liquid fronts. The gas dampens the maximum pressure at the focus point by 0.8\%, which corresponds to the damping effect at a spherical collapse (see \cref{s:spherical}), and the maximum wall pressure by 1.34\%. Based on the maximum wall pressures, material damage can be estimated.
In experiments, aluminum with an ultimate tensile strength (UTS) of about $70\,\si{\mega\pascal}$ ($\approx 4.7p^{\ast}$)~\citep{malmberg2015aluminium} is often used. Taking the UTS as the threshold, the estimated wall damage for aluminum is indicated in \cref{fig:m0100_pmax} and would be a central, pit-shaped surface deformation. \subsubsection{Wall-attached bubble with positive stand-off distance} \label{sss:pos} \Cref{fig:p0200_ts} visualizes the collapse of a vapor bubble. Additionally, the comparison of a vapor bubble and a vapor-gas bubble for two selected time steps, a schematic representation of the collapse behavior, the corresponding temporal evolution of the bubble volume, and the recorded pressure signals at the wall center are shown. In this configuration, the least resistance of the bubble is in the wall-normal direction, and the surrounding pressure distribution leads to an indentation on the upper side. A wall-directed liquid jet forms, penetrating the bubble and resulting in a torus. Then the first collapse takes place, followed by a toroidal rebound and a second collapse. This behavior is well known and has been analyzed in several experimental~\citep{Tomita:1986gy,Philipp:1998eg} and numerical studies~\citep{Lauer:2012jh,Trummler:2021if}. \begin{figure*} \centering {\includegraphics[width=0.8\linewidth]{09_p0200.pdf}} \caption{Collapsing wall-attached bubble with $S/R_0=0.5$. Top panel: Sketch of the general collapse behavior. Middle panel: Time series showing pressure on the midplane and the isosurface of 10\% vapor [(i)-(iv)], and a comparison of $p_{g,0}=0\,\si{Pa}$ and $p_{g,0}=160\,\si{Pa}$ with additional isolines of 10\% gas (olive) [(ii)-(iii)]. Bottom panel: Temporal evolution of the bubble volume (left), and recorded pressure signals at the wall center (right). Reproduced from \citet{Trummler:2021diss}. } \label{fig:p0200_ts} \end{figure*} The wall-centered pressure signals (\cref{fig:p0200_ts} bottom right) show the impact of the jet, followed by two pressure peaks induced by the first collapse. These peaks are significantly higher than the jet-induced one, which agrees with the literature~\citep{Lauer:2012jh,Philipp:1998eg}. The presence of gas, again, results in a higher gas content at the boundary, see \cref{fig:p0200_ts}~(ii'),(iii'). Furthermore, it delays the first collapse and leads to a stronger rebound and a delayed second collapse (\cref{fig:p0200_ts}~bottom). The gas attenuates the velocity of the wall-directed jet and thus the intensity of the jet impact by 3.91\%, as can be seen in the pressure signals. The collapse-induced pressure peak is also damped by the gas. \begin{figure*} \centering \footnotesize \subfigure[]{\includegraphics[height=6cm]{10_a_pmax_fin_p0200.pdf}} \hspace{0.4cm} \subfigure[]{\includegraphics[height=6cm]{10_b_pmax_p0200_w_UTS.pdf}} \caption{Maximum pressure induced by a collapsing bubble with $S/R_0=0.5$. (a) $p_\mathrm{max}/p^{\ast}$ on the wall (orientation rings at $r/R_0=0.25,0.5,0.75,1$) and midplane (initial bubble boundary indicated), (b) extracted $p_{max}/p^{\ast}$ and ultimate tensile strength (UTS) of aluminum ($70\,\si{\mega\pascal}$). } \label{fig:p0200_pmax} \end{figure*} The distribution of the maximum pressure is shown in \cref{fig:p0200_pmax}. The first collapse induces pressure peaks in the center of the collapsing torus and high pressures at the wall below. However, the highest wall pressure is recorded in the center and is induced by the superposition of the emitted pressure waves at the first collapse.
The second collapse takes place radially further outwards and causes significantly lower wall pressures than the first collapse. If gas is present, the maximum wall pressure induced by the second collapse is higher and shifted radially inwards. Apart from this, the gas slightly dampens the maximum pressures. The damping of the maximum wall pressure at the first collapse is 6.6\%, which is significantly higher than in the other configurations. On an aluminum specimen, the collapse would probably lead to ring-shaped damage with radius $r=0.35R_0$ and an indentation in the center, which matches experimental observations~\citep{Philipp:1998eg}. The maximum wall pressure at $S/R_0=0.5$ is about a third of the one at $S/R_0=-0.25$. This is consistent with the observations of \citet{Lauer:2012jh,Trummler:2021if}, who found that the maximum wall pressure decreases with increasing stand-off distance and that this decrease is less pronounced for negative distances. \section{Conclusions and Discussion} \label{s:ConclusionAndDiscussion} We have suggested a modified multi-component model to simulate vapor bubbles containing free, non-condensable gas. By numerical simulations, we were able to reproduce the physical effects of gas inside a vapor bubble. Free gas in a vapor bubble leads to a stronger rebound and dampens the emitted shock wave. This effect is already visible with coarse grid resolutions but is more pronounced for higher grid resolutions. Additionally, we were able to reproduce the partitioning into rebound and shock wave energy proposed by~\citet{Tinguely:2012wo} and could confirm a $\psi$-equivalence. This validation enabled us to investigate the effect of free gas inside vapor structures on more complex configurations such as the collapse of wall-attached bubbles. The second part of the paper presented simulation results of a collapsing wall-attached bubble under atmospheric pressure. We investigated the collapse behavior and pressure impact for the selected stand-off distances $S/R_0=-0.25$ and $S/R_0 = 0.5$. The observed collapse behavior resembles that of previous investigations at higher driving pressure differences. Our simulation results provide deeper and additional insights into the rebound behavior and the relevant mechanisms for pressure peaks. We showed that at $S/R_0=-0.25$ the collision of the circumferential pinching induces the maximum wall pressure and not the final collapse. At $S/R_0 = 0.5$, we captured the first and second toroidal collapse and the induced wall pressures of both. The induced wall pressure of the second collapse is weaker and located radially further outward. For aspherical collapses \new{under atmospheric conditions, we observed a small effect of the non-condensable gas in our simulation results. Direct comparison of a collapsing vapor-gas bubble with a collapsing vapor bubble showed that the presence of gas slightly decelerates the collapse and reduces the velocity of the liquid jets, i.e. the circumferential pinching or the wall-directed jet. As expected,} gas dampens the collapse pressure and enhances the rebound. We found that the damping of the maximum wall pressure by the gas depends on the mechanism inducing this pressure peak. \new{ In the case of a toroidal collapse, the observed damping of the maximum wall pressure is 6.6\%, while for the bubble with a negative stand-off distance it is 1.34\%. } Nevertheless, our findings for the gas effect in aspherical configurations might be biased by the employed isothermal modeling of the gas.
Since we model the gas isothermally, we initialize a relatively high gas content of $p_{g,0}=160\,\si{Pa}$ compared to the assumed one of $p_{g,0}=3$--$10\,\si{Pa}$. The $\psi$-equivalence, which justifies this initial value, was only shown for spherical collapses. To evaluate the effect of the gas in detail, adiabatic modeling of the gas has to be employed. Moreover, further experimental and numerical investigations are generally necessary to quantify the effect of gas inside vapor bubbles. A major uncertainty of these studies is that the actual gas content in practical applications is generally unknown and very difficult to estimate. \new{The numerical framework presented can be used to study the effects of gas in configurations of interest. Accurate knowledge of the gas effect in aspherical collapses allows precise control of the effects on the collapse pressure, and thus the destruction potential, as well as on the rebound behavior. Such knowledge can be advantageous for, e.g., biomedical applications, such as urinary stone ablation~\citep{Pishchalnikov:2018pp}, needle-free injection with pressurized auto-injectors~\citep{Veilleux:2018gy}, or new technologies, such as surface cleaning~\citep{Reuter:2017bu} and micro pumps driven by bubble rebound~\citep{dijkink2008laser}. Furthermore, the findings can also be applied to control erosion aggressiveness~\citep{Schmidt:2014ev}. } \section*{Acknowledgment} The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputers SuperMUC and SuperMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} \n A group $G$ is called the internal Zappa-Sz\'{e}p product of its two subgroups $H$ and $K$ if $G=HK$ and $H\cap K = \{1\}$. The Zappa-Sz\'{e}p product is a natural generalization of the semidirect product of two groups in which neither of the factors is required to be normal. If $G$ is the internal Zappa-Sz\'{e}p product of $H$ and $K$, then $K$ appears as a right transversal to $H$ in $G$. Let $h\in H$ and $k\in K$. Then $kh=\sigma(k,h)\theta(k,h)$, where $\sigma(k,h)\in H$ and $\theta(k,h)\in K$. This determines the maps $\sigma: K \times H \rightarrow H$ and $\theta: K\times H \rightarrow K$. These maps are called a matched pair of groups. We denote $\sigma(k,h)= k\cdot h$ and $\theta(k,h) = k^{h}$. These maps satisfy the following conditions (see \cite{af}) \begin{itemize} \item[$(C1)$] $1\cdot h = h$ and $k^{1} = k$, \item[$(C2)$] $k\cdot 1 = 1 = 1^{h}$, \item[$(C3)$] $kk^{\prime}\cdot h = k\cdot (k^{\prime}\cdot h)$, \item[$(C4)$] $(kk^{\prime})^{h} = k^{k^{\prime}\cdot h}{k^{\prime}}^{h}$, \item[$(C5)$] $k\cdot (hh^{\prime}) = (k\cdot h)(k^{h}\cdot h^{\prime})$, \item[$(C6)$] $k^{hh^{\prime}} = (k^{h})^{h^{\prime}}$, \end{itemize} for all $h,h^{\prime} \in H$ and $k,k^{\prime}\in K$. \vspace{.2cm} \n On the other hand, let $H$ and $K$ be two groups. Let $\sigma: K \times H \rightarrow H$ and $\theta: K\times H \rightarrow K$ be two maps defined by $\sigma(k,h)= k\cdot h$ and $\theta(k,h) = k^{h}$ which satisfy the above conditions. Then, the external Zappa-Sz\'{e}p product $G=H\bowtie K$ of $H$ and $K$ is the group defined on the set $H\times K$ with the binary operation defined by \begin{equation*} (h,k)(h^{\prime},k^{\prime}) = (h(k\cdot h^{\prime}),k^{h^{\prime}}k^{\prime}). \end{equation*} \n The internal Zappa-Sz\'{e}p product is isomorphic to the external Zappa-Sz\'{e}p product (see \cite[Proposition 2.4, p. 4]{af}). We will identify the external Zappa-Sz\'{e}p product with the internal Zappa-Sz\'{e}p product. \vspace{.2cm} \n The Zappa-Sz\'{e}p product of two groups was introduced by G. Zappa in \cite{gz}. J. Sz\'{e}p studied such products in a series of papers (a few of them are \cite{sz1,sz2,sz3,sz4}). From the QR decomposition of matrices, one concludes that the general linear group $GL(n,\mathbb{C})$ is a Zappa-Sz\'{e}p product of the unitary group and the group of upper triangular matrices. Z. Arad and E. Fisman in \cite{za} studied the finite simple groups as a Zappa-Sz\'{e}p product of two groups $H$ and $K$ with the orders of $H$ and $K$ coprime. In the same paper, they studied the finite simple groups as a Zappa-Sz\'{e}p product of two groups $H$ and $K$ with one of $H$ or $K$ a $p$-group, where $p$ is a prime. From the main result of \cite{ph}, one observes that a finite group $G$ is solvable if and only if $G$ is a Zappa-Sz\'{e}p product of a Sylow $p$-subgroup and a Sylow $p$-complement. \vspace{.2cm} \n Note that, if either of the actions $k\cdot h$ or $k^h$ is a group homomorphism, then the Zappa-Sz\'{e}p product reduces to the semidirect product of groups. M. J. Curran \cite{crn2008} and N. C. Hsu \cite{nch} studied the automorphisms of the semidirect product of two groups as the $2\times 2$ matrices of maps satisfying certain conditions. In this paper (with the same terminology as in \cite{crn2008} and \cite{nch}), we have found the automorphism group of the Zappa-Sz\'{e}p product of two groups as the $2\times 2$ matrices of maps satisfying certain conditions.
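Since $\sigma$ and $\theta$ are determined by the factorization $G=HK$, the conditions $(C1)$--$(C6)$ can be verified mechanically for any concrete exact factorization. As an illustration (ours, not taken from the references above), the following Python sketch computes $\sigma$ and $\theta$ for the factorization of $S_4$ into a Sylow $2$-subgroup $H \simeq D_4$ and a Sylow $3$-subgroup $K$ of order $3$ --- a Zappa-Sz\'{e}p product in which neither factor is normal --- and checks $(C1)$--$(C6)$ by brute force; the permutation encoding and helper names are ours:

\begin{verbatim}
from itertools import product

IDENT = (0, 1, 2, 3)              # identity permutation of {0,1,2,3}

def mul(p, q):                    # composition: (p q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def closure(gens):                # subgroup generated by gens
    elems = {IDENT}
    while True:
        new = {mul(g, s) for g in elems for s in gens} - elems
        if not new:
            return elems
        elems |= new

H = closure({(1, 2, 3, 0), (0, 3, 2, 1)})  # dihedral group D_4, order 8
K = closure({(1, 2, 0, 3)})                # cyclic group of order 3

# Every g in S_4 factors uniquely as g = h k with h in H, k in K.
fact = {mul(h, k): (h, k) for h, k in product(H, K)}
assert len(fact) == 24

def sigma(k, h):                  # k h = sigma(k,h) theta(k,h)
    return fact[mul(k, h)][0]

def theta(k, h):
    return fact[mul(k, h)][1]

for h, hp, k, kp in product(H, H, K, K):
    assert sigma(IDENT, h) == h and theta(k, IDENT) == k          # (C1)
    assert sigma(k, IDENT) == IDENT and theta(IDENT, h) == IDENT  # (C2)
    assert sigma(mul(k, kp), h) == sigma(k, sigma(kp, h))         # (C3)
    assert theta(mul(k, kp), h) == mul(theta(k, sigma(kp, h)),
                                       theta(kp, h))              # (C4)
    assert sigma(k, mul(h, hp)) == mul(sigma(k, h),
                                       sigma(theta(k, h), hp))    # (C5)
    assert theta(k, mul(h, hp)) == theta(theta(k, h), hp)         # (C6)
print("(C1)-(C6) hold for S_4 = D_4 K")
\end{verbatim}

The same brute-force check applies verbatim to any exact factorization of a finite group and is a convenient sanity test when constructing matched pairs.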
As an application of this matrix description, we have found the automorphism group of the Zappa-Sz\'{e}p product of two cyclic groups in which one is of order $p^2$ and the other is of order $m$. Throughout the paper, $\mathbb{Z}_{n}$ denotes the cyclic group of order $n$ and $U(n)$ denotes the group of units modulo $n$. Also, $Aut(G)$ denotes the group of all automorphisms of a group $G$. Let $U$ and $V$ be groups. Then $CrossHom(U,V)$ denotes the group of all crossed homomorphisms from $U$ to $V$. Also, if $U$ acts on $V$, then $Stab_{U}(V)$ denotes the stabilizer of $V$ in $U$. \section{Structure of Automorphism Group} \n Let $G = H\bowtie K$ be the Zappa-Sz\'{e}p product of two groups $H$ and $K$. Let $U, V$ and $W$ be any groups. $Map(U, V)$ denotes the set of all maps between the groups $U$ and $V$. If $\phi, \psi \in Map(U, V)$ and $\eta \in Map(V, W)$, then $\phi + \psi \in Map (U,V)$ is defined by $(\phi + \psi)(u) = \phi(u)\psi(u)$, $\eta\phi \in Map(U,W)$ is defined by $\eta\phi(u) = \eta(\phi(u))$, $\phi \cdot \psi \in Map(U,V)$ is defined by $(\phi \cdot \psi)(u) = \phi(u)\cdot \psi(u)$ and $\phi^{\psi} \in Map(U, V)$ is defined by $\phi^{\psi}(u) = \phi(u)^{\psi(u)}$, for all $u\in U$. \vspace{.2cm} \n Now, consider the set, \begin{equation*} \mathcal{A} = \left\{\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} \;|\; \begin{matrix} \alpha \in Map(H,H), & \beta \in Map(K,H),\\ \gamma \in Map(H,K),\; \text{and} & \delta \in Map(K,K) \end{matrix} \right\}, \end{equation*} where $\alpha, \beta, \gamma$ and $\delta$ satisfy the following conditions, \begin{itemize} \item[$(A1)$] $\alpha(hh^{\prime}) = \alpha(h)(\gamma(h)\cdot \alpha(h^{\prime}))$, \item[$(A2)$] $\gamma(hh^{\prime}) = \gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime})$, \item[$(A3)$] $\beta(kk^{\prime}) = \beta(k)(\delta(k)\cdot \beta(k^{\prime}))$, \item[$(A4)$] $\delta(kk^{\prime}) = \delta(k)^{\beta(k^{\prime})}\delta(k^{\prime})$, \item[$(A5)$] $\beta(k)(\delta(k)\cdot \alpha(h)) = \alpha(k\cdot h)(\gamma(k\cdot h)\cdot \beta(k^{h}))$, \item[$(A6)$] $\delta(k)^{\alpha(h)}\gamma(h) = \gamma(k\cdot h)^{\beta(k^{h})}\delta(k^{h})$, \item[$(A7)$] For any $h^{\prime}k^{\prime}\in G$, there exists a unique $h\in H$ and $k\in K$ such that $h^{\prime} = \alpha(h)(\gamma(h)\cdot \beta(k))$ and $k^{\prime} = \gamma(h)^{\beta(k)}\delta(k)$. \end{itemize} \n Then, the set $\mathcal{A}$ forms a group with the binary operation defined by \begin{equation*} \begin{pmatrix} \alpha^{\prime} & \beta^{\prime}\\ \gamma^{\prime} & \delta^{\prime} \end{pmatrix}\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha^{\prime}\alpha + \gamma^{\prime}\alpha \cdot \beta^{\prime}\gamma & \alpha^{\prime}\beta + \gamma^{\prime}\beta \cdot \beta^{\prime}\delta\\ (\gamma^{\prime}\alpha)^{\beta^{\prime}\gamma}+ \delta^{\prime}\gamma & (\gamma^{\prime}\beta)^{\beta^{\prime}\delta}+ \delta^{\prime}\delta \end{pmatrix}. \end{equation*} \begin{proposition} Let $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix}\in \mathcal{A}$. Then $\alpha(1) = 1= \beta(1) = \gamma(1) = \delta(1)$. \end{proposition} \begin{proof} Let $h\in H$ be any element. Then, using $(A1)$, $\alpha(h) = \alpha(h1) = \alpha(h)(\gamma(h)\cdot \alpha(1)) $ which implies that $\gamma(h)\cdot \alpha(1) = 1 = \gamma(h)\cdot 1$ by $(C2)$. Thus $\gamma(h)^{-1}\cdot(\gamma(h)\cdot \alpha(1)) = \gamma(h)^{-1}\cdot(\gamma(h)\cdot 1)$. Hence, using $(C1)$, $\alpha(1) = 1$. \vspace{0.2 cm} \n Using $(A2)$, $\gamma(h) = \gamma(h1) = \gamma(h)^{\alpha(1)}\gamma(1)$.
Using $(C1)$, $\gamma(1)=1$. Using a similar argument, we get $\beta(1)=1$ and $\delta(1)=1$. \end{proof} \n Let us define the kernel of the map $\alpha\in Map(H,H)$ as usual, that is, $\ker(\alpha) = \{h\in H \;|\; \alpha(h) = 1\}$. Here, we should remember that the map $\alpha$ need not be a homomorphism. $\ker(\beta), \ker(\gamma)$ and $\ker(\delta)$ are defined in the same sense. \begin{lemma}\label{lk1} \begin{align*} \begin{matrix} (i) \ker(\alpha)\le H, & (ii) \ker(\beta)\le K,\\ (iii) \ker(\gamma)\le H, & (iv) \ker(\delta)\le K,\\ \hspace{1.8cm}(v) \ker(\alpha)\cap \ker(\gamma) = \{1\},& \hspace{1.8cm} (vi) \ker(\beta)\cap \ker(\delta) = \{1\}. \end{matrix} \end{align*} \end{lemma} \begin{proof} \begin{itemize} \item[$(i)$] Let $h$, $h^{\prime}\in \ker(\alpha)$. Then using $(A1)$ and $(C2)$, $\alpha(hh^{\prime}) = \alpha(h)(\gamma(h)\cdot \alpha(h^{\prime})) = \gamma(h)\cdot 1 = 1$. Also, $1= \alpha(1) = \alpha(h^{-1}h) = \alpha(h^{-1})(\gamma(h^{-1})\cdot 1)$. Thus, $\alpha(h^{-1}) = 1$. Hence, $hh^{\prime}$, $h^{-1}\in \ker(\alpha)$ and so $\ker(\alpha)\le H$. \item[$(ii)$] One can easily prove it using a similar argument as in part $(i)$. \item[$(iii)$] Let $h$, $h^{\prime} \in \ker(\gamma)$. Then using $(A2)$ and $(C2)$, $\gamma(hh^{\prime}) = \gamma(h)^{\alpha(h^{\prime})} \gamma(h^{\prime})$ $= 1^{\alpha(h^{\prime})} = 1$. Also, $1 = \gamma(1) = \gamma(hh^{-1}) = \gamma(h)^{\alpha(h^{-1})} \gamma(h^{-1})$. Then, $\gamma(h^{-1}) = 1$ and so, $hh^{\prime}$, $h^{-1} \in \ker(\gamma)$. Hence, $\ker(\gamma) \le H$. \item[$(iv)$] One can easily prove it using a similar argument as in part $(iii)$. \item[$(v)$] Let $h\in \ker(\alpha)\cap \ker(\gamma)$. Then $\alpha(h)= 1 = \gamma(h)$, so both $(h,1)$ and $(1,1)$ represent the identity element of $G$ in the sense of $(A7)$. By the uniqueness in $(A7)$, $h=1$. Hence, $(v)$ holds. \item[$(vi)$] One can easily prove it using a similar argument as in part $(v)$. \end{itemize} \end{proof} \begin{theorem} Let $G = H\bowtie K$ be the Zappa-Sz\'{e}p product of two groups $H$ and $K$, and $\mathcal{A}$ be as above. Then there is an isomorphism of groups between $Aut(G)$ and $\mathcal{A}$ given by $\theta \longleftrightarrow \begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix}$, where $\theta(h) = \alpha(h)\gamma(h)$ and $\theta(k) = \beta(k)\delta(k)$, for all $h\in H$ and $k\in K$. \end{theorem} \begin{proof} \n Let $\theta \in Aut(G)$. By the uniqueness of representation in $G = HK$, we can write $\theta(h) = \alpha(h)\gamma(h)$ and $\theta(k) = \beta(k)\delta(k)$, for all $h\in H$ and $k\in K$; this defines the maps $\alpha\in Map(H,H)$, $\beta\in Map(K,H)$, $\gamma\in Map(H,K)$ and $\delta\in Map(K,K)$. Now, for all $h,h^{\prime}\in H$, $\theta(hh^{\prime}) = \theta(h)\theta(h^{\prime}) = \alpha(h)\gamma(h)\alpha(h^{\prime})\gamma(h^{\prime}) = \alpha(h)(\gamma(h)\cdot \alpha(h^{\prime}))\gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime})$. Thus, $\alpha(hh^{\prime})\gamma(hh^{\prime}) = (\alpha(h)(\gamma(h)\cdot \alpha(h^{\prime})))(\gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime}))$. Therefore, by uniqueness of representation, we have $(A1)$ and $(A2)$. By a similar argument, we get $(A3)$ and $(A4)$. \vspace{.2cm} \n Now, $\theta(kh) = \theta((k\cdot h)(k^{h})) = \theta(k\cdot h)\theta(k^{h}) = \alpha(k\cdot h)\gamma(k\cdot h)\beta(k^{h})\delta(k^{h}) = \alpha(k\cdot h)(\gamma(k\cdot h)\cdot \beta(k^{h}))\gamma(k\cdot h)^{\beta(k^{h})}\delta(k^{h})$. Also, $\theta(kh) = \theta(k)\theta(h) = \beta(k)\delta(k)\alpha(h)\gamma(h) = \beta(k)(\delta(k)\cdot \alpha(h))\delta(k)^{\alpha(h)}\gamma(h)$.
Therefore, by the uniqueness of representation, $\beta(k)(\delta(k)\cdot \alpha(h)) = \alpha(k\cdot h)(\gamma(k\cdot h)\cdot \beta(k^{h}))$ and $\delta(k)^{\alpha(h)}\gamma(h) = \gamma(k\cdot h)^{\beta(k^{h})}\delta(k^{h})$, which proves $(A5)$ and $(A6)$. Finally, $(A7)$ holds because $\theta$ is onto. Thus, to every $\theta \in Aut(G)$ we can associate the matrix $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} \in \mathcal{A}$. This defines a map $T: Aut(G)\longrightarrow \mathcal{A}$ given by $\theta \longmapsto \begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix}$. \vspace{.2cm} \n Now, if $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} \in \mathcal{A}$ satisfies the conditions $(A1)-(A7)$, then we associate to it the map $\theta : G\longrightarrow G$ defined by $\theta(h) = \alpha(h)\gamma(h)$ and $\theta(k) = \beta(k)\delta(k)$, for all $h\in H$ and $k\in K$. Using $(A1)-(A6)$, one can check that $\theta$ is an endomorphism of $G$. Also, by $(A7)$, the map $\theta$ is onto. Now, let $hk\in \ker(\theta)$. Then $\theta(hk) = 1$. Therefore, $\alpha(h)(\gamma(h)\cdot \beta(k))\gamma(h)^{\beta(k)}\delta(k) = 1$ and so, by the uniqueness of representation, $\alpha(h)(\gamma(h)\cdot \beta(k)) = 1$ and $\gamma(h)^{\beta(k)}\delta(k) = 1$. Again, by the uniqueness of representation and using $(C1), (C2), (C3)$ and $(C6)$, we get $\alpha(h) = 1 = \gamma(h)$ and $\beta(k) = 1 = \delta(k)$. Therefore, by the Lemma \ref{lk1} $(v)$ and $(vi)$, $h = 1 = k$ and so, $\ker(\theta) = \{1\}$. Thus, $\theta$ is one-one and hence, $\theta \in Aut(G)$. Thus, $T$ is a bijection. Let $\alpha,\beta, \gamma$ and $\delta$ be the maps associated with $\theta$ and $\alpha^{\prime},\beta^{\prime}, \gamma^{\prime}$ and $\delta^{\prime}$ be the maps associated with $\theta^{\prime}$. Now, for all $h\in H$ and $k\in K$, we have \begin{eqnarray*} \theta^{\prime}\theta(h) &=& \theta^{\prime}(\alpha(h)\gamma(h))\\ &=& \alpha^{\prime}(\alpha(h))\gamma^{\prime}(\alpha(h))\beta^{\prime}(\gamma(h))\delta^{\prime}(\gamma(h))\\ &=& \alpha^{\prime}(\alpha(h))(\gamma^{\prime}(\alpha(h))\cdot\beta^{\prime}(\gamma(h)))\gamma^{\prime}(\alpha(h))^{\beta^{\prime}(\gamma(h))}\delta^{\prime}(\gamma(h))\\ &=& (\alpha^{\prime}\alpha+(\gamma^{\prime}\alpha\cdot\beta^{\prime}\gamma))(h) ((\gamma^{\prime}\alpha)^{\beta^{\prime}\gamma}+\delta^{\prime}\gamma)(h). \end{eqnarray*} \n Therefore, if we write $hk$ as $\begin{pmatrix} h\\ k \end{pmatrix}$, then $\theta^{\prime}\theta(h) = \begin{pmatrix} \alpha^{\prime}\alpha+(\gamma^{\prime}\alpha\cdot\beta^{\prime}\gamma)\\ (\gamma^{\prime}\alpha)^{\beta^{\prime}\gamma}+\delta^{\prime}\gamma \end{pmatrix}\begin{pmatrix} h\\ 1 \end{pmatrix}$. By a similar argument, $\theta^{\prime}\theta(k) = \begin{pmatrix} \alpha^{\prime}\beta + \gamma^{\prime}\beta \cdot \beta^{\prime}\delta\\ (\gamma^{\prime}\beta)^{\beta^{\prime}\delta}+ \delta^{\prime}\delta \end{pmatrix}\begin{pmatrix} 1\\ k \end{pmatrix}$. Thus,\\ $\theta^{\prime}\theta(hk) = \begin{pmatrix} \alpha^{\prime}\alpha+(\gamma^{\prime}\alpha\cdot\beta^{\prime}\gamma) & \alpha^{\prime}\beta + (\gamma^{\prime}\beta \cdot \beta^{\prime}\delta)\\ (\gamma^{\prime}\alpha)^{\beta^{\prime}\gamma}+\delta^{\prime}\gamma & (\gamma^{\prime}\beta)^{\beta^{\prime}\delta}+ \delta^{\prime}\delta \end{pmatrix}\begin{pmatrix} h\\ k \end{pmatrix}$.
Therefore, $T(\theta^{\prime}\theta) = \begin{pmatrix} \alpha^{\prime}\alpha+(\gamma^{\prime}\alpha\cdot\beta^{\prime}\gamma) & \alpha^{\prime}\beta + (\gamma^{\prime}\beta \cdot \beta^{\prime}\delta)\\ (\gamma^{\prime}\alpha)^{\beta^{\prime}\gamma}+\delta^{\prime}\gamma & (\gamma^{\prime}\beta)^{\beta^{\prime}\delta}+ \delta^{\prime}\delta \end{pmatrix}=T(\theta^{\prime})T(\theta)$. Hence, $T$ is an isomorphism of groups. \end{proof} \n From here on, we will identify the automorphisms of $G$ with the matrices in $\mathcal{A}$. Let \begin{align*} P = & \{\alpha\in Aut(H) \;|\; k\cdot \alpha(h) = \alpha(k\cdot h)\; \text{and}\; k^{\alpha(h)} = k^{h}\},\\ Q = & \{\beta\in Map(K,H)\;|\; \beta(kk^{\prime}) = \beta(k)(k\cdot \beta(k^{\prime})), k = k^{\beta(k^{\prime})}, \beta(k) = \beta(k^{h})\},\\ R = & \{\gamma \in Map(H,K) \;|\; \gamma(hh^{\prime}) = \gamma(h)^{h^{\prime}}\gamma(h^{\prime}), h^{\prime} = \gamma(h)\cdot h^{\prime}, \gamma(k\cdot h) = \gamma(h)\}, \\ S = & \{\delta\in Aut(K)\;|\; \delta(k)\cdot h = k\cdot h, \delta(k)^{h} = \delta(k^{h})\},\\ X =& \{(\alpha,\gamma,\delta)\in Map(H,H)\times Map(H,K)\times Aut(K)\;|\; \alpha(hh^{\prime}) = \alpha(h)(\gamma(h)\cdot \alpha(h^{\prime})),\\& \gamma(hh^{\prime}) = \gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime}), \delta(k)\cdot \alpha(h) = \alpha(k\cdot h), \delta(k)^{\alpha(h)}\gamma(h) = \gamma(k\cdot h)\delta(k^{h}) \},\\ Y =& \{(\alpha,\beta,\delta)\in Aut(H)\times Map(K,H)\times Map(K,K)\;|\; \beta(kk^{\prime}) = \beta(k)(\delta(k)\cdot \beta(k^{\prime})),\\& \delta(kk^{\prime}) = \delta(k)^{\beta(k^{\prime})}\delta(k^{\prime}), \beta(k)(\delta(k)\cdot \alpha(h)) = \alpha(k\cdot h)\beta(k^{h}), \delta(k)^{\alpha(h)} = \delta(k^{h}) \},\\ Z =& \{(\alpha,\delta)\in Aut(H)\times Aut(K)\;|\; \delta(k)\cdot \alpha(h) = \alpha(k\cdot h), \delta(k)^{\alpha(h)} = \delta(k^{h})\}. \end{align*} \n Then one can easily check that $P$, $S$, $X$, $Y$ and $Z$ are all subgroups of the group $Aut(G)$. But $Q$ and $R$ need not be subgroups of the group $Aut(G)$. However, if $H$ and $K$ are abelian groups, then $Q$ and $R$ are subgroups of $Aut(G)$. \n Let \begin{center} $\begin{matrix} A = \left\{\begin{pmatrix} \alpha & 0\\ 0 & 1 \end{pmatrix}|\; \alpha\in P\right\}, & B = \left\{\begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}|\; \beta\in Q\right\},\\ C = \left\{\begin{pmatrix} 1 & 0 \\ \gamma & 1 \end{pmatrix}|\; \gamma\in R\right\}, & D = \left\{\begin{pmatrix} 1 & 0\\ 0 & \delta \end{pmatrix}|\; \delta\in S\right\},\\ E = \left\{\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix}|\; (\alpha,\gamma,\delta)\in X\right\}, & F = \left\{\begin{pmatrix} \alpha & \beta\\ 0 & \delta \end{pmatrix}|\; (\alpha,\beta,\delta)\in Y\right\},\\ M = \left\{\begin{pmatrix} \alpha & 0\\ 0 & \delta \end{pmatrix}|\; (\alpha,\delta)\in Z\right\}. \end{matrix}$ \end{center} \n be the corresponding subsets of $\mathcal{A}$. Then one can easily check that $A$, $D$, $E$, $F$ and $M$ are subgroups of $\mathcal{A}$, and if $H$ and $K$ are abelian groups, then $B$ and $C$ are also subgroups of $\mathcal{A}$. \begin{theorem}\label{s2t1} Let $G=H\bowtie K$ be the Zappa-Sz\'{e}p product of two abelian groups $H$ and $K$, and let $A,B,C$, and $D$ be defined as above. Then $ABCD \subseteq \mathcal{A}$. Further, if $1-\beta\gamma \in P$, then $ABCD = \mathcal{A}$ and therefore $Aut(G) \simeq ABCD$. \end{theorem} \begin{proof} Note that $A$ and $D$ normalize $B$ and $C$. Then $ABCD$ is a subgroup of $Aut(G)$. Clearly, $ABCD \subseteq \mathcal{A}$.
Now, let $\alpha\in P$, $\beta \in Q$, $\gamma\in R$ and $\delta \in S$. Then note that $\alpha\beta\delta\in Q$ and $\begin{pmatrix} 1 & \beta\\ \gamma & 1 \end{pmatrix} \in \mathcal{A}$. Further, let us assume that $1-\beta\gamma \in P$. Now, if $\hat{\beta} = \alpha^{-1}\beta\delta^{-1}$, then \[\begin{pmatrix} 1 & \hat{\beta}\\ \gamma & 1 \end{pmatrix} = \begin{pmatrix} 1-\hat{\beta}\gamma & 0\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & (1-\hat{\beta}\gamma)^{-1}\hat{\beta}\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0\\ \gamma & 1 \end{pmatrix} \in ABC.\] Thus, if $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix}\in \mathcal{A}$, then \[\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha & 0\\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & \hat{\beta}\\ \gamma & 1 \end{pmatrix} \begin{pmatrix} 1 & 0\\ 0 & \delta \end{pmatrix}\in A(ABC)D = ABCD.\] Therefore, $\mathcal{A}\subseteq ABCD$. Hence, $ABCD = \mathcal{A}$ and so, $Aut(G)\simeq ABCD$. \end{proof} \section{Automorphisms of Zappa-Sz\'{e}p product of groups $\mathbb{Z}_{4}$ and $\mathbb{Z}_{m}$} In \cite{y4m}, Yacoub classified the groups which are a Zappa-Sz\'{e}p product of cyclic groups of order $4$ and of order $m$. He found that these are of the following types (see \cite[Conclusion, p. 126]{y4m}) \begin{align*} L_1 = & \langle a,b \;|\; a^{m} = 1 = b^{4}, ab = ba^r, r^4\equiv 1 \Mod{m}\rangle, \\ L_2 = & \langle a,b \;|\; a^{m} = 1 = b^{4}, ab = b^{3}a^{2t+1}, a^{2}b = ba^{2s}\rangle, \end{align*} \n where in $L_2$, $m$ is even. These two classes are not mutually non-isomorphic: the group $L_1$ may be isomorphic to the group $L_2$ depending on the values of $m,r$ and $t$ (see \cite[Theorem 5, p. 126]{y4m}). Clearly, $L_1$ is a semidirect product. Throughout this section $G$ will denote the group $L_2$ and we will only be concerned with the groups $L_2$ which are a Zappa-Sz\'{e}p product but not a semidirect product. Note that $G=H \bowtie K$, where $H=\langle b \rangle$ and $K=\langle a \rangle$. For the group $G$, the mutual actions of $H$ and $K$ are defined by $a\cdot b = b^{3}, a^{b} = a^{2t+1}$ along with $a^{2}\cdot b = b$ and $(a^{2})^{b} = a^{2s}$, where $t$ and $s$ are the integers satisfying the conditions \begin{itemize} \item[$(G1)$] $2s^{2}\equiv 2 \Mod{m}$, \item[$(G2)$] $4t(s+1)\equiv 0 \Mod{m}$, \item[$(G3)$] $2(t+1)(s-1)\equiv 0 \Mod{m}$, \item[$(G4)$] $\gcd(s,\frac{m}{2}) = 1$. \end{itemize} \begin{lemma}\label{l1} For all integers $l$, \[ (a^{l})^{b} = \left\{\begin{array}{ll} a^{2t+1+(l-1)s}, & \text{if}\; l\; \text{is odd}\\ a^{ls}, & \text{if}\; l\; \text{is even} \end{array}\right..\] \end{lemma} \begin{lemma}\label{l2} Let $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix}\in \mathcal{A}$. Then \begin{itemize} \item[$(i)$] $Im(\delta)\subseteq \langle a^{r}\rangle$, where $r$ is odd, \item[$(ii)$] $\beta(a^{l}) = \left\{\begin{array}{ll} \beta(a), & \text{if}\; l\; \text{is odd}\\ 1, & \text{if}\; l\; \text{is even} \end{array}\right.$, \item[$(iii)$] $Im(\gamma)\subseteq \langle a^{2}\rangle$, \item[$(iv)$] $\alpha\in Aut(H)$, \item[$(v)$] $\beta\gamma = 0$, where $0$ is the trivial group homomorphism, \item[$(vi)$] $\gamma(h)\cdot \beta(k) = \beta(k)$, for all $h\in H$ and $k\in K$, \item[$(vii)$] If either $s=1$ or $Im(\beta)\subseteq \langle b^{2}\rangle$, then $\gamma(h)^{\beta(k)} = \gamma(h)$, for all $h\in H$ and $k\in K$. \end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[$(i)$] If possible, let $\delta(a) = a^{r}$, where $r$ is even.
Then, using $(A3)$ and $a^{2}\cdot b^{j} = b^{j}$, $\beta$ is a homomorphism. Also, using $(a^{2})^{b} = a^{2s}, (C4)$ and $(A4)$, if $\beta(a) = 1$ or $b^{2}$, then $\delta$ is defined by $\delta(a^{l}) = a^{rl}$, for all $l$. Similarly, if $\beta(a) = b$ or $b^{3}$, then $\delta$ is defined by \begin{equation*} \delta(a^{l}) = \left\{ \begin{array}{ll} a^{\frac{l+1}{2}r + \frac{l-1}{2}rs}, & \text{if}\; l\; \text{is odd}\\ a^{\frac{l}{2}r(s+1)}, & \text{if}\; l\; \text{is even} \end{array} \right.. \end{equation*} One can easily observe that $\delta$ is neither one-one nor onto. But this is a contradiction by $(A7)$. Hence, $Im(\delta)\subseteq \langle a^{r}\rangle$, where $r$ is odd. \item[$(ii)$] Using $(C3)$ and $a\cdot b = b^{-1}$, we have that if $\nu$ is odd, then $a^{\nu}\cdot b^{j} = b^{-j}$, for all $j$. Thus using $(A3)$, $(C2)$ and part $(i)$, $\beta(a^{2}) = \beta(a)(\delta(a)\cdot \beta(a)) = \beta(a)(\beta(a))^{-1} = 1$ and $\beta(a^{3}) = \beta(a)(\delta(a)\cdot \beta(a^{2})) = \beta(a)(\delta(a)\cdot 1) = \beta(a)$. Inductively, we get the required result. \item[$(iii)$] Suppose that $\gamma(b) = a^{\lambda}$, where $\lambda$ is odd. Then using $(A1)$, $\alpha(b) = b^{i} = \alpha(b^{3})$ and $\alpha(b^{2}) = 1 = \alpha(1)$, where $0\le i \le 3$. Thus the map $\alpha$ is neither one-one nor onto, but by $(A7)$, the map $\alpha$ is a bijection. This is a contradiction. Thus, $\lambda$ is even. Now, using $(A2)$, for different choices of $\alpha(b)$ we find that $\gamma(b^{2})\in \{a^{2\lambda}, a^{\lambda(s+1)}\}$. Since, $\lambda$ is even, $\gamma(b^{2})\in \langle a^{2}\rangle$. Similarly, $\gamma(b^{3})\in \{a^{3\lambda}, a^{\lambda(s+2)}\}$ and so, $\gamma(b^{3})\in \langle a^{2}\rangle$. Hence, $(iii)$ holds. \item[$(iv)$] Using $(iii)$ and $(A1)$, one observes that $\alpha$ is an endomorphism of $H$. Also, by $(A7)$, $\alpha$ is a bijection. Thus, $\alpha$ is an automorphism of $H$. Hence, $(iv)$ holds. \item[$(v)$] Using the parts $(ii)$ and $(iii)$, $\beta\gamma(h) = 1$, for all $h\in H$. Thus, $\beta\gamma = 0$. \item[$(vi)$] Using the relation $a^{2}\cdot b = b$ and the part $(iii)$, $(vi)$ holds. \item[$(vii)$] Using $(C4)$ and $(G1)$, we get \begin{equation}\label{e2} (a^{2l})^{b^{j}} = \left\{\begin{array}{ll} a^{2ls}, & \text{if}\; j\; \text{is odd}\\ a^{2l}, & \text{if}\; j\; \text{is even} \end{array} \right.. \end{equation} \n Thus, if either $s=1$ or $Im(\beta)\subseteq \langle b^{2}\rangle$, then using the part $(iii)$ and the Equation (\ref{e2}), $(vii)$ holds. \end{itemize} \end{proof} \n By the Lemma \ref{l2} $(ii)$, observe that $\beta(k^{h}) = \beta(k)$, for all $k\in K$ and $h\in H$. \begin{lemma}\label{l3} Let $\beta\in Q$. Then $\beta\in Hom(K,H)$ and $Im(\beta) \le \langle b^{2}\rangle$. Also, $Im(\beta) = \langle b^{2}\rangle$ if and only if $2t(1+s)\equiv 0 \Mod{m}$, where $\gcd(s+1, \frac{m}{2})\ne 1$. \end{lemma} \begin{proof} Let $\beta(a) = b^{i}$. Then using the Lemma \ref{l2} $(ii)$, we have $\beta(a^{2j}) = 1$ and $\beta(a^{2j+1}) = b^{i}$, for all $j$. So, it is sufficient to study only $\beta(a)$ via the following equation, \begin{equation}\label{e1} a = a^{\beta(a)} = a^{b^{i}}. \end{equation} \n Clearly, the Equation (\ref{e1}) holds trivially for $i=0$. If $i=1$, then by the Equation (\ref{e1}), $a = a^{2t+1}$ which implies that $2t\equiv 0 \Mod{m}$. Therefore, in the defining relations of the group $G$, $ab = b^{3}a$ which shows that $G$ is a semidirect product of the groups $H$ and $K$, contrary to our standing assumption; so $i\ne 1$.
For $i = 3$, $a = a^{b^{3}} = a^{4t+2ts+1}$, which gives that $4t+2ts \equiv 0 \Mod{m}$. So, using $(G2)$ and $(G4)$, $2ts \equiv 0 \Mod{m}$ which gives that $t\equiv 0 \Mod{\frac{m}{2}}$. Thus, $G$ is again the semidirect product of $H$ and $K$, which is excluded; so $i\ne 3$. Now, for $i=2$, using $(C6)$ and the Lemma \ref{l1}, $a^{b^{2}} = (a^{2t+1})^{b} = a^{2t+1+2ts}$. Then, $a^{b^{2}} = a$ if and only if $2t(1+s)\equiv 0 \Mod{m}$. \vspace{.2cm} \n Now, if $\gcd(s+1, \frac{m}{2}) = 1$, then $t\equiv 0 \Mod{\frac{m}{2}}$ and so, $G$ is a semidirect product of the groups $H$ and $K$. On the other hand, if $\gcd(s+1, \frac{m}{2}) \ne 1$, then $t\not\equiv 0 \Mod{\frac{m}{2}}$. Thus, $G$ is a Zappa-Sz\'{e}p product of $H$ and $K$. Thus, $Im(\beta) = \langle b^{2}\rangle$ if and only if $2t(1+s)\equiv 0 \Mod{m}$ and $\gcd(s+1, \frac{m}{2})\ne 1$. Since $Im(\beta)\subseteq \langle b^{2}\rangle$, using the Lemma \ref{l2} $(ii)$, one can easily observe that $\beta\in Hom(K,H)$. Hence, the result holds. \end{proof} \n Now, one can easily observe that for the given group $G$, $k\cdot \alpha(h) = \alpha(k\cdot h)$, $\beta(k) = \beta(k^{h})$, $h^{\prime} = \gamma(h)\cdot h^{\prime}$, $\delta(k)\cdot h = k\cdot h$, $\delta(k)\cdot \alpha(h) = \alpha(k\cdot h)$ and $\beta(k)(\delta(k)\cdot \alpha(h)) = \alpha(k\cdot h)\beta(k^{h})$ always holds for all $\alpha\in P$, $\beta\in Q$, $\gamma \in R$, $\delta \in S$, $(\alpha,\gamma,\delta) \in X$, $(\alpha,\delta)\in Z$ and $(\alpha, \beta, \delta) \in Y$ respectively. Thus the subgroups $P$, $Q$, $R$, $S$, $X$, $Y$ and $Z$ reduce to the following, \begin{align*} P = & \{\alpha\in Aut(H) \;|\; k^{\alpha(h)} = k^{h}\},\\ Q = & \{\beta\in Hom(K,H)\;|\; k = k^{\beta(k^{\prime})}\} = Hom(K,Stab_{H}(K)),\\ R = & \{\gamma \in CrossHom(H,Stab_{K}(H)) \;|\; \gamma(k\cdot h) = \gamma(h)\},\\ S = & \{\delta\in Aut(K)\;|\;\delta(k)^{h} = \delta(k^{h})\},\\ X =& \{(\alpha,\gamma,\delta)\in Aut(H)\times Map(H,K)\times Aut(K)\;|\; \gamma(hh^{\prime}) = \gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime}),\\& \delta(k)^{\alpha(h)}\gamma(h) = \gamma(k\cdot h)\delta(k^{h}) \},\\ Y =& \{(\alpha,\beta,\delta)\in Aut(H)\times Map(K,H)\times Map(K,K)\;|\; \beta(kk^{\prime}) = \beta(k)(\delta(k)\cdot \beta(k^{\prime})),\\& \delta(kk^{\prime}) = \delta(k)^{\beta(k^{\prime})}\delta(k^{\prime}), \delta(k)^{\alpha(h)} = \delta(k^{h}) \},\\ Z =& \{(\alpha,\delta)\in Aut(H)\times Aut(K)\;|\; \delta(k)^{\alpha(h)} = \delta(k^{h})\}. \end{align*} \begin{theorem}\label{s3t1} Let $A,B,C,$ and $D$ be defined as above. Then $Aut(G) = ABCD$. \end{theorem} \begin{proof} Using the Lemma \ref{l2} $(v)$, we have that $\beta\gamma = 0$ and so, $1-\beta\gamma \in P$. Therefore, by the Theorem \ref{s2t1}, we have $Aut(G) = ABCD$. \end{proof} \begin{theorem}\label{abcd} Let $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} \in \mathcal{A}$. If $\beta\in Q$ and $(\alpha, \gamma, \delta)\in X$, then $Aut(G) \simeq E \rtimes B \simeq (C\rtimes M)\rtimes B$. \end{theorem} \begin{proof} Let $\beta\in Q$. Then using the Lemma \ref{l3}, $Im(\beta)\le \langle b^{2}\rangle$. Let $k,k^{\prime}\in K$ such that $\beta(k) = b^{2i}$ and $\beta(k^{\prime}) = b^{2j}$, for some integers $i$ and $j$.
Then \begin{align*} \gamma\beta(kk^{\prime}) =& \gamma(\beta(k)(k\cdot \beta(k^{\prime})))\\ =& \gamma(\beta(k))^{\alpha(k\cdot\beta(k^{\prime}))}\gamma(k\cdot\beta(k^{\prime}))\\ =& \gamma(b^{2i})^{\alpha(k\cdot b^{2j})}\gamma(\beta(k^{\prime}))\\ =& (a^{i\lambda(s+1)})^{\alpha(b^{2j})}\gamma(\beta(k^{\prime}))\\ =& (a^{i\lambda(s+1)})^{b^{2j}}\gamma(\beta(k^{\prime}))\\ =& a^{i\lambda(s+1)s^{2j}}\gamma(\beta(k^{\prime}))\\ =& a^{i\lambda(s+1)}\gamma(\beta(k^{\prime}))\\ =& \gamma(b^{2i})\gamma(\beta(k^{\prime}))\\ =& \gamma\beta(k)\gamma\beta(k^{\prime}). \end{align*} Thus $\gamma\beta\in Hom(K,K)$ and so, $\gamma\beta+ \delta\in Hom(K,K)$. Now, let $\beta(a) = b^{2j}$ and $\delta(a) = a^{r}$, where $j \in \{0,1\}$ and $r\in U(m)$. Then, using the Lemma \ref{l2}, we have \begin{align*} (\gamma\beta+\delta)(a^{l}) = \left\{\begin{array}{ll} a^{lr}, & \text{if}\; l\; \text{is even}\\ a^{\lambda j(s+1)+lr}, & \text{if}\; l\; \text{is odd} \end{array} \right.. \end{align*} \n One can easily observe that $\gamma\beta+ \delta$ defined as above is a bijection. Thus $\gamma\beta+\delta \in Aut(K)$. \vspace{.2cm} \n Now, using $(C3)$ and $(C4)$ and the Lemma \ref{l2} $(iii)$, $(\gamma\beta+\delta)(a)\cdot \alpha(b) = \gamma\beta(a)\delta(a)\cdot \alpha(b) = \gamma\beta(a)\cdot (\delta(a)\cdot \alpha(b)) = \gamma\beta(a)\cdot\alpha(a\cdot b) = \alpha(a\cdot b)$ and $(\gamma\beta+\delta)(a)^{\alpha(b)}\gamma(b) = (\gamma\beta(a)\delta(a))^{\alpha(b)}\gamma(b) = (\delta(a)\gamma\beta(a))^{\alpha(b)}\gamma(b) = \delta(a)^{\gamma(\beta(a))\cdot \alpha(b)}$ $\gamma(\beta(a))^{\alpha(b)}\gamma(b) = \delta(a)^{\alpha(b)}\gamma(b^{2i})^{\alpha(b)}\gamma(b) = \delta(a)^{\alpha(b)}\gamma(b)(a^{i\lambda(s+1)})^{\alpha(b)} = \gamma(a\cdot b)\delta(a^{b})$ $a^{i\lambda(s+1)} = \gamma(a\cdot b)\delta(a^{b})\gamma(b^{2i}) = \gamma(a\cdot b)\gamma(\beta(a))\delta(a^{b}) = \gamma(a\cdot b)\gamma(\beta(a^{2t+1}))$ $\delta(a^{b}) = \gamma(a\cdot b)\gamma(\beta(a^{b}))\delta(a^{b}) = \gamma(a\cdot b)(\gamma\beta+\delta)(a^{b})$. Thus, $(\alpha, \gamma,\gamma\beta+\delta)\in X$. \vspace{.2cm} \n Using the Lemma \ref{l2} $(v)$, we have \begin{equation}\label{s3e2} \begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix}\begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \alpha & (\alpha+\beta\gamma)(-\beta)+ \beta\delta\\ \gamma & \gamma\beta+\delta \end{pmatrix}. \end{equation} \n Now, using the Lemma \ref{l2} $(ii)$, we have, $((\alpha+\beta\gamma)(-\beta)+ \beta\delta)(a) = (\alpha+\beta\gamma)(-\beta(a)) \beta(\delta(a)) = (\alpha + \beta\gamma)(b^{-2j})\beta(a^{r}) = \alpha(b^{2j})\beta(\gamma(b^{2j}))b^{2j} = b^{2ij}b^{2j} = b^{2j(i+1)} = 1$. Thus, $(\alpha+\beta\gamma)(-\beta)+ \beta\delta = 0$. Therefore, by the Equation (\ref{s3e2}), \[\begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix}\begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \alpha & 0\\ \gamma & \gamma\beta+\delta \end{pmatrix}\in E.\] So, $E \triangleleft \mathcal{A}$. Now, if $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} \in \mathcal{A}$, then \[\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix}=\begin{pmatrix} \alpha & 0\\ \gamma & -\gamma\alpha^{-1}\beta + \delta \end{pmatrix}\begin{pmatrix} 1 & \alpha^{-1}\beta\\ 0 & 1 \end{pmatrix}\in EB.\] \n Clearly, $E\cap B = \{1\}$. Thus, $\mathcal{A}= E\rtimes B$. Hence, $Aut(G) \simeq E\rtimes B$. 
\vspace{.2cm} \n Let $\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix} \in E$. Then \[\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix}=\begin{pmatrix} \alpha & 0\\ 0 & \delta \end{pmatrix}\begin{pmatrix} 1 & 0\\ \delta^{-1}\gamma & 1 \end{pmatrix}\in MC.\] \n Clearly, $M\cap C = \{1\}$. Since $A\times D$ normalizes $C$, $C\triangleleft E$. Thus, $E = C\rtimes M$. Hence, $X \simeq C\rtimes M$ and so, $Aut(G)\simeq (C\rtimes M)\rtimes B$. \end{proof} \n Now, we will find the structure and the order of the automorphism group $Aut(G)$. For this, we will proceed by first taking $t$ to be such that $\gcd(t,m) =1$ and then by taking $t$ such that $\gcd(t,m) = d$, where $d>1$. \begin{theorem}\label{t2} Let $4$ divide $m$ and let $t$ be odd such that $\gcd(t,m) = 1$. Then \begin{equation*} Aut(G) \simeq \left\{\begin{array}{ll} (\mathbb{Z}_{\frac{m}{2}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}, & \text{if}\; s\in \{\frac{m}{2}-1, m-1\}\\ \mathbb{Z}_{\frac{m}{2}}\rtimes(\mathbb{Z}_{2}\times U(m)), & \text{if}\; s\in \{\frac{m}{4}-1, \frac{3m}{4}-1\} \end{array} \right.. \end{equation*} \end{theorem} \begin{proof} Let $\gcd(t,m) = 1$. Then, using $(G2)$, we get $s\equiv -1 \Mod {\frac{m}{4}}$ which implies that $s\in \{\frac{m}{4} -1, \frac{m}{2} -1, \frac{3m}{4} -1, m-1\}$. Now, using $(G3)$, we get $t\equiv -1 \Mod {\frac{m}{4}}$. Then $t\in \{\frac{m}{4} -1, \frac{m}{2} -1, \frac{3m}{4} -1, m-1\}$. \vspace{.2cm} \n Let $(\alpha, \gamma, \delta)\in X$ be such that $\alpha(b) = b^{i}$, $\gamma(b) = a^{\lambda}$ and $\delta(a) = a^{r}$, where $i \in \{1,3\}$, $\lambda$ is even, $0 \le \lambda\le m-1$, and $r\in U(m)$. Using $\gamma(hh^{\prime}) = \gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime})$, we get $\gamma(b^{2}) = a^{\lambda(s+1)}, \gamma(b^{3}) = a^{\lambda(s+2)}$ and $\gamma(b^{4}) = 1$. We consider two cases based on the image of the map $\alpha$. \vspace{.2cm} \n \textit{Case(i)}: Let $\alpha(b) = b$. Then, using $\gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b}\gamma(b)$, $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b}\gamma(b) = (a^{r})^{b}a^{\lambda} = a^{2t+1+(r-1)s+\lambda}$ which implies that \begin{equation}\label{e3} \lambda(s+1)\equiv (r-1)(s-2t-1) \Mod{m}. \end{equation} \n If $s\in \{\frac{m}{2}-1, m-1\}$, then the Equation (\ref{e3}) holds for all values of $t$, $\lambda$ and $r$. Now, if $(s,t) \in \{(\frac{m}{4}-1, \frac{m}{2}-1), (\frac{m}{4}-1, m-1)\}$, then by the Equation (\ref{e3}), $r \equiv 1+\lambda \Mod{4}$. Since $\lambda$ is even, $r\equiv 1 \; \text{or}\; 3 \Mod{4}$. Again, if $(s,t) \in \{(\frac{m}{4}-1, \frac{m}{4}-1), (\frac{m}{4}-1, \frac{3m}{4}-1)\}$, then by the Equation (\ref{e3}), $r \equiv 3-\lambda \Mod{4}$. Since $\lambda$ is even, $r\equiv 1 \; \text{or}\; 3 \Mod{4}$. By a similar argument, we get the same results for $s= \frac{3m}{4}-1$. Thus, in this case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, for all $0 \le \lambda \le m-1$, $\lambda$ is even, and $r \in U(m)$. \vspace{.2cm} \n \textit{Case(ii):} Let $\alpha(b) = b^{3}$. Then, $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b^{3}}\gamma(b) = (a^{r})^{b^{3}}a^{\lambda} = a^{4t+2ts+1+(r-1)s+\lambda}$ which implies that \begin{equation}\label{e4} \lambda(s+1)\equiv 2t(s+1)+(r-1)(s-2t-1) \Mod{m}. \end{equation} \n If $s\in \{\frac{m}{2}-1, m-1\}$, then the Equation (\ref{e4}) holds for all values of $t$, $\lambda$ and $r$.
Now, if $(s,t) \in \{(\frac{m}{4}-1, \frac{m}{2}-1), (\frac{m}{4}-1, m-1)\}$, then by the Equation (\ref{e4}), $r \equiv 3+\lambda \Mod{4}$. Since $\lambda$ is even, $r\equiv 1 \; \text{or}\; 3 \Mod{4}$. Again, if $(s,t) \in \{(\frac{m}{4}-1, \frac{m}{4}-1), (\frac{m}{4}-1, \frac{3m}{4}-1)\}$, then by the Equation (\ref{e4}), $r \equiv 1+\lambda \Mod{4}$. Since $\lambda$ is even, $r\equiv 1 \; \text{or}\; 3 \Mod{4}$. By a similar argument, we get the same results for $s= \frac{3m}{4}-1$. Thus, in this case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, for all $0 \le \lambda \le m-1$, $\lambda$ is even, and $r \in U(m)$. \vspace{.2cm} \n Thus, combining both the \textit{cases} $(i)$ and $(ii)$, we get that for all $\alpha\in Aut(H)$, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $0 \le \lambda \le m-1$, $\lambda$ is even, and $r \in U(m)$. So, using the Theorem \ref{abcd}, $X \simeq \mathbb{Z}_{\frac{m}{2}}\rtimes(\mathbb{Z}_{2}\times U(m))$. Now, if $s\in \{\frac{m}{2}-1, m-1\}$, then $2t(s+1)\equiv 0 \Mod{m}$. Therefore, using the Lemma \ref{l3}, $Im(\beta) = \langle b^{2}\rangle$ and so, $B\simeq \mathbb{Z}_{2}$. If $s\in \{\frac{m}{4}-1, \frac{3m}{4}-1\}$, then $2t(s+1)\not\equiv 0 \Mod{m}$. Therefore, using the Lemma \ref{l3}, $Im(\beta) = \{1\}$ and so, $B$ is a trivial group. Hence, by the Theorem \ref{abcd}, \begin{align*} Aut(G) \simeq E \rtimes B \simeq \left\{\begin{array}{ll} (\mathbb{Z}_{\frac{m}{2}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}, & \text{if}\; s\in \{\frac{m}{2}-1, m-1\}\\ \mathbb{Z}_{\frac{m}{2}}\rtimes(\mathbb{Z}_{2}\times U(m)), & \text{if}\; s\in \{\frac{m}{4}-1, \frac{3m}{4}-1\} \end{array} \right.. \end{align*} \end{proof} \begin{theorem} Let $m=2q$, where $q>1$ is odd and $\gcd(t,m) = 1$. Then, $Aut(G)\simeq (\mathbb{Z}_{\frac{m}{2}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \end{theorem} \begin{proof} Using $(G1), (G2),$ and $(G3)$, we get $s,t\in \{\frac{m}{2}-1, m-1\}$. Then, the result follows on the lines of the proof of the Theorem \ref{t2}. \end{proof} \begin{theorem}\label{t3} Let $m = 2^{n}$, $n\ge 3$. Then \begin{itemize} \item[$(i)$] if $t$ is even, then $Aut(G) \simeq (\mathbb{Z}_{4}\rtimes(\mathbb{Z}_{2}\times (\mathbb{Z}_{2}\times\mathbb{Z}_{2^{n-2}})))\rtimes \mathbb{Z}_{2}$, \item[$(ii)$] if $t$ is odd, then \begin{align*} Aut(G) \simeq \left\{ \begin{array}{ll} (\mathbb{Z}_{2^{n-1}}\rtimes(\mathbb{Z}_{2}\times (\mathbb{Z}_{2}\times\mathbb{Z}_{2^{n-2}})))\rtimes \mathbb{Z}_{2}, & \text{if}\; s\in \{\frac{m}{2}-1, m-1\}\\ \mathbb{Z}_{2^{n-1}}\rtimes(\mathbb{Z}_{2}\times (\mathbb{Z}_{2}\times\mathbb{Z}_{2^{n-2}})), & \text{if}\; s\in \{\frac{m}{4}-1, \frac{3m}{4}-1\} \end{array}\right.. \end{align*} \end{itemize} \end{theorem} \begin{proof} We will find the automorphism group $Aut(G)$ in two cases, namely, when $t$ is even and when $t$ is odd. \vspace{.2cm} \n \textit{$Case(i)$}. Let $t$ be even. Then $2(t+1)(s-1)\equiv 0 \Mod{2^{n}}$ implies that $s\equiv 1 \Mod{2^{n-1}}$. Therefore, $s \in \{1, 2^{n-1} + 1\}$. Now, $4t(s+1)\equiv 0 \Mod{2^{n}}$ implies that $t \equiv 0 \Mod{2^{n-3}}$. Therefore, $t\in \{2^{n-3}, 2^{n-2}, 3\cdot 2^{n-3}, 2^{n-1}, 5\cdot 2^{n-3}, 3\cdot 2^{n-2}, 7\cdot 2^{n-3}, 2^{n}\}$. Note that, for $t = 2^{n-1}$ and $t=2^{n}$, $G$ is the semidirect product of $H$ and $K$. So, we consider the other values of $t$.
\vspace{.2cm} \n Let $\gamma \in R$ be such that $\gamma(b) = a^{\lambda}$, where $0\le \lambda \le m-1$ and $\lambda$ is even. Then, since, $s=1$ and $\lambda$ is even, by $(A2)$, $\gamma\in Hom(H,K)$. Now, $1 = \gamma(b^{4}) = a^{4\lambda}$ which implies that $\lambda \equiv 0 \Mod{2^{n-2}}$. Therefore, $\lambda \in \{2^{n-2}, 2^{n-1}, 3\cdot{2^{n-2}}, 2^{n}\}$. Using $\gamma(a\cdot b) = \gamma(b)$, $a^{3\lambda} = \gamma(a\cdot b) = \gamma(b) = a^{\lambda}$ implies that $\lambda \equiv 0 \Mod{2^{n-1}}$. Thus, $\lambda\in \{0, 2^{n-1}\}$ and so, $C\simeq \mathbb{Z}_{2}$. \vspace{.2cm} \n Now, let $(\alpha, \beta, \delta)\in Y$ be such that $\alpha(b) = b^{i}, \beta(a) = b^{j}$, and $\delta(a) = a^{r}$, where $ i\in \{1,3\}$, $0\le j\le 3$ and $0\le r\le 2^{n}-1$ and $r$ is odd. Using the Lemma \ref{l2} $(ii)$, $\beta(kk^{\prime}) = \beta(k)(\delta(k)\cdot \beta(k^{\prime}))$ holds, for all $k,k^{\prime}\in K$. Now, using $\delta(kk^{\prime}) = \delta(k)^{\beta(k^{\prime})}\delta(k^{\prime})$, we get \begin{equation*} \delta(a^{l}) = \left\{\begin{array}{ll} a^{(l-1)(jt+r) + r}, & \text{if}\; l \; \text{is odd} \\ a^{l(jt + r)}, & \text{if}\; l \; \text{is even} \end{array} \right.. \end{equation*} Finally, using $\delta(k^{h}) = \delta(k)^{\alpha(h)}$, $a^{2it+r} = (a^{r})^{b^{i}} = \delta(a)^{\alpha(b)} = \delta(a^{b}) = \delta(a^{2t+1}) = a^{2t(jt+r)+r}$. Thus, $2t(jt+r-i)\equiv 0 \Mod{2^{n}}$ which implies that \begin{equation*} \begin{array}{ll} r\equiv i \Mod{4}, & \text{if}\; t\in \{2^{n-3}, 3\cdot 2^{n-3}, 5\cdot 2^{n-3}, 7\cdot 2^{n-3}\}\; \text{and}\; n\ge 5\\ r\equiv i+2j \Mod{4}, & \text{if}\; t\in \{2^{n-3}, 3\cdot 2^{n-3}, 5\cdot 2^{n-3}, 7\cdot 2^{n-3}\} \; \text{and}\; n=4\\ r\equiv i \Mod{2}, & \text{if}\; t\in \{ 2^{n-2}, 3\cdot 2^{n-2}\} \end{array}. \end{equation*} \n Now, if $j\in \{0,2\}$, then $r \equiv i \Mod{4}$ and if $j\in \{1,3\}$, then $r \equiv i$ or $i+2 \Mod{4}$. Thus, for all $\beta \in CrossHom(K,H)$, the choices for the maps $\alpha$ and $\delta$ are, $\alpha_{i}(b) = b^{i}$ and $\delta_{r}(a) = a^{r}$, where $i\in \{1,3\}$ and $r \in U(m)$. Note that, if $\begin{pmatrix} \alpha & \beta\\ 0 & \delta \end{pmatrix}\in F$, then \[\begin{pmatrix} \alpha & \beta\\ 0 & \delta \end{pmatrix} = \begin{pmatrix} \alpha & 0\\ 0 & \delta \end{pmatrix}\begin{pmatrix} 1 & \alpha^{-1}\beta\\ 0 & 1 \end{pmatrix}\in MB.\] Clearly, $M\cap B = \{1\}$ and $M$ normalizes $B$. So, $B\triangleleft F$ and $F = B\rtimes M$. Therefore, $Y \simeq B \rtimes M \simeq \mathbb{Z}_{4}\rtimes (\mathbb{Z}_{2} \times U(m))$. Using the Lemma \ref{l2} $(v) - (vii)$, \begin{equation}\label{s3e1} \begin{pmatrix} 1 & 0\\ \gamma & 1 \end{pmatrix}\begin{pmatrix} \alpha & \beta \\ 0 & \delta \end{pmatrix}\begin{pmatrix} 1 & 0\\ \gamma & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \alpha & \beta\\ \gamma\alpha + (\gamma\beta + \delta)(-\gamma) & \gamma\beta+\delta \end{pmatrix}. \end{equation} \n Now, $(\gamma\alpha + (\gamma\beta + \delta)(-\gamma))(b) = \gamma\alpha(b)(\gamma\beta + \delta)(-\gamma)(b) = \gamma(b^{i})(\gamma\beta+\delta)(a^{-\lambda})$ $= a^{i\lambda}\gamma(\beta(a^{-\lambda}))\delta(a^{-\lambda}) = a^{i\lambda}\gamma(1)a^{-\lambda(jt+r)} = a^{\lambda(i-jt-r)} = 1$. Thus, $(\gamma\beta + \delta)(-\gamma) = 0$. Also, one can easily observe that $(\alpha,\beta, \gamma\beta+\delta)\in Y$. 
Therefore, by the Equation (\ref{s3e1}), \[\begin{pmatrix} 1 & 0\\ \gamma & 1 \end{pmatrix}\begin{pmatrix} \alpha & \beta \\ 0 & \delta \end{pmatrix}\begin{pmatrix} 1 & 0\\ \gamma & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \alpha & \beta\\ 0 & \gamma\beta+\delta \end{pmatrix} \in F.\] \n So, $F \triangleleft \mathcal{A}$. Clearly, $F \cap C = \{1\}$. Also, if $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix}\in \mathcal{A}$, then \[\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha & \beta\\ 0 & \delta \end{pmatrix}\begin{pmatrix} 1 & 0\\ \delta^{-1}\gamma & 1 \end{pmatrix}\in FC.\] \n Hence, $\mathcal{A} = F\rtimes C$ and so, $Aut(G) \simeq F \rtimes C\simeq (\mathbb{Z}_{4}\rtimes(\mathbb{Z}_{2}\times (\mathbb{Z}_{2}\times\mathbb{Z}_{2^{n-2}})))\rtimes \mathbb{Z}_{2}$. \vspace{.2cm} \n \textit{$Case(ii)$}. Let $t$ be odd. Then $\gcd(t,m) = 1$. Hence, the result follows from the Theorem \ref{t2}. \end{proof} \n Now, we will discuss the structure of the automorphism group $Aut(G)$ in the case when $\gcd(t,m) > 1$. \begin{theorem}\label{t4} Let $m=4q$, where $q>1$ is odd and $\gcd(t,m) =2^{i}d$, where $i\in \{0,1,2\}$, and $d$ divides $q$. Then $Aut(G)\simeq (\mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \end{theorem} \begin{proof} Let $q=du$, for some integer $u$. Then, using $(G2)$, $s\equiv -1 \Mod{u}$ which implies that $s = lu-1$, where $1\le l\le 4d$. Since, $\gcd(s,\frac{m}{2}) = 1$, $s$ is odd and so, $l$ is even. Using $(G1)$ and $(G3)$, we get $l(u\frac{l}{2}-1)\equiv 0 \Mod{d}$ and $t+1\equiv u\frac{l}{2} \Mod{q}$. Now, one can easily observe that $\gcd(l,d) = 1$ which implies that $u\frac{l}{2}-1 \equiv 0 \Mod{d}$. Thus, $2t(s+1)\equiv 2ltu\equiv 0 \Mod{m}$ and $\gcd(s+1, \frac{m}{2})\ne 1$. Therefore, using the Lemma \ref{l3}, $B\simeq \mathbb{Z}_{2}$. \vspace{.2cm} \n Let $(\alpha,\gamma,\delta)\in X$ be such that $\alpha(b) = b^{i}$, $\gamma(b) = a^{\lambda}$ and $\delta(a) = a^{r}$, where $i\in \{1,3\}$, $0\le \lambda \le m-1$, $\lambda$ is even, and $r\in U(m)$. Then, using $\gamma(hh^{\prime}) = \gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime})$, we have $\gamma(b^{2}) = a^{\lambda(s+1)}$, $\gamma(b^{3}) = a^{\lambda(s+2)}$, and $\gamma(b^{4}) = 1$. Now, using $\delta(a)^{\alpha(b)}\gamma(b) = \gamma(a\cdot b)\delta(a^{b})$ and the fact that $2t(s+1)\equiv 0 \Mod{m}$, $a^{\lambda(s+2)+(2t+1)r} = \gamma(b^{3})\delta(a^{2t+1}) = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{\alpha(b)}\gamma(b) = (a^{r})^{b^{i}}a^{\lambda} = a^{2t+1+(r-1)s +\lambda + \frac{i-1}{2}2t(s+1)} = a^{2t+1+(r-1)s +\lambda}$. Thus \begin{equation}\label{s3e3} \lambda(s+1) \equiv (r-1)(s-2t-1) \Mod{m}. \end{equation} \n Since $2t(s+1)\equiv 0 \Mod{m}$, using $(G3)$, we get $2(s-2t-1)\equiv 0 \Mod{m}$. Therefore, by the Equation (\ref{s3e3}), $\lambda lu \equiv 0 \Mod{m}$. Using the Lemma \ref{l2} $(iii)$, we get $\lambda \equiv 0 \Mod{2d}$. Thus, using the Theorem \ref{abcd}, $X\simeq \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m))$. Hence, $Aut(G)\simeq E \rtimes B\simeq (\mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \end{proof} \begin{theorem} Let $m=2q$, where $q>1$ is odd and $\gcd(t,m) =2^{i}d$, where $i\in \{0,1\}$, and $d$ divides $q$. Then $Aut(G)\simeq (\mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \end{theorem} \begin{proof} Follows on the lines of the proof of the Theorem \ref{t4}. 
\end{proof} \begin{theorem} Let $m= 2^{n}q$, $t$ be even and $\gcd(m,t) = 2^{i}d$, where $1\le i\le n$, $n\ge 3$, $q>1$ and $d$ divides $q$. Then \begin{align*} Aut(G) \simeq \left\{\begin{array}{ll} (\mathbb{Z}_{4}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}, & \text{if}\; d=q\\ \mathbb{Z}_{2}\times (\mathbb{Z}_{\frac{2q}{d}}\rtimes(\mathbb{Z}_{2}\times U(m))), & \text{if}\; d\ne q\; \text{and}\; n-2\le i \le n\\ \mathbb{Z}_{\frac{4q}{d}}\rtimes(\mathbb{Z}_{2}\times U(m)), & \text{if}\; d\ne q \; \text{and}\; i = n-3 \end{array} \right.. \end{align*} \end{theorem} \begin{proof} We consider the following four cases to find the structure of $Aut(G)$. \vspace{.2cm} \n \textit{Case($i$):} Let $d=q$ and $\gcd(t+1,m) = u$. Since, $t+1$ is odd, $u$ is odd and $u$ divides $q$. Thus, $u$ divides $t$ and so, $u = 1$. Therefore, using $(G2)$ and $(G3)$, $s\equiv 1 \Mod{\frac{m}{2}}$ and $t \equiv 0 \Mod{\frac{m}{8}}$. By the similar argument used in the proof of the Theorem \ref{t3} $(i)$, we get, $Aut(G)\simeq F\rtimes C\simeq (\mathbb{Z}_{4}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \vspace{.2cm} \n \textit{Case($ii$):} Let $n-2\le i\le n$ and $q=du$, for some odd integer $u$. Then using $(G2)$, $s\equiv -1 \Mod{u}$ and so, $s = lu-1$, where $0\le l\le 2^{n}d$. Since, $\gcd(s,\frac{m}{2}) = 1$, $s$ is odd and so, $l$ is even. Now, using $(G1)$, $\frac{l}{2}(\frac{l}{2}u-1)\equiv 0 \Mod{2^{n-3}d}$ and by $(G3)$, $t\equiv \frac{l}{2}u-1 \Mod{2^{n-2}q}$. Since, $t$ is even, $\frac{l}{2}$ is odd and $\gcd(\frac{l}{2},d) =1$. Thus, $\frac{l}{2}u\equiv 1 \Mod{2^{n-3}d}$ and $t\equiv 2^{i}d \Mod{2^{n-2}q}$. One can easily observe that $2t(s+1)\equiv 0 \Mod{m}$. Therefore, using the similar argument as in the proof of the Theorem \ref{t2}, we get, $Aut(G)\simeq E \rtimes B\simeq (\mathbb{Z}_{\frac{2q}{d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \vspace{.2cm} \n \textit{Case($iii$):} Let $i = n-3$, $d\ne q$ and $q=du$, for some odd integer $u$. Then using $(G2)$, $s\equiv -1 \Mod{2u}$, that is, $s = 2lu-1$, where $1\le l\le 2^{n-1}d$. Now, using $(G1)$ and $(G3)$, $l(lu-1)\equiv 0 \Mod{2^{n-3}d}$ and $(t+1)(lu-1) \equiv 0 \Mod{2^{n-2}q}$. If $l$ is even, then $t \equiv lu-1 \Mod{2^{n-2}q}$ gives that $t$ is odd, which is a contradiction. Therefore, $l$ is odd. Using $(t+1)(lu-1) \equiv 0 \Mod{2^{n-2}q}$, one can easily observe that $\gcd(l,d) = 1$. Then, $lu-1 = 2^{n-3}dl^{\prime}$ and $s = 2^{n-2}dl^{\prime}+1$, where $1\le l^{\prime}\le 8u$. Clearly, $\gcd(l^{\prime},u) = 1$. Thus, $(t+1)l^{\prime} \equiv 0 \Mod{2u}$. If $l^{\prime}$ is odd, then $(t+1) \equiv 0 \Mod{2u}$ which implies that $t$ is odd. So, $l^{\prime}$ is even and so, $t = uq^{\prime} -1$, $1\le q^{\prime} < 2^{n-1}d$, $q^{\prime}$ is odd as $t$ is even. Note that $s-2t-1 = 2^{n-2}dl^{\prime} -2t = 2^{n-2}d(l^{\prime} - \frac{t}{2^{n-3}d}) = 2^{n-2}d\left(\frac{lu-1}{2^{n-3}d} - \frac{uq^{\prime}-1}{2^{n-3}d}\right) = 2^{n-2}du\left(\frac{l-q^{\prime}}{2^{n-3}d}\right)$. \vspace{.2cm} \n Let $(\alpha,\gamma,\delta)\in X$ be such that $\alpha(b) = b^{i}$, $\gamma(b) = a^{\lambda}$ and $\delta(a) = a^{r}$, where $i\in \{1,3\}$, $0\le \lambda \le m-1$, $\lambda$ is even and $r\in U(m)$. We consider two sub-cases based on the image of the map $\alpha$. \vspace{.2cm} \n \textit{Sub-case(i):} Let $\alpha(b) = b$. 
Then using $\delta(a)^{\alpha(b)}\gamma(b) = \gamma(a\cdot b)\delta(a^{b})$,\\ \n $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b}\gamma(b) = (a^{r})^{b}a^{\lambda} = a^{2t+1+(r-1)s+\lambda}$ which implies that \begin{equation*} \lambda(s+1)\equiv (r-1)(s-2t-1) \Mod{m}. \end{equation*} Therefore, $\lambda(2lu)\equiv 2^{n-2}du(r-1)\left(\frac{l-q^{\prime}}{2^{n-3}d}\right) \Mod{2^{n}q}$ which implies that $\lambda l\equiv 2^{n-3}d(r-1)\left(\frac{l-q^{\prime}}{2^{n-3}d}\right) \Mod{2^{n-1}d}$. Now, if $\lambda \equiv 0 \Mod{2^{n-2}d}$, then $r \equiv 1 \; \text{or}\; 3 \Mod{4}$ and vice-versa. Thus, in this sub-case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even and $\lambda \equiv 0 \Mod{2^{n-2}d}$, and $r\in U(m)$. \vspace{.2cm} \n \textit{Sub-case(ii):} Let $\alpha(b) = b^{3}$. Then using $\delta(a)^{\alpha(b)}\gamma(b) = \gamma(a\cdot b)\delta(a^{b})$,\\ \n $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b^{3}}\gamma(b) = (a^{r})^{b^{3}}a^{\lambda} = a^{4t+2ts+1+(r-1)s+\lambda}$ which implies that \begin{equation*} (\lambda-2t)(s+1)\equiv (r-1)(s-2t-1) \Mod{m}. \end{equation*} Therefore, $2lu(\lambda-2t)\equiv 2^{n-2}du(r-1)\left(\frac{l-q^{\prime}}{2^{n-3}d}\right) \Mod{2^{n}q}$ which implies that $l(\lambda-2t)\equiv 2^{n-3}d(r-1)\left(\frac{l-q^{\prime}}{2^{n-3}d}\right) \Mod{2^{n-1}d}$. Now, if $\lambda \equiv 0 \Mod{2^{n-2}d}$, then $r \equiv 1 \; \text{or}\; 3 \Mod{4}$ and vice-versa. Thus, in this sub-case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even and $\lambda \equiv 0 \Mod{2^{n-2}d}$, and $r\in U(m)$. \vspace{.2cm} \n Thus, combining both the \textit{sub-cases} $(i)$ and $(ii)$, we get that for all $\alpha\in Aut(H)$, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even and $\lambda \equiv 0 \Mod{2^{n-2}d}$, and $r\in U(m)$. Therefore, using the Theorem \ref{abcd}, $X \simeq \mathbb{Z}_{\frac{4q}{d}}\rtimes(\mathbb{Z}_{2}\times U(m))$. Finally, since $l$ is odd, $2t(s+1) \equiv 4tlu\not\equiv 0 \Mod{m}$. Therefore, using the Lemma \ref{l3}, $Im(\beta) = \{1\}$. Thus, $B$ is a trivial group. Hence, using the Theorem \ref{abcd}, $Aut(G)\simeq E\rtimes B \simeq \mathbb{Z}_{\frac{4q}{d}}\rtimes(\mathbb{Z}_{2}\times U(m))$. \vspace{.2cm} \n \textit{Case($iv$):} Let $1\le i \le n-4$ and $q=du$, for some odd integer $u$. Then using $(G2)$, $s\equiv -1 \Mod{2^{n-i-2}u}$, that is, $s = 2^{n-i-2}lu-1$, where $1\le l\le 2^{i+2}d$. Now, using $(G1)$ and $(G3)$, $l(2^{n-i-3}lu-1)\equiv 0 \Mod{2^{i}d}$ and $(t+1)(lu2^{n-i-3}-1) \equiv 0 \Mod{2^{n-2}q}$. Since, $n-i-3>0$, $lu2^{n-i-3}-1$ is odd. If $l$ is even, then $t \equiv lu2^{n-i-3}-1 \Mod{2^{n-2}q}$ gives that $t$ is odd, which is a contradiction. Now, if $l$ is odd, then using $(t+1)(lu2^{n-i-3}-1) \equiv 0 \Mod{2^{n-2}q}$, one can easily observe that $\gcd(l,d) = 1$. Thus, $2^{n-i-3}lu-1\equiv 0 \Mod{2^{i}d}$, which is absurd. Hence, no such $l$ exists, so no such $t$ and $s$ exist, and hence no group $G$ exists as the Zappa-Sz\'{e}p product of $H$ and $K$ in this case. \end{proof} \begin{theorem} Let $m = 2^{n}q$, $t$ be odd and $\gcd(t,m) = d$, where $n\ge 4$ and $q$ is odd.
Then \begin{equation*} Aut(G) \simeq \left\{\begin{array}{ll} (\mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}, & \text{if}\; 2t(s+1)\equiv 0 \Mod{m}\\ \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)), & \text{if}\; 2t(s+1)\not\equiv 0 \Mod{m} \end{array} \right.. \end{equation*} \end{theorem} \begin{proof} Let $q=du$ for some odd integer $u$. Then using $(G2)$, we have $s\equiv -1 \Mod{2^{n-2}u}$, which implies that $s = 2^{n-2}lu-1$, where $1\le l \le 4d$. Now, using $(G1)$, $l(2^{n-3}ul-1)\equiv 0 \Mod{d}$. Using $(G3)$, we get \begin{equation}\label{e5} (t+1)(2^{n-3}lu-1)\equiv 0 \Mod{2^{n-2}q}. \end{equation} \n \textit{Case($i$):} If $l$ is even, then by Equation (\ref{e5}), $t\equiv 2^{n-3}lu-1 \Mod{2^{n-2}q}$. Note that $2t(s+1)\equiv 2t(2^{n-2}lu)\equiv 0 \Mod{m}$ and $\lambda(s+1) = \lambda(2^{n-2}lu)$. Thus $\lambda(s+1)\equiv 0 \Mod{m}$ if and only if $\lambda l \equiv 0 \Mod{4d}$, which is true for all $\lambda\equiv 0 \Mod{2d}$. Using an argument similar to the one in the proof of Theorem \ref{t2}, we get $X \simeq \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m))$ and $B\simeq \mathbb{Z}_{2}$. Hence, $Aut(G)\simeq E \rtimes B\simeq (\mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \vspace{.2cm} \n \textit{Case($ii$):} If $l$ is odd, then using Equation (\ref{e5}), one can easily observe that $\gcd(l,d) = 1$, which means that $2^{n-3}lu-1 = dl^{\prime}$, where $l^{\prime}$ is odd, $\gcd(l^{\prime},u) = 1$ and $1\le l^{\prime}\le 2^{n}u$. Thus, using Equation (\ref{e5}), $(t+1)dl^{\prime} \equiv 0 \Mod{2^{n-2}q}$. Since $\gcd(l^{\prime},u) = 1$, $t = 2^{n-2}uq^{\prime}-1$, where $1\le q^{\prime}\le 4d$. Now, $s-2t-1 = 2dl^{\prime}-2t = 2d(l^{\prime} - \frac{t}{d}) = 2d(\frac{2^{n-3}ul-2^{n-2}uq^{\prime}}{d}) = 2^{n-2}du\frac{l-2q^{\prime}}{d}$. \vspace{.2cm} \n Let $(\alpha,\gamma,\delta)\in X$ be such that $\alpha(b) = b^{i}$, $\gamma(b) = a^{\lambda}$ and $\delta(a) = a^{r}$, where $i\in \{1,3\}$, $0\le \lambda \le m-1$, $\lambda$ is even and $r\in U(m)$. We consider two sub-cases based on the image of the map $\alpha$. \vspace{.2cm} \n \textit{Sub-case ($i$):} Let $\alpha(b) = b$. Then using $\delta(a)^{\alpha(b)}\gamma(b) = \gamma(a\cdot b)\delta(a^{b})$, we get \n $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b}\gamma(b) = (a^{r})^{b}a^{\lambda} = a^{2t+1+(r-1)s+\lambda}$, which implies that \begin{equation*} \lambda(s+1)\equiv (r-1)(s-2t-1) \Mod{m}. \end{equation*} \n Therefore, $\lambda(2^{n-2}lu)\equiv 2^{n-2}q(r-1)(\frac{l-2q^{\prime}}{d}) \Mod{2^{n}q}$, which implies that $\lambda l\equiv d(r-1)(\frac{l-2q^{\prime}}{d}) \Mod{4d}$. Now, if $\lambda \equiv 0 \Mod{2d}$, then $r \equiv 3 \Mod{4}$. Again, if $\lambda \equiv 0 \Mod{4d}$, then $r \equiv 1 \Mod{4}$. Thus, in this sub-case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even, $\lambda \equiv 0 \Mod{2d}$, and $r\in U(m)$. \vspace{.2cm} \n \textit{Sub-case ($ii$):} Let $\alpha(b) = b^{3}$. Then using $\delta(a)^{\alpha(b)}\gamma(b) = \gamma(a\cdot b)\delta(a^{b})$, $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b^{3}}\gamma(b) = (a^{r})^{b^{3}}a^{\lambda} = a^{4t+2ts+1+(r-1)s+\lambda}$, which implies that \begin{equation*} (\lambda-2t)(s+1)\equiv (r-1)(s-2t-1) \Mod{m}.
\end{equation*} \n Therefore, $2^{n-2}lu(\lambda-2t)\equiv 2^{n-2}q(r-1)(\frac{l-2q^{\prime}}{d}) \Mod{2^{n}q}$, which implies that $l(\lambda-2t)\equiv d(r-1)(\frac{l-2q^{\prime}}{d}) \Mod{4d}$. Now, if $\lambda \equiv 0 \Mod{2d}$, then $r \equiv 1 \Mod{4}$. Again, if $\lambda \equiv 0 \Mod{4d}$, then $r \equiv 3 \Mod{4}$. Thus, in this sub-case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even, $\lambda \equiv 0 \Mod{2d}$, and $r\in U(m)$. \vspace{.2cm} \n Thus, combining both the \textit{sub-cases} $(i)$ and $(ii)$, we get that for all $\alpha\in Aut(H)$ the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even, $\lambda \equiv 0 \Mod{2d}$, and $r\in U(m)$. Therefore, using Theorem \ref{abcd}, $E \simeq \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m))$. Also, since $2t(s+1)\not\equiv 0 \Mod{m}$, using Lemma \ref{l3}, $Im(\beta) = \{1\}$. Thus, $B$ is a trivial group. Hence, using Theorem \ref{abcd}, $Aut(G) \simeq E\rtimes B \simeq \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m))$. \end{proof} \begin{theorem} Let $m=8q$, $t$ be odd and $\gcd(t,m) = d$, where $q>1$ is odd. Then \begin{equation*} Aut(G) \simeq \left\{\begin{array}{ll} (\mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}, & \text{if}\; 2t(s+1)\equiv 0 \Mod{m}\\ \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)), & \text{if}\; 2t(s+1)\not\equiv 0 \Mod{m} \end{array} \right.. \end{equation*} \end{theorem} \begin{proof} Let $q=du$ for some odd integer $u$. Then using $(G2)$, $s\equiv -1 \Mod{2u}$, which implies that $s = 2lu-1$, where $1\le l \le 4d$. Now, using $(G1)$, $l(lu-1)\equiv 0 \Mod{d}$. Using $(G3)$, we get \begin{equation}\label{e6} (t+1)(lu-1)\equiv 0 \Mod{2q}. \end{equation} \n \textit{Case($i$):} If $l$ is even, then by Equation (\ref{e6}), $t\equiv lu-1 \Mod{2q}$. Note that $2t(s+1)\equiv 2t(2lu)\equiv 0 \Mod{m}$ and $\lambda(s+1)= \lambda(2lu)$. Thus $\lambda(s+1)\equiv 0 \Mod{m}$ if and only if $\lambda l \equiv 0 \Mod{4d}$, which is true for all $\lambda\equiv 0 \Mod{2d}$. Therefore, using an argument similar to the one in the proof of Theorem \ref{t2}, we get $E\simeq \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m))$ and $B\simeq \mathbb{Z}_{2}$. Hence, by Theorem \ref{abcd}, $Aut(G) \simeq E\rtimes B\simeq (\mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m)))\rtimes \mathbb{Z}_{2}$. \vspace{.2cm} \n \textit{Case($ii$):} If $l$ is odd, then using Equation (\ref{e6}), one can easily observe that $\gcd(l,d) = 1$, which means that $lu-1 = dl^{\prime}$, where $1\le l^{\prime}\le 8u$ and $\gcd(l^\prime, u) = 1$. Since $lu-1$ is even, $l^{\prime}$ is even. Thus, using Equation (\ref{e6}), $(t+1)dl^{\prime} \equiv 0 \Mod{2q}$. Since $\gcd(l^{\prime},u) = 1$, $t = uq^{\prime}-1$, where $1\le q^{\prime}\le 8d$ and $q^{\prime}$ is even, as $t$ is odd. Now, $s-2t-1 = 2dl^{\prime}-2t = 2d(l^{\prime} - \frac{t}{d}) = 2d(\frac{ul-uq^{\prime}}{d}) = 2du\frac{l-q^{\prime}}{d}$. \vspace{.2cm} \n Let $(\alpha,\gamma,\delta)\in X$ be such that $\alpha(b) = b^{i}$, $\gamma(b) = a^{\lambda}$ and $\delta(a) = a^{r}$, where $i\in \{1,3\}$, $0\le \lambda \le m-1$, $\lambda$ is even and $r\in U(m)$. We consider two sub-cases based on the image of the map $\alpha$. \vspace{.2cm} \n \textit{Sub-case($i$):} Let $\alpha(b) = b$.
Then, $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b}\gamma(b) = (a^{r})^{b}a^{\lambda} = a^{2t+1+(r-1)s+\lambda}$, which implies that \begin{equation*} \lambda(s+1)\equiv (r-1)(s-2t-1) \Mod{m}. \end{equation*} Therefore, $\lambda(2lu)\equiv 2du(r-1)(\frac{l-q^{\prime}}{d}) \Mod{8q}$, which implies that $\lambda l\equiv d(r-1)(\frac{l-q^{\prime}}{d}) \Mod{4d}$. Now, if $\lambda \equiv 0 \Mod{2d}$, then $r \equiv 3 \Mod{4}$. Again, if $\lambda \equiv 0 \Mod{4d}$, then $r \equiv 1 \Mod{4}$. Thus, in this sub-case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even, $\lambda \equiv 0 \Mod{2d}$, and $r\in U(m)$. \vspace{.2cm} \n \textit{Sub-case($ii$):} Let $\alpha(b) = b^{3}$. Then, $a^{\lambda(s+2)+ (2t+1)r} = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{b^{3}}\gamma(b)$ $= (a^{r})^{b^{3}}a^{\lambda} = a^{4t+2ts+1+(r-1)s+\lambda}$, which implies that \begin{equation*} (\lambda-2t)(s+1)\equiv (r-1)(s-2t-1) \Mod{m}. \end{equation*} Therefore, $2lu(\lambda-2t)\equiv 2du(r-1)(\frac{l-q^{\prime}}{d}) \Mod{8q}$, which implies that $l(\lambda-2t)\equiv d(r-1)(\frac{l-q^{\prime}}{d}) \Mod{4d}$. Now, if $\lambda \equiv 0 \Mod{2d}$, then $r \equiv 1 \Mod{4}$. Again, if $\lambda \equiv 0 \Mod{4d}$, then $r \equiv 3 \Mod{4}$. Thus, in this sub-case, the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even, $\lambda \equiv 0 \Mod{2d}$, and $r\in U(m)$. \vspace{.2cm} \n Thus, combining both the \textit{sub-cases} $(i)$ and $(ii)$, we get that for all $\alpha\in Aut(H)$ the choices for the maps $\gamma$ and $\delta$ are $\gamma_{\lambda}(b) = a^{\lambda}$ and $\delta_{r}(a) = a^{r}$, where $\lambda$ is even, $\lambda \equiv 0 \Mod{2d}$, and $r\in U(m)$. Therefore, using Theorem \ref{abcd}, $X \simeq \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m))$. Also, since $2t(s+1)\not\equiv 0 \Mod{m}$, using Lemma \ref{l3}, $Im(\beta) = \{1\}$. Thus, $B$ is a trivial group. Hence, by Theorem \ref{abcd}, $Aut(G)\simeq E\rtimes B \simeq \mathbb{Z}_{\frac{m}{2d}}\rtimes(\mathbb{Z}_{2}\times U(m))$. \end{proof} \section{Automorphisms of Zappa-Sz\'{e}p product of groups $\mathbb{Z}_{p^{2}}$ and $\mathbb{Z}_{m}$, $p$ an odd prime} In \cite{ypm}, Yacoub classified the groups which are a Zappa-Sz\'{e}p product of cyclic groups of order $p^{2}$ and order $m$. He found that these are of the following types (see \cite[Conclusion, p.~38]{ypm}): \begin{align*} M_1 = & \langle a,b \;|\; a^{m} = 1 = b^{p^{2}}, ab = ba^u, u^{p^{2}}\equiv 1 \Mod{m}\rangle, \\ M_2 = & \langle a,b \;|\; a^{m} = 1 = b^{p^{2}}, ab = b^{t}a, t^{m}\equiv 1 \Mod{p^{2}}\rangle, \\ M_3 = & \langle a,b \;|\; a^{m} = 1 = b^{p^{2}}, ab = b^{t}a^{pr+1}, a^{p}b = ba^{p(pr+1)} \rangle, \end{align*} \n where $p$ is an odd prime and, in $M_3$, $p$ divides $m$. These three families are not pairwise non-isomorphic: the groups $M_1$ and $M_2$ may be isomorphic to the group $M_3$, depending on the values of $m$, $r$ and $t$. Clearly, $M_1$ and $M_2$ are semidirect products. Throughout this section $G$ will denote the group $M_3$, and we will only be concerned with the groups $M_3$ which are a Zappa-Sz\'{e}p product but not a semidirect product. Note that $G=H \bowtie K$, where $H=\langle b \rangle$ and $K=\langle a \rangle$.
For the group $G$, the mutual actions of $H$ and $K$ are defined by $a\cdot b = b^{t}$ and $a^{b} = a^{pr+1}$, along with $a^{p}\cdot b = b$ and $(a^{p})^{b} = a^{p(pr+1)}$, where $t$ and $r$ are integers satisfying the conditions \begin{itemize} \item[$(G1)$] $\gcd(t-1, p^{2}) = p$, that is, $t = 1+\lambda p$, where $\gcd(\lambda, p) = 1$, \item[$(G2)$] $\gcd(r,p) = 1$, \item[$(G3)$] $p(pr+1)^{p}\equiv p \Mod{m}$. \end{itemize} \begin{lemma}\label{s4l1} $a^{(pr+1)^{ip\lambda}} = a^{i((pr+1)^{p\lambda} - 1)+1}$, for all $i$. \end{lemma} \begin{proof} One can easily prove the result using $(G3)$. \end{proof} \begin{lemma}\label{s4l2} \begin{itemize} \item[$(i)$] $a\cdot b^{j} = b^{jt}$, for all $j$, \item[$(ii)$] $a^{l}\cdot b = b^{1+lp\lambda}$, for all $l$, \item[$(iii)$] $a^{(b^{j})} = a^{(pr+1)^{j}}$, for all $j$, \item[$(iv)$] $(a^{l})^{b} = a^{\frac{l(l-1)}{2}((pr+1)^{\lambda p}-1)+l(pr+1)}$, for all $l$, \item[$(v)$] $a^{l}\cdot b^{j} = b^{jt^{l}}$, for all $j$ and $l$, \item[$(vi)$] $(a^{l})^{b^{j}} = a^{\frac{jl(l-1)}{2}((pr+1)^{\lambda p}-1)+l(pr+1)^{j}}$, for all $j$ and $l$. \end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[$(i)$] Using $(C3)$ and $(C5)$, $a\cdot b^{2} = (a\cdot b)(a^{b}\cdot b) = b^{t}(a^{pr+1}\cdot b) = b^{t}(a\cdot (a^{pr}\cdot b)) = b^{t}(a\cdot b) = b^{2t}$. Similarly, $a\cdot b^{3} = (a\cdot b)(a^{b}\cdot b^{2}) = b^{t}(a^{pr+1}\cdot b^{2}) = b^{t}(a\cdot (a^{pr}\cdot b^{2})) = b^{t}(a\cdot b^{2}) = b^{3t}$. Inductively, we get $a\cdot b^{j} = b^{jt}$ for all $j$. \item[$(ii)$] Using $(C3)$ and part $(i)$, $a^{2}\cdot b = a\cdot (a\cdot b) = a\cdot b^{t} = b^{t^{2}} = b^{1+2p\lambda}$. Similarly, $a^{3}\cdot b = a\cdot (a^{2}\cdot b) = a\cdot b^{t^{2}} = b^{t^{3}} = b^{1+3p\lambda}$. Inductively, we get $a^{l}\cdot b = b^{1+lp\lambda}$ for all $l$. \item[$(iii)$] First, note that, using $(C4)$, we have $(a^{lp})^{b} = a^{lp(pr+1)}$. Now, using $(C4)$ and $(C6)$, $a^{(b^{2})} = (a^{b})^{b} = (a^{pr+1})^{b} = a^{(a^{pr}\cdot b)}(a^{pr})^{b} = a^{b}a^{pr(pr+1)} = a^{(pr+1)^{2}}$. Similarly, $a^{(b^{3})} = (a^{b})^{b^{2}} = (a^{pr+1})^{b^{2}} = a^{(a^{pr}\cdot b^{2})}(a^{pr})^{b^{2}} = a^{b^{2}}$ $((a^{pr})^{b})^{b} = a^{(pr+1)^{2}}(a^{pr(pr+1)})^{b} = a^{(pr+1)^{2}}a^{pr(pr+1)^{2}} = a^{(pr+1)^{3}}$. Inductively, we get $a^{(b^{j})} = a^{(pr+1)^{j}}$ for all $j$. \item[$(iv)$] Using $(C4)$, $(G3)$ and part $(iii)$, $(a^{2})^{b} = a^{(a\cdot b)}a^{b} = a^{(b^{t})}a^{pr+1} = a^{(pr+1)^{(1+\lambda p)}}$ $a^{pr+1} = a^{(pr+1)^{\lambda p}+ pr(pr+1)^{\lambda p} + pr+1} = a^{((pr+1)^{\lambda p} -1)+ 2(pr+1)}$. By a similar argument, we get \begin{align*} (a^{3})^{b} =& (a^{2})^{(a\cdot b)}a^{b}\\ =& (a^{2})^{b^{t}}a^{pr+1}\\ =& a^{(a\cdot b^{t})}a^{(b^{t})}a^{pr+1}\\ =& a^{(b^{1+2p\lambda})}a^{(b^{1+\lambda p})}a^{pr+1}\\ =& a^{(pr+1)^{1+2p\lambda}+(pr+1)^{1+p\lambda}+ pr+1}\\ =& a^{(pr+1)^{2p\lambda}+pr(pr+1)^{2p\lambda}+ (pr+1)^{p\lambda} + pr(pr+1)^{p\lambda}+ pr+1}\\ =& a^{2((pr+1)^{p\lambda}-1) + 1 + pr + (pr+1)^{p\lambda} + pr + pr+1},\;\text{(using Lemma \ref{s4l1})}\\ =& a^{3((pr+1)^{p\lambda} -1)+ 3(pr+1)}. \end{align*} \n Inductively, we get $(iv)$. \item[$(v)$] Follows inductively, using parts $(i)$ and $(ii)$. \item[$(vi)$] Follows inductively, using parts $(iii)$ and $(iv)$.
\end{itemize} \end{proof} \begin{lemma}\label{s4l3} If for all $l\ne 0$, $(pr+1)^{pl}\not\equiv 1 \Mod{m}$, then \begin{itemize} \item[$(i)$] $Im(\gamma)\subseteq \langle a^{p}\rangle$, \item[$(ii)$] $\alpha\in Aut(H)$. \end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[$(i)$] Let $\alpha(b) = b^{i}$ and $\gamma(b) = a^{\mu}$. Then using $(A1)$ and Lemma \ref{s4l2} $(v)$, $\alpha(b^{2}) = \alpha(b)(\gamma(b)\cdot \alpha(b)) = b^{i}(a^{\mu} \cdot b^{i}) = b^{i(1+t^{\mu})}$. Inductively, we get \begin{align*} \alpha(b^{u}) &= b^{i(1+ t^{\mu} + t^{2\mu} + \cdots + t^{(u-1)\mu})}\\ &= b^{i(1+ (1+p\mu\lambda) + (1+ 2p\mu\lambda) + \cdots + (1+ (u-1)p\mu\lambda))}\\ &= b^{i(u+ \frac{u(u-1)}{2}p\mu\lambda)} \end{align*} \n for all $0\le u \le p^{2}-1$. Now, using $(A2)$ and Lemma \ref{s4l2} $(vi)$, $\gamma(b^{2}) = \gamma(b)^{\alpha(b)}\gamma(b) = (a^{\mu})^{b^{i}}a^{\mu} = a^{\frac{i\mu(\mu-1)}{2}((pr+1)^{p\lambda} - 1)+ \mu(pr+1)^{i}+ \mu}$. Inductively, we get \begin{align*} \gamma(b^{u}) = a^{(i\frac{u(u-1)\mu(\mu-1)}{2}+ i\mu^{2}\frac{u(u-1)(u-2)}{6})((pr+1)^{p\lambda} - 1)+\mu \sum_{\nu=0}^{u-1}{(pr+1)^{i\nu}}} \end{align*} \n for all $0\le u \le p^{2}-1$. Now, using $(G3)$, $1 = \gamma(b^{p^{2}}) = a^{\mu\sum_{\nu=0}^{p^{2}-1}{(pr+1)^{i\nu}}} = a^{\mu\left(\frac{(pr+1)^{ip^{2}}-1}{(pr+1)^{i}-1}\right)}$, which implies that \begin{equation}\label{s4e1} \mu\left(\frac{(pr+1)^{ip^{2}}-1}{(pr+1)^{i}-1}\right)\equiv 0 \Mod{m}. \end{equation} \n If $(pr+1)^{pl}\equiv 1 \Mod{m}$ for all $l\ne 0$, then by Equation (\ref{s4e1}), $\mu$ is arbitrary. On the other hand, if $(pr+1)^{pl}\not\equiv 1 \Mod{m}$ for all $l\ne 0$, then by Equation (\ref{s4e1}) and $(G3)$, $\mu \equiv 0 \Mod{p}$. Also, note that in both cases, namely $(pr+1)^{pl}\equiv 1 \Mod{m}$ and $(pr+1)^{pl}\not\equiv 1 \Mod{m}$, we have that $\gamma(b^{u}) = a^{\mu \sum_{\nu=0}^{u-1}{(pr+1)^{i\nu}}}$. Hence, if $(pr+1)^{pl}\not\equiv 1 \Mod{m}$, then $\gamma(b^{u}) = a^{\mu \sum_{\nu=0}^{u-1}{(pr+1)^{i\nu}}}\in \langle a^{p}\rangle$. \item[$(ii)$] Follows immediately using part $(i)$. \end{itemize} \end{proof} \begin{lemma}\label{s4l4} Let $\begin{pmatrix} \alpha & \beta\\ \gamma & \delta \end{pmatrix} \in \mathcal{A}$. If $\beta\in Q$, then \begin{itemize} \item[$(i)$] $\beta\in Hom(K,H)$ and $Im(\beta)\le \langle b^{p}\rangle$, \item[$(ii)$] $l(pr+1)^{j}\equiv l \Mod{m}$, for all $l$, \item[$(iii)$] $\gamma(h)\cdot \beta(k) = \beta(k)$ and $\gamma(h)^{\beta(k)} = \gamma(h)$, for all $h\in H$ and $k\in K$, \item[$(iv)$] $\gamma\beta = 0$, where $0$ is the trivial homomorphism in $Hom(K,K)$, \item[$(v)$] $\gamma\beta + \delta \in Aut(K)$ and $\gamma\beta + \delta\in S$, \item[$(vi)$] $\beta\gamma \in Hom(H,H)$, \item[$(vii)$] $\alpha+\beta\gamma \in Aut(H)$ and $\alpha+\beta\gamma \in P$. \end{itemize} \end{lemma} \begin{proof} Let $\beta(a) = b^{j}$. Then using $(A3)$, $\beta(a^{2}) = \beta(a)(a\cdot \beta(a)) = b^{j}(a\cdot b^{j}) = b^{j+jt}$. Inductively, we get \[\beta(a^{l}) = b^{j(1+t+t^{2}+ \cdots + t^{l-1})} = b^{j(1 + (1+\lambda p) + (1+ 2\lambda p)+ \cdots + (1+(l-1)\lambda p))} = b^{j(l+\lambda p\frac{l(l-1)}{2})}.\] \begin{itemize} \item[$(i)$] Since $\beta\in Q$, $\beta(k^{h}) = \beta(k)$. Therefore, $b^{j} = \beta(a) = \beta(a^{b}) = \beta(a^{pr+1}) = b^{j(pr+1)}$, which implies that $jpr + j \equiv j \Mod{p^{2}}$. Since $\gcd(r,p) = 1$, $j\equiv 0 \Mod{p}$. Thus, $\beta(a^{l}) = b^{jl}\in \langle b^{p}\rangle$, for all $l$.
Hence, one can easily observe that $\beta$ is a group homomorphism and $Im(\beta)\le \langle b^{p}\rangle$. \item[$(ii)$] Since $\beta\in Q$, $k^{\beta(k^{\prime})} = k$. Therefore, using Lemma \ref{s4l2} $(vi)$, $a^{l} = (a^{l})^{\beta(a)} = (a^{l})^{b^{j}} = a^{\frac{jl(l-1)}{2}((pr+1)^{\lambda p}-1)+l(pr+1)^{j}}$. Now, using part $(i)$ and $(G3)$, we get $l(pr+1)^{j}\equiv l \Mod{m}$, for all $l$. \item[$(iii)$] First, note that $a^{l}\cdot b^{p} = b^{p}$ and, using part $(ii)$, $(a^{l})^{b^{p}} = a^{l}$, for all $l$. Hence, the result follows using part $(i)$. \item[$(iv)$] Using Lemma \ref{s4l3} $(i)$, we have $\gamma(b^{u}) = a^{\mu \sum_{\nu=0}^{u-1}{(pr+1)^{i\nu}}}$, for all $u$. Then, using part $(ii)$, for all $l$ we get \[\gamma\beta(a^{l}) = \gamma(b^{lj}) = a^{\mu \sum_{\nu=0}^{lj-1}{(pr+1)^{i\nu}}} = a^{{\mu}\left(\frac{(pr+1)^{ijl}-1}{(pr+1)^{i}-1}\right)} = 1.\] Thus, $\gamma\beta = 0$. \item[$(v)$] Follows directly using part $(iv)$. \item[$(vi)$] Using $\beta(k^{h}) = \beta(k)$ and part $(i)$, $\beta\gamma(hh^{\prime}) = \beta(\gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime})) = \beta(\gamma(h)^{\alpha(h^{\prime})}) \beta(\gamma(h^{\prime})) = \beta(\gamma(h))\beta(\gamma(h^{\prime}))$. Hence, $\beta\gamma \in Hom(H,H)$. \item[$(vii)$] Using Lemma \ref{s4l3} $(i)$, we have $\gamma(b^{u}) = a^{\mu \sum_{\nu=0}^{u-1}{(pr+1)^{i\nu}}}$, for all $u$. Also, using part $(i)$, we have $\beta\gamma(b^{u}) = b^{uj\mu}$, for all $u$. Therefore, \[(\alpha+\beta\gamma)(b^{u}) = b^{u(i+j\mu+p\mu\lambda\frac{u-1}{2})}.\] \n Now, one can easily observe that $\alpha +\beta\gamma$ is a bijection. Hence, using part $(vi)$, $\alpha+\beta\gamma \in Aut(H)$. \vspace{.2cm} \n Now, using part $(i)$, $(C5)$ and $(C6)$, $k\cdot (\alpha+\beta\gamma)(h) = k\cdot \alpha(h)\beta\gamma(h) = (k\cdot \alpha(h))(k^{\alpha(h)}\cdot \beta(\gamma(h))) = \alpha(k\cdot h)\beta(\gamma(h)) = \alpha(k\cdot h)\beta\gamma(k\cdot h) = (\alpha+\beta\gamma)(k\cdot h)$ and $k^{(\alpha+\beta\gamma)(h)} = k^{\alpha(h)\beta\gamma(h)} = (k^{\alpha(h)})^{\beta\gamma(h)} = k^{\alpha(h)} = k^{h}$. Hence, $\alpha+\beta\gamma \in P$. \end{itemize} \end{proof} \n Note that, using Lemma \ref{s4l4} $(iii)$, multiplication in the group $\mathcal{A}$ reduces to the usual multiplication of matrices. \begin{theorem}\label{s4t1} Let $A,B,C$ and $D$ be defined as above. Then $Aut(G) = ABCD$. \end{theorem} \begin{proof} Using Lemma \ref{s4l4} $(vii)$, $\alpha+\beta\gamma \in P$. In particular, $1-\beta\gamma \in P$. Therefore, by Theorem \ref{s2t1}, we have $Aut(G) = ABCD$. \end{proof} \begin{theorem}\label{s4t2} Let $G$ be as above. Then \begin{equation*} |Aut(G)| = \left\{\begin{array}{ll} p^{2}m\frac{\phi(m)}{p-1}, & \text{if}\; (pr+1)^{p}\equiv 1 \Mod{m}\\ pm\frac{\phi(m)}{p-1}, & \text{if}\; (pr+1)^{p}\not\equiv 1 \Mod{m} \end{array}\right.. \end{equation*} \end{theorem} \begin{proof} Let $\beta \in Q$. Then using Lemma \ref{s4l4} $(i)$, we have that $\beta(a^{l}) = b^{jl}$, where $j\equiv 0 \Mod{p}$. Thus, $B \simeq \mathbb{Z}_{p}$. Now, let $(\alpha,\gamma,\delta)\in X$ be such that $\alpha(b) = b^{i}$, $\gamma(b) = a^{\mu}$ and $\delta(a) = a^{s}$, where $i\in \mathbb{Z}_{p^{2}}$, $\gcd(i,p^{2}) = 1$, $0\le \mu \le m-1$, and $s\in U(m)$.
Then using $\alpha(hh^{\prime}) = \alpha(h)(\gamma(h)\cdot \alpha(h^{\prime}))$, $\gamma(hh^{\prime}) = \gamma(h)^{\alpha(h^{\prime})}\gamma(h^{\prime})$ and Lemma \ref{s4l3} $(i)$, we have \begin{equation}\label{s4e2} \alpha(b^{u}) = b^{i(u+ \frac{u(u-1)}{2}p\mu\lambda)} \;\text{and}\; \gamma(b^{u}) = a^{\mu \sum_{\nu=0}^{u-1}{(pr+1)^{i\nu}}}. \end{equation} \n Now, using $\delta(k)\cdot \alpha(h) = \alpha(k\cdot h)$ (as $(\alpha,\gamma,\delta)\in X$), we get $b^{it} = \alpha(b^{t}) = \alpha(a\cdot b) = \delta(a)\cdot \alpha(b) = a^{s}\cdot b^{i} = b^{it^{s}}$. Thus, $it^{s} \equiv it \Mod{p^{2}}$, which implies that $(1+p\lambda)^{s-1} \equiv 1 \Mod{p^{2}}$. Therefore, $s\equiv 1 \Mod{p}$. Using $\delta(k)^{\alpha(h)}\gamma(h) = \gamma(k\cdot h)\delta(k^{h})$ (as $(\alpha,\gamma,\delta)\in X$), $(G3)$ and the fact that $s\equiv 1 \Mod{p}$, we get $a^{\mu \sum_{\nu=0}^{t-1}{(pr+1)^{i\nu}}+s(pr+1)} = \gamma(b^{t})\delta(a^{pr+1}) = \gamma(a\cdot b)\delta(a^{b}) = \delta(a)^{\alpha(b)}\gamma(b) = (a^{s})^{b^{i}}a^{\mu}$ $= a^{\frac{is(s-1)}{2}((pr+1)^{\lambda p}-1)+s(pr+1)^{i}}a^{\mu} = a^{s(pr+1)^{i} + \mu}$. Thus, $\mu \sum_{\nu=0}^{t-1}{(pr+1)^{i\nu}}+s(pr+1)\equiv s(pr+1)^{i}+\mu \Mod{m}$. Therefore, \begin{align*} \mu +s(pr+1)^{i} &\equiv \mu\left(\frac{(pr+1)^{it}-1}{(pr+1)^{i}-1}\right) +s(pr+1) \Mod{m}\\ & \equiv \mu\left(\frac{(pr+1)^{i(1+p\lambda)}-1}{(pr+1)^{i}-1}\right) +s(pr+1) \Mod{m}\\ & \equiv \mu\left(\frac{(pr+1)^{i}(pr+1)^{ip\lambda}-1}{(pr+1)^{i}-1}\right) +s(pr+1)\Mod{m}. \end{align*} \n We consider two cases, namely $(pr+1)^{p}\equiv 1 \Mod{m}$ and $(pr+1)^{p}\not\equiv 1 \Mod{m}$. \vspace{.2cm} \n \textit{Case($i$).} If $(pr+1)^{p}\equiv 1 \Mod{m}$, then $\mu + s(pr+1)^{i}\equiv \mu + s(pr+1) \Mod{m}$, which implies that $i\equiv 1 \Mod{p}$. Thus, in this case, the choices for the maps $\alpha$, $\gamma$ and $\delta$ are $\alpha_{i}(b) = b^{i}$, $\gamma_{\mu}(b) = a^{\mu}$, and $\delta_{s}(a) = a^{s}$, where $i\in U(p^{2})$, $i \equiv 1 \Mod{p}$, $0\le \mu \le m-1$, $s\in U(m)$, and $s\equiv 1 \Mod{p}$. \vspace{.2cm} \n \textit{Case($ii$).} If $(pr+1)^{p}\not\equiv 1 \Mod{m}$, then using Lemma \ref{s4l3}, $\mu \equiv 0 \Mod{p}$. Therefore, $\mu + s(pr+1)^{i}\equiv \mu + s(pr+1) \Mod{m}$, which implies that $i\equiv 1 \Mod{p}$. Thus, in this case, the choices for the maps $\alpha$, $\gamma$ and $\delta$ are $\alpha_{i}(b) = b^{i}$, $\gamma_{\mu}(b) = a^{\mu}$, and $\delta_{s}(a) = a^{s}$, where $i\in U(p^{2})$, $i \equiv 1 \Mod{p}$, $0\le \mu \le m-1$, $\mu \equiv 0 \Mod{p}$, $s\in U(m)$ and $s\equiv 1 \Mod{p}$. \vspace{.2cm} \n From both Cases $(i)$ and $(ii)$, we observe that, for all $\mu$, $i\equiv 1 \Mod{p}$ and $s\equiv 1 \Mod{p}$. Using these conditions, we first find the structure of $Aut(G)$. \vspace{.2cm} \n Since $A\times D$ normalizes $C$, $M$ normalizes $C$. So, clearly, $C\triangleleft E$ and $M\cap C = \{1\}$. Now, if $\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix} \in E$, then \[\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha & 0\\ 0 & \delta \end{pmatrix}\begin{pmatrix} 1 & 0\\ \delta^{-1}\gamma & 1 \end{pmatrix} \in MC.\] \n Thus $E = C\rtimes M$.
Now, using Lemma \ref{s4l4} $(iii)$ and $(iv)$, we get \begin{equation}\label{s4e3} \begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix}\begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \alpha+ \beta\gamma & (\alpha+ \beta\gamma)(-\beta) + \beta\delta\\ \gamma & \delta \end{pmatrix}. \end{equation} \n Using Lemma \ref{s4l4} $(i)$ and $(ii)$, we have \vspace{.2cm} \n $((\alpha+ \beta\gamma)(-\beta) + \beta\delta)(a) =(\alpha+ \beta\gamma)(-\beta)(a)(\beta\delta)(a)=(\alpha+\beta\gamma)(b^{-j})\beta(a^{s})= \alpha(b^{-j})\beta(\gamma(b^{-j}))b^{sj}= b^{-ij}\beta(a^{\mu\sum_{\nu=0}^{-j-1}{(pr+1)^{i\nu}}})b^{sj}= b^{j(s-i)}\beta\left(a^{\mu\left(\frac{(pr+1)^{-ij}-1}{(pr+1)^{i}-1}\right)}\right)= \beta(1) = 1$. Thus, $(\alpha+ \beta\gamma)(-\beta) + \beta\delta = 0$. Also, using Lemma \ref{s4l4} $(vii)$, one can easily observe that $(\alpha+\beta\gamma, \gamma, \delta)\in X$. Therefore, by Equation (\ref{s4e2}), \[\begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}\begin{pmatrix} \alpha & 0\\ \gamma & \delta \end{pmatrix}\begin{pmatrix} 1 & \beta\\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \alpha+ \beta\gamma & 0\\ \gamma & \delta \end{pmatrix}\in E.\] \n Thus $E \triangleleft \mathcal{A}$. Clearly, $E\cap B = \{1\}$. Now, if $\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}\in \mathcal{A}$, then using $\gamma\alpha^{-1}\beta = 0$, we get \[\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} = \begin{pmatrix} \alpha & 0 \\ \gamma & \delta \end{pmatrix}\begin{pmatrix} 1 & \alpha^{-1}\beta \\ 0 & 1 \end{pmatrix}\in EB.\] \n Hence, $\mathcal{A} = E\rtimes B$, and so $Aut(G)\simeq E\rtimes B\simeq (C\rtimes (A\times D))\rtimes B$. \vspace{.2cm} \n Thus, $|X|= p\times m\times\frac{\phi(m)}{p-1} = pm\frac{\phi(m)}{p-1}$ and $|Aut(G)| = |X||B| = pm\frac{\phi(m)}{p-1}\times p = p^{2}m\frac{\phi(m)}{p-1}$ in \textit{Case($i$)}, and $|X|= p\times\frac{m}{p}\times\frac{\phi(m)}{p-1} = m\frac{\phi(m)}{p-1}$ and $|Aut(G)| = |X||B| = m\frac{\phi(m)}{p-1}\times p = pm\frac{\phi(m)}{p-1}$ in \textit{Case($ii$)}. \end{proof} \n Note that, in Theorem \ref{s4t2}, we have $B \simeq \mathbb{Z}_{p}$, $\langle \alpha \rangle \simeq \mathbb{Z}_{p}$, and $\langle \gamma \rangle \simeq \mathbb{Z}_{m}$ or $\langle \gamma \rangle \simeq \mathbb{Z}_{\frac{m}{p}}$.
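\vspace{.2cm} \n As an informal sanity check of the counting step above (and not as part of the proof), the number of parameter triples $(i,\mu,s)$ appearing in \textit{Case($i$)} of the proof of Theorem \ref{s4t2} can be compared with the closed formula $|X| = pm\frac{\phi(m)}{p-1}$ by brute-force enumeration. The following Python sketch does this for the assumed example values $p=3$ and $m=9$; it verifies only this elementary count, not the group-theoretic statements.
\begin{verbatim}
from math import gcd

def phi(n):
    # Euler's totient function by direct enumeration (fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Assumed example values: p an odd prime dividing m.
p, m = 3, 9

# Case (i): i in U(p^2) with i = 1 (mod p), mu in {0,...,m-1}
# unrestricted, and s in U(m) with s = 1 (mod p).
i_count = sum(1 for i in range(1, p * p + 1)
              if gcd(i, p * p) == 1 and i % p == 1)
mu_count = m
s_count = sum(1 for s in range(1, m + 1)
              if gcd(s, m) == 1 and s % p == 1)

size_X = i_count * mu_count * s_count      # |X| by enumeration
formula = p * m * phi(m) // (p - 1)        # p * m * phi(m)/(p-1)
print(size_X, formula, size_X == formula)  # expected output: 81 81 True
\end{verbatim}
\n Multiplying either count by $|B| = p$ recovers $|Aut(G)| = p^{2}m\frac{\phi(m)}{p-1}$ for this case.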
\tableofcontents \section{Introduction} Symbolic coding of dynamical systems $(X,T)$ in the form of measurable factor maps into shift spaces over finite alphabets $(X,T)\rightarrow (Y,\sigma)\subset (\{1,2,\ldots, a\}^\mathbb{Z},\sigma)$ has played a prominent role in the theory of dynamical systems since its inception (\cite{hadamard1898surfaces,morse1921recurrent,milnor1988iterated}). Requiring the factor maps to be continuous is usually impossible. It is however meaningful to look for a symbolic system $(Y,\sigma)\subset (\{1,2,\ldots, a\}^\mathbb{Z},\sigma)$ for which the original system occurs as a continuous factor $(Y,\sigma)\rightarrow (X,T)$. In other words this form of ``digitization'' corresponds to symbolic extensions. However, as $(Y,\sigma)$ is an extension of $(X,T)$, it is necessarily ``more complex'' than $(X,T)$. Viewing $(Y,\sigma)$ as a model of $(X,T)$ one strives to minimize this ``complexity gap''. This may be formalized mathematically by various conditions such as requiring the extension to be principal or strongly isomorphic\footnote{See Definition \ref{df:principal}.}. The associated theory for $\mathbb{Z}$-systems is deep and extensive (\cite{BDz2004, downarowicz2005entropy,D11}). Recently a symbolic extension theory for $\mathbb{R}$-systems, i.e.\ \textit{topological flows}, has been put forth (\cite{burguet2019symbolic}). The present paper is a further contribution in this direction, specifically to the theory of symbolic extensions of expansive topological flows. Our main tool is the \textit{small flow boundary property}. As a dynamical analog of Lebesgue covering dimension zero, the {\it small boundary property} for a $\mathbb{Z}$-system $(X,T)$ was introduced by Shub and Weiss in \cite{SW}, who investigated the question under which conditions a given $\mathbb{Z}$-system has factors with strictly lower entropy. Later it was realized that the small boundary property has wider applicability. Notably Lindenstrauss and Weiss \cite{LW} showed that a $\mathbb{Z}$-system which has the small boundary property must have mean dimension zero. Moreover a $\mathbb{Z}$-system with the small boundary property has a zero-dimensional strongly isomorphic extension (\cite[p. 4338]{burguet2019symbolic} based on \cite{downarowicz2005entropy}). From \cite{L95} it follows that a finite-dimensional $\mathbb{Z}$-system without periodic points has the small boundary property\footnote{In effect from \cite[Theorem 3.3]{L95} it follows that an infinite $\mathbb{Z}$-system ($|X|=\infty$) with a finite number of periodic points has the small boundary property.}. Burguet \cite{burguet2019symbolic} introduced the {\it small flow boundary property} for topological flows as an analog to the small boundary property. He showed that flows with the small flow boundary property admit strongly isomorphic zero-dimensional extensions and gave necessary and sufficient conditions for the existence of \textit{symbolic} extensions\footnote{A topological flow is said to admit a symbolic extension, respectively a zero-dimensional extension, if it has an extension by a suspension flow over a subshift, respectively a zero-dimensional system, with a positive continuous roof function.} for such flows in terms of the existence of \textit{superenvelopes} (\cite[Theorem 3.6]{burguet2019symbolic}). This can be seen as a certain generalization of the Boyle-Downarowicz symbolic extension entropy theorem (\cite{BDz2004}).
Burguet \cite{burguet2019symbolic} showed that a $C^2$-flow without fixed points\footnote{Flows without fixed points are known as \emph{regular flows} but we will not use this terminology in this paper.}, such that for any $\tau>0$ the number of periodic orbits of period less than $\tau$ is finite, has the small flow boundary property. In our main theorem we manage to remove the smoothness assumption: \begin{thm_a} Let $X$ be a compact finite-dimensional space. Let $\Phi$ be a topological flow on $X$ without fixed points, having a countable number of periodic orbits. Then $(X, \Phi)$ has the small flow boundary property. \end{thm_a} Expansive $\mathbb{Z}$-systems were introduced as early as 1950 by Utz \cite{utz1950unstable}. In \cite{reddy1968lifting} and \cite{keybob} it was shown that expansive $\mathbb{Z}$-systems admit symbolic extensions\footnote{From the work of Boyle and Downarowicz \cite{BDz2004} it follows that an expansive $\mathbb{Z}$-system admits a symbolic extension of the same entropy.}. The notion of expansiveness for flows was introduced by Bowen and Walters \cite{bowen1972expansive}, who proved that expansiveness is invariant under topological conjugacy and that the topological entropy of an expansive flow is finite. In addition they constructed a symbolic extension for expansive flows. They asked whether this symbolic extension preserves entropy. More precisely, they made use of closed cross-sections to build a symbolic extension and wondered if one could choose these closed cross-sections carefully so that the associated symbolic extension has the same topological entropy as the original system. Burguet \cite{burguet2019symbolic} gave a positive answer to this question for $C^2$-expansive flows. In this paper, we give an affirmative answer for all expansive flows. \begin{thm_b} Let $(X, \Phi)$ be an expansive flow. Then it has a strongly isomorphic symbolic extension. \end{thm_b} \subsection*{Structure of the paper} In Section \ref{sec:Preliminaries}, we recall basic notions related to discrete dynamical systems and topological flows. In Section \ref{sec:Establishing the small flow boundary property}, we prove that finite-dimensional topological flows without fixed points, having a countable number of periodic orbits, satisfy the small flow boundary property (Theorem A). In Section \ref{sec:Expansive flows have strongly isomorphic symbolic extensions}, we recall the definition of expansive flows and the Bowen-Walters construction of symbolic extensions for expansive flows, and show that any expansive topological flow has a strongly isomorphic symbolic extension (Theorem \ref{thm:thm B}). This answers an open question of Bowen and Walters \cite{bowen1972expansive}. In the Appendix (Section \ref{sec:Appendix}), we review the construction of a complete family of cross-sections for a topological flow without fixed points, following Bowen and Walters. \subsection*{Acknowledgements} We are grateful to David Burguet who told us about the main problem of the paper and conveyed to us his strong conviction in the feasibility of a positive solution (see also \cite[Remark 2.3]{burguet2019symbolic}). \section{Preliminaries}\label{sec:Preliminaries} \subsection{Notation}\label{subsec:notation} Let $(X,d)$ be a metric space. Let $x\in S\subset X$. Denote by $B_S(x,\epsilon)=\{y\in S|\, d(x,y)<\epsilon\}$ the open $\epsilon$-ball in $S$ around $x\in S$. If $S$ is clear from the context, it may be omitted from the notation.
Let $C=\{A_1,\ldots, A_n\}$ be a set of sets $A_i\subset X$, $i=1,\ldots,n$. We denote $\bigcup C=\cup_{i=1}^n A_i\subset X$ and $\bigcap C=\cap_{i=1}^n A_i\subset X$. Let $S,V\subset X$. Denote by $\partial_S V$ and $\text{\rm Int}_S V$ respectively the \textbf{boundary} and \textbf{interior} of $V$ w.r.t.\ the subspace topology induced by $S$. Let $S\subset X$ be a closed subset and $Q\subset S$ be a subset in $S$. Fix $\epsilon>0$. The \textbf{open}, respectively \textbf{closed $\epsilon$-tube around $Q$}, is the set $\Theta^S_{\epsilon}(Q):=\{y\in S|\, d(y,\overline{Q})< \epsilon\}$, respectively $\overline{\Theta}^S_{\epsilon}(Q):=\{y\in S|\, d(y,\overline{Q})\leq \epsilon\}$. Let $O\subset S$ be an open subset in $S$. The \textbf{open}, respectively \textbf{closed internal $(-\epsilon)$-tube of $O$ inside $S$}, is the set $\Theta^S_{-\epsilon}(O):=\{y\in O|\, d(y,S\setminus O)> \epsilon\}$, respectively $\overline{\Theta}^S_{-\epsilon}(O):=\{y\in O|\, d(y,S\setminus O)\geq \epsilon\}$. If $S$ is clear from the context, it may be omitted from the notation. \subsection{Discrete dynamical systems and topological flows} A pair $(X,T)$ is called a {\bf (discrete) dynamical system} if $X$ is a compact metric space and $T:X\to X$ is a homeomorphism. For a second countable metrizable space $Y$ and a subset $A\subset Y$, we denote by $\dim(A)$ the Lebesgue covering dimension of $A$.\footnote{For second countable normal spaces (metrizable by the Urysohn metrization theorem), the Lebesgue covering dimension equals the small inductive dimension (see \cite[\S 6.2]{fedorchuk1990fundamentals}).} We use the convention $\dim(B)=-1$ iff $B=\emptyset$. \begin{df} A \textbf{topological flow} is a pair $(X, \Phi)$ where $X$ is a compact metrizable space and $\Phi: X\times \mathbb{R} \to X$ is a continuous flow on $X$, that is, the map $\Phi$ is continuous, $\Phi(\cdot, 0)$ is the identity map on $X$ and $\Phi(\Phi(x,t), s)=\Phi(x, t+s)$ for all $t,s\in \mathbb{R}$ and $x\in X$. For $t\in \mathbb{R}$, we sometimes use the notation $\Phi_t(x)=\Phi(x, t)$ and notice that $\Phi_t: X\rightarrow X$ is a homeomorphism. In addition, for $L\subset \mathbb{R}$ and $S\subset X$, we denote $\Phi_{L}(S)=\{\Phi_t(x)\,|\, t\in L,\, x\in S\}$. Throughout the text, we fix a compatible metric $d$ on $X$. Given a set $\emptyset \not= A\subset X$ and $x\in X$, we define $d(x,A)=\inf_{y\in A} d(x,y)$, as well as $d(x,\emptyset)=\infty$. \end{df} \begin{df} A point $x\in X$ is called a \textbf{fixed point} if $\Phi_t(x)=x$ for all $t$. A point $x\in X$ is a \textbf{periodic point} with \textbf{period} $\tau>0$ if $\Phi_\tau(x)=x$ and $\Phi_t(x)\not=x$ for any $0<t<\tau$. In the latter case, the set $\{\Phi_t(x)\,|\,0\leq t \leq \tau\}$ is called the \textbf{periodic orbit} associated with $x$. \end{df} \begin{df} The flow $(X, \Phi)$ is said to be \textbf{aperiodic} if it has no periodic orbits, that is, the equation $\Phi_t(x)=x$ implies $t=0$. \end{df} \subsection{Cross-sections} \begin{df}\label{df:cross-section} A \textbf{cross-section} of \textbf{injectivity time} $\eta>0$ is a subset $S\subset X$ such that the restriction of $\Phi$ to $S\times [-\eta, \eta]$ is one-to-one. The cross-section is said to be \textbf{global} if there is $\xi>0$ such that $\Phi(S\times [-\xi, \xi])=X$. The set $\Phi_{[-\eta, \eta]}(S)$ is called the \textbf{$\eta$-cylinder} associated with $S$.
\end{df} \begin{rem} In \cite{burguet2019symbolic}, a closed cross-section $S$ such that the flow map $\Phi: (x, t) \mapsto \Phi_t(x)$ is a surjective local homeomorphism from $S \times \mathbb{R}$ to $X$ is called a \textit{Poincaré cross-section}, and it is shown there that this is strictly stronger than $S$ being a closed and global cross-section. \end{rem} \begin{df} Let $(X, \Phi)$ be a topological flow. A finite family $\mathcal{S}=\{S_i \}_{i=1}^{N}$ of disjoint closed cross-sections, each of injectivity time $\eta$, is said to be \textbf{complete} if $\bigcup_{i=1}^{N}\Phi_{[-\frac{\eta}{2},\frac{\eta}{2}]}(S_i)=X$. It follows that $\mathcal{G}=\cup_{i=1}^{N}S_i$ is a closed global cross-section. \end{df} A trivial observation is that if $x\in S$ and $\Phi_t x\in S$ for some $t\not=0$, where $S$ is a cross-section of injectivity time $\eta>0$, then $|t|> 2\eta$: otherwise $\Phi_{\frac t 2}x\in \Phi_{-\frac t 2}S\cap \Phi_{\frac t 2}S$, contradicting the fact that $\Phi_{-\frac t 2}S\cap \Phi_{\frac t 2}S=\emptyset$ for $0<|t|\le 2\eta$, by injectivity. \subsection{Flow boundaries and interiors}\label{subsec:Calculating flow boundaries and interiors} As we will see in the sequel, the cylinders associated with cross-sections will play a fundamental role in the analysis of topological flows. The following definition, which originates in \cite[Definition 3]{bowen1972expansive}, is based upon \cite[Lemma 2.1]{burguet2019symbolic}. \begin{df}\label{def:Flow boundaries and interiors} Let $U$ be a set contained in a closed cross-section of injectivity time $\eta$. The \textbf{flow interior} $\text{\rm Int}^{\Phi}(U)\subset U$ of $U$ is the unique set obeying: $$\Phi_{(-\eta, \eta)}(\boldsymbol{{\rm Int}^{\Phi}(U)})={\rm Int}(\Phi_{[-\eta, \eta]}(U))$$ The \textbf{flow boundary} $\partial^{\Phi}U$ of $U$ is the unique set obeying: $$\partial \Phi_{[-\eta, \eta]}(U)=\Phi_{-\eta}(U)\sqcup \Phi_{\eta}(U)\sqcup \Phi_{(-\eta, \eta)}( \boldsymbol{ \partial^{\Phi}U})$$ One can show that for every $0<\gamma<\eta$: $$ \text{\rm Int}^{\Phi}(U)=\text{Int}(\Phi_{[-\gamma, \gamma]}(U))\cap U, $$ $$ \partial^{\Phi}U=\overline{U}\setminus \text{\rm Int}^{\Phi}(U). $$ \end{df} Note that $\partial^{\Phi}U$ is closed, as it can be written as $\partial^{\Phi}U=\overline{U}\setminus \text{Int}(\Phi_{[-\gamma, \gamma]}(U))$ for any $0<\gamma<\eta$. We remark that under certain circumstances the above notions coincide with the classical notions of interior and boundary under the induced topology on the cross-section. See Proposition \ref{prop:natural boundary}. \subsection{Good cross-sections exist}\label{subsec:Good subsections exist} Here are several important facts regarding cross-sections: \begin{thm}(Whitney)\label{thm:Whitney} \cite[page 270]{whitney1933regular} Let $(X,\Phi)$ be a topological flow without fixed points. Then for each $x\in X$ there is a closed cross-section $S$ such that $x\in \text{\rm Int}^{\Phi} S$. \end{thm} \begin{proof}(sketch) Fix $x\in X$. For $y\in X$, define $$\theta(y)=\int_0^1 d(\Phi_s(y),x) ds,$$ where $d$ is the metric on $X$ compatible with the topology. It is easy to see that for fixed $y\in X$ the function $t\mapsto \theta(\Phi_t(y))$ has a continuous derivative. We denote the derivative at $t=0$ by $\theta'(y)$. An easy calculation shows: $$\theta'(y)=d(\Phi_1(y),x)-d(y,x).$$ Assume w.l.o.g.\ that $\Phi_1(x)\not= x$. Thus $\theta'(x)=d(\Phi_1(x),x)>0$. Using the inverse function theorem we find $\ell>0$ such that $\theta(\Phi_{-\ell}(x))<\theta(x)<\theta(\Phi_{\ell}(x))$ and $\theta'(\Phi_{t}(x))>0$ for all $-2\ell\leq t\leq 2\ell$.
Using the continuity of $\Phi$, $\theta$ and $\theta'$, we may find an open neighborhood $U$ of $x$ such that for all $y\in \overline{U}$, $\theta'(\Phi_{t}(y))>0$ for all $-2\ell\leq t\leq 2\ell$ and $$\theta(\Phi_{-\ell}(y))<\theta(x)<\theta(\Phi_{\ell}(y)).$$ Thus for every $y\in \overline{U}$ there is a unique $-\ell<t_y<\ell$ such that $\theta(\Phi_{t_y}(y))=\theta(x)$. It is not hard to show that the set $S=\{\Phi_{t_y}(y)\,|\, y\in \overline{U} \}$ is a closed cross-section such that $x\in S\cap U\subset \text{\rm Int}^{\Phi} S$. Indeed $\ell>0$ is clearly an injectivity time for $S$. Moreover $U\subset \Phi_{[-\ell, \ell]}(S)$, as $y=\Phi_{-t_y}(\Phi_{t_y}(y))\in \Phi_{(-\ell,\ell)}(S)$ for every $y\in U$. \end{proof} Based on the previous theorem it is possible to prove: \begin{thm}(Bowen \& Walters) A topological flow without fixed points admits a complete family of (closed) cross-sections. \end{thm} \begin{proof} See the proof of Lemma \ref{lem:complete family appendix}, where a stronger result is proven. \end{proof} \begin{lem}\label{lem:dim_cross-section} Let $(X, \Phi)$ be a topological flow with $\dim(X)=n\geq 1$. Let $S\subset X$ be a closed cross-section. Then $\dim(S)\leq n-1$. If in addition $S$ is a global cross-section, then $\dim(S)= n-1$. \end{lem} \begin{proof} As $S\times [-\eta, \eta]$ is homeomorphic to a subset of $X$, one has $\dim(S\times [-\eta, \eta])\leq n$. By a theorem of Hurewicz in \cite{hurewicz1935dimension}, as $S$ is compact, $\dim(S)\leq n-1$. Now assume in addition that $S$ is a global cross-section. Let $\xi>0$ be such that the natural continuous map $f: S\times [-\xi,\xi]\rightarrow X$ given by $(x,t)\mapsto \Phi(x,t)$ is surjective. Note that $f$ is a closed map between two separable metric spaces and that for every $x\in X$, $f^{-1}(x)$ is countable. Thus by \cite[Theorem 1.12.4]{engelking1995theory}, $$\dim(X)\leq \dim(S\times[-\xi,\xi])+\sup_{x\in X}\dim(f^{-1}(x))\le \dim(S)+1+0, $$ where we used the product theorem for non-empty separable metric spaces $A$ and $B$: $\dim(A\times B)\leq \dim(A)+\dim(B)$ (\cite[Theorem 1.5.16]{engelking1995theory}). Thus $\dim(S)\geq n-1$, as desired. \end{proof} \subsection{Calculating with flow interiors and boundaries}\label{subsec:Calculating} Let $S,V\subset X$. Recall that $\partial_S V$ and $\text{\rm Int}_S V$ denote respectively the boundary and interior of $V$ w.r.t.\ the subspace topology induced by $S$. \begin{lem}[\cite{burguet2019symbolic}, Lemma 2.2]\label{lem:partial in induced topo} Let $S$ be a set contained in a closed cross-section. Let $U\subset S$. Then $$ \partial_S U \subset \partial^{\Phi} U \subset \partial_S U \cup \partial^{\Phi} S. $$ \end{lem} \begin{ex} Consider a minimal rotation flow on the torus $(\mathbb{T}^2, \Phi)$. Identify $\mathbb{T}^2=[0,1]^2$ in the usual way. The closed set $S=[A,B]\times\{0\}$, where $0<A<B<1$, is a global cross-section. Let $A<C<B$ and let $U=[C,B]\times\{0\}$ be a strict subset of $S$. Then $\partial_S U = \{C\}\times\{0\}$, $ \partial^{\Phi} U =\{C, B\}\times\{0\} $ and $ \partial^{\Phi} S=\{A,B \}\times\{0\}$. This is an example for which the inclusion relations for sets in Lemma \ref{lem:partial in induced topo} are strict. \end{ex} \begin{lem}\label{lem:flow boundary formula} Let $S$ be a set contained in a closed cross-section. Let $U\subset S$. Then $\partial^{\Phi} U=\partial_S U \cup (\overline{U}\cap \partial^{\Phi} S)$.
\end{lem} \begin{proof} By Lemma \ref{lem:partial in induced topo} and Definition \ref{def:Flow boundaries and interiors}, it follows on the one hand that $\partial^{\Phi} U \subset (\partial_S U \cup \partial^{\Phi} S)\cap \overline{U}=\partial_S U \cup (\overline{U}\cap \partial^{\Phi} S)$. On the other hand, since $\overline{U}\cap \partial^{\Phi} S =\overline{U}\cap(\overline{S}\setminus \text{\rm Int}^{\Phi} S)\subset \overline{U}\setminus \text{\rm Int}^{\Phi} U=\partial^{\Phi} U$, we obtain by Lemma \ref{lem:partial in induced topo} that $\partial_S U \cup (\overline{U}\cap \partial^{\Phi} S)\subset \partial^{\Phi} U$. \end{proof} \begin{lem}\label{lem:In U=U} Let $(X,\Phi)$ be a topological flow. Let $U$ be a subset of a closed cross-section $S$ of injectivity time $\eta$. Then $\text{\rm Int}^{\Phi} U=U$ implies that $U$ is relatively open in $S$. If $U \cap \partial^{\Phi} S=\emptyset$, then the converse holds, i.e.\ $U$ relatively open in $S$ implies $\text{\rm Int}^{\Phi} U=U$. \end{lem} \begin{proof} Assume that $U=\text{\rm Int}^{\Phi} U$. Fix $0<\gamma<\eta$. From the definition of a cross-section it is clear that $\Phi_{[-\gamma, \gamma]}(U)\cap S\subset U$. Therefore $U=\text{\rm Int}^{\Phi} U=\text{Int}(\Phi_{[-\gamma, \gamma]}(U))\cap U=\text{Int}(\Phi_{[-\gamma, \gamma]}(U))\cap S$, implying that $U$ is relatively open in $S$. Now assume that $U$ is (relatively) open in $S$, i.e.\ $U\cap \partial_S U=\emptyset$. As it follows from Definition \ref{def:Flow boundaries and interiors} that $\text{\rm Int}^{\Phi} U\subset U$ and $\text{\rm Int}^{\Phi} U\cup \partial^{\Phi} U=\overline{U}$, it is enough to prove $\partial^{\Phi} U\cap U=\emptyset$. By Lemma \ref{lem:flow boundary formula}, $\partial^{\Phi} U\cap U=(\partial_S U\cap U) \cup (U\cap \partial^{\Phi} S)$. This expression equals the empty set, as $U\cap \partial_S U=\emptyset$ and, by assumption, $U \cap \partial^{\Phi} S=\emptyset$. \end{proof} Given a cross-section $S$ and $x\in \text{\rm Int}^{\Phi} S$, we clearly have $\delta:=d(x,\partial^{\Phi} S)>0$, as $\partial^{\Phi} S$ is closed. Therefore it is easy to see that $U=B(x, \delta/2)\cap S$ is a relatively open set in $S$ with $x\in U\subset \text{\rm Int}^{\Phi} S$ and $\overline{U} \cap \partial^{\Phi} S=\emptyset$. This explains the usefulness of the criterion in the following proposition. Indeed this proposition is key for calculating flow boundaries and interiors in the theory we develop below. \begin{prop}\label{prop:natural boundary} Let $(X,\Phi)$ be a topological flow. Let $U$ be a subset of a closed cross-section $S$ such that $\overline{U} \cap \partial^{\Phi} S=\emptyset$. Then: $$\partial^{\Phi} U=\partial_S U\,\, \mathrm{ and }\,\, \text{\rm Int}^{\Phi} U=\text{\rm Int}_S U.$$ \end{prop} \begin{proof} By Lemma \ref{lem:flow boundary formula}, $\partial^{\Phi} U=\partial_S U$. Thus $\text{\rm Int}_S U=\overline{U}\setminus \partial_S U=\overline{U}\setminus \partial^{\Phi} U=\text{\rm Int}^{\Phi} U$. \end{proof} \begin{rem} A case where the previous proposition can be used is when $S$ is a cross-section, $U,V \subset S$ with $\text{\rm Int}^{\Phi} U=U$, $\text{\rm Int}^{\Phi} V=V$ and $\partial^{\Phi} U\subset V$. Indeed $$\overline{U}=\text{\rm Int}^{\Phi} U\cup\partial^{\Phi} U\subset \text{\rm Int}^{\Phi} U\cup V=\text{\rm Int}^{\Phi} U\cup \text{\rm Int}^{\Phi} V \subset\text{\rm Int}^{\Phi} S. $$ Thus $\overline{U} \cap \partial^{\Phi} S=\emptyset$. \end{rem} \subsection{The small flow boundary property} First, we recall the small boundary property for discrete dynamical systems.
\begin{df} Let $(X,T)$ be a discrete dynamical system. A subset $A\subset X$ has a {\bf small boundary} if $\mu(\partial A)=0$ for every $T$-invariant measure $\mu$. The system $(X,T)$ is said to have the {\bf small boundary property} if there is a basis for the topology of $X$ consisting of open sets with small boundary. \end{df} Let $(X, \Phi)$ be a topological flow. \begin{df} A Borel subset of $X$ is called a \textbf{null set} if it has zero measure w.r.t.\ any $\Phi$-invariant Borel probability measure. A Borel subset is said to be a \textbf{full set} when its complement is a null set. \end{df} \begin{df} A closed cross-section $S$ of injectivity time $\eta$ has a \textbf{small flow boundary} if $\Phi_{[-\eta,\eta]}(\partial^{\Phi}(S))$ is a null set. \end{df} For a cross-section $A\subset X$, we define the \textbf{counting orbit capacity} of $A$ by $$ \cocap(A):=\lim_{T\to \infty} \frac{1}{T}\sup_{x\in X} \sharp\{0\leq t<T:\Phi_t(x)\in A \}. $$ The limit exists and is finite, as the function $T\mapsto \sup_{x\in X} \sharp\{0\leq t<T:\Phi_t(x)\in A \}$ is subadditive (Fekete's lemma). \begin{lem}[\cite{burguet2019symbolic}, Lemma 2.10]\label{lem:small flow boundary} Let $(X, \Phi)$ be a topological flow. Suppose that $S$ is a closed cross-section of injectivity time $\eta>0$. Then the following are equivalent. \begin{enumerate} \item $S$ has a small flow boundary; \item $\cocap(\partial^{\Phi}S)=0$. \end{enumerate} \end{lem} \begin{df} Let $(X,\Phi)$ be a topological flow without fixed points. The flow $(X, \Phi)$ is said to have the \textbf{small flow boundary property} if for any $x\in X$ and any closed cross-section $S'$ with $x\in \text{\rm Int}^{\Phi}(S')$, there exists a subset $S\subset S'$ such that $x\in \text{\rm Int}^{\Phi}(S)$ and $S$ has a small flow boundary. \end{df} \subsection{Suspension flows and symbolic extensions} Let $(Z, \rho)$ be a compact metric space and $T:Z\to Z$ a homeomorphism. Let $f: Z\to \mathbb{R}_{>0}$ be a continuous map. Let $ F_{f}=\{(x,t): 0\le t\le f(x), x\in Z \}\subset Z\times \mathbb{R}$. Let $R_f$ be the closed equivalence relation on $F_{f}$ induced by $\{((x,f(x)),(Tx,0))|\, x\in Z\}\subset F_{f}\times F_{f}$. The \textbf{suspension flow} of $T$ under $f$ is the flow $\Phi$ on the space $$ Z_{f}:=F_f/ R_f $$ induced by the time translation $T_t$ on $Z\times \mathbb{R}$ defined by $T_t(x,s)=(x, t+s)$. A suspension flow over a zero-dimensional $\mathbb{Z}$-topological dynamical system is called a \textbf{zero-dimensional suspension flow}, and a topological extension by a zero-dimensional suspension flow is said to be a \textbf{zero-dimensional extension}. Similarly, a suspension flow over a symbolic $\mathbb{Z}$-topological dynamical system (a.k.a.\ $\mathbb{Z}$-\textbf{subshift}) is called a \textbf{symbolic suspension flow}, and a topological extension by a symbolic suspension flow is said to be a \textbf{symbolic extension}. \begin{df}\label{df:principal} Let $(X, \Phi)$ and $(Y, \Psi)$ be two topological flows. Suppose that $\pi: Y\to X$ is a factor map, so that $(Y, \Psi)$ is a topological extension of $(X, \Phi)$. The extension is said to be (see \cite[\S 2.3]{burguet2019symbolic}) \begin{itemize} \item \textbf{entropy preserving} when it preserves topological entropy, i.e.\ $\htop(X, \Phi)=\htop(Y, \Psi)$.
\item \textbf{principal} when it preserves the entropy of invariant measures, i.e.\ $\h(\mu)=\h(\pi \mu)$ for all $\Psi$-invariant measures $\mu$, \item \textbf{isomorphic} when the map induced by $\pi$ on the sets of invariant Borel probability measures is bijective and $\pi: (Y, \Psi, \mu) \to (X, \Phi, \pi \mu)$ is a measure theoretical isomorphism for all $\Psi$-invariant measures $\mu$, \item \textbf{strongly isomorphic} when there is a full set $E$ of $X$ such that the restriction of $\pi$ to $\pi^{-1}E$ is one-to-one. \end{itemize} \end{df} \begin{rem} Clearly, strongly isomorphic $\Longrightarrow$ isomorphic $\Longrightarrow$ principal $\Longrightarrow$ entropy preserving. It is easy to give an example of an extension which is entropy preserving but not principal: let $\pi: (X,T)\rightarrow(Y,S)$ be a continuous equivariant map with $\htop(T)\le \htop(S)$ such that $\h(\mu)>\h(\pi\mu)$ for some $T$-invariant measure $\mu$ (for instance, take $X$ and $Y$ to be the full shift on two symbols and $\pi$ the constant map onto a fixed point of $Y$). Then $f: X \sqcup Y \to Y$ defined by $f(x)=\pi(x)$ if $x\in X$ and $f(x)=x$ if $x\in Y$ is an entropy preserving extension which is not principal. An example of an extension which is principal but not isomorphic is given by \cite[Theorem 4.7]{burguet2019uniform}. As far as we know, the question whether there exists an example of an extension which is isomorphic but not strongly isomorphic is open. Note that the $1$-suspensions over the examples above give the analogous examples for flows. Indeed by \cite[p. 4328]{burguet2019symbolic}, there is an affine homeomorphism between the simplices of invariant measures $\Theta:\mathcal{M}(X,T)\rightarrow \mathcal{M}(S_1(X),\Phi)$ given by $\mu \mapsto \mu\times \lambda$, where $\lambda$ is the Lebesgue measure on the interval $[0,1]$. \end{rem} \subsection{Some facts from dimension theory}\label{subsec:facts_dimension_theory} The following results in dimension theory for separable metric spaces will be used in the proof of our main result. Recall that an $F_\sigma$ set is a countable union of closed sets. \begin{thm}(\cite[Proposition 1.2.12]{engelking1995theory})\label{thm:R1} Let $E$ be a zero-dimensional subset of a separable metric space $M$. Then for every $x\in M$ and every open neighborhood $U$ of $x$ there is an open set $U'\subset U$ with $x\in U'$ and $\partial U' \cap E=\emptyset$. \end{thm} \begin{thm}(\cite[Chapter III, Theorem III 2]{HW41})\label{thm:R3} Let $(B_i)_{i\in \mathbb{N}}$ be a countable collection of closed sets in a separable metric space satisfying $\dim B_i\le k$ for all $i$. Then $\dim \bigcup_i B_i\le k$. \end{thm} The following corollary is obvious. \begin{cor}\label{cor:sum_theorem_for_F_sigma} Let $(B_i)_{i\in \mathbb{N}}$ be a countable collection of $F_\sigma$ sets in a separable metric space satisfying $\dim B_i\le k$ for all $i$. Then $\dim \bigcup_i B_i\le k$. \end{cor} \begin{lem}\label{lem:clo_int_open_is_F_sigma} Let $M$ be a separable metric space. Let $C$ be closed and $U$ open. Then $U\cap C$ is $F_\sigma$. \end{lem} \begin{proof} Note that $M$ is second-countable, and write $U$ as a countable union of closed balls. \end{proof} The following theorem is well known. \begin{thm}(\cite[Theorem 1.5.7]{engelking1995theory})\label{thm:R22} Let $M$ be a separable metric space. If $-1<\dim(M)<\infty$, there is a zero-dimensional subset $E$ of $M$ such that $\dim(M\setminus E)=\dim(M)-1$. \end{thm} We will need the following strengthening of Theorem \ref{thm:R22}. \begin{thm}\label{thm:R2} Let $M$ be a separable metric space. If $-1<\dim(M)<\infty$, there is a zero-dimensional $F_\sigma$ subset $E$ of $M$ such that $\dim(M\setminus E)=\dim(M)-1$. \end{thm} \begin{proof} Denote $n=\dim(M)$. If $n=0$, set $E=M$.
If $n=1$, let $\{B_i\}_{i=1}^\infty$ be a countable basis of $M$ such that $\dim(\partial B_i)\le 0$ for all $i$. Let $E=\bigcup_i\partial B_i$. By Theorem \ref{thm:R3}, $E$ is a zero-dimensional $F_\sigma$ set. Note that $\{B_i\cap (M\setminus E)\}_{i=1}^\infty$ is a basis of $M\setminus E$ with $\partial (B_i\cap (M\setminus E))=\emptyset$. Thus $\dim (M\setminus E)=0$, as desired. Assume the claim has been established for $n-1\geq 0$. Let $\{B_i\}_{i=1}^\infty$ be a countable basis of $M$ such that $\dim(\partial B_i)\leq n-1$ for all $i$. Using the inductive assumption, let $F_i\subset \partial B_i $ be a zero-dimensional $F_\sigma$ set such that $\dim(\partial B_i\setminus F_i)\leq n-2$. Let $E=\cup_i F_i$. By Theorem \ref{thm:R3}, $E$ is a zero-dimensional $F_\sigma$ set. Note that $\{B_i\cap (M\setminus E)\}_{i=1}^\infty$ is a basis of $M\setminus E$ with $\partial (B_i\cap (M\setminus E))\subset \partial B_i\setminus F_i$. Thus $\dim (M\setminus E)\leq n-1$. By \cite[Chapter III, 2 B]{HW41}, $\dim (E\cup (M\setminus E))\leq 1+\dim (E)+ \dim(M\setminus E)$, so $\dim(M\setminus E)\geq n-1$, and thus $\dim (M\setminus E)= n-1$. \end{proof} As an illustration of Theorem \ref{thm:R2} consider $M=[0,1]^2$ and note that $E=M\cap (\mathbb{Q}\times\mathbb{Q})$ is zero-dimensional and $M\setminus E$ is one-dimensional. Indeed it is easy to see that balls centered at rational coordinates with a rational radius form a base with zero-dimensional boundary in $M\setminus E$. As an illustration of Theorem \ref{thm:R1} note that the boundaries of balls centered at rational coordinates with a transcendental radius do not intersect $E$. \begin{prop}\label{prop:avoiding zero_dim} Let $M$ be a metric compact space and $E$ a zero-dimensional subset of $M$. Let $C$ be a closed set and $U$ an open subset such that $C\subset U\subset M$. Then there is an open set $U'$ with $C \subset U'\subset \overline{U'}\subset U$ and $\partial U' \cap E=\emptyset$. \end{prop} \begin{proof} First fix an open set $W$ such that $C \subset W\subset \overline{W}\subset U$. Using Theorem \ref{thm:R1}, for each $x\in C$ choose an open set $x\in U_x\subset W$ with $\partial U_x \cap E=\emptyset$. By compactness of $C$, let $U_{x_1},\ldots, U_{x_n}$ be a finite subcover of $C$. It is easy to see that $U'=\bigcup_{i=1}^n U_{x_i}\subset W$ has the required properties. \end{proof} \section{Establishing the small flow boundary property}\label{sec:Establishing the small flow boundary property} \subsection{Local homeomorphisms between cross-sections}\label{sec:notations} Let $(X, \Phi)$ be a topological flow without fixed points. Suppose that $\dim(X)=d+1$ for some $d\ge 0$.
By Lemma \ref{lem:complete family appendix} there exist $\eta>0$ and $0<\alpha<\eta$ such that $(X, \Phi)$ has two complete families $\mathcal{S}=\{S_i \}_{i=1}^{N}$ and $\mathcal{S}'=\{S_i' \}_{i=1}^{N}$ of (closed, pairwise disjoint) cross-sections, of injectivity times $\eta_{\mathcal{S}}$ and $\eta_{\mathcal{S}'}$ respectively, such that for all $1\le i\le N$, \begin{itemize} \item $S_i=\overline{S_i}\subset \text{\rm Int}^{\Phi}(S_i')$; \item $\eta=\eta_{\mathcal{S}}=\eta_{\mathcal{S}'}$; \item $\max_{1\le i\le N} \diam (S_i')\le \alpha$; \item $\Phi_{[0,\alpha]}\mathcal{G}=\Phi_{[-\alpha, 0]}\mathcal{G}=X$, where $\mathcal{G}=\cup_{S\in \mathcal{S}} S$. \end{itemize} \begin{df} Let $1\le i,j\le N$. Denote by $t_{i,j}$ the first positive \textbf{hitting time} from $S_i$ to $S_j$, that is, for $x\in S_i$, $$ t_{i,j}(x)=\min\{t>0: \Phi_t(x)\in S_j \}, $$ where if the argument set is empty we put $t_{i,j}(x)=\infty$. The functions $t_{i,j}'$ are defined similarly w.r.t.\ $S_i'$ and $S_j'$. Note $t_{i,j}'(x)\leq t_{i,j}(x)$ for all $x\in S_i$. Define $$D_{i,j}=\{x\in S_i: t_{i,j}(x)\le \eta \}$$ and $$D_{i,j}'=\{x\in S_i': t_{i,j}'(x)<\infty\}.$$ Note $D_{i,i}=\emptyset$ for all $i$. Let $T_{i,j}$ be the first positive \textbf{hitting map} from $D_{i,j}$ to $S_j$, that is, for $x\in D_{i,j}$, $$ T_{i,j}(x)=\Phi_{t_{i,j}(x)}(x). $$ Similarly, we define $T_{i,j}'$ from $D_{i,j}'$ to $S_j'$. \end{df} \begin{df}\label{def:i_th return map} Define the \textbf{first return time} map $t^{(1)}_\mathcal{G}=t_\mathcal{G}: \mathcal{G}=\cup_{S\in \mathcal{S}} S\to \mathbb{R}_+$ by $$t^{(1)}_\mathcal{G}(x)=t_\mathcal{G}(x):=\min \{t>0|\, \Phi_t(x)\in \mathcal{G}\}.$$ As $\mathcal{S}=\{S_i \}_{i=1}^{N}$ is a family of closed disjoint cross-sections, there is $\gamma>0$ so that $S=\bigcup_{i=1}^{N} S_i$ is a cross-section with injectivity time $\gamma$. Moreover, as $\Phi_{[-\alpha, 0]}\mathcal{G}=X$, the argument set in the definition above is never empty. Inductively for $i\geq 1$ define the \textbf{$(i+1)$-th return time} map $t^{(i+1)}_\mathcal{G}: \mathcal{G}\to \mathbb{R}_+$ by $$t^{(i+1)}_\mathcal{G}(x)=t^{(i)}_\mathcal{G}\big(\Phi_{t_\mathcal{G}(x)}(x)\big)+t_\mathcal{G}(x).$$ Define $t^{(0)}_\mathcal{G}(x)=0$ and $t^{(-k)}_\mathcal{G}(x)$ for $k\in \mathbb{N}$, similarly to the above. \end{df} \begin{df} For $1\leq i,j\leq N$, define: $$F_{i,j}:=\{x\in D_{i,j}|\, t_{i,j}(x)=t_\mathcal{G}(x)\}. $$ That is, $F_{i,j}$ is the set of $x\in S_i$ such that the first member of $\mathcal{S}$ it hits after leaving $S_i$ is $S_j$. \end{df} \begin{lem}\label{lem:F_ij_bounded_below} If $x\in F_{i,j}$, then $t_{i,j}(x)>\gamma$. \end{lem} \begin{proof} Note $t_\mathcal{G}$ is bounded from below by $2\gamma$, as $S=\bigcup_{i=1}^{N} S_i$ is a cross-section with injectivity time $\gamma$; thus for $x\in F_{i,j}$ one has $t_{i,j}(x)=t_\mathcal{G}(x)\geq 2\gamma>\gamma$. \end{proof} \begin{lem}\label{lem:F_covers} For all $1\leq i\leq N$, it holds that $S_i=\cup_{j=1}^N F_{i,j}$. \end{lem} \begin{proof} Let $x\in S_i$. As $\Phi_{[-\alpha, 0]}\mathcal{G}=X$, there exist some $j$, $t\in [0,\alpha]$ and $y\in S_j$ so that $\Phi_{-t}y=x$, i.e., $\Phi_{t}x=y\in S_j$. Thus $t_{i,j}(x)\leq \alpha<\eta$ and $x\in D_{i,j}$. Let $K=\{1\leq k\leq N|\, x\in D_{i,k}\}$. Note $j\in K$. Clearly there is $k\in K$ so that $t_{i,k}(x)=\min_{s\in K} t_{i,s}(x)$. Conclude $x\in F_{i,k}$. \end{proof} \begin{lem}\label{lem:identify_t_ij} Let $1\leq i,j\leq N$. \begin{enumerate} \item If $x\in S_i'$, $y\in S_j'$ and $\Phi_r(x)=y$ for some $0\leq r\leq 2\eta$, then $t_{i,j}'(x)=r$. \item If $x\in D_{i,j}$, then $t_{i,j}'(x)=t_{i,j}(x)$.
\end{enumerate} \end{lem} \begin{proof} For (1) assume for a contradiction that $t_{i,j}'(x)<r$. Setting $z=\Phi_{t_{i,j}'(x)}(x)\in S_j'$, it follows that $\Phi(z, r-t_{i,j}'(x))=y\in S_j'$ with $0<r-t_{i,j}'(x)\leq 2\eta$. Since $\Phi$ is injective on $S_j'\times [-\eta, \eta]$, this is a contradiction. For (2) note that by definition $t_{i,j}(x)\leq \eta$ and $\Phi_{t_{i,j}(x)}(x)\in S_j'$, thus by (1), $t_{i,j}'(x)=t_{i,j}(x)$. \end{proof} \begin{prop}\label{Prop:T_homeomorphism} Let $x\in D_{i,j}$. Then there is an open neighborhood $V$ of $x$ in $S_i'$ with $\overline{V}\subset \text{\rm Int}^{\Phi} S_i'$ such that $t_{i,j}'$ is continuous on $V$, $t_{i,j}'(x)\leq 2\eta$ for all $x\in V$ and $T_{i,j}'|_{V}:V\rightarrow S_j'$ is an open map and $T_{i,j}'|_{V}:V\rightarrow T_{i,j}'(V)$ is a homeomorphism. \end{prop} \begin{proof} Let $0<\epsilon<\eta$. Since $T_{i,j}(x)\in S_j\subset \text{\rm Int}^{\Phi} S_j'$, there is a $\kappa>0$ such that the open ball $B(T_{i,j}(x), \kappa)$ is contained in $\Phi_{(-\epsilon, \epsilon)}(\text{\rm Int}^{\Phi} S_j')$. Since $\Phi$ is continuous, there is a $\delta>0$ such that for any $y\in X$ with $d(x,y)<\delta$, we have that $\Phi(y, t_{i,j}(x))\in B(T_{i,j}(x), \kappa)\subset \Phi_{(-\epsilon, \epsilon)}(\text{\rm Int}^{\Phi} S_j')$. Let $V_{\delta}=\{y\in S_i': d(x,y)<\delta\}$, which is an open neighborhood of $x$ in $S_i'$. As $x\in S_i$, $d(x, \partial^{\Phi} S_i')>0$, thus taking $\delta$ small enough, one may assume w.l.o.g.\ that $\overline{V_{\delta}}\subset \text{\rm Int}^{\Phi} S_i'$. It follows by Lemma \ref{lem:identify_t_ij} that \begin{equation}\label{eq:two_sided_ineq} t_{i,j}(x)-\epsilon <t_{i,j}'(y)<t_{i,j}(x)+\epsilon, \end{equation} whenever $y\in V_{\delta}$. In particular $t_{i,j}'(y)< 2\eta$ for all $y\in V_\delta$ and $T_{i,j}'(V_{\delta})\subset \text{\rm Int}^{\Phi} S_j'$. We claim that after taking $\delta$ small enough, the map $t_{i,j}'$ is continuous on $V=V_{\delta}$. If not, then there exists a sequence $\{y_n\}_{n\in \mathbb{N}}\subset V$ such that $\lim\limits_{n\to \infty}y_n=x$ but $t_x:=\lim\limits_{n\to \infty}t_{i,j}'(y_n)\not=t_{i,j}(x)$. As by definition $0<t_{i,j}(x)\leq \eta$ and by choice $\epsilon<\eta$, one has that $t_x\le t_{i,j}(x)+\epsilon<2\eta$ and $\Phi(x, t_x)\in S_j'$. It follows that $\Phi(T_{i,j}(x), t_x-t_{i,j}(x))\in S_j'$ and $|t_x-t_{i,j}(x)|<2\eta$. Since $\Phi$ is injective on $S_j'\times [-\eta, \eta]$, this is a contradiction. Let $z=T_{i,j}'(x')\in \text{\rm Int}^{\Phi} S_j'$ for some $x'\in V$. Let $W_{\rho}=\{y\in S_j': d(z,y)<\rho\}$ be an open neighborhood of $z$ in $S_j'$. Using Equation \eqref{eq:two_sided_ineq} with $y=x'$, for $\rho>0$ small enough, there is $0<\xi<\eta$ such that $\Phi_{-t_{i,j}(x)}(\Phi_{(-\rho, \rho)}W_{\rho})$ is an open set in $\Phi_{(-\xi, \xi)}(V)$. Thus for each $w\in W_{\rho}$, there are unique $v\in V$, $r\in (-\xi, \xi)$, such that $\Phi_{t_{i,j}(x)-r}(v)=w$. By Lemma \ref{lem:identify_t_ij}, one must have $t_{i,j}'(v)=t_{i,j}(x)-r$ and $T_{i,j}'(v)=w$. This implies $W_\rho\subset T_{i,j}'(V)$, which implies $T_{i,j}'|_{V}:V\rightarrow S_j'$ is an open map. In addition, as $t_{i,j}'$ is continuous on $V$ so is $T_{i,j}'|_{V}$. Thus in order to establish that $T_{i,j}'|_{V}:V\rightarrow T_{i,j}'(V)$ is a homeomorphism, it is enough to show that $T_{i,j}'|_{V}$ is injective.
Indeed if it holds for $z_1,z_2\in V$ that $T_{i,j}'(z_1)=T_{i,j}'(z_2)$, that is $\Phi_{t_{i,j}'(z_1)}(z_1)=\Phi_{t_{i,j}'(z_2)}(z_2)$, then $z_1=\Phi_{t_{i,j}'(z_2)-t_{i,j}'(z_1)}(z_2)$, which implies that either $z_1=z_2$ or $|t_{i,j}'(z_2)-t_{i,j}'(z_1)|>2\eta$; the latter contradicts Equation \eqref{eq:two_sided_ineq}. \end{proof} \subsection{Return time sets and cross-section names}\label{sec:return_names} \begin{prop}\label{prop:C_ij} For every $1\leq i,j\leq N$ there is a finite collection of open sets $\mathcal{C}_{i,j}$ in $S_i'$ with the following properties: \begin{enumerate} \item $F_{i,j}\subset \bigcup \mathcal{C}_{i,j}\subset \overline{ \bigcup \mathcal{C}_{i,j}}\subset \text{\rm Int}^{\Phi} S_i'$. \item For all $V\in \mathcal{C}_{i,j}$, $t_{i,j}'$ is continuous on $V$ and $t_{i,j}'(x)\leq 2\eta$ for all $x\in V$. \item For all $V\in \mathcal{C}_{i,j}$, $(T_{i,j}')_{|V}:V\rightarrow T_{i,j}'(V)$ is a homeomorphism and $T_{i,j}'(V)$ is open in $S_j'$. \end{enumerate} \end{prop} \begin{proof} Fix $1\le i,j\le N$. Let $x\in \overline{F_{i,j}}$. Let $\{x_q\}_{q=1}^{\infty}\subset F_{i,j}\subset D_{i,j}$ so that $x_q\rightarrow_{q\rightarrow \infty} x$. As $t_{i,j}(x_q)\leq \eta$, we may assume w.l.o.g.\ $t_{i,j}(x_q)$ converges to some $t\leq \eta$. Clearly $\Phi_t(x)\in S_j$ and therefore $x\in D_{i,j}$. Conclude $\overline{F_{i,j}}\subset D_{i,j}\subset S_{i}\subset \text{\rm Int}^{\Phi} S'_i$. Using Proposition \ref{Prop:T_homeomorphism}, cover $\overline{F_{i,j}}$ by a finite collection $\mathcal{C}_{i,j}$ of open sets with properties (2) and (3) such that $\bigcup\mathcal{C}_{i,j}\subset \overline{ \bigcup \mathcal{C}_{i,j}}\subset \text{\rm Int}^{\Phi} S_i'$. \end{proof} For each $C\in \mathcal{C}_{i,j}$, we define $T_C:C\to S_j'$ by $x\mapsto T_{i,j}'(x)$. For $n\in \mathbb{N}\cup \{0\}$, $\textbf{i}=(i_k)_{0\le k\le n}\in\{1,2,\dots N\}^{n+1}$ and $C^n=(C_0, C_1, \dots, C_{n-1}, C_n)\in \mathcal{C}(\textbf{i}):=\prod_{k=0}^{n-1}\mathcal{C}_{i_k, i_{k+1}}\times \bigcup_{1\le j\le N}\mathcal{C}_{i_n,j}$, we define $$ T_{C^n}^k= \begin{cases} T_{C_{k-1}}\circ \cdots \circ T_{C_0}~&\text{for}~1\le k\le n;\\ \text{Id}_{C_0} &\text{for}~k=0; \end{cases} $$ on the set $$ Z_{C^n}:=\{x\in C_0: T_{C_{k-1}}\circ \cdots \circ T_{C_0}(x)\in C_k~\text{for}~1\le k\le n \}. $$ For $x\in Z_{C^n}$, the tuple $C^n$ is called an $n$-\textbf{cross-section name} of $x$. Note $x$ may have more than one $n$-cross-section name for a given $n \in \mathbb{N}$. Let $V\subset X$, not necessarily contained in the image of $T_{C^n}^k$. Following standard notation, we denote \begin{equation}\label{eq:T_C_n} T_{C^n}^{-k}(V):=\{ x\in Z_{C^n}: T_{C^n}^k(x)\in V\}=(T_{C^n}^{k})^{-1}(V\cap T_{C^n}^k(Z_{C^n})). \end{equation} Let $n\in \mathbb{N}\cup \{0\}$. Denote $$\mathcal{C}(n)= \cup_{\textbf{i}\in \{1,2,\dots N\}^{n+1}} \mathcal{C}(\textbf{i}) \quad\text{and}\quad \mathcal{C}_i=\cup_{1\leq j\leq N} \mathcal{C}_{i,j}.$$ \begin{df}\label{def:I_C_n} We define the \textbf{return time set}: $$ I_{C^n}=\{0\le k\le n: i_{k}=i_0 \}. $$ It follows that the image of $T_{C^n}^k$ is contained in $S_{i_0}'$ for $k\in I_{C^n}$. Note $0\in I_{C^n}$. \end{df} \begin{rem} Note that $\mathcal{C}(0)=\bigcup_{1\le i,j\le N} \mathcal{C}_{i,j}$. Note that for $C^0=(C_0)\in \mathcal{C}(0)$ it holds $I_{C^0}=\{0\}$, $Z_{C^0}=C_0$ and $T_{C^0}^0=\text{Id}_{C_0}$. \end{rem} \begin{lem}\label{lem:F_sigma_invariance} Let $C^n\in \mathcal{C}(n)$ and $1\leq k\leq n$.
Then the following holds: \begin{enumerate} \item $Z_{C^n}$ and $T_{C^n}^{k}(Z_{C^n})$ are open; $T_{C^n}^{k}$ is a homeomorphism on $Z_{C^n}$. \item Let $V$ be $F_\sigma$. Then $T_{C^n}^{-k}(V)$ is $F_\sigma$. \end{enumerate} \end{lem} \begin{proof} The first claim is best understood by considering the first few cases. Indeed note $T_{C^n}^1=T_{C_0}:C_0\rightarrow T_{i_0,i_1}'(C_0)$ and $T_{C^n}^2:T_{C_0}^{-1}(T_{C_0}(C_0)\cap C_1)\rightarrow T_{C_1}(T_{C_0}(C_0)\cap C_1)$, both have a domain which is an open set in $S_{i_0}'$ and a range which is an open set in $S_{i_1}'$ and $S_{i_2}'$ respectively by Proposition \ref{Prop:T_homeomorphism}. Now assume we have proven that the domain of $T_{C^n}^k$ is an open set $D_k$ in $S_{i_0}'$ and that the range of $T_{C^n}^k$ is an open set $R_k$ in $S_{i_k}'$. It is easy to see that the domain of $T_{C^n}^{k+1}$ is $(T_{C^n}^k)^{-1}(R_k\cap C_k)$, an open set in $D_k$ and therefore in $S_{i_0}'$, and that the range of $T_{C^n}^{k+1}$ is $T_{C_k}(R_k\cap C_k)$, an open set in $S_{i_{k+1}}'$. Note that $Z_{C^n}$ equals the domain of $T_{C^n}^{n}$ which we have seen to be open. Using Proposition \ref{Prop:T_homeomorphism}(3), we have that $T_{C^n}^{k}(Z_{C^n})$ is open. By the above $T_{C^n}^{k}$ is a homeomorphism on an open domain which contains $Z_{C^n}$ and therefore is a homeomorphism on $Z_{C^n}$. For Claim $(2)$, note $T_{C^n}^{-k}(V)=(T_{C^n}^{k})^{-1}(V\cap T_{C^n}^{k}(Z_{C^n}))$ and $T_{C^n}^{k}(Z_{C^n})$ is open by (1). Thus $V\cap T_{C^n}^{k}(Z_{C^n})$ is $F_\sigma$ (by Lemma \ref{lem:clo_int_open_is_F_sigma}), and $(T_{C^n}^{k})^{-1}$ is a homeomorphism on this set, so $T_{C^n}^{-k}(V)$ is $F_\sigma$. \end{proof} Using Proposition \ref{prop:C_ij}, let $\delta_0>0$ be small enough so that $2\delta_0$ is a Lebesgue number for each of the open covers $\mathcal{C}_{i,j}$ (of $F_{i,j}$). Thus for all $1\leq i,j\leq N$, \begin{equation}\label{eq:delta_approx} F_{i,j}\subset \bigcup_{V\in \mathcal{C}_{i,j}}\overline{\Theta}^{S_i'}_{-\delta_0}(V). \end{equation} From now on in this section we abbreviate $\overline{\Theta}_{\cdot}(\cdot)=\overline{\Theta}^{S_1'}_{\cdot}(\cdot)$. \begin{df}\label{def:delta_0} We define $$ Z_{C^n}^{\delta_0}:=\{x\in \overline{\Theta}_{-\delta_0}(C_0): T_{C_{k-1}}\circ \cdots \circ T_{C_0}(x)\in \overline{\Theta}_{-\delta_0}(C_k)~\text{for}~1\le k\le n \}. $$ This is clearly a closed set. \end{df} The following lemma will be important in the sequel: \begin{lem}\label{lem:internal_return} Let $x\in S_1$ so that there are $0=t_0<t_1<t_2<\ldots<t_{d}$ with $\Phi_{t_k}x\in S_1$ for $k=0,\ldots, d$. Then there exists $C^n=(C_0, C_1, \dots, C_{n-1}, C_n)\in \mathcal{C}(n)$ for some $n\in\mathbb{N}$, and $j_0=0<j_1<\ldots <j_d$ such that $x\in Z_{C^n}^{\delta_0}$ and $T_{C^n}^{j_i}x=\Phi_{t_i}x$ for $i=0,\ldots, d$. \end{lem} \begin{proof} By Lemma \ref{lem:F_covers}, $x\in F_{1,j_1}$ for some $j_1$. By Equation \eqref{eq:delta_approx}, we may find $C_0\in \mathcal{C}_{1,j_1}$ so that $x\in \overline{\Theta}_{-\delta_0}(C_0)$. A priori $T_{C_0}(x)\in S_{j_1}'$, however by Lemma \ref{lem:identify_t_ij}(2), $T_{C_0}(x)\in S_{j_1}$. Thus by Lemma \ref{lem:F_covers}, $T_{C_0}(x)\in F_{j_1,j_2}$ for some $j_2$. Repeating the argument, we may find $C_1\in \mathcal{C}_{j_1,j_2}$ so that $T_{C_0}(x)\in \overline{\Theta}_{-\delta_0}(C_1)$. Continuing inductively we construct a sequence $C^n:=(C_0,C_1,\ldots C_n)\in\mathcal{C}(n)$ with the properties: \begin{enumerate} \item $x\in Z_{C^n}^{\delta_0}$. \item $T_{C^n}^k(x)=\Phi_{t_\mathcal{G}(T_{C^n}^{k-1}(x))}(T_{C^n}^{k-1}(x))$, for $k=1,\ldots,n$.
\item $n\gamma\geq t_d.$ \end{enumerate} Conditions (2) and (3) guarantee that there exist $j_0=0<j_1<\ldots <j_d$ such that $T_{C^n}^{j_i}x=\Phi_{t_i}x$ for $i=0,\ldots, d$. \end{proof} \subsection{Statement of the Main Theorem} The main result is as follows. \begin{thm}[=Theorem A]\label{main thm} Let $X$ be a finite-dimensional space. Let $\Phi$ be a topological flow on $X$ without fixed points, having a countable number of periodic orbits. Then $(X, \Phi)$ has the small flow boundary property. \end{thm} We follow the strategy of \cite{L95}. Lindenstrauss proved that for any discrete finite-dimensional dynamical system $(X,T)$ with (arbitrary) set of periodic points $\per(X)$, for every pair of open sets $U,V\subset X$ such that $\partial U\setminus \per(X)\subset V$, there is an open set $U'$ with $U \subset U'\subset U\cup V$, such that $\partial U'$ is the union of a set of zero (discrete) orbit capacity and a subset of $\per(X)$. This statement should be compared with Theorem \ref{thm:SFBP}. Our proof, however, is not a routine generalization. Using the same method, it is quite plausible that one could prove a statement analogous to Lindenstrauss' theorem, imposing no condition on periodic orbits. Burguet \cite[Proposition 2.1]{burguet2019symbolic} proved that a $C^2$-smooth flow without fixed points on a compact smooth manifold (without boundary), satisfying that for any $\tau>0$ the number of periodic orbits of period less than $\tau$ is finite, has the small flow boundary property. A key tool in Burguet's proof is what he calls the \emph{$n$-transverse property} ($\thickapprox$ the \emph{$n$-general property} defined later in this section). This property is established through successive approximation. A crucial point for the approximation scheme to work is the so-called \emph{$C^1$-stability of transversality}, that is, if $M_1$ and $M_2$ are compact transverse smooth manifolds then any compact smooth manifolds $\tilde{M}_1$ and $\tilde{M}_2$ which are sufficiently small $C^1$-perturbations of $M_1$ and $M_2$ are transverse (\cite[Corollary A.3.17]{katok1997introduction}). There are two difficulties in generalizing Burguet's method to topological flows. Firstly, transversality is defined in the context of smooth manifolds. Secondly, even if we consider a topological flow on a compact manifold, we are faced with the fact that transversality is not \emph{$C^0$-stable}. Let us give a very rough overview of our proof. We are given $x\in \text{\rm Int}^{\Phi} S$, where $S$ is a closed cross-section. We may find $U,V\subset S$ with $x\in\text{\rm Int}^{\Phi} U=U$, $\text{\rm Int}^{\Phi} V=V$ and $\partial^{\Phi} U\subset V$. Our goal is to find a perturbation of $U$ in the form of $U'\subset S$ with $U\subset U'\subset U\cup V$ and $\partial^{\Phi} U'\subset V$ such that $\cocap(\partial^{\Phi} U')=0$. Actually we establish the stronger property that for every $y\in S$, \begin{equation} \sharp\{t\geq0: \Phi_t(y)\in \partial^{\Phi} U' \}\le d. \end{equation} Consider a small neighborhood of a point $x\in \partial^{\Phi} U'$ and its return profile, i.e., how elements of this neighborhood return to $\partial^{\Phi} U'$. This can be represented by certain intersections of certain images of this neighborhood. This enables one to use the dimension of this intersection as a proxy for the number of returns. Indeed if the dimension drops below zero, the intersection is empty and no further return is possible.
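As a rough illustration of this mechanism (with the smallest nontrivial parameters, and not needed in the sequel): if $\dim(X)=2$, then $d=1$, cross-sections are one-dimensional and one seeks a zero-dimensional $\partial^{\Phi} U'$. If an orbit met $\partial^{\Phi} U'$ at two return times $j_0<j_1$ along a cross-section name $C^n$, the corresponding point would lie in $T_{C^n}^{-j_0}(\partial^{\Phi} U')\cap T_{C^n}^{-j_1}(\partial^{\Phi} U')$; when these two sets are in general position (in the sense of Definition \ref{df:genpos} below), this intersection has dimension at most $\max\{-1, d-2\}=-1$, i.e.\ it is empty, so at most $d=1$ visit is possible.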
The dimension drop mechanism is the key to the proof and this is captured by the concept of \textit{general position} explained in the next subsection. \subsection{General position} By Lemma \ref{lem:dim_cross-section}, a global closed cross-section in $X$ has dimension $d$. We will therefore use the following definition: \begin{df}\label{df:genpos} A collection $\mathcal{A}$ of subsets in a $d$-dimensional space is said to be in \textbf{general position} if for every finite $\mathcal{B}\subset \mathcal{A}$, one has that $$\dim(\cap_{C\in \mathcal{B}}C)\le \max\{-1, d-|\mathcal{B}|\}.$$ \end{df} The definition of general position is due to John Kulesza \cite{kulesza1995zero}. To acquire a better understanding, we quote the following sentences from Lindenstrauss \cite{L95}: \medskip {\it The motivation for this definition is that given a collection of $(d-1)$-dimensional subsets of an $n$-dimensional space then generically any two will have intersection with dimension less than $d-2$, etc.} \begin{rem}\label{rem:empty_int} For the purposes of the proof the most important consequence of Definition \ref{df:genpos} is that every finite sub-collection $\mathcal{B}\subset \mathcal{A}$ with $d+1$ elements has \textit{empty intersection}. \end{rem} Recall the definition of $T_{C^n}^{-k}$ in Equation \eqref{eq:T_C_n} and the definition of $I_{C^n}$ in Definition \ref{def:I_C_n} in Subsection \ref{sec:return_names}. \begin{df} Let $n\ge 0$. A set $V\subset S_1$ is called \textbf{$C^n$-general} for $C^n\in \mathcal{C}(n)$ if $\left(T_{C^n}^{-k}(\partial^{\Phi} V)\right)_{k\in I_{C^n}}$ is in general position. It is called \textbf{n-general} if it is $C^n$-general for every $C^n\in \mathcal{C}(n)$. Moreover, we say that it is \textbf{$C^n$-general} (or \textbf{n-general}) \textbf{on a set $U$} if $\left(T_{C^n}^{-k}(\partial^{\Phi} V) \cap U\right)_{k\in I_{C^n}}$ is in general position (respectively, for all $C^n\in \mathcal{C}(n)$). \end{df} The following remark is easy to establish: \begin{rem}\label{rem:gen_post_inherited} If a set $V\subset S_1$ is $n$-general then it is $m$-general for all $0\leq m\leq n$. \end{rem} \begin{lem}\label{lem:0-general} If $W\subset S_1$ with $\dim(\partial^{\Phi} W)\leq d-1$, then $W$ is $0$-general. \end{lem} \begin{proof} Note that for $C^0=(C_0)\in \mathcal{C}(0)$, $I_{C^0}=\{0\}$. Therefore $\dim(\partial^{\Phi} W)\leq d-1$ implies the result. \end{proof} \subsection{Key Lemma} In this subsection, we prove the key technical lemma of the paper: \begin{lem}\label{lem:n-general2} Let $U$ be a subset of $S_1$ with $\text{\rm Int}^{\Phi} U=U$ and $n\ge 0$. Then for every subset $V$ of $S_1$ with $\text{\rm Int}^{\Phi} V=V$ and $\partial^{\Phi} U\subset V$, there is a subset $U'\subset S_1$ with $\text{\rm Int}^{\Phi} U'=U'$ such that \begin{itemize} \item [(1)] $U\subset U'\subset U\cup V$; \item [(2)] $\partial^{\Phi} U'\subset V$; \item [(3)] $U'$ is $n$-general; \item [(4)] $\partial^{\Phi} U'\cap P=\emptyset$. \end{itemize} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:n-general2}] If $\partial^{\Phi} V=\emptyset$, then we set $U'=U\cup V$. By Lemma \ref{lem:In U=U} and the fact that $\partial^{\Phi} U\subset \text{\rm Int}^{\Phi} (V) \subset \text{\rm Int}^{\Phi} (U\cup V)$, we see that $\partial^{\Phi} U'=\emptyset$. Thus $U'$ is $n$-general for all $n\ge 0$, and conditions (1), (2) and (4) hold trivially. Let us assume $\partial^{\Phi} V\not=\emptyset$. We will prove the lemma by induction on $n$. Note that by assumption the number of periodic orbits is countable.
As a periodic orbit intersects a cross-section a finite number of times, we conclude that $P\cap S_1$ is countable. By Theorem \ref{thm:R3}, the $F_\sigma$ set $P\cap S_1$ is at most zero-dimensional. By Lemma \ref{lem:dim_cross-section}, $\dim(S_1)\leq d$. By Theorem \ref{thm:R2}, one may find an $F_\sigma$ subset $E$ of $S_1$ with $\dim(E)\leq 0$ such that $\dim(S_1\setminus E)\leq d-1$. By Corollary \ref{cor:sum_theorem_for_F_sigma}, $\dim(E\cup (P\cap S_1))\leq 0$. We now treat the base case $n=0$. By Theorem \ref{thm:R1} we may pick for every $x\in \partial^{\Phi} U$ an open neighborhood $U_x\subset V$ such that $\partial^{\Phi} U_x\cap (P\cup E)=\emptyset$. By compactness, let $U_{x_1}, U_{x_2},\ldots, U_{x_m}$ be a finite subcover of $\partial^{\Phi} U$. Define $U'=U\cup \bigcup_{i=1}^m U_{x_i}$. Clearly $\partial^{\Phi} U'\subset \bigcup_{i=1}^m \partial^{\Phi} U_{x_i}$ and therefore Condition (4) holds. In addition it is easy to see that Conditions (1) and (2) hold. As $\partial^{\Phi} U'\subset S_1\setminus E$, $\dim(\partial^{\Phi} U')\leq d-1$. By Lemma \ref{lem:0-general}, Condition (3) holds. By induction we suppose that we have constructed $A_0\subset S_1$ with $\text{\rm Int}^{\Phi} A_0=A_0$ which satisfies the following properties: \begin{itemize} \item [(A1)] $U\subset A_0\subset U\cup V$. \item [(A2)] $\partial^{\Phi} A_0\subset V$. \item [(A3)] $A_0$ is $(n-1)$-general\footnote{Recall Remark \ref{rem:gen_post_inherited}.}. \item [(A4)] $\mathcal{C}(n)=\mathcal{F}_0\sqcup \mathcal{F}_1$ such that $\mathcal{F}_1\not=\emptyset$ and $A_0$ is $C^n$-general for all $C^n\in \mathcal{F}_0$. \item [(A5)] $\partial^{\Phi} A_0\cap P=\emptyset$. \end{itemize} Now fix $\mathcal{D}\in \mathcal{F}_1$. We will construct $U'$ such that \begin{itemize} \item [(B1)] $\text{\rm Int}^{\Phi} U'=U'$. \item [(B2)] $U\subset U'\subset U\cup V$ and $\partial^{\Phi} U'\subset V$. \item [(B3)] $U'$ is $(n-1)$-general. \item [(B4)] $U'$ is $C^n$-general for all $C^n\in \mathcal{F}_0\sqcup \{ \mathcal{D}\}$. \item [(B5)] $\partial^{\Phi} U'\cap P=\emptyset$. \end{itemize} Assuming that such $U'$ has been constructed, we may complete the proof by inducting on the size of $\mathcal{F}_1$, finally obtaining $\mathcal{F}_0=\mathcal{C}(n)$. Thus it remains to construct a $U'$ that satisfies conditions $(B1)-(B5)$; this will be achieved in the sequel. Indeed, one may take $U'=A_m$, where $A_m$ is given by Lemma \ref{lem:E1-E6v2}. \end{proof} \begin{df} Denote by $B(x, r)$ with $x\in S_1'$ and $r>0$ the intersection of $ S_1'$ and the open ball centered at $x$ with radius $r$. Also, denote by $B'$ the \textbf{doubling ball} of $B$, i.e.\ if $B=B(x, r)$ then $B'=B(x, 2r)$. \end{df} \begin{lem}\label{lem:D} There exist open sets $\{B_i\}_{i=1}^{m}$ satisfying the following conditions: \begin{itemize} \item [(D1)] for any $C^n\in \mathcal{C}(n)$, any distinct $j,\ell\in I_{C^n}$ and any $1\le i\le m$, one has that $T_{C^n}^{-j} (B_i') \cap T_{C^n}^{-\ell}(B_i')=\emptyset$; \item [(D2)] $\{B_i\}_{i=1}^{m}$ is an open cover of $\partial^{\Phi} A_0$; \item [(D3)] $\bigcup_{i=1}^m B_i\subset V$. \end{itemize} \end{lem} \begin{proof} Let $x\in \partial^{\Phi} A_0$. We will now construct a ball around $x$, $B_x=B(x,r)$ for some $r>0$, such that $B_x\subset V$ and for any $C^n\in \mathcal{C}(n)$ and for any distinct $j,\ell\in I_{C^n}$, one has that $T_{C^n}^{-j} (B_x') \cap T_{C^n}^{-\ell}(B_x')=\emptyset$.
Since $\mathcal{C}(n)$ is a finite collection, it is sufficient to consider a fixed $C^n=(C_0, C_1, \dots, C_{n-1}, C_n)\in \mathcal{C}(n)$ and fixed $(j,\ell)\in I_{C^n}\times I_{C^n}$ with $j>\ell$ (as one may take the intersection of the open balls associated to each $C^n$ and each $j>\ell$). Assume for a contradiction that for every $p\in \mathbb{N}$ we may find an open neighborhood $W_p$ in $S_1'$ of $x$ with $\diam(W_p)<\frac{1}{p}$ and $w_p\in X$ such that $w_p\in T_{C^n}^{-j}W_p\cap T_{C^n}^{-\ell}W_p$. Let $z_p:=T_{C^n}^{j}w_p,\, y_p:=T_{C^n}^{\ell}w_p\in W_p$. This implies we have found sequences $(y_p)_{p\in \mathbb{N}}$ and $(z_p)_{p\in \mathbb{N}}$ in $X$ converging to $x$ such that $T_{D^{j-\ell}}^{j-\ell}y_p=z_p$, where $D^{j-\ell}=(C_{\ell}, \dots, C_{j})\in \mathcal{C}(j-\ell)$. Thus by Proposition \ref{prop:C_ij} there exists $t_p\leq 2\eta (j-\ell)$ such that $\Phi_{t_p}y_p=z_p$. W.l.o.g.\ $t_p\rightarrow t$ as $p\to \infty$ for some $0<t\leq 2\eta (j-\ell)$; note that $t>0$, as the $t_p$ are bounded below by a positive constant (hitting times between the pairwise disjoint closed cross-sections of $\mathcal{S}'$, as well as return times to the same cross-section, are bounded away from zero). We conclude that $\Phi_{t}x=x$, which is a contradiction to the fact that $x\notin P$, as $\partial^{\Phi} A_0 \cap P=\emptyset$. Let now $\{B_i\}_{i=1}^{m}$ be a finite subcover of $\partial^{\Phi} A_0$ extracted from $\{B_x\}$. Clearly properties (D1)-(D3) hold. \end{proof} \begin{lem}\label{lem:E1-E6v2} There exist sets $\{A_k\}_{k=1}^{m}$ such that for $k\geq 1$: \begin{itemize} \item [(E1)] $A_{k-1}\subset A_k$ and $A_k\setminus A_{k-1}\subset B_k'$; \item [(E2)] $\text{\rm Int}^{\Phi} A_k=A_k$; \item [(E3)] $\partial^{\Phi} A_k \subset V$; \item [(E4)] $\partial^{\Phi} A_k\cap P=\emptyset$; \item [(E5)] $\partial^{\Phi} A_k\subset \bigcup_{i=1}^m B_i$; \item [(E6)] $A_k$ is $(n-1)$-general and $C^n$-general for all $C^n\in \mathcal{F}_0$; \item [(E7)] $A_k$ is $\mathcal{D}$-general on $\cup_{1\le j\le k} B_j$. \end{itemize} \end{lem} \begin{proof} We will construct the sets $\{A_k\}_{k=1}^{m}$ by induction on $k$. As $A_0$ is used as the base case, we notice from properties (A1)-(A5) that $A_0$ satisfies (E2)-(E6), while (E7) holds vacuously. Suppose that $(A_i)_{i=0}^{k-1}$ have been already defined, satisfying the listed properties. Define: $$ \mathcal{A}_{C}=\{T_{C}^{-j}(\partial^{\Phi} A_{k-1}): j\in I_{C}\}, $$ for every $C\in \mathcal{C}(n-1)\cup \mathcal{F}_0\cup \{\mathcal{D}\}$. By Theorem \ref{thm:R2}, for every $\mathcal{A}\subset \mathcal{A}_{C}$ with $\bigcap \mathcal{A} \neq \emptyset$, there exists an $F_\sigma$ subset $E_\mathcal{A}^C \subset Z_{C}$ of dimension zero such that \begin{equation}\label{eq:dim_drop} \dim(\bigcap \mathcal{A} \setminus E_\mathcal{A}^C)=\dim(\bigcap \mathcal{A} )-1. \end{equation} Define $$ E=\bigcup_{C\in \mathcal{C}(n-1)\cup \mathcal{F}_0\cup \{\mathcal{D}\}, \mathcal{A}\subset \mathcal{A}_{C}, j\in I_{C}} T_{C}^j E_\mathcal{A}^C. $$ Clearly, $E$ is a zero-dimensional set in $S_1'$ by Corollary \ref{cor:sum_theorem_for_F_sigma} and Lemma \ref{lem:F_sigma_invariance}. Note that by the inductive assumption $\partial^{\Phi} A_{k-1}\subset \bigcup_{i=1}^m B_i$. Thus one may find $\epsilon>0$ so that: $$\overline{\Theta}_{\epsilon}(\overline{B_k}\cap \partial^{\Phi} A_{k-1})\subset \bigcup_{i=1}^m B_i,$$ where recall from Subsection \ref{subsec:notation} that the \textit{closed $\epsilon$-tube} around a closed set $Q\subset S_1'$ is the set $\overline{\Theta}_{\epsilon}(Q):=\{y\in S_1'|\, d(y,Q)\leq \epsilon\}$. Clearly $\overline{\Theta}_{\epsilon}(\overline{B_k})\subset B_k'$ for $\epsilon>0$ small enough.
Thus $\overline{\Theta}_{\epsilon}(\overline{B_k}\cap \partial^{\Phi} A_{k-1})\subset B_k'\cap \bigcup_{i=1}^m B_i$. Note $P\cap S_1$ is an $F_\sigma$ set with $\dim(P\cap S_1)\leq 0$. By Corollary \ref{cor:sum_theorem_for_F_sigma}, $\dim(E\cup (P\cap S_1))\leq 0$. By Proposition \ref{prop:avoiding zero_dim}, there exists a set $W$ with $\text{\rm Int}^{\Phi} W=W$ such that \begin{itemize} \item [(F1)] $\overline{W}\subset B_k'\cap \bigcup_{i=1}^m B_i$; \item [(F2)] $\overline{\Theta}_{\epsilon}(\overline{B_k}\cap \partial^{\Phi} A_{k-1}) \subset W$; \item [(F3)] $\partial^{\Phi} W \cap (E\cup P)=\emptyset$. \end{itemize} Let $A_k=A_{k-1}\cup W$. We will show that $A_k$ is the required set. Since $\text{\rm Int}^{\Phi} W=W$, (E2) follows by Lemma \ref{lem:In U=U}. Note: \begin{equation}\label{eq:A_k} \partial^{\Phi} A_k\subset \partial^{\Phi} A_{k-1} \cup \partial^{\Phi} W \subset \partial^{\Phi} A_{k-1} \cup \overline{W}\subset V. \end{equation} This gives (E3). By (F1), Equation \eqref{eq:A_k} and (E5) for $k-1$, Conditions (E1) and (E5) hold. By (F3) and (E4) for $k-1$, (E4) holds for $k$. It remains to check $(E6)$ and $(E7)$. We start by proving $(E6)$. Fix $C\in \mathcal{C}(n-1)\cup \mathcal{F}_0$. Assume for a contradiction that $A_k$ is not $C$-general. We can thus find $I\subset I_C$ with \begin{equation*} \dim( \cap_{j\in I} T_C^{-j} \partial^{\Phi} A_k )>\max \{-1, d-\sharp I\}. \end{equation*} By Equation \eqref{eq:A_k} it follows that \begin{equation*} \begin{split} \cap_{j\in I} T_C^{-j} \partial^{\Phi} A_k &\subset \cap_{j\in I} \left( T_C^{-j} \partial^{\Phi} A_{k-1} \cup T_C^{-j} \partial^{\Phi} W \right)\\ &\subset \cup_{\alpha\in \{ 0,1\}^{I}}\left( \cap_{j\in I} K_j^{\alpha_j} \right), \end{split} \end{equation*} where $K_j^0=T_C^{-j} \partial^{\Phi} A_{k-1}$ and $K_j^1=T_C^{-j} \partial^{\Phi} W$ for $j\in I$. By $(D1)$ and the fact that $W\subset B_k'$, we see that $ \cap_{j\in I} K_j^{\alpha_j}=\emptyset$ if $\alpha_j=1$ for at least two $j\in I$. By the induction hypothesis, we have that $\dim(\cap_{j\in I} K_j^0 )\le \max\{-1, d-\sharp I\}$. By Lemmas \ref{lem:clo_int_open_is_F_sigma} and \ref{lem:F_sigma_invariance} the sets $K_j^{\alpha_j}$ are $F_\sigma$. As the intersection of finitely many $F_\sigma$ sets is an $F_\sigma$ set, we may apply the sum theorem for $F_\sigma$ sets (Corollary \ref{cor:sum_theorem_for_F_sigma}). It follows that there is $\ell\in I$ such that \begin{equation}\label{eq:dim 1} \dim(\big (\cap_{j\in I\setminus\{\ell\}} K_j^0 \big )\cap K_\ell^1 )> \max\{-1, d-\sharp I\}. \end{equation} Let $\mathcal{A}=\{K_j^0\}_{j\in I\setminus\{\ell\}}$. If there were $x\in K_\ell^1 \cap E_\mathcal{A}^C$, then $T_C^\ell x\in \partial^{\Phi} W\cap T_C^\ell E_\mathcal{A}^C\subset \partial^{\Phi} W\cap E$, which is a contradiction to $(F3)$. We conclude $$ \big (\cap_{j\in I\setminus\{\ell\}} K_j^0 \big ) \cap K_\ell^1 \subset \big (\cap_{j\in I\setminus\{\ell\}} K_j^0 \big ) \setminus E_\mathcal{A}^C. $$ By the inductive hypothesis (E6) holds for $k-1$ and may be applied to $\cap_{j\in I\setminus\{\ell\}} K_j^0$. Thus, using Equation \eqref{eq:dim_drop}: \begin{equation*} \begin{split} \dim(\big (\cap_{j\in I\setminus\{\ell\}} K_j^0\big ) \cap K_\ell^1 ) &\le \dim(\big (\cap_{j\in I\setminus\{\ell\}} K_j^0\big )\setminus E_\mathcal{A}^C)\\ &= \dim(\cap_{j\in I\setminus\{\ell\}} K_j^0)-1\\ &\le \max\{-1, d-\sharp I\}. \end{split} \end{equation*} This is a contradiction to Equation \eqref{eq:dim 1}. We now prove (E7).
We notice from (F2) and the fact that $A_k=A_{k-1}\cup W$ that $\partial^{\Phi} A_k\cap (B_k\cap \partial^{\Phi} A_{k-1})=\emptyset$, equivalently $\partial^{\Phi} A_k\setminus (B_k\cap \partial^{\Phi} A_{k-1})=\partial^{\Phi} A_k$. In addition we conclude from (F1) that $\partial^{\Phi} W \setminus B_k'=\emptyset$. Thus $\partial^{\Phi} A_k=\partial^{\Phi} A_k\setminus (B_k\cap \partial^{\Phi} A_{k-1})\subset (\partial^{\Phi} A_{k-1} \cup \partial^{\Phi} W)\setminus (B_k\cap \partial^{\Phi} A_{k-1})\subset (\partial^{\Phi} A_{k-1}\setminus B_k)\cup (\partial^{\Phi} W\cap B_k')$. Finally: $$\bigcup_{i=1}^k B_i\cap \partial^{\Phi} A_k\subset(B_{k}\cup\bigcup_{i=1}^{k-1} B_i)\cap \big((\partial^{\Phi} A_{k-1}\setminus B_k)\cup (\partial^{\Phi} W\cap B_k')\big). $$ The right-hand side equals the union of the following four expressions, the first of which is empty: \begin{itemize} \item $B_{k}\cap (\partial^{\Phi} A_{k-1}\setminus B_k)=\emptyset$, \item $\bigcup_{i=1}^{k-1} B_i\cap (\partial^{\Phi} A_{k-1}\setminus B_k)\subset \bigcup_{i=1}^{k-1} B_i\cap \partial^{\Phi} A_{k-1}$, \item $B_{k}\cap \partial^{\Phi} W$, \item $(\bigcup_{i=1}^{k-1}B_i)\cap (\partial^{\Phi} W\cap B_k')$. \end{itemize} Fix $I\subset I_\mathcal{D}$. One can write $\bigcap_{j\in I} T_\mathcal{D}^{-j}(\bigcup_{i=1}^k B_i\cap \partial^{\Phi} A_k)$ as the union of $3^{|I|}$ expressions. Notice that, as above, each one of these expressions is a finite intersection of $F_\sigma$ sets and is thus itself $F_\sigma$. Therefore by Corollary \ref{cor:sum_theorem_for_F_sigma}, it is enough to show that each of these expressions has dimension at most $\max\{-1, d-|I|\}$. Let us analyze each of these expressions. If the expression $B_{k}\cap \partial^{\Phi} W$ appears twice, then the intersection is empty, as $T_\mathcal{D}^{-j_1}(B_{k})\cap T_\mathcal{D}^{-j_2}(B_{k})=\emptyset$ for $j_1\neq j_2$. The same is true for the expression $(\bigcup_{i=1}^{k-1}B_i)\cap (\partial^{\Phi} W\cap B_k')$, as $T_\mathcal{D}^{-j_1}(B_{k}')\cap T_\mathcal{D}^{-j_2}(B_{k}')=\emptyset$ for $j_1\neq j_2$. Thus the expressions involving $\partial^{\Phi} W$ appear at most once. If no such expression appears at all, then we are only left with the expression $\bigcup_{i=1}^{k-1} B_i\cap \partial^{\Phi} A_{k-1}$ and the dimension estimate is handled by the inductive assumption that condition (E7) holds for $k-1$. Finally we treat the case where an expression involving $\partial^{\Phi} W$ appears exactly once. We thus analyze an expression contained in an expression of the form $$T_\mathcal{D}^{-\ell}\partial^{\Phi} W\cap \bigcap_{j\in {I\setminus \{\ell\}}} T_\mathcal{D}^{-j}(\bigcup_{i=1}^{k-1} B_i\cap \partial^{\Phi} A_{k-1}).$$ Let $\mathcal{A}=\{T_\mathcal{D}^{-j}(\partial^{\Phi} A_{k-1}): j\in {I\setminus \{\ell\}}\}$. If there were $x\in T_\mathcal{D}^{-\ell}\partial^{\Phi} W \cap E_{\mathcal{A}}^\mathcal{D}$, then $T_\mathcal{D}^\ell x\in \partial^{\Phi} W\cap T_\mathcal{D}^\ell E_{\mathcal{A}}^\mathcal{D}\subset \partial^{\Phi} W\cap E$, which is a contradiction to $(F3)$.
Thus, using condition (E7) for $k-1$ and Equation \eqref{eq:dim_drop}, \begin{equation*} \begin{split} &\dim(T_\mathcal{D}^{-\ell}\partial^{\Phi} W\cap \bigcap_{j\in {I\setminus \{\ell\}}} T_\mathcal{D}^{-j}(\bigcup_{i=1}^{k-1} B_i\cap \partial^{\Phi} A_{k-1}))\\ &\le \dim(\big(\bigcap_{j\in {I\setminus \{\ell\}}} T_\mathcal{D}^{-j}(\bigcup_{i=1}^{k-1} B_i\cap \partial^{\Phi} A_{k-1})\big )\setminus E_{\mathcal{A}}^\mathcal{D})\\ &= \dim(\bigcap_{j\in {I\setminus \{\ell\}}} T_\mathcal{D}^{-j}(\bigcup_{i=1}^{k-1} B_i\cap \partial^{\Phi} A_{k-1}))-1\\ &\le \max\{-1, d-|I|\}, \end{split} \end{equation*} as desired. \end{proof} \subsection{Proof of the Main Theorem} The proof of the main theorem necessitates an approximation lemma. Recall the definition of $\delta_0$ in Definition \ref{def:delta_0}. \begin{lem}\label{lem:open neigh intersection empty} Let $A$ and $V$ be subsets such that $A=\overline{A}\subset V=\text{\rm Int}^{\Phi} V\subset\text{\rm Int}^{\Phi} S_1$. Let $n,d\in \mathbb{N}$. Suppose that for all $C^n=(C_0,C_1,\ldots, C_n)\in \mathcal{C}(n)$ and $I\subset I_{C^n}$ with $0\in I$ and $\sharp I=d+1$, it holds that \begin{equation}\label{eq:open neigh intersection empty 0} \bigcap_{k\in I} T_{C^n}^{-k}A=\emptyset. \end{equation} Then there exists $U\subset \overline{U}\subset V\subset S_1$ with $\text{\rm Int}^{\Phi} U=U$ and $A\subset U$ such that \begin{equation}\label{eq:open neigh intersection empty 1} Z_{C^n}^{\delta_0}\cap \bigcap_{k\in I} T_{C^n}^{-k}U=\emptyset, \end{equation} for all $C^n=(C_0,C_1,\ldots, C_n)\in \mathcal{C}(n)$ and $I\subset I_{C^n}$ with $0\in I$ and $\sharp I=d+1$. \end{lem} \begin{proof} We claim there is $m_0\in \mathbb{N}$ so that for all $m\geq m_0$, $U=U_m:=\Theta_{1/m}(A)$ fulfills the required properties. Assume for a contradiction that this is not the case. We may thus find, for infinitely many $m$, $ x_m \in Z_{C_m^n}^{\delta_0}$, $C_m^n=(C_{0,m},C_{1,m},\ldots, C_{n,m})\in \mathcal{C}(n)$, $0=j_0^m<j_1^m<\cdots <j_d^m$, $\{j_i^m\}_{i=0}^d\subset I_{C_m^n}$ so that $T_{C_m^n}^{j_i^m} x_m \in \Theta_{1/m}(A)$ for $i=0,\ldots, d$. As $\mathcal{C}(n)$ is finite, by passing to a subsequence we may assume w.l.o.g.\ $C_m^n=C^n$ for some fixed $C^n=(C_0,C_1,\ldots, C_n)\in \mathcal{C}(n)$, and $j_i^m=j_i$ for $i=0,\ldots, d$, for some fixed $I:=\{j_i\}_{i=0}^d\subset I_{C^n}$. By passing to a further subsequence we may assume w.l.o.g.\ that $x_m\rightarrow x\in Z_{C^n}^{\delta_0}$. In particular $T_{C^n}^{j_i} x$ is well defined for $i=0,\ldots, d$. Conclude $T_{C^n}^{j_i} x \in A$ for $i=0,\ldots, d$. Thus $x\in \bigcap_{k\in I} T_{C^n}^{-k}A$, which is a contradiction. \end{proof} \begin{thm}\label{thm:SFBP} Let $X$ be a compact finite-dimensional space. Let $\Phi$ be a topological flow on $X$ without fixed points, having a countable number of periodic orbits. Then for any $U, V\subset S_1$ with $\text{\rm Int}^{\Phi} U=U$, $\text{\rm Int}^{\Phi} V=V$ and $\partial^{\Phi} U\subset V$, there exists $U'\subset S_1$ with $\text{\rm Int}^{\Phi} U'=U'$ and $U\subset U'\subset U\cup V$ such that $U'$ has a small flow boundary.
\end{thm} \begin{proof} By induction, we will construct $\{U_k\}_{k=0}^\infty$ and $\{V_k\}_{k=0}^{\infty}$ satisfying for all $k\ge 0$, \begin{itemize} \item [(1)] $U_0=U$ and $V_0=V$; \item [(2)] $\text{\rm Int}^{\Phi} U_k =U_k\subset S_1$ and $\text{\rm Int}^{\Phi} V_k=V_k\subset S_1$; \item [(3)]$\partial^{\Phi} U_k\subset V_k$; \item [(4)]$\overline{V_{k+1}}\subset V_k$ and $U_k\subset U_{k+1}$; \item [(5)]$U_{k+1}\subset U_k \cup V_k$; \item [(6)]for every $C^k=(C_0,C_1,\ldots, C_k)\in \mathcal{C}(k)$, one has that $ Z_{C^k}^{\delta_0}\cap\bigcap_{j\in I} T_{C^k}^{-j}V_k=\emptyset $ for all $I\subset I_{C^k}$ with $0\in I$ and $\sharp I=d+1$. \end{itemize} Assuming that such $\{U_k\}_{k=0}^\infty$ and $\{V_k\}_{k=0}^{\infty}$ have been constructed, we set $U'=\cup_{i=0}^{\infty} U_i$. It is clear that $U'$ is relatively open in $S_1'$. By $(4)$ and $(5)$, we have that $$ U_{k+\ell}\cup V_k \subset U_{k+\ell-1}\cup V_{k+\ell-1}\cup V_k \subset U_{k+\ell-1}\cup V_k, $$ for all $k,\ell>0$. It follows that \begin{equation*} U_{k+\ell}\cup V_k \subset U_{k}\cup V_k, \end{equation*} and thus, as $U'=\bigcup_{\ell\geq k} U_\ell$ for all $k$ by (4), \begin{equation*} U_{k+1} \subset U' \subset U_{k+1}\cup V_{k+1}, \end{equation*} for all $k>0$. Then by $(2), (3), (4)$ we obtain that \begin{equation}\label{eq:last thm 1} \partial^{\Phi} U' \subset \overline{U_{k+1}\cup V_{k+1}}\setminus U_{k+1} \subset \partial^{\Phi} U_{k+1} \cup \overline{V_{k+1}} \subset V_{k}, \end{equation} for all $k>0$. In order to establish $\cocap(\partial^{\Phi} U')=0$, it is sufficient to show that any orbit of the flow hits the set $\partial^{\Phi} U'$ at most $d$ times. Let $x\in \partial^{\Phi} U'\subset S_1$ and assume for a contradiction that there are $0<t_1<t_2<\ldots<t_{d}$ such that $\Phi_{t_i}x\in \partial^{\Phi} U'\subset S_1$, $i=1,\ldots,d$. By Lemma \ref{lem:internal_return}, for some $n$, there is $C^n=(C_0, C_1, \dots, C_{n})\in \mathcal{C}(n)$ with $x\in Z_{C^n}^{\delta_0}$ and $I=\{j_i\}_{i=0}^d\subset I_{C^n}$ with $j_0=0<j_1<\ldots <j_d$ such that $T_{C^n}^{j_i}x=\Phi_{t_i}x$ for $i=0,\ldots, d$. Thus $$\emptyset\neq Z_{C^n}^{\delta_0}\cap \bigcap_{j\in I} T_{C^n}^{-j}\partial^{\Phi} U'\subset Z_{C^n}^{\delta_0}\cap \bigcap_{j\in I} T_{C^n}^{-j}V_n,$$ which is a contradiction to $(6)$. It remains to construct $\{U_k\}_{k=0}^\infty$ and $\{V_k\}_{k=0}^{\infty}$. By (1), the sets $U_0$ and $V_0$ are well defined for $k=0$. Suppose that $(U_i)_{i=0}^k$ and $(V_i)_{i=0}^k $ have been constructed. Applying Lemma \ref{lem:n-general2} to $U_k,V_k$, we have a subset $U_{k+1}\subset S_1$ with $\text{\rm Int}^{\Phi} U_{k+1}=U_{k+1}$ such that \begin{itemize} \item [(A1)] $U_k\subset U_{k+1}\subset U_k\cup V_k$; \item [(A2)] $\partial^{\Phi} U_{k+1}\subset V_k$; \item [(A3)] $U_{k+1}$ is $(k+1)$-general. \end{itemize} Applying Lemma \ref{lem:open neigh intersection empty} and Remark \ref{rem:empty_int} to $A=\partial^{\Phi} U_{k+1}$, we have a set $V_{k+1}\subset S_1$ with $\text{\rm Int}^{\Phi} V_{k+1} = V_{k+1}$ such that \begin{itemize} \item [(B1)] $\partial^{\Phi} U_{k+1}\subset V_{k+1}\subset \overline{V_{k+1}}\subset V_k $; \item [(B2)] for all $C^{k+1}=(C_0,\ldots,C_{k+1})\in \mathcal{C}(k+1)$, it holds that $Z_{C^{k+1}}^{\delta_0}\cap \bigcap_{j\in I} T_{C^{k+1}}^{-j}V_{k+1}=\emptyset $ for all $I\subset I_{C^{k+1}}$ with $0\in I$ and $\sharp I=d+1$. \end{itemize} Note that the sets $U_{k+1}$ and $V_{k+1}$ satisfy conditions $(2)-(6)$.
\end{proof} We can now prove the main theorem: \begin{proof}[Proof of Theorem \ref{main thm}] Fix a point $x\in X$ and a closed cross-section $S$ with $x\in \text{\rm Int}^{\Phi} S$. Using Lemma \ref{lem:complete family appendix}, one may find two complete families of cross-sections $\mathcal{S}=\{S_i \}_{i=1}^{N}$ and $\mathcal{S}'=\{S_i' \}_{i=1}^{N}$ obeying the conditions in the beginning of Subsection \ref{sec:notations}, with $x\in \text{\rm Int}^{\Phi} (S_1)\subset S_1 \subset S$. We now choose $U$ and $V$ in $S_1$ with $\text{\rm Int}^{\Phi} U=U$ and $\text{\rm Int}^{\Phi} V=V$ such that $x\in U \subset S_1$ and $\partial^{\Phi} U \subset V \subset S_1$. Applying Theorem \ref{thm:SFBP} to $U$ and $V$, we have an open set $U'\subset S_1$ with $\text{\rm Int}^{\Phi} U'=U'$ and $U\subset U'\subset U\cup V$ such that $U'$ has a small flow boundary and $x\in U'$. We thus conclude that $(X, \Phi)$ has the small flow boundary property. \end{proof} \section{Expansive flows have strongly isomorphic symbolic extensions}\label{sec:Expansive flows have strongly isomorphic symbolic extensions} \subsection{Background} In this section, we are interested in expansive flows. We first recall the definition of expansive flows and several properties of such flows. Then we build the symbolic extension of an expansive flow following Bowen and Walters. Finally we give a positive answer to an open question by Bowen and Walters, showing that any expansive topological flow has a strongly isomorphic symbolic extension. The system $(X,T)$ is said to be {\bf expansive} if there exists a $\delta>0$ such that $d(T^nx,T^ny)<\delta$ for all $n\in \mathbb{Z}$ implies $x=y$, where $d$ is a metric on $X$. For example, Anosov diffeomorphisms are expansive (see \cite{anosov1967geodesic}). Keynes and Robertson \cite{keybob} proved that every expansive homeomorphism has a symbolic extension. Moreover, when $X$ is $0$-dimensional, it is conjugate to a subshift. Expansive systems have been extensively investigated and many interesting properties have been established. For example, expansiveness is invariant under topological conjugacy; the \textit{periodic growth}\footnote{The periodic growth of $(X,T)$ is defined as $\limsup_{n\rightarrow\infty}(1/n)\log |P_n|$ where $P_n$ is the set of $n$-periodic points.} and the topological entropy of an expansive discrete system are finite \cite{conze1968points}. Ma\~n\'e \cite{mane1979expansive} showed that an expansive $\mathbb{Z}$-system must be finite-dimensional. From this it follows that an expansive $\mathbb{Z}$-system without fixed points has the small boundary property (\cite{L95}). By \cite[Proposition C.1]{burguet2019symbolic}, an expansive $\mathbb{Z}$-system with the small boundary property admits an \textit{essential uniform generator}, which induces a strongly isomorphic symbolic extension (\cite[Proposition 3.1]{burguet2019symbolic}). \begin{df} The flow $(X, \Phi)$ is said to be \textbf{expansive} if for any $\epsilon>0$, there exists $\delta>0$ such that if $d(\Phi_t(x), \Phi_{s(t)}(y))<\delta$ for all $t\in \mathbb{R}$, for a pair of points $x,y\in X$ and a continuous map $s:\mathbb{R} \to \mathbb{R}$ with $s(0)=0$, then there exists $t_0\in \mathbb{R}$ with $|t_0|< \epsilon$ such that $y=\Phi_{t_0}(x)$. \end{df} The notion of expansiveness of topological flows was introduced by Bowen and Walters \cite{bowen1972expansive}. We remark that the definition of expansiveness is clearly independent of the metric.
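For orientation, we record a basic and well-known source of examples: the suspension of an expansive homeomorphism under a constant roof function is an expansive flow; in fact, Bowen and Walters proved that a suspension flow is expansive if and only if its base homeomorphism is expansive \cite{bowen1972expansive}.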
By \cite[Theorem 5]{bowen1972expansive}, the entropy of an expansive flow is finite and for every $t\geq 0$ the number of periodic orbits of period in $[0,t]$ is finite. By \cite[Lemma 1]{bowen1972expansive}, an expansive flow has only a finite number of fixed points and each of them is an isolated point\footnote{Note that an expansive $\mathbb{Z}$-system has also only a finite number of fixed points. However, these points need not be isolated as witnessed by the (expansive) full shift on two letters. }. This reduces the study of expansive flows to those without fixed points. Another important property of expansiveness which Bowen and Walters proved is that expansiveness is a conjugacy invariant for flows \cite[Corollary 4]{bowen1972expansive}. One may therefore argue that the definition of expansiveness by Bowen and Walters is the correct one (see also the discussion of other attempts in \cite[Pages 180-181]{bowen1972expansive}). Keynes and Sears \cite{keynes1981real} extended the above mentioned result of Ma\~n\'e, showing that expansive flows must be finite-dimensional. It can be shown that Anosov flows are expansive (\cite{anosov1967geodesic}). The geodesic flow on a compact smooth manifold of negative curvature is an Anosov flow and thus expansive; also, Axiom A flows are expansive on their nonwandering sets \cite{bowen1972periodic}. \subsection{Bowen and Walters' construction}\label{sec:Bowen and Walters} In this subsection, we recall the construction of Bowen and Walters, i.e.\ the symbolic extension of an expansive flow. Let $(X, \Phi)$ be an expansive flow without fixed points. The following exposition is based on \cite[Section 5]{bowen1972expansive}. \begin{thm}[\cite{bowen1972expansive}, Theorem 3]\label{Thm:expansive appendix} A topological flow without fixed points $(X, \Phi)$ is expansive if and only if for any $\epsilon>0$ there exists $\alpha=\alpha(\epsilon)>0$ such that the following holds: if $\mathbf{t}=(t_i)_{i\in \mathbb{Z}}$ and $\mathbf{u}=(u_i)_{i\in \mathbb{Z}}$ with $t_0=u_0=0$, $0<t_{i+1}-t_i\le \alpha$, $|u_{i+1}-u_i|\le \alpha$, $t_i\to \infty$ and $t_{-i}\to -\infty$ as $i\to \infty$ and if $x,y\in X$ satisfy $d(\Phi_{t_i}(x), \Phi_{u_i}(y))\le \alpha$ for all $i\in \mathbb{Z}$, then there exists $t\in \mathbb{R}$ with $|t|\le \epsilon$ such that $y=\Phi_t(x)$. \end{thm} Let $\eta$ be as in Lemma \ref{lem:complete family appendix}. Choose $\alpha=\alpha(\eta)>0$ as in Theorem \ref{Thm:expansive appendix}; w.l.o.g.\ we assume $\alpha<\eta$. By Lemma \ref{lem:complete family appendix}, we can find a complete family $\mathcal{S}=\{S_i \}_{i=1}^N$ of cross-sections of injectivity time $\eta>0$ with $\Phi_{[0,\alpha]}\mathcal{G}=\Phi_{[-\alpha, 0]}\mathcal{G}=X$, where $\mathcal{G}=\cup_{1\le i\le N}S_i$. Since the $S_i$ are pairwise disjoint and compact, there is $\beta>0$ so that the sets $\Phi_{[0, \beta]}(S_i)$, $1\le i\le N$, are pairwise disjoint. For $x\in \mathcal{G}$, let $r_i(x)$ be the $i$-th return time to $\mathcal{G}$ (see Definition \ref{def:i_th return map}). As $\Phi_{[-\alpha, 0]}\mathcal{G}=X$, every $y\in \mathcal{G}$ has some $x\in \mathcal{G}$ and $0\leq s\leq \alpha$ so that $\Phi_{-s}x=y$, i.e.\ $\Phi_{s}y=x\in \mathcal{G}$. It follows that $0<\beta\le r_{i+1}(x)-r_i(x)\le \alpha$ for all $x\in \mathcal{G}$. Denote $\Sigma=\{1,2,\dots, N\}$ and let $\sigma$ be the (left) shift on $\Sigma^{\mathbb{Z}}$.
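In outline (as an informal guide to the construction below): the subshift will record the itineraries of orbits through the cross-sections $S_1,\dots,S_N$, the roof function will record the time between consecutive visits to $\mathcal{G}$, and the factor map will send a pair (itinerary, elapsed time) to the corresponding point of $X$.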
Define $\mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ by $$ \{ (x, \mathbf{q})\in \mathcal{G} \times \Sigma^{\mathbb{Z}}: \exists \mathbf{t}\in \mathbb{R}^{\mathbb{Z}} ~\text{so that}~t_0=0, t_{i+1}-t_i\in [\beta,\alpha], \Phi_{t_i}(x)\in S_{q_i}~\text{for all}~i\in \mathbb{Z} \}. $$ In the definition $\mathbf{t}$ is called a \textbf{return-time sequence} for $(x, \mathbf{q})$; $\mathbf{q}$ is called a \textbf{return-name sequence} for $x$ and $x$ is called a \textbf{realization} of $\mathbf{q}$. A return-time sequence $\mathbf{t}$ for $x$ is any return-time sequence of $(x, \mathbf{q})\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ for some $\mathbf{q}$. Note that the sequence $\mathbf{r}=\{r_i(x)\}_{i=-\infty}^{\infty}$ is the \textbf{maximal} return-time sequence for $x$ in the sense that if $\mathbf{t}=\{t_i\}_{i=-\infty}^{\infty}$ is a return-time sequence for $x$, then $\{t_i\}_{i=-\infty}^{\infty}\subset \{r_i(x)\}_{i=-\infty}^{\infty}$. Return-time sequences for $x$ other than $\mathbf{r}$ may exist. This is the case, for example, if $r_{i+1}-r_{i-1}\leq \alpha$ for some $i$, as one can then remove $r_i$ from $\mathbf{r}$ and still have a return-time sequence for $x$. In contrast, Lemma \ref{lem:unique_return_time} shows that $(x, \mathbf{q})$ has a unique return-time sequence. In addition, in Lemma \ref{lem:unique return-time} it is shown that the return-name sequence determines a realization uniquely. \begin{lem}\label{lem:closed appendix} $\mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ is closed in $\mathcal{G} \times \Sigma^{\mathbb{Z}}$. \end{lem} \begin{proof} Let $(x^n, \mathbf{q}^n)\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ converge to $(x, \mathbf{q})$ as $n\to \infty$. Let $\mathbf{t}^n$ be a return-time sequence of $(x^n, \mathbf{q}^n)$. Since $|t_i^n|\le |i|\alpha$ for every $i\in \mathbb{Z}$, we may assume that the limit $\lim_{n\to \infty}t_i^n$ exists, say $t_i$, for each $i\in \mathbb{Z}$. Since $\Sigma$ is finite, we see that for each $i\in \mathbb{Z}$, $q_i^n=q_i$ for large $n>n(i)$. Since $S_{q_i}$ is closed for each $i\in \mathbb{Z}$, we obtain that $\Phi_{t_i}(x)=\lim_{n\to \infty} \Phi_{t_i^n} (x^n)\in S_{q_i}$ for each $i\in \mathbb{Z}$. This implies that $(x,\mathbf{q})\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ with return-time sequence $\mathbf{t}=(t_i)_{i\in \mathbb{Z}}$. \end{proof} \begin{lem}\label{lem:unique_return_time} For each $(x, \mathbf{q})\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$, the return-time sequence $\mathbf{t}$ is unique. \end{lem} \begin{proof} Suppose that $\mathbf{t}$ and $\mathbf{t}'$ are two distinct return-time sequences of $(x, \mathbf{q})\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$. Without loss of generality, we assume that $(t_i)_{i>0}\not= (t_i')_{i>0}$. Let $k=\min\{i>0: t_i\not=t_i' \}$. It follows that $t_{k-1}=t_{k-1}'$ and $\Phi_{t_k}(x), \Phi_{t_k'-t_k}(\Phi_{t_k}(x))\in S_{q_k}$. Since $t_k-t_{k-1}\leq\alpha$ and $t_k'-t_{k-1}'\leq\alpha$, we have that $|t_k'-t_k|\le \alpha<\eta$; as $S_{q_k}$ is a cross-section of injectivity time $\eta$, it follows that $t_k=t_k'$. This is a contradiction. \end{proof} Thus we may define the map: $$ \mathbf{t}: \mathcal{G} \rtimes \Sigma^{\mathbb{Z}} \to \mathbb{R}^\mathbb{Z}. $$ \begin{lem} The map $\mathbf{t}$ is continuous. \end{lem} \begin{proof} It suffices to show that $t_i: \mathcal{G} \rtimes \Sigma^{\mathbb{Z}} \to [-|i|\alpha,|i|\alpha]\subset \mathbb{R}$ is continuous for each $i\in \mathbb{Z}$. Assume $(x^n, \mathbf{q}^n)\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ converge to $(x, \mathbf{q})$ as $n\to \infty$.
Let $\{t_i(x^{n_j}, \mathbf{q}^{n_j})\}_j$ be an arbitrary converging subsequence. By the arguments of Lemmas \ref{lem:closed appendix} and \ref{lem:unique_return_time}, $\lim_{j\rightarrow \infty} t_i(x^{n_j}, \mathbf{q}^{n_j})=t_i(x, \mathbf{q})$. This establishes continuity. \end{proof} Let $P_2: \mathcal{G} \rtimes \Sigma^{\mathbb{Z}} \to \Sigma^{\mathbb{Z}}$ be the projection. \begin{lem}\label{lem:unique return-time} The map $P_2$ is injective. \end{lem} \begin{proof} Suppose that $(x, {\bf q}), (y, {\bf q}) \in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$. Since $(X, \Phi)$ is expansive, by Theorem \ref{Thm:expansive appendix}, we see that $x=\Phi_{t}(y)$ for some $t$ with $|t|\le \eta$. Since $x,y\in S_{q_0}$ and $S_{q_0}$ is a cross-section of injectivity time $\eta$, we have that $x=y$. \end{proof} Let $\mathcal{Z}:=\cup_{1\le i\le N}\partial^{\Phi} (S_i)$ and $\mathcal{W}$ be the set of points whose orbit does not intersect $\mathcal{Z}$. In other words, \begin{equation}\label{eq:W} \mathcal{W}=X\setminus \bigcup_{r\in \mathbb{R}} \Phi_{r}(\mathcal{Z}). \end{equation} Clearly, the set $\mathcal{W}$ is $\Phi$-invariant, i.e.\ $\Phi_t(\mathcal{W})=\mathcal{W}$ for all $t\in \mathbb{R}$. Define $$\mathcal{V}=\mathcal{W}\cap \mathcal{G}.$$ Recall the definition of $r_i(x)$ above. Define $Q: \mathcal{V} \to \Sigma^{\mathbb{Z}}$ by $Q(x)=(Q_i(x))_{i\in \mathbb{Z}}$ where $Q_i(x)=j$ if $\Phi_{r_i(x)}(x)\in S_j$. Note that the map $Q$ is not necessarily continuous. If $x\in \mathcal{V}$, then also $\Phi_{r_1(x)}(x)\in \mathcal{V}$ and $Q(\Phi_{r_1(x)}(x))=\sigma(Q(x))$; thus $Q(\mathcal{V})$ is $\sigma$-invariant. Define $$\Lambda=\overline{Q(\mathcal{V})}.$$ Clearly $(\Lambda, \sigma)$ is a subshift. Let $P_1: \mathcal{G} \rtimes \Sigma^{\mathbb{Z}} \to \mathcal{G}$ be the projection on the first coordinate. Note that the map $P_2^{-1}: P_2(\mathcal{G} \rtimes \Sigma^{\mathbb{Z}}) \to \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ is continuous (being the inverse of a continuous injection defined on a compact space). Clearly, for $x\in \mathcal{V}$, the point $(x, Q(x))\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$ and consequently $Q(x)\in P_2(\mathcal{G} \rtimes \Sigma^{\mathbb{Z}})$. Since $P_2(\mathcal{G} \rtimes \Sigma^{\mathbb{Z}})$ is compact and contains $Q(\mathcal{V})$, we see that $\Lambda$, which is the closure of $Q(\mathcal{V})$, is contained in $P_2(\mathcal{G} \rtimes \Sigma^{\mathbb{Z}})$. We define the continuous map $P: \Lambda \to \mathcal{G}$ by $$P({\bf q})=P_1(P_2^{-1}({\bf q}))$$ for ${\bf q} \in \Lambda$. Thus, $P$ associates with a return-name sequence in $\Lambda$ its realization in $\mathcal{G}\subset X$. Clearly, $P\circ Q$ is the identity on $\mathcal{V}$. We define the continuous function $f: \Lambda \to [\beta, \alpha]$ by $$ f({\bf q})=t_1(P_2^{-1}{\bf q}). $$ \begin{rem}\label{rem:f} From the proof of Lemma \ref{lem:unique_return_time} it is clear that $f({\bf q})$ is the first positive time when the realization of $\bf q$ in $\mathcal{G}$ returns to $S_{q_1}$, but it is possible that it returns to $\mathcal{G}$ earlier. However, if $\bf q\in Q(\mathcal{V})$ then $f({\bf q})$ is the first positive return time to $\mathcal{G}$ of the realization of $\bf q$ (due to the definition of $Q$). \end{rem} \begin{df} Denote by $(\Lambda_{f}, \Psi)$ the suspension flow over $(\Lambda, \sigma)$ with roof function $f$. Define $\pi: \Lambda_f \to X$ by $$ \pi(({\bf q}, t))=\Phi_t(P({\bf q})). $$ \end{df} \begin{thm}[Theorem 10, \cite{bowen1972expansive}]\label{thm:symbolic appen 1} The flow $(X, \Phi)$ is a factor of $(\Lambda_{f}, \Psi)$ via $\pi$.
\end{thm} \begin{proof} First, we show that $\pi$ is well defined, i.e.\ that $\pi(({\bf q}, f({\bf q})))=\pi((\sigma({\bf q}), 0))$ holds. This easily implies that $\pi$ is $\mathbb{R}$-equivariant. By the definition of $\pi$, one has to show $\pi(({\bf q}, f({\bf q})))=\Phi_{f({\bf q})}(P({\bf q}))=P(\sigma({\bf q}))=\pi((\sigma({\bf q}), 0))$. It is enough to show $(\Phi_{f({\bf q})}(P({\bf q})), \sigma({\bf q}))\in \mathcal{G} \rtimes \Sigma^{\mathbb{Z}}$, as the return-name sequence determines the realization (Lemma \ref{lem:unique return-time}). Indeed, we claim that the associated return-time sequence of $(\Phi_{f({\bf q})}(P({\bf q})), \sigma({\bf q}))$ is $\{{t}_{i+1}(P({\bf q}), {\bf q})-t_1(P({\bf q}), {\bf q})\}_{i\in \mathbb{Z}}$, which is confirmed by: $$ \Phi_{{t}_{i+1}(P({\bf q}), {\bf q})-t_1(P({\bf q}), {\bf q})} (\Phi_{f({\bf q})}(P({\bf q})))=\Phi_{{t}_{i+1}(P({\bf q}), {\bf q})} (P({\bf q})) \in S_{q_{i+1}}, $$ for $i\in \mathbb{Z}$, where we used that $t_1(P({\bf q}), {\bf q})=f({\bf q})$ (see Remark \ref{rem:f}). Finally, we note that the image $\pi(\Lambda_f)$ is a $\Phi$-invariant closed subset of $X$ which contains $\mathcal{V}$, as for $x\in \mathcal{V}$ it holds that $\pi((Q(x),0))=P\circ Q(x)=x$. From Definition \ref{def:Flow boundaries and interiors} it is easy to see that the closed set $\Phi_{[-\eta, \eta]}\mathcal{Z}$ has empty interior (see also \cite[Lemma 2.4(1)]{burguet2019symbolic}). By the Baire category theorem this implies that $\mathcal{W}=\bigcup_{r\in \mathbb{R}} \Phi_{r}(\mathcal{V}) $ is dense in $X$. We conclude that $\pi(\Lambda_f)=X$. \end{proof} The following lemma is crucial for the proof of Theorem \ref{thm:symbolic appen} in the sequel. \begin{lem}\label{lem:one-to-one appendix} Let $({\bf q},t)\in \Lambda_f$ for some $0\leq t<f({\bf q})$. If $\pi(({\bf q},t))=z\in \mathcal{V}$, then $t=0$ and ${\bf q}=Q(z)$. \end{lem} \begin{proof} By definition $\pi(\Lambda\times \{0\})=P(\Lambda)\subset \mathcal{G}$. Thus $x:=\Phi_{-t}(z)\in \mathcal{G}\cap \mathcal{W}=\mathcal{V}$. Moreover $x=P({\bf q})$. We will show ${\bf q}=Q(x)$. This will imply that $t=0$, as $f({\bf q})$ is the first positive return time to $\mathcal{G}$ (see Remark \ref{rem:f}) and $0\leq t<f({\bf q})$. This will also imply, as desired, that ${\bf q}=Q(x)=Q(z)$. To show ${\bf q}=Q(x)$, pick $x_n\in \mathcal{V}$ such that $Q(x_n)\to {\bf q}$ as $n\to \infty$. As $P$ is continuous, we see that $x_n=P(Q(x_n))\to P({\bf q})=x$ as $n\to \infty$. Let ${\bf t}={\bf t}(x, Q(x))$. For $N>0$, let $$ E_N=\bigcup_{i=-N}^{N-1} [t_i+\beta/2, t_{i+1}-\beta/2] $$ and define the open set $$ O_N=\left( \bigcap_{i=-N}^{N} \Phi_{-(t_i-\beta/2, t_i+\beta/2) }\text{\rm Int}^{\Phi}(S_{Q_i(x)})\right) \setminus \Phi_{-E_N} \mathcal{G}. $$ Note that for $i=-N,\ldots, N$, $\Phi_{t_i}(x)\in S_{Q_i(x)}$, and that $\Phi_t(x)\notin \mathcal{G}$ for $t\in E_N$, i.e.\ $x\notin \Phi_{-E_N}\mathcal{G}$. As $x\in\mathcal{V}$, it holds that for all $t\in\mathbb{R}$, $\Phi_t x\notin\mathcal{Z}$. Thus, if $\Phi_t x\in S_i$, then in effect $\Phi_t x\in\text{\rm Int}^{\Phi} S_i$; in particular $\Phi_{t_i}(x)\in \text{\rm Int}^{\Phi}(S_{Q_i(x)})$ for $i=-N,\ldots,N$. We conclude that the set $O_N$ is open and contains $x$. Note that if $y\in O_N\cap \mathcal{V}$, then $Q_i(y)=Q_i(x)$ for $-N\le i\le N$. It follows that for each $N$, there exists $m(N)$ such that for $n>m(N)$, $Q_i(x_n)=Q_i(x)$ for $-N\le i\le N$. This implies that $Q(x_n)\to Q(x)$ as $n\to \infty$. Recall that $Q(x_n)\to {\bf q}$ as $n\to \infty$. Thus we obtain that ${\bf q}=Q(x)$ as desired.
\end{proof} \subsection{Bowen and Walters' question answered} Bowen and Walters asked the following question: \begin{question}(\cite[Problem, p. 192]{bowen1972expansive}) Let $(X,\Phi)$ be an expansive topological flow without fixed points. Can the symbolic extension $\pi$ in Theorem \ref{thm:symbolic appen 1} be made entropy preserving by carefully choosing the cross-sections $\{S_i\}_{i=1}^{N}$? \end{question} \begin{thm}\label{thm:symbolic appen} Let $(X,\Phi)$ be an expansive flow without fixed points. Then $\pi$ is strongly isomorphic. \end{thm} \begin{proof} By \cite[Theorem 4.2]{keynes1981real} an expansive flow is finite-dimensional. By \cite[Theorem 5]{bowen1972expansive}, an expansive flow has the property that for any $\tau>0$, the number of periodic orbits of period less than $\tau$ is finite. Since $(X, \Phi)$ has no fixed points, we can now apply Main Theorem \ref{main thm} to conclude that $(X, \Phi)$ has the small flow boundary property. Adapting the notation and results of Subsection \ref{sec:Bowen and Walters}, we notice that the cross-sections $S_i$ in Lemma \ref{lem:complete family appendix} may be chosen so that each $S_i$ has a small flow boundary. Thus $\Phi_{[-\eta,\eta]}(\mathcal{Z})$ is a null set and, by $\sigma$-additivity of measures, $\mathcal{W}$ defined in Equation \eqref{eq:W} is a full set. It therefore suffices to show that $\pi$ is one-to-one when restricted to $\pi^{-1}(\mathcal{W})$. In other words, if $\pi(({\bf q_1}, t_1))=\pi(({\bf q_2}, t_2))=y\in \mathcal{W}$, then $({\bf q_1}, t_1)=({\bf q_2}, t_2)$. Write $y=\Phi_{t}x$ for some $t\in \mathbb{R}$ and $x\in \mathcal{V}$. By applying $\Phi_{-t}$, we may assume w.l.o.g.\ that $\pi(({\bf q_1}, t_1))=\pi(({\bf q_2}, t_2))\in \mathcal{V}$. It is thus enough to show that $\pi$ is one-to-one when restricted to $\pi^{-1}(\mathcal{V})$. This follows from Lemma \ref{lem:one-to-one appendix}. \end{proof} The following theorem gives a strong positive answer to Bowen and Walters' question above. \begin{thm}\label{thm:thm B} (=Theorem B) Let $(X, \Phi)$ be an expansive flow. Then it has a strongly isomorphic symbolic extension. \end{thm} \begin{proof} If $(X, \Phi)$ has no fixed points, then the result follows from Theorem \ref{thm:symbolic appen}. If $(X, \Phi)$ has fixed points, then they are isolated (\cite[Lemma 1]{bowen1972expansive}), so the result follows from the previous case. \end{proof} \section{Appendix}\label{sec:Appendix} \subsection{Existence of a complete family} The following lemma is obtained by a slight modification of the proof of Lemma $7$ of \cite{bowen1972expansive}. See also \cite[Lemma 2.4]{keynes1981real} for a similar construction. \begin{lem}\label{lem:complete family appendix} Let $(X, \Phi)$ be a topological flow without fixed points. There is an $\eta>0$ so that the following holds. For each $\alpha>0$, $z\in X$ and cross-section $S$ with $z\in \text{\rm Int}^{\Phi} (S)$, there are two finite families $\mathcal{S}=\{S_i \}_{i=1}^N$ and $\mathcal{S}'=\{S_i' \}_{i=1}^N$ of pairwise disjoint closed cross-sections of injectivity time $\eta$ and diameter at most $\alpha$ so that \begin{itemize} \item $z\in \text{\rm Int}^{\Phi}(S_1)\subset S'_1\subset S$; \item $\overline{S_i}\subset \text{\rm Int}^{\Phi}(S_i')$ for all $1\le i\le N$; \item $\Phi_{[0,\alpha]}\mathcal{G}=\Phi_{[-\alpha, 0]}\mathcal{G}=X$, \end{itemize} where $\mathcal{G}=\cup_{1\le i\le N}S_i$.
\end{lem} \begin{proof} By Theorem \ref{thm:Whitney}, for each $x\in X$ there is a cross-section $S_x$ of injectivity time $2\eta_x>0$ such that $x\in \text{\rm Int}^{\Phi} S_x$. By compactness of $X$, there are $x_i\in X$ $(1\le i\le n)$ with $x_1=z$ and $S_{x_1}=S$ such that $$ X=\cup_{i=1}^n \Phi_{(-\eta_{x_i}, \eta_{x_i})}\text{\rm Int}^{\Phi} S_{x_i}. $$ Let $\eta=\min_{1\le i\le n}\{\eta_{x_i} \}$. Then for each $x$ there is an $x_i$ and a $\rho_x\in (-\eta_{x_i}, \eta_{x_i})$ with $x\in \Phi_{\rho_x}\text{\rm Int}^{\Phi} S_{x_i}$. Let $T_x:=\Phi_{\rho_x}S_{x_i}$, which is a cross-section of injectivity time $2\eta_{x_i}\ge 2\eta$ with $x\in \text{\rm Int}^{\Phi} T_x$. Let $\alpha>0$ be given. Choose $\epsilon>0$ sufficiently small such that $\epsilon\le \min\{\alpha/4, \eta \}$ and $\diam(\Phi_r (A))\leq\alpha$ whenever $|r|\le \epsilon$ and $A\subset X$ with $\diam(A)<\epsilon$. For each $x\in X$, let $V_x, V_x'\subset \text{\rm Int}^{\Phi} T_x$ be closed neighborhoods of $x$ in $T_x$ with $V_x\subset \text{\rm Int}^{\Phi} V_x'$ and $\diam(V_x')<\epsilon$. Then $x\in \text{\rm Int}^{\Phi} V_x\subset V_x'$ and $V_x, V_x'$ are cross-sections of injectivity time $2\eta$ and diameter at most $\alpha$. As $X$ is compact, we can find $y_i\in X, 1\le i\le k$ with $y_1=z$ such that $$ X=\cup_{i=1}^k \Phi_{(-\epsilon, \epsilon)}\text{\rm Int}^{\Phi} V_{y_i}. $$ We construct finite pairwise disjoint families $\mathcal{S}_i$ and $\mathcal{S}_i'$ of closed cross-sections recursively. Let $\mathcal{S}_1=\{ V_{y_1}\}$ and $\mathcal{S}_1'=\{V_{y_1}' \}$. Suppose $\mathcal{S}_{i-1}$ and $\mathcal{S}_{i-1}'$ have been defined. For each $y\in V_{y_i}'$, $\Phi_{[-\epsilon, \epsilon]}(y)\cap \cup_{S'\in \mathcal{S}'_{i-1}}S'$ is a finite set of points, since $\mathcal{S}'_{i-1}$ consists of cross-sections. As $\Phi$ is continuous and $\cup_{S'\in \mathcal{S}'_{i-1}}S'$ is closed, there exist a non-empty open interval $I_y\subset (-\epsilon, \epsilon)$ and closed neighborhoods $W_y, W_y'$ of $y$ in $V_{y_i}$ and $V_{y_i}'$ respectively with $W_y\subset \text{\rm Int}^{\Phi} W_y'$ such that \begin{equation}\label{eq:empty_int} \Phi_{I_y}(W_y')\cap \cup_{S'\in\mathcal{S}'_{i-1}}S'=\emptyset. \end{equation} Let $y^1, y^2, \dots, y^\ell$ be points in $V_{y_i}'$ such that $W_{y^1}, \dots, W_{y^\ell}$ cover $V_{y_i}$ and $W_{y^1}', \dots, W_{y^\ell}'$ cover $V_{y_i}'$. Pick distinct $\rho_1\in I_{y^1}, \dots, \rho_\ell\in I_{y^\ell}$ and set $$ \mathcal{S}_i=\mathcal{S}_{i-1} \cup \{\Phi_{\rho_1}(W_{y^1}), \dots, \Phi_{\rho_\ell}(W_{y^\ell}) \} $$ and $$ \mathcal{S}_i'=\mathcal{S}_{i-1}' \cup \{\Phi_{\rho_1}(W_{y^1}'), \dots, \Phi_{\rho_\ell}(W_{y^\ell}') \}. $$ By Equation \eqref{eq:empty_int} and as $\rho_1, \dots, \rho_\ell$ are distinct, the members of $\mathcal{S}_i$ are pairwise disjoint. The same holds for $\mathcal{S}'_i$. Set $\mathcal{S}=\mathcal{S}_k$ and $\mathcal{S}'=\mathcal{S}_k'$. Clearly $X= \Phi_{[-2\epsilon, 2\epsilon]}\mathcal{G}$. For every $x\in X$, $\Phi_{2\epsilon}x\in \Phi_{[-2\epsilon, 2\epsilon]}\mathcal{G}$. Thus $x\in \Phi_{[-4\epsilon, 0]}\mathcal{G}\subset\Phi_{[-\alpha, 0]}\mathcal{G}$. Thus $X=\Phi_{[-\alpha, 0]}\mathcal{G}$. Similarly $X=\Phi_{[0, \alpha]}\mathcal{G}$. \end{proof} \bibliographystyle{alpha}
\section{Introduction} Continuous automorphisms of Lie groups have been widely studied \cite{bourgralg,pontr, hewross,fell}, but substantially less is known about discontinuous and non-measurable automorphisms \cite{bicht,moore,luumn92,lumsb95,lumz2000}. In this work an explicit direct procedure for constructing non-measurable automorphisms of locally compact Lie groups is given, and it is proved that their existence is a local property. Moreover, non-measurable automorphisms of locally compact Lie algebras are constructed. An application of these automorphisms to the construction of weakly non-measurable irreducible unitary representations of locally compact groups is given. The basic necessary facts are recalled in this article. Besides this, non-measurable automorphisms of Lie groups which are infinite-dimensional over the real field or over non-archimedean fields, and hence are not locally compact, are studied. Non-measurable automorphisms of certain more general topological groups are investigated as well. The basic results of the second section of the paper are Theorems 13, 15, 16, 19, 20 and Corollaries 14, 17, while in Corollary 14, Theorem 19 and \S 13 the specific features of Lie groups are taken into account. \par Besides real-valued measures, in the third section, in contrast with previous works, the non-measurability of automorphisms of totally disconnected topological groups is investigated also for measures with values in infinite locally compact fields of zero characteristic with non-archimedean non-trivial multiplicative norms, that is, with values in local fields. \section{Non-measurable automorphisms of groups relative to real-valued measures} \par {\bf 1. Definitions.} For groups $G$ and $S$ a mapping $f: G\to S$ is called a homomorphism if it preserves the multiplication operation, that is, $f(ab)=f(a)f(b)$ for each $a, b\in G$. If a homomorphism $f$ is bijective from $G$ onto $S$, $f(G)=S$, then the homomorphism $f$ is called an (algebraic) isomorphism. In the case $G=S$ an isomorphism $f$ is called an automorphism. \par A homomorphism $f: G\to U(H)$ is called a unitary representation of the group $G$ if $H$ is a Hilbert or unitary space over $\bf C$ and $U(H)$ is the unitary group of $H$. In the particular case $H=\bf C$, that is, $U({\bf C})=S^1 = \{ z\in {\bf C}: |z|=1 \} $, a homomorphism $f: G\to S^1$ is called a character of the group $G$. \par If $\sf g$ is a Lie algebra over a field $\bf F$, then a bijective surjective mapping $\phi : {\sf g}\to \sf g$ is called an automorphism if it preserves addition and multiplication: $\phi (a+b)=\phi (a)+\phi (b)$ and $\phi ([a,b])=[\phi (a), \phi (b)]$ for each $a, b\in \sf g$. \par A topological space $X$ is called compact if every open covering of it contains a finite subcovering. A topological space $X$ is called locally compact if each point $x\in X$ has a neighborhood $U$ whose closure $\bar U$ is compact (see \cite{eng}; the old topological terminology in \cite{pontr} differs slightly from the new one \cite{eng}). \par For a locally compact Hausdorff topological group $G$ a non-negative non-trivial $\sigma $-additive measure $\mu $ on the $\sigma $-algebra ${\cal B}(G)$ of all Borel subsets of $G$ is called a left (or right) Haar measure if $\mu (gA) = \mu (A)$ (or $\mu (Ag)=\mu (A)$, respectively) for each $A\in {\cal B}(G)$ and $g\in G$.
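\par For the reader's convenience we recall standard examples of Haar measures; these illustrations are classical and are not taken from the cited sources. On the additive group $({\bf R},+)$ the Lebesgue measure $\lambda $ is simultaneously a left and a right Haar measure, since $\lambda (x+A)=\lambda (A)$ for each Borel subset $A$ and each $x\in {\bf R}$; on the multiplicative group $({\bf R}\setminus \{ 0 \} , \cdot )$ a Haar measure is given by $\mu (A)=\int_A |x|^{-1}dx$; on a discrete group the counting measure is a left and right Haar measure.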
\par Let the $\sigma $-algebra ${\cal A}(G)= {\cal A}_{\mu }(G)$ be the completion of the Borel $\sigma $-algebra ${\cal B}(G)$ by the subsets $P$ of $G$ such that $P\subset F\in {\cal B}(G)$ and $\mu (F)=0$, where $\mu $ is a non-negative non-trivial $\sigma $-additive measure on the $\sigma $-algebra ${\cal B}(G)$. An automorphism $f$ of a topological group $G$ is called $\mu $-measurable if $f^{-1}(U)\in {\cal A}(G)$ for each $U\in {\cal B}(G)$. Otherwise the automorphism is called $\mu $-non-measurable. \par A set $X$ with a $\sigma $-algebra of its subsets $\cal U$ is called a measurable space and is denoted by $(X, {\cal U})$. \par A measure $\nu $ is called absolutely continuous relative to the measure $\mu $ on the measurable space $(X, {\cal U})$ if $\mu (A)=0$ implies $\nu (A)=0$, where $A\in {\cal U}$. Measures $\mu $ and $\nu $ are called equivalent if they are absolutely continuous relative to each other. \par {\bf 2. Lemma.} {\it Let $G$ be a separable locally compact non-compact group, and let $\mu $ be a non-negative Haar measure on $G$. Then on the one-point compactification $\alpha G$ (considered as a topological space) there exists a finite measure $\nu $ equivalent to the measure $\mu $.} \par {\bf Proof.} For each locally compact non-compact topological space there exists its one-point (Alexandroff) compactification due to Theorem 3.5.11 \cite{eng}. Take an open neighborhood $U$ of the unit element in $G$ such that $\mu (U)<\infty $. Since $\mu $ is non-trivial, $\mu (U)>0$. Due to the separability of the group $G$ there exists a countable family $ \{ g_j: j\in {\bf N} \} $ of elements in $G$ such that $\bigcup_j g_jU=G$, where $gU= \{ z: z=gf, f\in U \} $. Put \par $\nu (A) := \sum_{j=1}^{\infty } \mu ((g_jU)\cap A)/2^j$ (1) \\ for each $A\in {\cal A}(G)$ and $\nu ( \{ \alpha \} )=0$, where $ \{ \alpha \} = \alpha G\setminus G$ is the compactification enlargement. Then the measure $\nu $ is defined on ${\cal A}(\alpha G)$ and $0<\nu (\alpha G)<\infty $. In view of Formula (1), $\nu (A)=0$ if and only if $\mu (A)=0$. Therefore, the measures $\mu $ and $\nu $ are equivalent. \par {\bf 3. Lemma.} {\it If $\mu $ and $\nu $ are two equivalent $\sigma $-additive measures on ${\cal B}(G)$, where $G$ is a topological group, then an automorphism $f$ is $\mu $-measurable if and only if it is $\nu $-measurable.} \par {\bf Proof.} Since the measures $\mu $ and $\nu $ are equivalent and are given on the same $\sigma $-algebra ${\cal B}(G)$, we get ${\cal A}_{\mu }(G)= {\cal A}_{\nu }(G)$. The statement of this lemma now follows from the definition of measurability of an automorphism. \par {\bf 4. Definitions.} A subgroup $H$ of a group $G$ is called normal if its left and right cosets coincide: $gH=Hg$ for each $g\in G$. A group $G$ is called algebraically simple if it has no normal subgroup different from $e$ and $G$, where $e=e_G$ is the unit element of the group $G$. A topological group $G$ is called topologically simple if it has no normal closed subgroup different from $e$ and $G$. \par {\bf 5. Lemma.} {\it If $f: G\to V$ is a homomorphism of an algebraically simple group $G$ into a group $V$, then either $f^{-1}(e_V)=e_G$ or $f^{-1}(e_V)=G$.} \par {\bf Proof.} The subgroup $J=f^{-1}(e_V)$ is normal in $G$, since if $f(g)=e_V$, then $f(h^{-1}gh)= f(h^{-1})f(g)f(h)=f(h^{-1})f(h)=f(h^{-1}h)=f(e_G)=e_V$ for each $h\in G$. Due to the definition of the algebraic simplicity of the group $G$, either $J=e_G$ or $J=G$. \par {\bf 6.
Corollary.} {\it If an algebraically simple group $G$ has a character $f : G\to S^1$, then either $f^{-1}(1)=e_G$ or $f^{-1}(1)=G$, where $S^1 := \{ z\in {\bf C}: |z|=1 \} $ is the multiplicative Abelian group.} \par {\bf 7. Remark.} If $f^{-1}(e_V)=e_G$ for a homomorphism $f: G\to V$, then $f$ is bijective. In this case $f(G)$ is (algebraically) isomorphic with $G$. If moreover $V$ is Abelian, for example, $V=S^1$, then $G$ is Abelian and cannot be simple, except in the trivial case $G=e$. \par {\bf 8. Lemma.} {\it If a Hausdorff topological group $G$ is topologically simple and $N$ is a normal subgroup in $G$, then either $N=e_G$ or ${\bar N} =G$, where ${\bar N}=cl_GN$ is the closure of $N$ in $G$.} \par {\bf Proof.} Each topological group is a uniform space with entourages of the diagonal of the form $W(U) := \{ (g,q)\in G\times G: g^{-1}q\in U \} $, where $U$ is an open neighborhood of $e$ in $G$ (see Example 8.1.17 in \cite{eng}). If $g\in {\bar N}$, then there exists a net $\{ g_{\alpha }: \alpha \in \Lambda \} $ such that $\lim g_{\alpha }=g$, where $\Lambda $ is a directed set and $g_{\alpha }\in N$ for each $\alpha \in \Lambda $ (see \S 1.6 and Corollary 8.1.4 in \cite{eng}). Since $h^{-1}g_{\alpha }h\in N$ for each $h\in G$, the continuity of the multiplication in $G$ yields the equality $\lim h^{-1}g_{\alpha }h=h^{-1}gh$; hence $h^{-1}{\bar N}h={\bar N}$ for each $h\in G$. Thus $\bar N$ is a closed normal subgroup in $G$. In view of the topological simplicity of the group $G$, either ${\bar N}=e$ or ${\bar N}=G$. Since $N\subset \bar N$, in the case ${\bar N}=e$ we get $N=e$. \par {\bf 9. Corollary.} {\it If $f: G\to V$ is a continuous homomorphism of a topologically simple group $G$ into a topological group $V$, where $G$ and $V$ are Hausdorff, then either $f^{-1}(e_V)=e_G$ or $f^{-1}(e_V)=G$.} \par {\bf Proof.} In a Hausdorff topological space each singleton is closed (see \S 1.5 in \cite{eng}). Since the homomorphism $f$ is continuous, $f^{-1} (e_V) =: N$ is closed in $G$. On the other hand, $N$ is a normal subgroup in $G$. This corollary now follows from Lemma 8. \par {\bf 10. Remark.} Henceforth, Lie groups $G$ of the smoothness class $C^{\infty }$ over the field of real numbers, or of the class $C^{\omega }$ over local fields, that is, finite algebraic extensions of the field of $p$-adic numbers, are considered, where the smoothness class refers both to $G$ as a manifold and to the smoothness of the operation $G\times G\ni (g,q)\mapsto g^{-1}q\in G$; here $C^{\infty }$ denotes the class of infinitely differentiable mappings and $C^{\omega }$ denotes the class of locally analytic mappings. As usual, $U(n)$ denotes the unitary group of the unitary space ${\bf C^n}$, where $n$ is a natural number. \par {\bf 11. Lemma.} {\it If $G$ is a locally compact Lie group over $\bf R$ of dimension $n$, then there exists an open neighborhood $U$ of the unit element $e$ in $G$ which has a topological embedding into $(S^1)^n$, as well as an embedding into $U(n)$ as a local Lie group.} \par {\bf Proof.} For a locally compact Lie group $G$ of the smoothness class $C^{\infty }$ over $\bf R$, as is well known, there exist a Lie algebra ${\sf g}=T_eG$ and an exponential mapping $\exp : V_1\to U_1$ of an open neighborhood $V_1$ of zero in $\sf g$ onto an open neighborhood $U_1$ of the unit element $e$ in $G$, while $\exp $ is an infinitely differentiable diffeomorphism and $U_1$ is a local Lie group (see \cite{bourgralg,kling,pontr}).
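\par (A standard illustration of the exponential mapping, recalled here only for orientation: for the matrix group $GL(n,{\bf R})$ with the Lie algebra ${\sf gl}(n,{\bf R})$ the mapping $\exp $ is the usual matrix exponential \par $\exp (v)=\sum_{k=0}^{\infty } v^k/k! , $ \\ which maps a sufficiently small neighborhood of the zero matrix diffeomorphically onto a neighborhood of the unit matrix.)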
\par As a linear space over $\bf R$ the algebra $\sf g$ has dimension $n$; consequently, there exists an embedding of $U_1$ into $\bf R^n$. Choose a compact subset $V$ in $V_1$ such that the interior $Int (V)$ is an open neighborhood of zero in $\sf g$. Then $\exp ( V)=: U $ is a compact subset in $U_1$; moreover, $\exp (Int (V))$ is a local subgroup in $G$. Therefore, $U$ has a topological embedding into ${\bf R^n}/\bf Z^n$, and the latter topological space is isomorphic with $(S^1)^n$. Since the Lie algebra $\sf g$ has a basis of generators $v_1,...,v_m$ whose number does not exceed $n$, each element in $U$ can be presented as a finite product of elements of local one-parameter subgroups $\exp (t_jv_j)$ with $t_j\in (-\epsilon ,\epsilon )$, where $\epsilon >0$. For a sufficiently small $\epsilon >0$ each $\{ \exp (t_jv_j): t_j\in (-\epsilon ,\epsilon ) \} $ has an embedding into $S^1$ as an Abelian local subgroup. \par In an open neighborhood of zero in $\sf g$ the Campbell-Hausdorff formula holds (see Chapter III in \cite{bourgralg}). \par Recall that the Campbell-Hausdorff formula for the calculation of the expression $w=\ln (e^ue^v)$ in a neighborhood of zero $V$ of the Lie algebra $\sf g$ over ${\bf K}=\bf R$ or over a local non-archimedean field $\bf K$ has the form: \par $w=\sum_{n=1}^{\infty } n^{-1} \sum_{r+s = n, r\ge 0, s\ge 0} ({{\tilde w}}_{r,s}+{\hat w}_{r,s})$, where \par ${{\tilde w}}_{r,s} := \sum_{m\ge 1} (-1)^{m+1} m^{-1} \sum^* ((\prod _{i=1}^{m-1} (ad\enskip u)^{r_i} (r_i!)^{-1} (ad \enskip v)^{s_i} (s_i!)^{-1})$ \par $ (ad\enskip u)^{r_m} (r_m!)^{-1} )(v),$ \par ${\hat w}_{r,s} := \sum_{m\ge 1} (-1)^{m+1} m^{-1} \sum^{**} (\prod _{i=1}^{m-1} (ad\enskip u)^{r_i} (r_i!)^{-1} (ad \enskip v)^{s_i} (s_i!)^{-1}) (u),$ \\ where $\sum^*$ denotes the sum over $r_1+...+r_m=r,$ $s_1+...+s_{m-1}= s-1$, $r_1+s_1\ge 1$,...,$r_{m-1}+s_{m-1}\ge 1$, while $\sum^{**}$ denotes the sum over $r_1+...+r_{m-1}= r-1$, $s_1+...+s_{m-1}=s$, $r_1+s_1\ge 1$,...,$r_{m-1}+s_{m-1}\ge 1$, \\ and the convergence radius of the series depends on $\bf K$ and the multiplicative norm in it. \par Each local one-parameter subgroup $\exp (tv)$ with $t\in (-\epsilon ,\epsilon )$ acts on $\sf g$ and hence has an embedding into $GL(n,{\bf R})$; therefore, $U$ has an embedding into $GL(n,{\bf R})$ (see also Theorems 58, 59, 84, 87-90 and Propositions 42(A,B), 56(A,B,C) in Chapter 10 \S \S 42, 53, 56 and 57 \cite{pontr}). \par The Lie algebra ${\sf u}(n)$ has a basis of generators $E_{k,j}-E_{j,k}$, $i(E_{k,j}+E_{j,k})$ with $1\le k<j\le n$ and $i E_{j,j}$ with $j=1,...,n$, where $i=(-1)^{1/2}$ and $E_{k,j}$ is the $n\times n$ matrix with $1$ at the intersection of the $k$-th row and the $j$-th column and zeros elsewhere. The Lie algebra $\sf g$ has an embedding into ${\sf gl}(n,{\bf R})$, while each generator of the algebra ${\sf gl}(n,{\bf R})$ is a linear combination of generators of the algebra ${\sf u}(n)$; hence $\sf g$ has an embedding into ${\sf u}(n)$ as a Lie subalgebra. Then $\exp_u\circ \ln_G: U_1\to U(n)$ gives the embedding, where $\exp_u$ is the exponential mapping for the Lie algebra ${\sf u}(n)$ of the Lie group $U(n)$ and $\ln_G$ is the logarithmic mapping for $G$ from $U$ into $V$. \par Certainly, in general an embedding of a local Lie subgroup need not have an extension over the entire group. \par {\bf 12.
Lemma.} {\it Let $X$ be a Tychonoff (completely regular) topological space dense in itself with a $\sigma $-additive, $\sigma $-finite, non-negative Borel regular measure $\mu $ on a complete $\sigma $-algebra ${\cal A}_{\mu }$ such that each $x\in X$ has an open neighborhood $U$ of finite positive measure, $0<\mu (U)<\infty $, and such that $\mu $ has no atoms in $X$. If $f: X\to X$ is a bijective surjective mapping such that $card (f(U)\cap V)\ge {\sf c} := card ({\bf R})$ for all open subsets $U$ and $V$ in $X$, then $f$ and $f^{-1}$ are not $({\cal A}_{\mu },{\cal B})$-measurable.} \par {\bf Proof.} Recall that a measure $\mu $ is called Borel if it is defined on the $\sigma $-algebra of all Borel subsets ${\cal B}(X)$ of $X$, that is, the minimal $\sigma $-algebra generated by the family of all open subsets in $X$. In the given case the algebra ${\cal A}_{\mu }$ is the minimal $\sigma $-algebra generated by ${\cal B}(X)$ and the family of all subsets of $\mu $ measure zero. A measure $\mu $ is called Borel regular if $\mu (A) = \sup \{ \mu (C): C\subset A, C \mbox{ is closed } \} $ for each $A\in {\cal B}(X)$. If $\mu (A)<\infty $, then the transition to the completion gives $\mu (A) =\inf \{ \mu (V): A\subset V \mbox{ open } \} $, since the measure $\mu $ is $\sigma $-finite (see Theorem 2.2.2 \cite{federer}). \par Let $U$ and $V$ be open in $X$; by the hypothesis of the lemma $card (f(U)\cap V)\ge \sf c$. Then $f^{-1}(f(U)\cap V)=U\cap f^{-1}(V)$, since $f$ is a bijective mapping from $X$ onto $X$; consequently, $card (U\cap f^{-1}(V))\ge \sf c$ for all $U$ and $V$ open in $X$. If $A\in {\cal A}_{\mu }$ and $\mu (A)<\infty $, then there exists a Borel subset $B\in {\cal B}(X)$ such that $A\subset B$ and $\mu (A)=\mu (B)$. Then for each $\epsilon >0$ there exists an open subset $W$ in $X$ such that $A\subset W$ and $\mu (A)\le \mu (W)<\mu (A)+\epsilon $. For an arbitrary subset $S$ in $X$ denote $\mu ^*(S) := \inf \{ \mu (W): S\subset W, W \mbox{ is open } \} $. \par For arbitrary open subsets $U$ and $V$ of finite $\mu $ measure take disjoint open subsets $U_1$ and $U_2$ in $U$ and disjoint open subsets $V_1$ and $V_2$ in $V$ such that $0<\mu (U)/2 - \delta < \mu (U_j)<\mu (U)/2+\delta $ and $0< \mu (V)/2 - \delta < \mu (V_j)< \mu (V)/2 + \delta $ for $j=1, 2$, where $0<\delta <\min (\mu (U), \mu (V))/9$. This is possible, since $\mu $ is non-negative and has no atoms, while for each $x\in X$ there exists an open subset $P$ with $x\in P$ and $0<\mu (P)<\infty $. Denote $A:= f^{-1} (U)$ and $A_j := f^{-1}(U_j)$. By the supposition of this lemma $card (A_j\cap V_k)\ge \sf c$ for each $j, k \in \{ 1, 2 \} $. Suppose that $f$ is an $({\cal A}_{\mu },{\cal B}(X))$-measurable mapping. Then there would be $A_j\in {\cal A}_{\mu }$ for $j=1, 2$ and there would exist $B_j\in {\cal B}(X)$ such that $A_j\subset B_j$ and $\mu (B_j)=\mu (A_j)$, and hence there would exist open subsets $W_j$ with $B_j\subset W_j$ and $\mu (B_j)\le \mu (W_j)< \mu (B_j)+\delta $ for $j=1, 2$. But $A_1\cap A_2=\emptyset $; consequently, $\mu (A)\ge \mu (A_1)+\mu (A_2)$. \par If for all open $U$ we had $\mu (f^{-1}(U)\cap V)=0$ for each open $V$ in $X$ with $\mu (V)<\infty $, then in view of the $\sigma $-finiteness and $\sigma $-additivity of $\mu $ we would get $\mu (X)=0$, which contradicts the supposition of this lemma; therefore, open subsets $U$, $U_1$ and $U_2$ can be chosen such that $\mu (A_j\cap V)>0$, where $U_1\cup U_2\subset U$.
But $card (A_j\cap P_k)\ge {\sf c}$ for each open subset $P_k$ in $V_k$. \par On the other hand, $\mu ^*(A_j\cap V_k) = \inf \{ \mu (Y): Y \mbox{ is open }, Y\supset (A_j\cap V_k) \} >0$; consequently, there exists a countable sequence of open sets $Y_n\supset (A_j\cap V_k)$ such that $\lim_{n\to \infty } \mu (Y_n)= \mu ^*(A_j\cap V_k)$. Let $C_n = \bigcap_{s=1}^nY_s$; then $(A_j\cap V_k)\subset C_{n+l}\subset C_n$ for every $n, l\in \bf N$, and each $C_n$ is open. Moreover, $\lim_{n\to \infty } \mu (C_n)=\mu ^*(A_j\cap V_k)$. Since $card (A_j\cap P)\ge \sf c$ for each $P$ open in $X$, we get $ Int (C_n\setminus C_{n+l}) = \emptyset $ for all $n, l\in \bf N$, where $Int (B)$ denotes the interior of a subset $B$ in $X$. Then $\mu ^* (A_j\cap V_k) = \mu ^* ([cl_X(A_j\cap V_k)]\cap V_k)=\mu ([cl_X(A_j\cap V_k)]\cap V_k)$, since $cl_X(A_j\cap V_k)\in {\cal B}(X)$ and $X$ is a completely regular space dense in itself, where $cl_X(B)$ denotes the closure of a subset $B$ in $X$. Thus, $\mu ^*(A_j\cap V_k)\ge \mu (V)/2-\delta $, since $cl_X(A_j\cap V_k)=cl_X(V_k)\supset V_k$. Therefore, $\mu (A_1\cap V_1)+\mu (A_1\cap V_2)+\mu (A_2\cap V_1)+\mu (A_2\cap V_2)\ge 2\mu (V)- 8\delta >10 \mu (V)/9$ and a contradiction is obtained, since by the construction $A_1\cap A_2=\emptyset $, $V_1\cap V_2=\emptyset $ and $V_1\cup V_2\subset V$; consequently, $f$ is not measurable. \par Applying the above proof to $f^{-1}$ instead of $f$ we get that $f^{-1}$ is also non-measurable, since $f^{-1}$ satisfies the conditions used in the second part of the proof. \par {\bf 13. Theorem.} {\it Let $G$ be a non-trivial locally compact Lie group over $\bf R$ or over a non-archimedean local field $\bf K$. Then the group of its automorphisms $Aut (G)$ has a family of cardinality not less than $2^{\sf c}$ of distinct automorphisms that are non-measurable relative to a non-trivial non-negative Haar measure $\mu $ on $G$, where ${\sf c}:= card ({\bf R})$ denotes the cardinality of the continuum.} \par {\bf Proof.} Each locally compact Lie group $G$ has a finite-dimensional Lie algebra $\sf g$ over ${\bf K}=\bf R$ or over a non-archimedean local field $\bf K$, respectively. Therefore, both $G$ and $\sf g$ are metrizable; moreover, the metric can be chosen left-invariant (see Theorem 8.3 \cite{hewross}). \par In the non-archimedean case one can choose as a neighborhood $U$ of the unit element $e$ in $G$ a compact clopen subgroup, since $G$ is totally disconnected (see Theorems 5.13 and 7.7 \cite{hewross}). If ${\bf K}=\bf R$, then we take an open symmetric neighborhood $U=U^{-1}$ of the unit element $e$ in $G$, where $U^{-1} := \{ g^{-1}: g\in U \} $. For a sufficiently small $U$ there is a bijective exponential mapping $\exp : V\to U$ from the corresponding neighborhood $V$ of zero in $\sf g$ onto $U$, where $\exp $ belongs to the smoothness class $C^{\infty }$ or $C^{\omega }$ (see \cite{bourgralg,bourmnog,kling,pontr}). Choose $U$ so small that the Campbell-Hausdorff formula is satisfied in it. \par Take a basis of generators $v_1,...,v_m$ in the Lie algebra $\sf g$ over the field $\bf K$; as a linear space over the field $\bf K$ it has a basis $\eta _1,...,\eta _n$, where $n\ge m$, $v_j=\eta _j$ for $j=1,...,m$, and for $n>m$ the elements $\eta _{m+1},...,\eta _n$ are obtained as finite products (commutators) $[u,v]$ of the basic generators in the Lie algebra $\sf g$. \par Consider the set ${\bf K}\setminus {\bf Q}$ of all irrational elements of the field $\bf K$.
The field $\bf K$ is uncountable, and as a linear space over $\bf Q$ it is infinite-dimensional, since the field of rational numbers $\bf Q$ is countable. For each $b\in {\bf K}\setminus \bf Q$ there exists an extension ${\bf Q}(b)$ of the field $\bf Q$ with the help of the number $b$, ${\bf Q}\subset {\bf Q}(b)$. Recall that a number $a\in {\bf K}\setminus \bf F$ is called algebraic over the field $\bf F$ if it is a root of a polynomial with coefficients from $\bf F$. If an element $a$ is not algebraic over $\bf F$, then it is called transcendental. For an element $a$ transcendental over a field $\bf F$, the field ${\bf F}(a)$ of rational fractions in $a$ with coefficients from the field $\bf F$ is a purely transcendental extension of $\bf F$. \par A set $\{ a_1,...,a_n \} $ from $\bf K$ is algebraically independent if every polynomial $P(x_1,...,x_n)$ with coefficients from $\bf F$ that becomes zero upon the substitution of $a_1,...,a_n$ for $x_1,...,x_n$ is the identically zero polynomial. Moreover, the field $({\bf F}(a_1,...,a_{n-1}))(a_n)$ is isomorphic with the field of rational fractions ${\bf F}(a_1,...,a_n)$ in the variables $a_1,...,a_n$ with coefficients from the field $\bf F$. If a field $\bf F$ is countable, then ${\bf F}(a_j: j\in {\bf N})$ is also countable, since $\bigcup_{n=1}^{\infty }\aleph _0^n=\aleph _0$, where $\aleph _0=card ({\bf N})$ and $\bf N$ denotes the set of natural numbers \cite{eng}. \par A subset (not necessarily finite) $S$ in ${\bf K}\setminus \bf F$ is called algebraically independent over $\bf F$ if each of its finite subsets is algebraically independent. The family of all algebraically independent subsets in $\bf K$ over $\bf F$ is partially ordered by inclusion, which makes it directed. In view of the Kuratowski-Zorn lemma there exists an algebraically independent family $\Psi $ in $\bf K$ over $\bf F$ maximal relative to this ordering. Each such subset is called a transcendence basis of the field $\bf K$ over the field $\bf F$. In view of the theorem of Section 1.1.5 \cite{bacht} the cardinal numbers of any two transcendence bases coincide. \par Since by G. Cantor's theorem the set $A$ of all algebraic numbers over a countable field is countable, we get $card (\Psi )={\sf c}=card ({\bf R})$, in particular for ${\bf F}=\bf Q$. Moreover, the Haar measure $\nu $ of the set of algebraic numbers in $\bf K$ is equal to zero, $\nu (A)=0$, where $\nu $ is non-negative and non-trivial on $\bf K$. Then ${\bf Q}\subset {\bf Q}(\Psi )\subset \bf K$, where ${\bf F}(\Psi )$ denotes the purely transcendental extension of the field $\bf F$, while the extension ${\bf Q}(\Psi )\subset \bf K$ is algebraic (see Section 1.1.5 \cite{bacht}). \par The Lie algebra $\sf g$ is infinite-dimensional over $\bf Q$ with an uncountable Hamel basis $\gamma $ over $\bf Q$; that is, $w_1,...,w_k$ are linearly independent over the field of rational numbers $\bf Q$ for all $w_1,...,w_k\in \gamma $ and $k\in \bf N$, while each element $w\in \sf g$ is a finite linear combination $w=c_1w_1+...+c_kw_k$ over the field $\bf Q$ of elements $w_j$ from $\gamma $ with rational coefficients $c_j\in \bf Q$. \par In the non-archimedean case $U$ is a subgroup in $G$ (see above); then put $W=U$. In the case of a Lie group $G$ over the field of real numbers $\bf R$ the neighborhood $U$ generates the group $ W:= \bigcup_{n=1}^{\infty }U^n$, which contains the connected component $C$ of the unit $e$ (see Theorem 7.4 \cite{hewross}), where $AB:= \{ gh: g\in A, h\in B \} $ for $A, B\subset G$.
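\par To illustrate the algebraic notions used above in the classical case ${\bf K}={\bf R}$ (a standard example not needed for the proof itself): the numbers $\sqrt 2$ and $\sqrt 3$ are linearly independent over $\bf Q$, but the set $\{ \sqrt 2, \sqrt 3 \} $ is not algebraically independent over $\bf Q$, since $\sqrt 2$ is a root of $x^2-2$ and $\sqrt 3$ is a root of $x^2-3$; on the other hand, the numbers $e$ and $\pi $ are transcendental over $\bf Q$, and each transcendence basis $\Psi $ of $\bf R$ over $\bf Q$ has the cardinality of the continuum $\sf c$, as was noted above.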
\par Let $g\in G$ be an element of the group $G$; the elements of the form $g^n$ for $n\in \bf Z$ generate a commutative subgroup $S(g)$ in $G$. If the subgroup $S(g)$ is finite, then its order is called the order $ord (g)=k$ of the element $g$, that is, $S(g) = \{ e, g, g^2,...,g^{k-1} \} $ and $g^k=e$. If $S(g)$ is infinite, then $g$ is said to be an element of infinite order. Let $G_{fin}:= gr ({\cal F})$ be the minimal subgroup in $G$ generated by all possible finite products of elements from $\cal F$, where $\cal F$ is the set of all elements of finite order in $G$. Then we consider the subgroup $W_{fin} = G_{fin}\cap W$ in $G$. Let $U_{fin}=W_{fin}\cap U$. Then $\ln (U_{fin})\subset \sf g$, where $\ln : U\to V$ is the logarithmic mapping of a local Lie subgroup onto a neighborhood of zero in the Lie algebra $\sf g$, corresponding to the Campbell-Hausdorff formula, in which the coefficients of the series are rational numbers (see \S 11). Consider the minimal subalgebra ${\sf g_{fin}}$ over the field of rational numbers $\bf Q$ generated by $\ln (U_{fin})$ and $v_1,...,v_m$; here ${\sf g_{fin}}\subset \sf g$, since ${\bf Q}\subset \bf K$. \par Consider local one-parameter subgroups. Each element of the open neighborhood $U$ of the unit element belongs to a local one-parameter subgroup $\{ g^t: t\in {\bf K}, |t|<\epsilon \} = g_{loc}$, where $\epsilon >0$, and this subgroup is unique for each non-unit element $g\ne e$ from $U$. Each finite cyclic group is isomorphic to a group of roots of $1$ in $\bf C$. Therefore, on each subgroup $g_W := \bigcup_{n=1}^{\infty }(g_{loc})^n$ the set of elements of finite order has ${\tilde \mu }_g$-measure zero, where ${\tilde \mu }_g$ is the Haar measure on $g_W$. Then $\mu (W_{fin})=0$ and $\nu ^n({\sf g_{fin}})=0$, since the image $\mu _{\ln }$ of the measure $\mu $ on $V$ is equivalent to $\nu ^n|_V$, where $\nu ^n$ is the Haar measure on $\bf K^n$ as the additive group and $\mu _{\ln }(B) = \mu (\exp (B))$ for each $B\in {\cal B}(V)$. \par If $g\in G$, $ord (g)=k\in \bf N$ and $h^m=g$, $m\in \bf N$, then $ord (h)\le mk\in \bf N$. If $g\in U$, $g=e^v$, $v\in V$, $ord (g)=k\in \bf N$, $h=e^{tv}\in U$ for some $t\in \bf K$ and $ord (h)\in \bf N$, then $t\in \bf Q$. \par Let ${\bf Q}(A)$ denote the minimal field containing all elements from $A$ and such that ${\bf Q}\subset {\bf Q}(A)\subset \bf K$. Among the subsets $A\subset \Psi $ take those such that ${\sf g}(A)\supset {\sf g_{fin}}$, where ${\sf g}(A)$ is the Lie algebra over the field ${\bf Q}(A)$ with the Hamel basis $\gamma _A$ as a ${\bf Q}(A)$-linear space, $\gamma _A\subset \gamma $, and $\gamma $ denotes the Hamel basis of $\sf g$ over $\bf Q$ as a $\bf Q$-linear space (see above). Moreover, it is possible to restrict the consideration to those $A$ for which $(\nu ^n)^*({\sf g}(A))=0$, since $\nu ^n({\sf g_{fin}})=0$. Among such Lie algebras ${\sf g}(A)$ there exists a minimal one due to the Kuratowski-Zorn lemma. Denote it by ${\sf g}(A_{fin})$, where $A_{fin}\subset \Psi $. \par If ${\bf F} = {\bf Q}( \{ a_j: j\in \lambda \} ) $, $\lambda \subset \bf N$, then the field $\bf F$ is countable, since $\bigcup_{k=1}^{\infty }\aleph _0^k=\aleph _0$.
If $B\subset \bf K$ and $\nu (B)=0$, where $\nu $ is a measure equivalent to the Haar measure on $\bf K$, then $\nu (B+{\bf F})=0$ and $\nu (B{\bf F})=0$ for a countable subfield $\bf F$ in $\bf K$; consequently, $\nu ((B+{\bf F}){\bf F})=0$ and $\nu (\bigcup_{k=1}^{\infty }(B^k+{\bf F}){\bf F})=0$, since $\nu (B_1B_2)=\int_{\bf K} \chi _{B_1B_2}(x)\nu (dx)$, where $\chi _B(x)=1$ for $x\in B$, $\chi _B(x)=0$ for $x\notin B$, $\chi _B$ denotes the characteristic function of a subset $B$, $B_1+B_2 := \{ x: x=b_1+b_2, b_1\in B_1, b_2\in B_2 \} $ and $B_1B_2 := \{ x: x=b_1b_2, b_1\in B_1, b_2\in B_2 \} $. \par Since $\nu ^n({\sf g}(A_{fin}))=0$, we get $\nu ({\bf Q}(A_{fin }))=0$, where $\nu _{\bf K}=\nu $ is the Haar measure on $\bf K$ as the additive group. Consequently, there exists $A_{fin}$ such that $card (\Phi )=\sf c$, where $\Phi := \Psi \setminus A_{fin}$, since each element $v$ of $\sf g$ is a finite linear combination over $\bf Q$ of elements from $\gamma $, while in the Campbell-Hausdorff formula the expansion coefficients are rational (see \S 11). \par Over each field ${\bf F} = ({\bf Q}(A_{fin}))(b)={\bf Q}(A_{fin} \cup \{ b \} )$ for $b \in \Phi $ consider the Lie subalgebra ${\sf g}(A_{fin})_{\bf F}$ generated from the algebra ${\sf g}(A_{fin})$ by the extension of the field of scalars ${\bf Q}(A_{fin})$ up to $\bf F$, ${\sf g}(A_{fin})_{\bf F}\subset {\sf g}$; that is, each element of ${\sf g}(A_{fin})_{\bf F}$ is a finite linear combination over $\bf F$ of elements from ${\sf g}(A_{fin})$. \par Each element $g\in U$ is a finite product of elements of local one-parameter subgroups of the form $\exp (t_jv_j)$, $t_j\in \bf K$. The exponential mapping $\exp : V\to U$ gives an uncountable family of local subgroups $S_{\bf F}=\exp (V\cap {\sf g}(A_{fin})_{\bf F})$, where ${\bf F}= {\bf Q}(A_{fin }\cup \{ b \} )$, $b\in \Phi $. All such local subgroups for different ${\bf F}={\bf Q}(A_{fin}\cup \{ b_j \} )$, $b_1\ne b_2\in \Phi $, are pairwise isomorphic, since the fields ${\bf Q}(A_{fin}\cup \{ b_1 \} )$ and ${\bf Q}(A_{fin}\cup \{ b_2 \} )$ are pairwise isomorphic and in $U$ the Campbell-Hausdorff formula is satisfied, which relates locally the multiplication in the Lie algebra $\sf g$ with the multiplication in its Lie group $G$ (see Chapters 2 and 3 in \cite{bourgralg}). Moreover, ${\sf g}(A_{fin})_{{\bf Q}(A_{fin}\cup \{ b_1 \} )}$ is isomorphic with ${\sf g}(A_{fin})_{{\bf Q}(A_{fin}\cup \{ b_2 \} )}$. The isomorphism of the fields ${\bf Q}(A_{fin}\cup \{ b_1 \} )$ and ${\bf Q}(A_{fin}\cup \{ b_2 \} )$ is established by the mapping $\theta =\theta _{b_1,b_2}$, $\theta (b_1)= b_2$, acting as the identity mapping on the field ${\bf Q}(A_{fin})$ and such that $\theta (P_k(b_1)/L_s(b_1))= P_k(b_2)/L_s(b_2)$, where $b_1, b_2\in \Phi $ and $P_k$ and $L_s$ are non-zero polynomials of non-negative integer degrees $k$ and $s$ respectively with coefficients from the field ${\bf Q}(A_{fin})$ (see also \cite{bacht,plotk}). \par In the non-archimedean case each local subgroup $S_{\bf F} =: J_{\bf F}$ is a group, since $W=U$ is a group. In the case of a Lie group $G$ over $\bf R$ take $W=\bigcup_{n=1}^{\infty }U^n$ (see above). Therefore, $\bigcup_{n=1}^{\infty }(S_{\bf F})^n =: J_{\bf F}$ is a subgroup in $G$. Then the group $J_{{\bf Q}(A_{fin}\cup \{ b \} )}$ is isomorphic with $J_{{\bf Q}(A_{fin}\cup \{ r \} )}$ for each $b\ne r\in \Phi $. \par Take a bijective surjective mapping $\phi : \Phi \to \Phi $.
It gives an isomorphism $\phi $ of each field ${\bf Q}(A_{fin}\cup \{ b_1,...,b_z\} )$ onto ${\bf Q}(A_{fin}\cup \{ r_1,...,r_z \} )$ with $\phi (P_k(b_1,...,b_z)/L_s(b_1,...,b_z))= P_k(r_1,...,r_z)/L_s(r_1,...,r_z)$ for all non-zero polynomials of non-negative integer degrees $k$ and $s$ in $z\in \bf N$ variables with expansion coefficients from ${\bf Q}(A_{fin})$, since $\phi (x)=x$ for each number $x$ from ${\bf Q}(A_{fin})$, where $r_j=\phi (b_j)$ for each $j=1,...,z\in \bf N$ and $b_1,...,b_z\in \Phi $. Therefore, it has a natural extension up to an algebraic automorphism of the field ${\bf Q}(\Psi )=({\bf Q}(A_{fin}))(\Phi )$ onto itself. \par If $a\in {\bf K}\setminus {\bf Q}(\Psi )$, then the set $(\Psi ,a)$ is algebraically dependent; that is, there exist $b_1,...,b_k\in \Psi $ such that the family $\{ b_1,...,b_k, a \} $ is algebraically dependent. This means that there exists a polynomial $F_s(T_1,...,T_k,T_{k+1})$ of a degree $s\in \bf N$ with rational expansion coefficients such that upon the substitution of $b_1,...,b_k$ for $T_1,...,T_k$ and of $a$ for $T_{k+1}$ the polynomial takes the value zero. Then for an automorphism $\phi $ of the field $\bf K$ the number $\phi (a)$ must be a root of the polynomial $F_s(r_1,...,r_k,T_{k+1})$, where $r_j=\phi (b_j)$ for every $j=1,...,k$. If $a$ is an algebraic number over $\bf Q$, then $k=0$ and one can take $\phi (a)=a$. \par For each algebraic number $a$ from ${\bf K}$ over the field ${\bf Q}$ put $\phi (a)=a$. Since the field $\bf Q$ is everywhere dense in the field $\bf R$ or $\bf Q_p$ respectively, while the fields $\bf R$ and $\bf Q_p$ are complete as normed spaces relative to their multiplicative norms, the field $\bf Q$ is everywhere dense in ${\bf Q}(\Psi )$ relative to the norm inherited from the field $\bf K$. Therefore, every number from the field $\bf K$ algebraic over the field ${\bf Q}(\Psi )$ is the limit of a converging sequence of numbers from $\bf K$ algebraic over the field $\bf Q$. The field $\bf C$ is algebraically closed and is obtained by the extension of the field $\bf R$ with the help of a root of the polynomial $x^2+1=0$, which is invariant relative to the automorphism $\phi $. \par In the case of a non-archimedean local field $\bf K$ the residue class field $B({\bf K},0,1)/B({\bf K},0,|\pi |)$ is the finite field ${\bf F}_q$ with $q=p^y$ elements, where $y\in \bf N$, $\pi \in \bf K$, $|\pi |= \max \{ |x|: x\in {\bf K}, |x|<1 \} $, and $B({\bf K},x_0,r) := \{ x\in {\bf K}: |x-x_0|\le r \} $ is the ball of radius $r>0$ in $\bf K$ containing $x_0$. At the same time the residue class field $B({\bf Q_p},0,1)/B({\bf Q_p},0,1/p)$ is composed of $p$ elements, where $|p|=1/p$ for $p\in \bf Q_p$. The field of $p$-adic numbers has the normalization group $\Gamma _{\bf Q_p} := \{ |x|: x\ne 0, x\in {\bf Q_p} \} = \{ p^k: k\in {\bf Z} \} $, while $\Gamma _{\bf K} = \{ p^{k/l}: k\in {\bf Z} \} $ for some natural number $l\ge 1$. That is, $\bf K$ is obtained as a finite algebraic extension of the field of $p$-adic numbers by adjoining roots of polynomials with expansion coefficients from the field of rational numbers $\bf Q$ (see also Theorem 7 and Proposition 5 in \S I.4 \cite{weil}).
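\par (As a concrete illustration of these non-archimedean notions, a standard example given here only for the reader's convenience: for the ramified quadratic extension ${\bf K}={\bf Q_p}(\sqrt p)$ one has $|\sqrt p |=p^{-1/2}$, hence $\Gamma _{\bf K}= \{ p^{k/2}: k\in {\bf Z} \} $, that is, $l=2$, while the residue class field remains ${\bf F}_p$, so that $q=p$ and $y=1$ in the notation above.)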
\par Therefore, for each algebraic number $a\in {\bf K}\setminus {\bf Q}(\Psi )$ there exists a polynomial $F_s(b_1,...,b_v,X)$ with $b_1,...,b_v\in \Psi $ of a degree $s\in \bf N$ with rational expansion coefficients such that $F_s(b_1,...,b_v,a)=0$; consequently, one can take $\phi (a)=c$, where $F_s(r_1,...,r_v,c)=0$, $r_j=\phi (b_j)$, $c\in \bf K$, since $\phi (q)=q$ for each rational number $q\in \bf Q$, while ${\bf Q}(b_1,...,b_v)$ and ${\bf Q}(r_1,...,r_v)\subset {\bf Q}_p$. Thus, the automorphism $\phi $ has an extension up to an automorphism $\phi : {\bf K}\to \bf K$ both in the case ${\bf K}=\bf R$ and for the local field $\bf K$. \par Since $card (\Phi )=\sf c$, $card \, {\bf Q}(\Phi )=\sf c$ and ${\bf Q}(\Phi )$ is everywhere dense in $\bf K$, there exist bijective surjective mappings $\phi : \Phi \to \Phi $ generating automorphisms of the field, $\phi : {\bf K}\to \bf K$, as above such that $card (\phi (S)\cap T)=\sf c$ for all open subsets $S$ and $T$ in $\bf K$. Indeed, the set $\Phi $ can be presented in the form of a disjoint union of subsets $\Lambda _a$, $a\in E$, with $card (E)=\sf c$, $card (\Lambda _a)=\sf c$ for each $a$, $\bigcup_{a\in E}\Lambda _a= \Phi $ and $\Lambda _a\cap \Lambda _b=\emptyset $ for each $a\ne b$. At the same time $card ({\bf Q}(\Lambda _a)\cap T)=\sf c$ for each $T$ open in $\bf K$. Since $card (\Lambda _a)=card (\Lambda _b)$, there exists a bijection $\phi ^a_b: \Lambda _a\to \Lambda _b$ from $\Lambda _a$ onto $\Lambda _b$ for all $a, b$. \par Take a bijective mapping $\eta : E\to E$ from $E$ onto $E$ such that $\eta (a)\ne a$ for each $a$. Then the combination of the mappings $\{ \phi ^a_{\eta (a)}: a\in E \} $ generates a bijective mapping $\phi : \Phi \to \Phi $; putting $\phi (a)=a$ on $A_{fin}$, we get a bijective mapping $\phi : \Psi \to \Psi $ from $\Psi $ onto $\Psi $. In view of the proof given above it has an extension up to an automorphism $\phi : {\bf K}\to \bf K$ of the field. Therefore, $card (\phi (S)\cap T)=\sf c$ for all $S$ and $T$ open in $\bf K$, since $card ({\bf Q}(\Lambda _a)\cap T) = \sf c$ for each $T$ open in $\bf K$. It is not difficult to see that the family of such distinct algebraic automorphisms of the field $\bf K$ has the cardinality $2^{\sf c}$, since $card ({\sf c}^{\sf c})=2^{\sf c}$ \cite{eng}. In view of Lemma 12 they are $({\cal A}_{\nu },{\cal B}({\bf K}))$-non-measurable, where $\nu $ is the non-negative non-trivial Haar measure on $\bf K$ (see also \cite{bourhm,weil}). \par On the other hand, every automorphism $\phi $ of the field $\bf K$ generates an automorphism $f$ of the group $G_0 := gr(J_{{\bf Q} (A_{fin})}\cup [\bigcup_{b\in \Phi }J_{{\bf Q}(A_{fin}\cup \{ b \} )}])$, where $gr (B)$ is the minimal algebraic subgroup in $G$ generated by the finite products $g_1^{a_1}...g_r^{a_r}$ of all elements $g_j\in B$, $a_j\in \bf Z$, $j=1,...,r\in \bf N$. The automorphism $f$ is produced with the help of the isomorphisms $f: J_{{\bf Q}(A_{fin}\cup \{ b \} )} \to J_{{\bf Q}(A_{fin}\cup \{ \phi (b) \} )}$ for each $b\in \Phi $, while as the restriction $f| J_{{\bf Q}(A_{fin})}$ one can take, for example, the identity mapping. The mapping $f$ has an extension from $A := J_{{\bf Q}(A_{fin})}\cup [\bigcup_{b\in \Phi }J_{{\bf Q}(A_{fin} \cup \{ b \} )}]$ onto $G_0$ by taking all possible finite products of initial elements from $A$.
For each such subgroup $G_0$ of the group $G$ the automorphism has an extension onto $G$, since every automorphism $q$ of a subgroup $Y$ of the group $G$ can be extended up to an automorphism of some group $H$ containing $G$, where $q(G)$ is contained in $H$ and is isomorphic with $G$ (see \cite{focus,neumann}). \par By the construction of the group $G_0$ it is everywhere dense in the group $W$ in the topology inherited from $G$. Then due to Lemma 12 the automorphism $f$ is $({\cal A}_{\mu },{\cal B}(G))$-non-measurable. This also follows from the fact that the exponential mapping $\exp $ from the neighborhood of zero $V_0$ in the Lie algebra $\sf g$ onto the neighborhood $U_e$ of the unit element in $G$ induces the image measure $\nu ^n_{\exp }$ on $U_e$, where $n$ is the dimension of $\sf g$ as a linear space over the field $\bf K$ and $\nu ^n$ is the Haar measure on $\bf K^n$ as the additive group. At the same time the measure $\nu ^n_{\exp }$ is equivalent to the restriction of the Haar measure $\mu $ to $U_e$ (see Definitions 1 above). \par The proof of Theorem 13 yields the following. \par {\bf 14. Corollary.} {\it The family $\Upsilon $ of non-measurable automorphisms from Theorem 13 has a subfamily $\Omega $ of the cardinality $card (\Omega ) \ge 2^{\sf c}$ such that the restriction of every $f\in \Omega $ to each one-parameter subgroup over the field $\bf K$ in $G$ is non-measurable relative to the corresponding Haar measure on that subgroup.} \par {\bf 15. Theorem.} {\it Let $\sf g$ be a non-trivial Lie algebra finite-dimensional over the field $\bf K$ with a measure $\mu $ equal to a non-trivial non-negative Haar measure on the additive group of $\sf g$. Then the algebra $\sf g$ has $2^{\sf c}$ non-measurable automorphisms.} \par {\bf Proof.} Take any algebraic automorphism $\phi $ of the field $\bf K$ from the proof of Theorem 13. Since $\sf g$ is finite-dimensional over the field $\bf K$, the non-negative non-trivial Haar measure $\mu $ on $\sf g$ as the additive group of $\sf g$ is equivalent to the measure $\nu ^n$ (see Definitions 1). The automorphism $\phi $ has an extension up to an automorphism of the algebra: $\phi (a_jv_j)=\phi (a_j)v_j$, $\phi (a_1v_1+....+a_mv_m)= \phi (a_1)v_1+...+\phi (a_m)v_m$, $\phi ([a_kv_k,a_jv_j]) = [\phi (a_k)v_k, \phi (a_j)v_j]$ for each $a_j\in \bf K$, $k, j=1,...,m$, where $v_1,...,v_m$ is the basis of generators in $\sf g$ (see Definitions 1). Therefore, in view of Lemma 12 the automorphism $\phi $ is $({\cal A}_{\mu },{\cal B}({\sf g}))$-non-measurable relative to the additive group of the algebra $\sf g$. The family of such distinct automorphisms of the algebra $\sf g$ has the cardinality $2^{\sf c}$. \par {\bf 16. Theorem.} {\it Let $G$ be a locally compact Hausdorff group with a countable base of neighborhoods of the unit element $e$, and let $f: G\to G$ be its automorphism non-measurable relative to a non-trivial non-negative left- (or right-)invariant Haar measure $\mu $ on $G$. Then $G$ has a topologically irreducible unitary representation which is not weakly measurable.} \par {\bf Proof.} Since $G$ has a non-measurable automorphism, it is non-discrete: all algebraic automorphisms are continuous relative to the discrete topology, because in the discrete topology every point is an open subset. On the other hand, every $T_0$ topological group is completely regular (see Theorem 8.4 \cite{hewross}).
In view of Theorem 5.8 \cite{hewross} a subgroup of a topological group is discrete if and only if it contains an isolated point. Therefore, the group $G$ is dense in itself; that is, each of its points $q$ is a limit of a convergent net contained in a punctured open neighborhood $U\setminus \{ q \} $ of the point $q$. In view of Lemma 5.28 \cite{hewross} a locally countably compact regular topological space $Y$ cannot be presented as a countable union of closed subsets with empty interiors. Thus, every open subset $U$ in $G$ is uncountable, $card (U)\ge \sf c$ (see Remark (4.26) \cite{hewross}). In view of this the Haar measure $\mu $ on $G$ has no atoms. By its construction the Haar measure is Borel regular. \par Recall that a unitary representation is a homomorphism $T: G\to U(X)$, where $U(X)$ is the unitary group of a Hilbert or unitary space $X$ over the field of complex numbers $\bf C$. It is called topologically irreducible if in $X$ there does not exist any closed invariant subspace relative to the family of unitary operators $\{ T_g: g\in G \} $ besides $\{ 0 \} $ and the entire $X$. \par A unitary representation is called weakly measurable if the functions $(y,T_gz)$ of $g\in G$ are $({\cal A}_{\mu }, {\cal B}({\bf C}))$-measurable for all given vectors $y, z\in X$. On every locally compact group there exists a non-trivial non-negative left- (or right-)invariant Haar measure (see \S 27 \cite{nai} and \cite{bourhm,hewross}). \par Then the space $L^1(G,{\cal A},\mu ,{\bf C})$ is supplied with a ring structure with the usual addition of functions and the convolution of functions as the multiplication in the ring; it is called the group ring (see \S 28 \cite{nai}). A ring $R$ is called normed if it is a normed space with $|xz|\le |x| |z|$ for each $x, z\in R$, and if $R$ has a unit $e$, then $|e|=1$. A complete normed ring is called a Banach ring. A ring $R$ is called symmetric if it is supplied with an involution $x\mapsto x^*$ mapping $R$ into $R$. In particular, the ring $L^1(G,{\cal A},\mu ,{\bf C})$ is symmetric. \par Therefore, a representation of the group generates a representation of the group ring in the ring (algebra, in more modern terminology) $L(X)$ of bounded linear operators from $X$ into $X$. Adjoining the unit to the ring $L^1(G,{\cal A},\mu ,{\bf C})$ gives the group ring with unit; denote it by $R(G)$. In view of Theorem 29.1 \cite{nai}, to each representation $x\mapsto A_x$ of the group ring $R(G)$ not containing a degenerate representation there corresponds a continuous unitary representation $g\mapsto T_g$ of the group $G$. Conversely, to every weakly continuous unitary representation $g\mapsto T_g$ of the group $G$ there corresponds a representation $x\mapsto A_x$ of its group ring $R(G)$ which does not contain a degenerate representation. These representations are related with each other by the formula: $A_{be+f} = b1 + \int_G f(g)T_g \mu (dg)$ for each $b\in \bf C$ and $f\in L^1(G,{\cal A},\mu ,{\bf C})$. \par A linear functional $f$ is called positive if $f(x^*x)\ge 0$ for each $x\in R$. If $f_1$ and $f$ are positive functionals, then $f_1$ is called subordinated to the functional $f$, which is denoted by $f_1<f$, if there exists a number $b$ such that $bf-f_1$ is a positive functional on $R$. A functional $f_1$ on a symmetric ring is called subordinated to a given positive functional $f$ if $f_1$ is a linear combination with complex coefficients of positive functionals subordinated to the functional $f$.
A positive functional $f$ is called indecomposable if each functional $f_1$ subordinated to the functional $f$ is a multiple of it, that is, $f_1=bf$, where $b\in \bf C$. A representation $x\mapsto A_x$ is called cyclic if there exists a vector $y_0\in X$ such that $ \{ A_xy_0: x\in R \} $ is everywhere dense in $X$. \par In view of Theorem 19.3.1 \cite{nai} a cyclic representation $x\mapsto A_x$ of a Banach symmetric ring $R$ is irreducible if and only if each positive functional $f(x)=(A_xy_0,y_0)$ defining it is indecomposable. Let $S$ be the set of all positive functionals on $R$ such that $f(e)=1$. In view of Proposition 19.4.1 \cite{nai} a positive functional $f$ satisfying the condition $f(e)=1$ is indecomposable if and only if it is an extremal point of the set $S$. \par If $H$ is a subgroup in $G$ and $M$ is a topologically irreducible unitary representation of $H$ in a Hilbert space $Y$, $Y\ni y_0\ne 0$, $\| y_0 \| =1$, $t(h):= (M_hy_0,y_0)$, $h\in H$, then the function $t$ is positive definite. Consider the convex subset $W$ of all positive definite functions on $G$ coinciding with $t$ on $H$. This set is non-void, since it contains the function equal to $t$ on $H$ and to $0$ on $G\setminus H$. If $v\in W$, then $v(e)=(T_ey_0,y_0)=1$; consequently, $|v(g)|\le 1$ for every $g\in G$. Therefore, $W$ is compact in the topology of pointwise convergence. In view of the Krein-Milman theorem $W$ contains an extremal point $r$ (see Theorem 3.9.1 \cite{nai}), whose restriction is $r|_H=t$. Consequently, $r$ is indecomposable and to it there corresponds a topologically irreducible unitary representation of the group $G$ (see \S 2.1 \cite{bicht}, \cite{nai,fell,hewross}). \par Consider the Hilbert space $X := L^2(G,{\cal A},\mu ,{\bf C})$ of the equivalence classes of all functions $v: G\to \bf C$ whose modulus is square integrable on $G$ relative to the measure $\mu $. Then there exists a strongly continuous unitary regular representation $T: G\to U(X)$. The strong continuity means that $T_gz$ is a continuous mapping in $g$ from $G$ into $X$ for each $z\in X$, where $X$ is supplied with the standard norm associated with the scalar product: $\| z \| ^2 = (z,z)$. In the case of the left-invariant Haar measure it is given by the formula $T_gv(h) := v(g^{-1}h)$ for each $g, h\in G$, $v\in X$, where $\mu (gJ)=\mu (J)$ for every $g\in G$ and each $\mu $-measurable subset of finite measure, $J\in {\cal A}$. \par Evidently, if a mapping (in the given case a functional $(T_gx,y)$ of the representation $T$) is non-measurable on an open subset $W$ in $G$, then it is non-measurable on $G$. For the proof of non-measurability it is sufficient to take a clopen subgroup $W$ in $G$, which is compact if $G$ is totally disconnected, or $W=\bigcup_{n=1}^{\infty }U^n$ for a locally connected $G$ which is not totally disconnected, where $U$ is a symmetric open neighborhood of $e$ in $G$ with $0<\mu (U)<\infty $ (see Theorems 7.7 and 5.7 \cite{hewross}). Since $e$ has a countable base of neighborhoods, the spaces $L^p(W,{\cal A},\mu ,{\bf C})$ for $1\le p<\infty $ are separable. Then the proof reduces to the consideration of the subgroup $W$; denote $W$ again by $G$. \par The regular representation $T$ is injective: $T_g\ne T_h$ for all $g\ne h\in G$.
In view of Theorem 41.4.3 \cite{nai} it can be decomposed into the direct integral of topologically irreducible unitary representations $T= \bigoplus \int_S T^s\lambda (ds)$, where $S$ is a compact (bicompact in the old terminology) Hausdorff topological space and $\lambda $ is a $\sigma $-additive measure on ${\cal B}(S)$ (see also \cite{eng,nai,hewross}). At the same time each representation $T^s$ is strongly continuous. Therefore, $q^{-1}({\cal B}(X))\subset {\cal B}(G)$ for every function $q(g) := T^s_gz$ with fixed $z\in X$ and $s\in S$, where $q: G\to X$. \par For each non-zero vector $z\in X^s$ the closure of the linear span over the field of complex numbers $\bf C$ of all vectors $T^s_gz$ coincides with $X^s$, where $X^s$ is an invariant closed subspace in $X$ relative to the unitary representation $T^s$. Then the set of functions $\{ q_{x,z,s}: G\to {\bf C}; x\in X^s \} $ separates points in $X^s$, where $q_{x,z,s}(g) := (x,T^s_gz)$, $x, z \in X^s$. At the same time $q_{x,z,s}\circ f(g)= (x,T^s_{f(g)}z)$. If $A\subset \bf C$, then $(q_{x,z,s}\circ f)^{-1}(A)= f^{-1}( q_{x,z,s}^{-1}(A))$. If $A$ is open in $\bf C$, then $q_{x,z,s}^{-1}(A)$ is open in $G$. Since $f$ is non-measurable on $G$, while $G\ni h\mapsto gh\in G$ is a continuous mapping from $G$ onto $G$ and $gU$ is open for each $g\in G$ and each open $U$ in $G$, the restriction $f|_U$ is non-measurable for each $U$ open in $G$. \par Consider the algebraic homomorphism $T\circ f: G \to U(X)$. If every $T^s\circ f$ were weakly $({\cal A}_{\mu },{\cal B}({\bf C}))$-measurable, then $T\circ f$ would also be weakly $({\cal A}_{\mu },{\cal B}({\bf C}))$-measurable. But in view of the strong continuity of $T$ and the non-measurability of the automorphism $f$ the composition $T\circ f$ is not weakly $({\cal A}_{\mu },{\cal B}({\bf C}))$-measurable (see also Lemma 12 and Theorem 13 above). \par {\bf 17. Corollary.} {\it If $G$ is a non-trivial locally compact Lie group, then it has not less than $2^{\sf c}$ topologically irreducible unitary representations which are weakly non-measurable relative to a non-trivial non-negative Haar measure $\mu $.} \par {\bf 18. Remark.} From the proofs of Theorems 13 and 16 it follows that the property of $\mu $-non-measurability of an automorphism $f$ or of a unitary representation is local: if the restriction of $f$ to an open subset $W$ in $G$ is $\mu $-non-measurable, then $f$ is $\mu $-non-measurable on $G$. \par For a totally disconnected locally compact group one can take as $W$ a clopen compact subgroup in $G$ (see Theorem 7.7 \cite{hewross}). In the case of a locally connected group $G$ which is not totally disconnected one can take a clopen subgroup $W=\bigcup_{n=1}^{\infty }U^n$, where $0<\mu (U)<\infty $ and $U$ is a connected symmetric open neighborhood of $e$ in $G$ (see Theorem 5.7 \cite{hewross}). Then on $W$ there exists a probability measure equivalent to $\mu $ (see also Lemmas 2 and 3). In the case of a locally compact Lie group over $\bf R$ one can also take on $U$ a measure equivalent to the measure $\lambda _{\theta ^{-1}}$, where $\theta : U\to (S^1)^n$ is the topological embedding from Lemma 11, $\lambda $ is the Haar measure on $(S^1)^n$ and $\lambda _{\theta ^{-1}}(Y) := \lambda (\theta (Y))$ for each $Y\in {\cal B}(U)$. \par In Theorem 16 and Corollary 17 irreducible unitary representations appear, since in general one cannot restrict oneself to characters because of Lemmas 5 and 8 and Corollaries 6 and 9.
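\par A classical example illustrating the direct integral decomposition used in the proof of Theorem 16 (recalled here only for orientation and not needed in the sequel): for the locally compact Abelian group $G={\bf R}$ the regular representation on $L^2({\bf R})$ is decomposed by the Fourier-Plancherel transform into the direct integral $T=\bigoplus \int_{\bf R}\chi _{\xi }\, d\xi $ of the characters $\chi _{\xi }(g)=e^{i\xi g}$, each of which is a one-dimensional and hence topologically irreducible unitary representation.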
\par {\bf 19.} Let $G$ be a $C^{\infty }$ or $C^{\omega }$ non-trivial Lie group over the field ${\bf K}=\bf R$ or a non-archimedean local field $\bf K$, moreover, $G$ is complete as a uniform space. Suppose that $\mu $ is a $\sigma $-additive $\sigma $-finite non-negative non-trivial Borel regular measure on ${\cal B}(G)$ such that for each $g\in G$ there exists an open neighborhood $U$, $g\in U$, with $0<\mu (U)<\infty $, moreover, $\mu $ has no atoms. Let $\exp : V\to U$ be the exponential mapping for $G$ as the smooth manifold over $\bf K$ from the open neighborhood $V$ of zero in $T_eG$ onto an open neighborhood $U$ of the unit element $e$ in $G$. Let also a measure $\nu $ on $T_eG$ be such that $\nu (J)=\mu (\exp (J))$ for each Borel subset $J\in {\cal B}(T_eG)$, where $T_eG$ is the linear space of separable type over the field $\bf K$. \par Suppose that $G$ has an open neighborhood $U$ of the unit element $e$ with $0<\mu (U)<\infty $, $\exp : V\to U$, for which the set of elements $S := \{ g\in U: g \mbox{ belongs to a unique local one-parameter subgroup in } U \mbox{ over the field } {\bf K} \} $ has a positive outer measure, $\mu ^*(S)>0$, where local one-parameter subgroups have the form $\{ \exp (xv): x\in {\bf K}, |x|<\epsilon \} \subset U$, $v\in T_eG$, $\epsilon >0$. Suppose also that the restriction $\ln |_S$ corresponds to the Campbell-Hausdorff formula, where $\ln $ is the inverse mapping to $\exp $. \par Let $\pi _v: T_eG\to {\bf K}v$ be a linear over $\bf K$ projection operator such that $\nu _v$ is equivalent to the Haar measure on $\bf K$, where $v$ is a non-zero vector of the tangent space $v\in T_eG$, ${\bf K}v$ is the one-dimensional over $\bf K$ subspace in $T_eG$ containing the vector $v$, $\nu _v(J)=\nu (\pi _v^{-1}(J))$ for each $J\in {\cal B}({\bf K})$. \par {\bf Theorem.} {\it Then such group $G$ has a family $\Upsilon $ of $({\cal A}_{\mu },{\cal B}(G))$-non-measurable automorphisms of the cardinality not less than $2^{\sf c}$, $card (\Upsilon ) \ge 2^{\sf c}$. Moreover, $\Upsilon $ has a subfamily $\cal P$ of automorphisms $f$, restrictions of which on one-parameter over $\bf K$ local subgroups $\{ \exp (xv): |x|<\epsilon \} $ in $S$ are non-measurable relative to the corresponding Haar measure on $\{ \exp (xv): |x|<\epsilon \} $.} \par {\bf Proof.} The proof proceeds by generalizing the proof of Theorem 13. For this consider the minimal subgroup $G_S$ in $G$ generated by elements in $S$. The image $\nu $ on $V$ of the measure $\mu $ under the logarithmic mapping $\ln $, $\nu (B)= \mu (\exp (B))$ for every $B\in {\cal B}(V)$, extends to a $\sigma $-additive finite measure on $\sf g$, $\nu (B) := \sum_{j=1}^{\infty } \nu ((B-h_j)\cap V)/2^j$ for every $B\in {\cal B}({\sf g})$, where $\{ (V+h_j): j\in {\bf N}, h_j\in {\sf g} \} $ is a covering of $\sf g$. Therefore, $\nu $ has $\sigma $-additive projections $\nu _{\omega ({\bf K^n})}$ on $\omega ({\bf K^n})$ for each embedding $\omega : {\bf K^n} \hookrightarrow \sf g$ as the $\bf K$-linear space, $\nu _{\omega ({\bf K^n})}(B) = \nu (\pi ^{-1}(B))$ for each $B\in {\cal B}(\omega ({\bf K^n}))$, where $\pi : {\sf g}\to \omega ({\bf K^n})$ is the projection. \par The operator $\pi $ is $\bf K$-linear and it exists, since $\omega ({\bf K^n})$ is finite-dimensional over $\bf K$ and the field $\bf K$ is locally compact (see Theorems 5.13 and 5.16 \cite{rooij} and \cite{nari}).
The image $\nu _{\omega ({\bf K})}|_{V\cap \omega ({\bf K})}$ under $\exp $ generates the measure on a local one-parameter subgroup in $S$, $\mu _g(B)= \nu _{\omega ({\bf K})}(\ln (B))$ for every $B\in {\cal B}(g_W\cap U)$, which extends to the $\sigma $-additive measure $\mu _g$ on $g_W$. Therefore, the set of elements of finite orders from $g_W$ has $\mu _g$-measure zero. Then as in \S 13 $\mu ^*((G_S)_{fin}\cap S)=0$, since the measure $\mu $ is Borel regular, $\sigma $-additive and has no atoms. Since the measure $\mu $ is $\sigma $-finite, also $\mu ^*((G_S)_{fin})=0$. \par Consider the Lie algebra ${\sf g}$ generated by $\ln (S)$ over the field $\bf K$. As a linear space $\sf g$ has separable type over $\bf K$, that is, there exists a countable family $\rho $ of $\bf K$-linearly independent vectors whose linear span $span_{\bf K}\rho $ is everywhere dense in $\sf g$. Then the minimal Lie algebra $Lie alg (span_{\bf K}\rho )$ over $\bf K$ containing $span_{\bf K}\rho $ is everywhere dense in $\sf g$. \par Construct $\sf g_{fin}$ as the minimal algebra over the field of rational numbers $\bf Q$ generated by $\ln ((G_S)_{fin}\cap S)$ and $\rho $. For every minimal subalgebra ${\sf g}(\{ v_j: j\in \lambda \} )$ over $\bf K$ with generators $v_j\in \rho $, $\lambda \subset \bf N$, there exists $A\subset \Psi $ such that ${\sf g}(\{ v_j: j\in {\bf N} \} )\cap {\sf g_{fin}}\subset {\sf g}(A)$, where ${\sf g}(A)$ is the minimal Lie algebra over the field ${\bf Q}(A)$ satisfying this inclusion and $\gamma _A$ denotes the Hamel basis of ${\sf g}(A)$ over ${\bf Q}(A)$; this is possible since every $v$ from $\sf g$ is a finite linear combination over $\bf Q$ of elements from $\gamma $, $card (\bigcup_{k=1}^{\infty }(\gamma \aleph _0)^k)=card (\gamma \aleph _0)=card (\gamma )\ge \sf c$, and the expansion coefficients in the Campbell-Hausdorff formula are rational numbers, where $\gamma $ denotes the Hamel basis of $\sf g$ over $\bf Q$ as the $\bf Q$-linear space and $card (\Psi \setminus A)=\sf c$. \par If $B\subset \bf K$ and $\nu (B)=0$, where $\nu $ is a measure equivalent to the Haar measure on $\bf K$, then $\nu (B+{\bf F})=0$ and $\nu (B{\bf F})=0$ for a countable subfield $\bf F$ in $\bf K$, consequently, $\nu ((B+{\bf F}){\bf F})=0$ and $\nu (\bigcup_{k=1}^{\infty }(B^k+{\bf F}){\bf F})=0$, since $\nu (B_1B_2)=\int_{\bf K} \chi _{B_1B_2}(x)\nu (dx)$, where $\chi _B(x)=1$ for $x\in B$, $\chi _B(x)=0$ for $x\notin B$, $\chi _B$ is the characteristic function of a subset $B$, $B_1+B_2 := \{ x: x=b_1+b_2, b_1\in B_1, b_2\in B_2 \} $, $B_1B_2 := \{ x: x=b_1b_2, b_1\in B_1, b_2\in B_2 \} $. \par Since $\mu ^*((G_S)_{fin})=0$ and $\nu _{\sf g}^*({\sf g_{fin}})=0$, $card (\rho )\le \aleph _0$, there exists $A_{fin}\subset \Psi $ such that ${\sf g}(A_{fin})\supset {\sf g}_{fin}$ and $card (\Psi \setminus A_{fin})=\sf c$, since $\beta \aleph _0=\beta $ for every cardinal number $\beta \ge \aleph _0$; moreover, $\nu _{\omega ({\bf K})}({\bf Q}(A_{fin}))=0$, since $\nu _{\omega ({\bf K})}(B)>0$ for each $B$ open in $\omega ({\bf K})$ and $\nu _{\omega ({\bf K})}$ is equivalent to the Haar measure on $\bf K$ due to the conditions from \S 19. \par Take $\Phi = \Psi \setminus A_{fin}$ and an algebraic automorphism $\phi $ of the field $\bf K$ from \S 13. Analogously to \S 13 we construct an automorphism $f$ of the group $G_S$, which has an extension up to an automorphism of the group $G$ (see \cite{neumann,focus}).
In view of Lemma 12 and $\mu ^*(S)>0$ it follows that $f$ is $({\cal A}_{\mu },{\cal B}(G))$-non-measurable. The non-measurability of restrictions of $f$ on one-parameter subgroups relative to the Haar measure on them follows from the properties of $\phi $ as in \S 13. \par {\bf 20. Theorem.} {\it Let $G$ be an infinite topological dense in itself Hausdorff group with a non-negative non-trivial Borel regular measure $\mu $ on ${\cal B}(G)$ having no atoms, such that for each $g\in G$ there exists an open neighborhood $U$ with $0<\mu (U)<\infty $; moreover, $G$ is complete as a uniform space and $card (U)\ge \sf c$ for each open $U$ in $G$. Then there exists a family $\Upsilon $ of $({\cal A}_{\mu },{\cal B}(G))$-non-measurable different automorphisms of the group $G$ of the cardinality $card (\Upsilon )\ge 2^{\sf c}$.} \par {\bf Proof.} Take an open symmetric neighborhood $U$ of the unit element $e$ in $G$ such that $0<\mu (U)<\infty $, then $\mu $ is a $\sigma $-finite measure on $W=\bigcup_{n=1}^{\infty }U^n$ (see Theorem 7.4 \cite{hewross}). Since $G$ is infinite and dense in itself, it is non-discrete (see Theorem 5.8 \cite{hewross}). If $\{ g_n: n\in \alpha \} $ is a net converging to $e$, where $\alpha $ is an ordinal, $card (\alpha )\ge \aleph _0$, then $\{ gg_n: n\in \alpha \} $ is a net converging to $g$ for each $g\in G$. \par The topological Hausdorff group $G$ can be supplied with the left-invariant uniformity giving the initial topology on $G$, while the left-invariant uniformity can be produced with the help of a family $\{ \eta _x: x\in M \} $ of left-invariant pseudo-metrics, $\eta _x(g,h)=\eta _x(h^{-1}g,e)$ for each $g, h\in G$, so that $\eta _x (g^a,g^b)=\eta _x(g^{a-b},e)$ for each $g\in G$ and each $a, b\in \bf Z$, where $M$ is some set (see Chapter 8 in \cite{eng}). \par If $g\in G$, $ord (g)=k<\infty $, then $ord (g^r)\le k$ for each $r\in \bf Z$; in particular, for mutually prime numbers $r$ and $k$ one has $ord (g)=ord (g^r)$. If $ord (g)=\omega _0$, then $ord (g^r)=\omega _0$ for each integer non-zero $r$, where $\omega _0$ denotes the initial ordinal of the cardinality $\aleph _0$. \par Since the measure $\mu $ is non-trivial, non-negative and for $e\in G$ there exists an open neighborhood such that $0<\mu (U)<\infty $, while the set ${\bf N}\cup \{ \omega _0 \} $ is countable, there exists $k\in {\bf N}\cup \{ \omega _0 \} $ such that the outer measure of the intersection is positive, $\mu ^*(U\cap G_k)>0$, where $G_k= gr \{ g\in G: ord (g)=k \} $ is the minimal subgroup in $G$ generated by elements of the $k$-th order. Therefore, it is sufficient to consider all such $G_k$ in the topology inherited from $G$, $\mu ^*(U\cap G_k)>0$. \par Note that if $Y$ is an everywhere dense subgroup in $G$ and $card (B\cap T)\ge \sf c$ for some subset $B$ in $G$ for each $T$ from the base $\Pi $ of neighborhoods of the unit element $e\in G$, $T\in \Pi $, then $card ((BY)\cap P)\ge \sf c$ for each $P$ open in $G$. \par Consider the family $\Xi $ consisting of subgroups $H$ in $G$ and their automorphisms $s: H\to H$ such that $card (H)\ge {\sf c}$ and $card (s(P)\cap T)\ge {\sf c}$ for each $P$ and $T$ open in $H$ in the topology inherited from $G$. Such $H$ exist. To prove their existence take $U$ and $W$ as above. Let $W_k := \{ g\in W: ord (g)= k \} $, where $k\in {\bf N}\cup \{ \omega _0 \} $. Then for at least one $k$ the equality $card (W_k)=card (W)$ is satisfied, since $card (W)\ge \sf c$.
\par Since $card (U)\ge \sf c$ and every subgroup $\{ g^n: n\in {\bf Z} \} $ generated by a chosen element $g\in G$ is either finite or countable, in $W$ there exists a family $J_{b,k}$, $b\in \Lambda $, $k\in {\bf N}\cup \{ \omega _0 \} $, $card (\Lambda ) \ge \sf c$, $card (J_b)=card (U)\ge \sf c$. This family can be chosen such that the elements in $J_{b,k}$ are algebraically independent: $g\in J_{b,k}$ cannot be presented as a finite product of elements of the set $J_{b,k}$ different from it; moreover, $gr (J_{b,k})\cap gr (J_{d,l})=\{ e \} $ for each $b\ne d$, since ${\sf q}^{\aleph _0}=\sf q$ for each ${\sf q}\ge \sf c$, where $J_b=\bigcup_{k\in {\bf N}\cup \{ \omega _0 \} }J_{b,k}$, $\sf q$ and ${\sf c} := card ({\bf R})$ are cardinal numbers, $gr (B)$ denotes the minimal subgroup in $G$ generated by elements from $B$, and every $g$ from $J_{b,k}$ has the order $ord (g)=k$. Moreover, $J_{b,k}$ can be chosen such that $G\setminus G_{\Lambda }\supset (Y\setminus \{ e \} )$, where $Y$ is some everywhere dense subgroup in $G$ and $G_{\Lambda } := gr (\bigcup_{b\in \Lambda , k\in {\bf N} \cup \{ \omega _0 \} } J_{b,k})$, since $G_{\Lambda }$ and $Y$ have families of generating elements algebraically independent from each other, $G_{\Lambda }\cap Y= \{ e \} $. \par Choose every $J_b$ such that $card (J_{b,k_0}\cap P)\ge \sf c$ for each $P$ open in $W$ for at least one $k_0\in {\bf N}\cup \{ \omega _0 \} $. Do this for every $k_0\in {\bf N}\cup \{ \omega _0 \} $ for which $card ( \{ g\in W: ord (g)=k_0 \} ) \ge {\sf c} $. Let $\phi : \Lambda \to \Lambda $ be a bijective mapping from $\Lambda $ onto $\Lambda $, $\phi (\Lambda (k))=\Lambda (k)$, $\Lambda =\bigcup_{k\in {\bf N}\cup \{ \omega _0 \} }\Lambda (k)$. Let $S_{b,k_0}\subset J_{b,k_0}$ and $card (S_{b,k_0})=card (S_{d,k_0})\ge \sf c$ for each $b, d\in \Lambda (k_0)\subset \Lambda $. With the help of transfinite induction take $S_{b,k_0}$ and bijective mappings $\psi ^d_{b,k_0}: S_{d,k_0}\to S_{b,k_0}$ from $S_{d,k_0}$ onto $S_{b,k_0}$, put $s(g) := \psi ^d_{\phi (d),k_0}(g)$ for each $g\in S_{d,k_0}$, take $s|_Y=id$, and with the help of finite products of elements in $G$ extend $s$ from $Y\cup \bigcup_{b\in \Lambda (k_0), k_0\in {\bf N}\cup \{ \omega _0 \} } S_{b,k_0}$ up to the automorphism $s: H\to H$, where $H := gr (Y\cup \bigcup_{b\in \Lambda (k_0), k_0\in {\bf N}\cup \{ \omega _0 \} } S_b)$ (see also \cite{eng}). \par Order the family $\Xi $ with the help of the relation $(H_1,s_1)\le (H_2,s_2)$, if $H_1\subset H_2$ and $s_2|_{H_1}=s_1$, which supplies $\Xi $ with the structure of a directed set. Every linearly ordered subset ${\cal C} = \{ (H_{\beta }, s_{\beta }): \beta \in \Lambda ({\cal C}) \} $ in $\Xi $ has an element $(H,s)\in \Xi $ such that $(H_{\beta },s_{\beta })\le (H,s)$ for each $(H_{\beta },s_{\beta })\in {\cal C}$, where $H= \bigcup_{\beta \in \Lambda ({\cal C})} H_{\beta }$, $H\subset G$, $s|_{H_{\beta }}=s_{\beta }$ for each $\beta \in \Lambda ({\cal C})$, here $\Lambda ({\cal C})\subset \Lambda $. In view of the Kuratowski-Zorn lemma in $\Xi $ there exists a maximal element \cite{eng}. Then it must be $(G,f)$, since each automorphism $s$ of a subgroup has an extension up to an automorphism $f$ of the entire group \cite{neumann,focus}. In view of Lemma 12 the automorphism $f$ is $({\cal A}_{\mu },{\cal B}(G))$-non-measurable. Since there exist not less than $2^{\sf c}$ different bijective mappings $\phi : \Lambda \to \Lambda $, it follows that $card (\Upsilon )\ge 2^{\sf c}$. \par {\bf 21.
Remark.} If $G$ is a Lie group over the field $\bf K$ and $T_eG$ is a Banach space of separable type over the field $\bf K$, then measures $\mu $ on $G$ and $\nu $ on $T_eG$ with the needed properties from \S 19 exist (see \cite{dalfom,ludanmat}). If $G$ is a Banach-Lie group of class $C^{\infty }$ over $\bf R$ or $C^{\omega }$ over a local non-archimedean field, then $G$ as the manifold has the $C^{\infty }$ exponential mapping (see \cite{bourgralg,bourmnog,kling}). \section{Non-measurable automorphisms of groups relative to measures with values in local fields} \par To avoid misunderstandings we first recall the basic definitions. \par {\bf 1. Definitions.} Let $X$ be a completely regular totally disconnected topological space, let also $\cal R$ be its covering ring of subsets in $X$, $\bigcup \{ A: A\in {\cal R} \} =X$. We call the ring separating, if for each two distinct points $x, y \in X$ there exists $A\in \cal R$ such that $x\in A$, $y\notin A$. A subfamily ${\cal S}\subset \cal R$ is called shrinking, if the intersection of each two elements from $\cal S$ contains an element from $\cal S$. If $\cal S$ is a shrinking family and $f: {\cal R}\to \bf K$, where ${\bf K}=\bf R$ or $\bf K$ is a field with a non-archimedean norm, then one writes $\lim_{A\in \cal S} f(A)=0$, if for each $\epsilon >0$ there exists $A_0\in \cal S$ such that $|f(A)|<\epsilon $ for each $A\in \cal S$ with $A\subset A_0$. \par A measure $\mu : {\cal R}\to \bf K$ is a mapping with values in the field $\bf K$ of zero characteristic with the non-archimedean norm satisfying the following properties: \par $(i)$ $\mu $ is additive; \par $(ii)$ for each $A\in \cal R$ the set $\{ \mu (B): B\in {\cal R}, B\subset A \} $ is bounded; \par $(iii)$ if $\cal S$ is a shrinking family in $\cal R$ and $\bigcap_{A\in \cal S}A =\emptyset $, then $\lim_{A\in \cal S} \mu (A) = 0$. \par Measures on ${\sf Bco}(X)$ are called tight measures, where ${\sf Bco}(X)$ is the ring of clopen (simultaneously open and closed) subsets in $X$. \par For each $A\in \cal R$ the norm $\| A \| _{\mu } := \sup \{ |\mu (B)|: B\subset A, B\in {\cal R} \} $ is defined. For functions $f: X\to \bf K$ and $\xi : X\to [0,+ \infty )$ define the norm $\| f \|_{\xi }:= \sup \{ |f(x)| \xi (x): x\in X \} $. Put also $N_{\mu } (x) := \inf \{ \| U \|_{\mu }: x\in U\in {\cal R} \} $. If a function $f$ is a finite linear combination over the field $\bf K$ of characteristic functions $\chi _A$ of subsets $A\subset X$ from $\cal R$, then it is called simple. A function $f: X\to \bf K$ is called $\mu $-integrable, if there exists a sequence $f_1, f_2,...$ of simple functions such that $\lim_{n\to \infty } \| f-f_n \| _{N_{\mu }}=0$. \par The space $L(\mu )=L(X,{\cal R},\mu ,{\bf K})$ of all $\mu $-integrable functions is $\bf K$-linear. At the same time the integral $\int_X\sum_{j=1}^n a_j\chi _{A_j}(x)\mu (dx) := \sum_{j=1}^n a_j\mu (A_j)$ defined for simple functions extends onto $L(\mu )$, where $a_j\in \bf K$, $A_j\in \cal R$ for each $j$. \par Put ${\cal R}_{\mu } := \{ A: A\subset X, \chi _A \in L(\mu ) \} $. For $A\in {\cal R}_{\mu }$ let ${\bar \mu }(A) := \int_X\chi _A(x)\mu (dx)$. \par An automorphism $\phi $ of a totally disconnected Hausdorff topological group $G$ is called $\mu $-non-measurable, if it is $({\cal R}_{\mu }, {\cal R})$-non-measurable.\par A totally disconnected compact Hausdorff group $G$ is called $p$-free, if it does not contain any open normal subgroup of an index divisible by $p$.
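\par As a minimal illustration of these definitions (a standard example, anticipating the invariant measures considered in the next paragraph), take $X={\bf Z_p}$ with the ring ${\sf Bco}({\bf Z_p})$ and ${\bf K}\supset {\bf Q_s}$ with $(p,s)=1$, as will be assumed in Theorem 3 below. Every non-void clopen subset of ${\bf Z_p}$ is a finite disjoint union of balls $a+p^n{\bf Z_p}$, so putting $\mu (a+p^n{\bf Z_p}) := p^{-n}\in {\bf Q}\subset {\bf K}$ and extending additively gives a tight $\bf K$-valued measure. Since $|p|_s=1$, one has $|\mu (a+p^n{\bf Z_p})|_s=1$ for every ball, hence $\| A \| _{\mu }=1$ for each non-void $A\in {\sf Bco}({\bf Z_p})$ and $N_{\mu }(x)=1>0$ for each $x\in {\bf Z_p}$, while $\mu $ has no atoms, since every ball splits into $p$ disjoint sub-balls of equal measure. For $s=p$ the values $|\mu (p^n{\bf Z_p})|_p=p^n$ would be unbounded, which illustrates why the $p$-free condition above and the assumption $(p,s)=1$ are needed.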
\par Let $G$ be a totally disconnected Hausdorff locally compact group, let also ${\sf B_c}(G)$ be a covering ring of clopen compact subsets in $G$, and suppose that $\mu : {\sf B_c}(G)\to \bf K$ is a finitely-additive function such that its restriction $\mu |_A$ for each $A\in {\sf B_c}(G)$ is a tight measure. A measure $\mu : {{\sf B_c}(G)}\to \bf K$ is called a left- (right-)invariant Haar measure, if $\mu (gA)=\mu (A)$ ($\mu (Ag)=\mu (A)$ respectively) for each $A\in {\sf B_c}(G)$ and $g\in G$. \par A measure $\eta : {\cal R}\to \bf K$ is called absolutely continuous relative to a measure $\mu : {\cal R}\to \bf K$, if there exists a function $f\in L(\mu )$ such that $\eta (A)=\int_X \chi _A(x) f(x)\mu (dx)$ for each $A\in \cal R$; denote this by $\eta \preceq \mu $. If $\eta \preceq \mu $ and $\mu \preceq \eta $, then we say that $\eta $ and $\mu $ are equivalent, $\eta \sim \mu $. \par {\bf 2. Lemma.} {\it Let $X$ be a Tychonoff (completely regular) totally disconnected dense in itself topological space with a tight measure $\mu $ such that $N_{\mu }(x)>0$ for each $x\in X$ and $\mu $ has no atoms in $X$. If $f: X\to X$ is a bijective mapping from $X$ onto $X$ such that $card (f(U)\cap V)\ge {\sf c} := card ({\bf R})$ for each open subsets $U$ and $V$ in $X$, then $f$ and $f^{-1}$ are not $({\cal R}_{\mu },{\sf Bco}(X))$-measurable.} \par {\bf Proof.} Since $f$ is a bijective mapping from $X$ onto $X$, $f^{-1}(f(U)\cap V)=U\cap f^{-1}(V)$, consequently, $card (U\cap f^{-1}(V))\ge \sf c$ for each $U$ and $V$ open in $X$. In view of Lemma 7.2 \cite{rooij} $ \| \chi _U \|_{N_{\mu }} = \| U \| _{\mu }$ for each $U\in \cal R$. Due to Lemma 7.5 \cite{rooij} $N_{\mu }(x)=N_{\bar \mu }(x)$ for each $x\in X$, $L(\mu )=L({\bar \mu })$ and ${\cal R}_{\bar \mu }={\cal R}_{\mu }$. Theorem 7.6 \cite{rooij} states that $N_{\mu }(x)$ is upper semi-continuous and for each $\epsilon >0$ the set $\{ x\in A: N_{\mu }(x)\ge \epsilon \} $ is ${\cal R}_{\mu }$-compact. If $x_0\in X$, then $0\le N_{\mu } (x_0)<\infty $, and for each $x_0\in X$ and $r> N_{\mu }(x_0)$ there exists a neighborhood $P$ of the point $x_0$ such that the inequality $N_{\mu }(x)<r$ holds for each point $x\in P$. Then for each $\epsilon >0$ there exists an open subset $W$ in $X$ such that $A\subset W$ and $N_{\mu }(x) < \epsilon $ for every $x\in W\setminus A$. \par For arbitrary clopen subsets $U$ and $V$ take clopen subsets $U_1$ and $U_2$ in $U$, $V_1$ and $V_2$ in $V$ such that $0< \|U \|_{\mu }<\infty $, $0< \| V \| _{\mu }<\infty $, $U_1\cap U_2=\emptyset $, $V_1\cap V_2=\emptyset $, $0<\delta \le \min ( \| U_1 \| _{\mu }, \| U_2 \| _{\mu }, \| V_1 \| _{\mu }, \| V_2 \| _{\mu }) /s$, $ \| U\setminus (U_1\cup U_2) \| _{\mu }\le \delta /s$, $ \| V\setminus (V_1\cup V_2) \| _{\mu }\le \delta /s$. This is possible, since $N_{\mu }(x)>0$ for each $x\in X$, while $\mu $ has no atoms. \par Denote $A:= f^{-1} (U)$, $A_j := f^{-1}(U_j)$. By the supposition of this Lemma $card (A_j\cap V_k)\ge \sf c$ for each $j, k \in \{ 1, 2 \} $. Suppose that $f$ is a $({\cal R}_{\mu },{\sf Bco}(X))$-measurable mapping. Then one would have $A_j\in {\cal R}_{\mu }$ for $j=1, 2$ and there would be open subsets $B_j$ in $X$ such that $A_j\subset B_j$ and $N_{\mu }(x) < \delta /s$ for each $x\in B_j\setminus A_j$ and $j=1, 2$. But $A_1\cap A_2=\emptyset $, consequently, $\mu (A)=\mu (A_1)+\mu (A_2)$.
\par Since $N_{\mu }(x)>0$ for each $x\in X$, clopen $U$, $U_1$ and $U_2$ can be chosen such that $N_{\mu }|_{(A_j\cap V)}>0$, where $U_1\cup U_2\subset U$. But $card (A_j\cap P_k)\ge \sf c$ for each open subset $P_k$ in $V_k$. Consequently, there exists a countable sequence of open subsets $Y_n$ in $X$ with $Y_n\supset (A_j\cap V_k)$ such that $\lim_{n\to \infty } \sup \{ N_{\mu }(x): x\in Y_n\setminus (A_j\cap V_k) \} =0$ for given $j, k$, $Y_n=Y_{n,j,k}$, $j, k\in \{ 1, 2 \} $. Let $C_n = \bigcap_{s=1}^nY_s$, then $(A_j\cap V_k)\subset C_{n+l}\subset C_n$ for any $n, l\in \bf N$, every $C_n$ is open in $X$, $C_n=C_{n,j,k}$. \par At the same time $\lim_{n\to \infty } \sup \{ N_{\mu }(x): x\in C_n\setminus (A_j\cap V_k) \} =0$. Since $card (A_j\cap P)\ge \sf c$ for each subset $P$ open in $X$, for each $\epsilon >0$ there exists $n_0\in \bf N$ such that $ Int (\{ x\in C_n\setminus C_{n+l}: N_{\mu }(x)\ge \epsilon \} ) = \emptyset $ for each $n>n_0$ and $l\in \bf N$, where $Int (B)$ denotes the interior of the subset $B$ in $X$. Then $\mu (A_j\cap V_k) = {\bar \mu }(cl_X(A_j\cap V_k))= \mu (cl_X(A_j\cap V_k))=\mu (V_k)$, since $cl_X(A_j\cap V_k)=V_k \in {\cal R}_{\mu }(X)$, where $X$ is the completely regular space dense in itself and $cl_X(B)$ denotes the closure of a subset $B$ in $X$. \par But then one would have $\sum_{j=1}^2 \mu (A_j\cap V_k) = 2 \mu (V_k)$, which contradicts the additivity of the measure: $2\mu (V_k) = \mu (A_1\cap V_k) +\mu (A_2\cap V_k) = \mu ((A_1\cup A_2)\cap V_k)={\bar \mu }(cl_X((A_1\cup A_2)\cap V_k))= \mu (cl_X((A_1\cup A_2)\cap V_k))= \mu (V_k)$, since the characteristic of the field $\bf K$ is zero, $char ({\bf K})=0$. Consequently, $f$ is not measurable (see also Theorem 7.12 \cite{rooij}). \par Applying this proof to $f^{-1}$ instead of $f$ we get that $f^{-1}$ is $\mu $-non-measurable as well, since $f^{-1}$ satisfies the same conditions, as shown at the beginning of the proof. \par {\bf 3. Theorem.} {\it Let $G$ be a non-trivial locally compact Lie group over the non-archimedean local field ${\bf F}$, ${\bf F}\supset {\bf Q_p}$, and let $\mu $ be a non-trivial tight Haar measure on $G$ with values in a local field ${\bf K}\supset {\bf Q_s}$, where $p$ and $s$ are mutually prime numbers, $(p,s)=1$. Then the group of its automorphisms $Aut (G)$ has a family of the cardinality not less than $2^{\sf c}$ of distinct $\mu $-non-measurable automorphisms of $G$, where ${\sf c}:= card ({\bf R})$ denotes the cardinality of the continuum.} \par {\bf Proof.} Since the group $G$ belongs to the smoothness class $C^{\omega }$, the Lie algebra $\sf g$ over $\bf F$ is defined for it. This Lie algebra is a finite-dimensional space over $\bf F$, $dim_{\bf F}{\sf g}=n\in \bf N$. Then the additive group of $\sf g$ is $s$-free. In view of the Monna-Springer theorem 8.4 \cite{rooij} a Haar measure $\nu ^n$ on it is defined with values in $\bf K$ such that $\nu ^n(B({\bf F^n},0,1))=1$. It is known that the Haar measure on ${\sf B_c}(G)$ has no atoms. \par Take a clopen compact subgroup $W$ in $G$ and an automorphism $\phi $ of the group $G$ from \S 2.13. Without loss of generality choose $W$ such that the Campbell--Hausdorff formula is satisfied for it. In view of Lemma 2 of this section the automorphism $\phi $ is not $\mu $-measurable. \par {\bf 4.
Corollary.} {\it The family $\Upsilon $ of non-measurable automorphisms from Theorem 3 has a subfamily $\Omega $ of the cardinality $card (\Omega ) \ge 2^{\sf c}$ such that every $f\in \Omega $ restricted to any one-parameter subgroup over the field $\bf F$ in $G$ is non-measurable relative to the corresponding Haar measure on the subgroup with values in $\bf K$.} \par {\bf Proof.} The exponential mapping $\exp $ from a neighborhood of zero $V_0$ in the algebra $\sf g$ onto a neighborhood $U_e$ of the unit element in $G$ induces the image of the measure $\nu ^n_{\exp }$ on $U_e$, where $n$ is the dimension of $\sf g$ as the linear space over the field $\bf F$ and $\nu ^n$ is the Haar measure on $\bf F^n$ as the additive group; moreover, it has no atoms. \par Theorem 7.34 in \cite{rooij} states that if there are two measures $\lambda $ and $\zeta $ on a covering ring $\cal R$ of a topological completely regular totally disconnected space $X$, then the following two conditions are equivalent: $(\alpha )$ there exists a locally $\zeta $-integrable function $h$ such that $\lambda (dx)= h(x)\zeta (dx)$; $(\beta )$ for each $x\in X$ there exists $b\in \bf K$ with $N_{\lambda -b\zeta }(x)=0$. Consequently, the measure $\nu ^n_{\exp }$ is equivalent to the restriction of the Haar measure $\mu $ on $U_e$, since $\exp $ is a locally bijective mapping of class $C^{\omega }$. Then the Haar measure $\nu $ on $\bf F$ also induces the measure $\eta _g$ on the one-parameter subgroup $g_W = \{ g^t: t\in {\bf F}, |t|<\epsilon \} $, $0<\epsilon $, where $g\in g_W$. This measure $\eta _g$ is equivalent to the Haar measure $\mu _g$ on $g_W$. \par Then from \S \S 2.13, 3.3 and Lemma 3.2 the statement of this corollary follows. \par {\bf 5. Theorem.} {\it Let $\sf g$ be a non-trivial Lie algebra finite-dimensional over the field $\bf F$ with a measure $\mu $ equal to the non-trivial $\bf K$-valued Haar measure on the additive group of $\sf g$. Then the algebra $\sf g$ has a family of the cardinality $2^{\sf c}$ of $\mu $-non-measurable automorphisms.} \par {\bf Proof.} Take any algebraic automorphism $\phi $ of the field $\bf F$ from the proof of Theorem 2.13. Since $\sf g$ is finite-dimensional over the field $\bf F$, the non-trivial Haar measure $\mu $ with values in $\bf K$ on the additive group of $\sf g$ is equivalent to the measure $\nu ^n$ (see Definitions 1). The automorphism $\phi $ extends up to an automorphism of the algebra: $\phi (a_jv_j)=\phi (a_j)v_j$, $\phi (a_1v_1+\ldots +a_mv_m)= \phi (a_1)v_1+\ldots +\phi (a_m)v_m$, $\phi ([a_kv_k,a_jv_j]) = [\phi (a_k)v_k, \phi (a_j)v_j]$ for each $a_j\in \bf F$, $k, j=1,...,m$, where $v_1,...,v_m$ is the basis of generators in $\sf g$ (see Definitions 2.1). Therefore, due to Lemma 2 the automorphism $\phi $ of the algebra $\sf g$ is $({\sf B_c}_{\mu },{\sf B_c }({\sf g}))$-non-measurable. The family of such different automorphisms of the algebra $\sf g$ has the cardinality $2^{\sf c}$. \par {\bf 6.} Let $G$ be a $C^{\omega }$ non-trivial Lie group over a non-archimedean local field $\bf F$, moreover, $G$ be complete as a uniform space. Suppose that $\mu : {\cal R}(G)\to \bf K$ is finitely-additive and there exists a clopen subgroup $W$ in $G$ such that the restriction $\mu |_W$ is a tight non-trivial measure on ${\sf Bco}(W)$ with values in $\bf K$ such that $N_{\mu }(g)>0$ for each $g\in G$; moreover, $\mu $ has no atoms, where ${\sf Bco}(G)\supset {\cal R}(G)\supset {\sf Bco}(W)$ and ${\cal R}(G)$ is a covering ring for $G$.
Let $\exp : V\to W$ be the exponential mapping for $G$ as the analytic $C^{\omega }$ manifold over $\bf F$ from an open neighborhood $V$ of zero in $T_eG$ onto $W$. Also suppose that a tight $\bf K$-valued measure $\nu $ on ${\cal R}(T_eG)$ is such that $\nu (J)=\mu (\exp (J))$ for each $J\in {\sf Bco}(V)$, where $T_eG$ is the linear space of separable type over the field $\bf F$, ${\sf Bco}({\sf g})\supset {\cal R}({\sf g})\supset {\sf Bco}(V)$, and ${\cal R}({\sf g})$ is the covering ring for $\sf g$. \par Suppose that $G$ has a clopen subgroup $W$, $\exp : V\to W$, and that in $W$ there exists an everywhere dense subgroup $S$ such that the restriction $\ln |_S$ corresponds to the Campbell-Hausdorff formula, where $\ln $ is the inverse mapping to $\exp $. \par Let $\pi _v: T_eG\to {\bf F}v$ be an $\bf F$-linear projection operator such that $\nu _v$ is equivalent to the Haar measure on $\bf F$, where $v$ is a non-zero vector of the tangent space, $v\in T_eG$, and $\nu _v(J)=\nu (\pi _v^{-1}(J))$ for each $J\in {\sf B_c}({\bf F})$. \par {\bf Theorem.} {\it Then such group $G$ has a family $\Upsilon $ of $({\cal R}(G)_{\mu },{\cal R}(G))$-non-measurable automorphisms of the cardinality not less than $2^{\sf c}$, $card (\Upsilon ) \ge 2^{\sf c}$. Moreover, $\Upsilon $ has a subfamily $\cal P$ of automorphisms $f$, restrictions of which on one-parameter over $\bf F$ local subgroups $\{ \exp (xv): |x|<\epsilon \} $ in $S$ are non-measurable relative to the corresponding $\bf K$-valued Haar measure on $\{ \exp (xv): |x|<\epsilon \} $.} \par {\bf Proof.} The image $\nu $ on $V$ of the measure $\mu $ under the mapping $\ln $, $\nu (B)= \mu (\exp (B))$ for every $B\in {\sf Bco}(V)$, is extendable up to a tight measure on the corresponding covering ring ${\cal R}({\sf g})$, $\nu (B) := \sum_{j=1}^{\infty } \nu ((B-h_j)\cap V)/2^j$ for every $B\in {\cal R}({\sf g})$, where $\{ (V+h_j): j\in {\bf N}, h_j\in {\sf g} \} $ is a covering for $\sf g$, since $\sf g$ by the condition has the separable type over $\bf F$, while the field $\bf F$ is separable and locally compact. As ${\cal R}({\sf g})$ we can take the minimal ring generated by $\bigcup_{j=1}^{\infty } {\sf Bco}(V+h_j)$. Therefore, $\nu $ has tight measures as the projections $\nu _{\omega ({\bf F^n})}$ on $\omega ({\bf F^n})$ for each embedding $\omega : {\bf F^n} \hookrightarrow \sf g$ as the $\bf F$-linear space, $\nu _{\omega ({\bf F^n})}(B) = \nu (\pi ^{-1}(B))$ for each $B\in {\sf B_c}(\omega ({\bf F^n}))$, where $\pi : {\sf g}\to \omega ({\bf F^n})$ is the projection operator. \par The operator $\pi $ is $\bf F$-linear and it exists, since $\omega ({\bf F^n})$ is finite-dimensional over $\bf F$, while the field $\bf F$ is locally compact (see Theorems 5.13 and 5.16 \cite{rooij} and \cite{nari}). The image $\nu _{\omega ({\bf F})}|_{V\cap \omega ({\bf F})}$ under $\exp $ generates the measure on the local one-parameter subgroup in $S$, $\mu _g(B)= \nu _{\omega ({\bf F})}(\ln (B))$ for every $B\in {\cal R}(g_W\cap W)$, which extends up to a tight measure $\mu _g$ on $g_W$. \par Take an automorphism $f$ of the group $G$ from \S 2.19. In view of Lemma 3.2 and $N_{\mu }(g)>0$ for each $g\in G$ we get that $f$ is $\mu $-non-measurable. The non-measurability of restrictions of $f$ on one-parameter subgroups relative to the Haar measures on them follows from the properties of $\phi $ again due to Lemma 3.2 and \S 2.13.
The families $\Upsilon $ and $\cal P$ of such different automorphisms of the group $G$ due to \S 2.19 have the cardinalities not less than $2^{\sf c}$. \par {\bf 7. Theorem.} {\it Let $G$ be an infinite topological totally disconnected dense in itself Hausdorff group with a non-trivial tight measure $\mu $ on $G$ having no atoms, moreover, $N_{\mu }(g)>0$ for each $g\in G$, while $G$ is complete as a uniform space and $card (U)\ge \sf c$ for each open $U$ in $G$. Then there exists a family $\Upsilon $ of $\mu $-non-measurable distinct automorphisms of the group $G$ of the cardinality $card (\Upsilon )\ge 2^{\sf c}$.} \par {\bf Proof.} Take an automorphism $\phi $ of the group $G$ from \S 2.20. In view of Lemma 3.2 it is $\mu $-non-measurable. The family of such distinct automorphisms of the group $G$ due to \S 2.20 has the cardinality not less than $2^{\sf c}$, $card (\Upsilon )\ge 2^{\sf c}$. \par {\bf 8. Remark.} If $G$ is a Lie group of class $C^{\omega }$ over the field $\bf F$ and its tangent space $T_eG$ is a Banach space of separable type over the field $\bf F$, then there exist measures $\mu $ on $G$ and $\nu $ on $T_eG$ with the desired properties from \S 3.6 (see \cite{lujmsqim}).
\section{Introduction} \label{intro} Redshifted observations of the 21-cm spin-flip transition of neutral hydrogen (H{\sc \,i}) trace the cool component of the gas in distant galaxies. Since the surface brightness has a $(1+z)^4$ dependence, the detection of 21-cm emission is very difficult at redshifts of $z\geq0.1$, and so the neutral gas in distant active galactic nuclei (AGN) is usually studied in absorption. Furthermore, since most published searches\footnote{Prior to the $z\sim3$ survey of \citet{cww+08}.} have been at redshifts of $z\lapp1$, there are generally no observations of the Lyman-\AL\ transition, which is redshifted into the optical bands at $z\geq1.7$. Therefore, to date 21-cm absorption has been the most common probe of the neutral gas in the galaxies host to AGN, being detected in approximately 40\% of $z\gapp0.1$ cases (see \citealt{cww+08}). In order to explain the detection rate, many studies invoke unified schemes of active galactic nuclei, which attempt to unify the many classes of luminous extragalactic object, a key element of which is the presence of a torus of highly obscuring circumnuclear material: In these schemes, the appearance of the object is dependent upon the orientation of this material along our line-of-sight to the nucleus \citep{ost78,am85,mg87,ant93,up95}, with the popular consensus being that only type-2 objects present a dense column of intervening gas, which can absorb in 21-cm (see \citealt{jm94,cb95}, figure 2 of \citealt{cww08}). Other observational examples of this include:
\begin{enumerate}
\item From a survey for 21-cm absorption in 23 radio galaxies, \citet{mot+01} find that of the five detections, four occur in sources which could be considered type-2 objects, whereas there is only one detected case for a type-1 object.
\item From a study of 49 gigahertz peaked spectrum (GPS) and compact steep spectrum (CSS) sources, \citet{pcv03} find that 21-cm absorption is more likely to arise in objects classified as galaxies, rather than in quasars. Since the former are generally considered to be type-2 objects, while the latter are type-1 objects, this is consistent with the orientation of the central obscuration playing a major r\^{o}le in producing strong 21-cm absorption along our sight-line.
\item Also, from a study of 27 GPSs and CSSs, \citet{gss+06} find that 21-cm absorption is twice as likely to be detected in the galaxies as in the quasars of the sample, again suggesting that the absorption occurs in the dense sub-parsec torus.
\item From a sample of 23 galaxies and 9 quasars, \citet{gs06a} find that 15 of the galaxies exhibit 21-cm absorption, compared to just a single case for the quasars. Like \citet{pcv03}, this is consistent with unified schemes, where galaxies are host to edge-on obscurations, whereas quasars have their tori oriented more face-on.
\end{enumerate}
However, the situation may be more complex than this, with evidence that 21-cm absorption may also be due to in-falling gas or outflows (e.g. \citealt{vpt+03,mto05,mhs+07}), and if these are directed along the radio jet axis, we would expect outflows of neutral gas to render absorption detectable towards type-1 sources. Whether due to an outflow or the presence of an intervening circumnuclear obscuration, these scenarios are consistent with unified schemes of AGN playing a major r\^{o}le in whether H{\sc \,i}\ 21-cm absorption is detected.
Therefore, from these possibilities, in addition to absorption by the large reservoir of neutral gas in the galactic disk, we may expect a high 21-cm detection rate in distant radio galaxies and quasars. However, from a recent survey of the host galaxies of $z\geq2.9$ quasars, \citet{cww+08} found no evidence of absorption in any of the ten sources searched. Upon an analysis of the spectral types of the targets, as well as those of all the other $z\geq0.1$ published searches, they found the non-detections all to be type-1 objects, as are many of the lower redshift non-detections (see Fig.~\ref{lum-z}).
\begin{figure*}
\centering \includegraphics[angle=270,scale=0.70]{lum-z.eps}
\caption{The ultra-violet luminosity--redshift distribution for the $z\geq0.1$ radio galaxies and quasars searched for associated 21-cm absorption. The filled symbols/hatched histogram represent the 21-cm detections and the unfilled symbols/unfilled histogram the non-detections. The shapes represent the AGN classifications, with triangles representing type-1 objects and squares type-2s ({\bf +} and {\sf x} designate an undetermined AGN type for a detection and non-detection, respectively). The legend shows the number of each AGN type according to the $L_{\rm UV}=10^{23}$ W Hz$^{-1}$\ partition. Updated from \citet{cww+08}.}
\label{lum-z}
\end{figure*}
Superficially, this suggests that the orientation of the circumnuclear obscuration may be key in the detection of 21-cm absorption, although there may also be other effects at play, the evidence for which we discuss in this paper. \section{Factors affecting the 21-cm detection rate} \subsection{Luminosities} \subsubsection{Ultra-violet luminosity} \label{lum}
\begin{table*}
\begin{center}
\caption{The $z\geq0.1$ sources detected in 21-cm absorption.\label{dets}} \small
\begin{tabular}{r c l cccc cccccc }
\tableline Source &Class & $z_{\rm em}$ &$B$ &$V$ &$R$ &$K$ &$\log L_{\rm UV}$ & Type & $\log_{10}\,N_{\rm HI}$ & ID & \multicolumn{2}{c}{References}\\ & & &[mag]&[mag]&[mag]&[mag]&[W Hz$^{-1}$]& & $(f/T_{\rm s})$[\scm/K] & & Spe.
& Con.\\ \tableline
J0025--2602 &Gal &0.3220 &20.300 &--- &18.084 &15.674 &20.100 &2 & 18.36 & CSS & V03 & T02\\
0108+388 &Gal &0.6685 &--- &--- &22.000 &16.690 &20.309 &2 & 19.90 & GPS & C98 & P88,B90,O98,Z02\\%J0111+3906
J0141+1353 &Gal &0.6210 &22.327 &20.920 &20.876 &16.680 &20.777 &2 & 18.04 & CSS & V03 & F89,S95\\
J0410+7656 &Gal &0.5985 &--- &--- &21.200 &--- &--- &2 & 18.40 & GPS & V03 & D95,S95a\\
J0414+0534 &Gal &2.6365 &24.100 &23.800 &21.270 &13.540 &22.188 &1 & 18.88 & --- & M99 & ---\\
J0431+2037 &Gal &0.2190 &22.174 &--- &19.085 &14.924 &18.039 &--& 18.54 & GPS & V03 & D95,S01a\\
0500+019 &Gal &0.5846 &22.500 &21.350 &20.682 &15.430 &20.367 &2 & 18.79 & FSRS& C98 & S01b\\
3C\,190 &QSO &1.1946 &19.976 &17.460 &18.972 &15.300 &22.825 &1 & 19.6 & CSS & I03 & --- \\
J0834+5534 &Gal &0.2420 &18.921 &17.390 &17.180 &14.180 &20.719 &1 & 18.03 & RG & V03 & W85\\
J0901+2901 &Gal &0.1940 &19.321 &18.078 &18.600 &15.200 &21.280 &1 & 17.04 & CSS & V03 & A95\\
0902+343 &Gal &3.3980 &--- &23.800 &23.500 &19.900 &22.422 &--& 18.49 & --- & U91 & --- \\
J0909+4253 &QSO &0.6700 &18.960 &19.049 &18.220 &14.860 &22.699 &2 & 18.09 & CSS & V03 & V92\\
J1124+1919 &Gal &0.1650 &22.082 &21.448 &20.513 &15.930 &19.190 &--& 18.70 & CSS & G06 & S90,S95b\\
12032+1707 &Gal &0.2170 &18.758 &--- &17.327 &14.864 &20.949 &2 & 18.74 & OHM & P05 & --- \\
J1206+6413 &Gal &0.3710 &21.847 &20.790 &19.910 &--- &19.908 &1 & 18.29 & CSS & V03 & S95a,L98\\
J1326+3154 &Gal &0.3700 &21.367 &19.822 &18.882 &14.940 &19.638 &2 & 17.85 & GPS & V03 & M81,F96 \\
4C\,12.50 &QSO &0.1217 &16.615 &16.050 &15.718 &13.216 &21.736 &2 & 18.79 & GPS & M89 & L03\\
J1357+4354 &Gal &0.6460 &--- &22.708 &20.951 &--- &18.620 &--& 19.52 & GPS & V03 & T96\\
J1400+6210 &Gal &0.4310 &22.137 &20.373 &19.530 &16.130 &19.459 &2 & 18.27 & GPS & V03 & D95\\
1413+135 &QSO &0.2467 &21.055 &20.000 &18.461 &14.928 &19.105 &1 & 19.11 & CSS & C92 & P96,P00\\
1504+377 &Gal &0.6715 &--- &21.808 &20.800 &16.100 &20.295 &2 & 19.65 &FSRS& C98 & VLBA\\
1549--79 &Gal &0.1501 &--- & 18.800 & --- &12.407 &19.965 &1 & 18.56 & CFS& M01 & same \\%\multicolumn{2}{l}{~~~~~~~~~M01} \\
J1815+6127 &QSO &0.6010 &21.272 &--- &19.122 &--- &20.665 &1 & 18.64 & GPS & V03 & T94\\
J1816+3457 &Gal &0.2448 &20.342 &--- &18.459 &15.525 &20.034 &--& 18.71 & GPS & P00 & same \\%\multicolumn{2}{l}{P00}\\
J1821+3942 &Gal &0.7980 &19.598 &--- &18.135 &15.023 &22.202 &1 & 18.22 & CSS & V03 & D95,S01a\\
J1944+5448 &Gal &0.2630 &21.732 &--- &18.591 &15.000 &18.424 &2 & 18.69 & GPS & V03 & S01a,X95\\
J1945+7055 &Gal &0.1010 &18.726 &--- &17.199 &13.369 &20.067 &2 & 18.5 & GPS & P99 & T97\\
J2052+3635 &Gal &0.3550 &22.083 &--- &21.200 &--- &20.648 &1 & 18.86 & GPS & V03 & P81 \\
3C\,433 &Gal &0.1016 &17.660 &16.350 &--- &12.891 &21.739 &2 & 18.36 & RG & M89 & --- \\
J2255+1313 &QSO &0.5430 &19.535 &19.590 &19.190 &--- &22.530 &2 & 17.62 & CSS & V03 & A95\\
J2316+0405 &Gal &0.2199 &18.595 &17.440 &17.220 &13.991 &21.081 &2 & 17.85 & BLRG& V03 & T03\\
J2355+4950 &Gal &0.2379 &21.101 &--- &18.400 &15.112 &18.940 &2 & 18.45 & GPS & V03 & P95,T00\\
\tableline
\end{tabular}
\tablecomments{Alternative designations: 3C\,190 = 0758+143, 4C\,12.50 = 1345+12, 3C\,433 = 2121+24, 3C\,452 = J2245+3941. The references for the magnitudes and AGN types are given in \citet{cww+08} [see also footnote \ref{new_sources}].
The 1216 \AA\ luminosities and AGN types are calculated and determined as described in \citet{cww+08} and the final four columns give the H{\sc \,i}\ column density, radio ID (see notes) and the 21-cm search (spe.) \& high resolution radio imaging (where available, con.) references. \\BLRG -- broad line radio galaxy, CFS -- compact flat spectrum, CSO -- compact symmetric object, CSS -- compact steep spectrum source, EORG -- end-on radio galaxy, EORQ -- end-on radio quasar, FRI -- Fanaroff Riley type-1/BL Lac object, FRII -- Fanaroff Riley type-2, FSRQ -- flat-spectrum radio quasar, FSRS -- flat-spectrum radio source, FSSO -- flat-spectrum symmetric object, HFP -- high frequency peaker galaxy, NLRG -- narrow-line radio galaxy, OHM -- OH megamaser, RG -- radio galaxy, RQ -- radio quasar.} \tablerefs{{\em Spectral}: D85 -- \citet{dsm85}, M89 -- \citet{mir89}, V89 -- \citet{vke+89}, U91 -- \citet{ubc91}, C92 -- \citet{cps92}, C98 -- \citet{cmr+98}, M99 -- \citet{mcm98}, P99 -- \citet{ptc99}, P00 -- \citet{ptf+00}, M01 -- \citet{mot+01}, I03 -- \citet{ida03}, P03 -- \citet{pcv03}, V03 -- \citet{vpt+03}, P05 -- \citet{pbdk05}, C06 -- \citet{cwm+06}, G06 -- \citet{gss+06}, GS06 -- \citet{gs06}, O06 -- \citet{omd06}, C07 -- \citet{cwh+07}, C08 -- \citet{cww+08}. {\em Continuum}: J77 -- \citet{jpr77}, M81 -- \citet{mrs81}, P81 -- \citet{pm81}, B82 -- \citet{bod+82}, U83 -- \citet{ujw83}, A85 -- \citet{au85}, W85 -- \citet{wbw+85}, G88 -- \citet{gfgp88}, P88 -- \citet{pr88}, F89 -- \citet{ffp+89}, S89a -- \citet{smc+89}, S89b -- \citet{som89}, B90 -- \citet{bomd90}, S90 -- \citet{srs+90}, A91 -- \citet{asz+91}, V92 -- \citet{vff+92}, M93 -- \citet{mbp93}, T94 -- \citet{tvp+94}, A95 -- \citet{ag95}, D95 -- \citet{dff+95}, N95 -- \citet{nrh95}, P95 -- \citet{pwx+95}, S95a -- \citet{sjw+95}, S95b -- \citet{ssl+95}, X95 -- \citet{xrp+95}, B96 -- \citet{bpf+96}, F96 -- \citet{fcf96}, P96 -- \citet{pcsc96}, T96 -- \citet{trp96}, T97 -- \citet{tv97}, B98 -- \citet{bgg98}, L98 -- \citet{lgs+98}, S98 -- \citet{sk98}, O98 -- \citet{ocp98}, R99 -- \citet{rkp99}, F00 -- \citet{ffp+00}, T00 -- \citet{tmpr00}, S01a -- \citet{sjs+01}, S01b -- \citet{sdo+01}, B02 -- \citet{bgp+02}, T02 -- \citet{tkm+02}, Z02 -- \citet{zrk+02}, F03 -- \citet{fpm+03}, L03 -- \citet{lkv+03}, T03 -- \citet{tsm03}, P05 -- \citet{pkfg05}, K09 -- \citet{klm+09}, VLBA -- VLBA calibrator.}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{As Table \ref{dets}, but for the non-detections, where the H{\sc \,i}\ column density limits are quoted at a $3\sigma$ level per channel. \label{non-dets}} \small
\begin{tabular}{r c l cccc cccccc }
\tableline Source &Class & $z_{\rm em}$ &$B$ &$V$ &$R$ &$K$ &$\log L_{\rm UV}$ & Type & $\log_{10}\,N_{\rm HI}$ & ID & \multicolumn{2}{c}{References}\\ & & &[mag]&[mag]&[mag]&[mag]&[W Hz$^{-1}$]& & $(f/T_{\rm s})$[\scm/K] & & Spe.
& Con.\\ \tableline
J0003+2129 &QSO &0.4520 &21.005 & 20.580 &19.650 &--- &20.971 &--& $<19.61$ & HFP & O06 & VLBA\\
0035--024 &Gal &0.2197 &19.110 & 17.920 & --- &14.494 &21.862 &1 & $<17.67$ & FRII& M01 & VLBA\\
0131--001 &QSO &0.8790 &23.340 & 22.500 &20.780 &16.780 &20.221 &--& $<18.28$ & -- & C06 & VLBA\\
J0157--1043 &QSO &0.6160 &17.504 & --- &17.039 &--- &23.380 &1 & $<17.98$ & EORQ& V03 & R99\\
J0201--1132 &QSO &0.6690 &16.232 & --- &16.073 &13.860 &24.176 &1 & $<17.80$ & EORQ& V03 & R99\\
J0224+2750 &Gal &0.3102 &19.502 & --- &18.263 &15.250 &21.225 &1 & $<18.14$ & CSS & V03 & S95a,b\\
0335--122 &QSO &3.4420 &21.018 & 20.110 &20.199 &17.510 &23.722 &1 & $<18.32$ & --- & C08 & VLBA\\
0347--211 &QSO &2.9940 &20.476 & --- &20.297 &17.900 &23.722 &1 & $<18.38$ & --- & C08 & VLBA\\
J0348+3353 &Gal &0.2430 &20.723 & --- &19.110 &14.390 &20.121 &2 & $<18.12$ & CSS & V03 & D95\\
J0401+0036 &Gal &0.4260 &20.200 & 19.010 &18.532 &--- &20.969 &2 & $<18.03$ & EORG& V03 & N95\\
J0521+1638 &QSO &0.7590 &19.370 & 18.840 &18.480 &15.380 &22.580 &1 & $<17.65$ & CSS & V03 & F89,A91,S95a\\
0537--286 &QSO &3.0140 &19.290 & --- &18.789 &16.770 &24.231 &1 & $<18.45$ & FSRQ& C08 & K09\\
J0542+4951 &QSO &0.5450 &18.450 & 17.800 &17.210 &--- &22.311 &2 & $<17.45$ & CSS & V03 & L98\\
J0556--0241 &Gal &0.2350 &20.968 & --- &19.533 &--- &20.150 &2 & $<18.77$ & GPS & V03 & P05 \\
J0609+4804 &Gal &0.2769 &21.198 & --- &18.767 &--- &19.349 &--& $<17.79$ & EORG& V03 & N95\\
J0709+7449 &Gal &0.2921 &19.982 & --- &17.540 &13.790 &19.898 &2 & $<18.30$ & FRII& V03 & --- \\
0723--008 &QSO &0.1273 & 17.39 & 16.57 & 15.82 &13.166 &20.793 & 2 & $<17.76$ & CFS & V89 & B96\\
J0741+3112 &QSO &0.6350 &16.517 & 16.100 &16.322 &16.100 &23.990 &1 & $<17.97$ & GPS& V03 & S98,S01\\
J0815--0308 &Gal &0.1980 &18.490 & 16.940 &16.797 &13.858 &20.707 &--& $<18.16$ & EORG& V03 & N95\\
J0840+1312 &QSO &0.6808 &18.370 & 17.940 &17.622 &15.280 &22.947 &1 & $<17.69$ & RQ & V03 & VLBA\\
J0913+5919 &QSO &5.1200 &--- & 23.281 &24.948 &--- &22.071 &1 & $<19.34$ & CSO & C07 & ---\\
J0924--2201 &Gal &5.2000 &--- & --- &25.850 &--- &21.893 &--& $<18.34$ & CSO & C07 & ---\\
J0927+3902 &QSO &0.6948 &17.064 & --- &16.486 &--- &23.603 &1 & $<17.97$ & EORG& V03 & B82\\
J0939+8315 &Gal &0.6850 &--- & --- &20.140 &--- &--- &2 & $<17.67$ & EORG& V03 & J77\\
J0943--0819 &Gal &0.2280 &19.401 & --- &18.100 &14.750 &20.868 &2 & $<18.08$ & GPS & V03 & B02\\
J0954+7435 &Gal &0.6950 &--- & --- &21.700 &--- &--- &--& $<18.37$ & RG & V03 & F00\\
J1035+5628 &Gal &0.4590 &--- & 21.244 &20.200 &--- &19.889 &2 & $<18.12$ & GPS & V03 & T94,P00\\
J1120+1420 &Gal &0.3620 &--- & 20.935 &20.100 &17.100 &20.098 &--& $<17.76$ & GPS & V03 & B98\\
J1159+2914 &QSO &0.7290 &17.489 & 18.113 &17.652 &--- &23.955 &1 & $<18.26$ & EORG& V03 & A85\\
1228--113 &QSO &3.5280 &22.010 & --- &19.115 &16.370 &23.754 &1 & $<18.63$ & ---& C08 & VLBA \\
J1252+5634 &QSO &0.3210 &17.760 & 17.930 &17.660 &--- &22.949 &1 & $<17.83$ & CSS & V03 & S95a,L98\\
J1308--0950 &Gal &0.4640 &20.767 & 20.500 &18.439 &--- &20.340 &2 & $<18.11$ & CSS & V03 & T02\\
J1313+5458 &QSO &0.6130 &--- & 21.735 &20.374 &--- &19.581 &2 & $<18.23$ & RQ & V03 & T94\\
1351--018 &QSO &3.7070 &21.030 & 19.696 &19.277 &17.070 &24.014 &1 & $<18.41$ & --- & C08 & ---\\
1356+022 &QSO & 1.330 &--- & 17.436 & --- &14.537 &24.055 &1 & $<17.88$ & FSSO& D85 & VLBA\\%agpw06
J1421+4144 &Gal &0.3670 &20.496 & 19.330 &18.560 &15.910 &20.435 &2 & $<17.82$ & CSS & V03 & A95\\
J1443+7707 &Gal &0.2670 &--- & --- &18.730 &--- &--- &2 &
$<18.15$ & CSS & V03 & L98\\
1450--338 &Gal &0.3680 &22.520 & 20.400 &19.390 &15.230 &18.629 &2 & $<17.83$ & --- & C06 & VLBA\\
1535+004 &QSO &3.4970 &--- & --- &--- &19.540 &--- &--& $<18.22$ & FSSO& C06 & VLBA\\%agpw06
J1540+1447 &QSO &0.6050 &17.480 & 17.000 &17.240 &13.640 &23.529 &1 & $<17.77$ & EORG& V03 & U83\\
J1546+0026 &Gal &0.5500 &19.730 & 18.900 &--- &16.420 &22.703 &--& $<18.00$ & GPS & V03 & P00 \\
1615+028 &QSO & 1.339 &18.010 &17.750 &17.310 &15.890 &23.869&1 & $<18.30$ & FSSO &D85 & ---\\%agpw06
J1623+6624 &Gal &0.2030 &19.477 & --- &17.430 &--- &20.004 &2 & $<18.48$ & HFP & O06 & ---\\
J1642+6856 &QSO &0.7510 &19.723 & --- &19.219 &--- &22.667 &1 & $<18.10$ & EORG& V03 & M93\\
J1658+0741 &QSO &0.6210 &19.993 & --- &19.598 &--- &22.441 &1 & $<18.17$ & EORG& V03 & M93\\
J1823+7938 &Gal &0.2240 &19.269 & --- &17.415 &13.866 &20.385 &2 & $<19.44$ & GPS & V03 & T94 \\
J1829+4844 &QSO &0.6920 &16.260 & --- &16.860 &14.250 &24.692 &1 & $<17.26$ & CSS & V03 & L98\\
J1831+2907 &Gal &0.8420 &21.917 & --- &20.200 &--- &21.201 &2 & $<18.21$ & CSS & V03 & S89a\\
J1845+3541 &Gal &0.7640 &--- & --- &21.900 &--- &--- &2 & $<19.02$ & GPS & V03 & X95 \\
1937--101 &QSO &3.7870 &18.800 & --- &17.188 &13.816 &24.910 &1 & $<18.11$ & --- & C08 & VLBA\\
J2022+6136 &Gal &0.2270 &19.830 & --- &18.146 &--- &20.334 &2 & $<17.56$ & GPS & V03 & F00 \\
J2137--2042 &Gal &0.6350 &20.400 & --- &19.286 &--- &21.808 &1 & $<18.05$ & CSS & V03 & T02,F03 \\
2149+056 &QSO &0.7400 &23.700 & 22.050 &20.850 &17.170 &19.582 &1 & $<18.38$ & FSRS& C98 & S89b\\%J2151+0552
2215+02 &QSO &3.5720 &21.840 & 20.420 &20.140 &19.340 &23.613 &1 & $<17.57$ &FSSO & C08& VLBA\\%agpw06
J2250+1419 &QSO &0.2370 &16.760 & 16.640 &17.243 &--- &23.616 &1 & $<18.09$ & CSS & V03 & ---\\
2300--189 &Gal &0.1290 &18.430 & --- &16.569 &13.060 &20.099 &1 & $<18.20$ & --- & C06 & VLBA\\
J2321+2346 &Gal &0.2680 &20.315 & --- &18.468 &14.710 &20.187 &--& $<18.28$ & EORG& V03 & G88\\
J2344+8226 &QSO &0.7350 &21.769 & --- &20.220 &15.850 &21.165 &2 & $<17.85$ & GPS & V03 & D95,S01b \\
\tableline
\end{tabular}
\end{center}
\end{table*}
As stated above, at redshifts of $z\gapp3$ \citet{cww+08} detected no 21-cm absorption down to sensitivities sufficient for most of the current detections, $N_{\rm HI}\lapp10^{18}\,(T_{\rm s}/f)$ \scm\ per channel (Tables \ref{dets} \& \ref{non-dets}, where we include the $z\geq0.1$ searches which were previously missed\footnote{\label{new_sources}We have added the $z\geq0.1$ detections 1549--79 \& 3C\,433 and the $z\geq0.1$ non-detections 0035--024, 0723--008, 1356+022 \& 1615+028, where the photometry and AGN type have been obtained/determined from \citet{awh82,wp85,lnn88,sh89,dwf+97,fww00,tdm+02,bmbd06,scs+06,aaa+08}.}). From an analysis of the optical photometry, the target absorption systems of \citet{cww+08} are found to be located in quasars with high ultra-violet ($\lambda\approx1216$ \AA) luminosities ($L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$, Fig.~\ref{lum-z})\footnote{Throughout this paper we use $H_{0}=71$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm matter}=0.27$ and $\Omega_{\Lambda}=0.73$.}, which suggests that the gas may have a significant ionisation fraction in all of these sources. In light of this, the large non-detection rate at high redshift is perhaps expected due to the flux limited nature of optical surveys, which selects only the most UV-bright objects at these distances.
Note, however, that this effect is also apparent for previously searched lower redshift ($z\lapp1$) sources, a trait which had not previously been recognised.
\begin{table*}
\begin{center}
\caption{The incidence of detections for various UV luminosity partitions. \label{probs}}
\begin{tabular}{l cccc cccc}
\tableline\tableline $L_{\rm UV}$ & \multicolumn{4}{c}{For luminosities $<L_{\rm UV}$} & \multicolumn{4}{c}{For luminosities $>L_{\rm UV}$} \\
W Hz$^{-1}$ & $k/n$ & rate & $P(\leq k/n)$ & $S(\leq k/n)$ & $k/n$ & rate & $P(\leq k/n)$ & $S(\leq k/n)$ \\
\tableline
$10^{20}$ & 10/17 & 59\% & 0.31 &$1.00\sigma$ & 21/68 & 31\% &0.0011 & $3.26\sigma$\\
$10^{21}$ & 21/43 & 49\% & 0.50 & $0.67\sigma$& 10/41 & 24\% &0.00073 & $3.38\sigma$\\
$10^{22}$ & 25/53 & 47\% & 0.39 & $0.86\sigma$& 6/31 & 19\% &0.00044 & $3.52\sigma$\\
$10^{23}$ & 31/67 & 46\% & 0.31 & $1.00\sigma$& 0/17 & 0\% & $7.6\times10^{-6}$ & $4.46\sigma$\\
\tableline
\end{tabular}
\tablecomments{$k$ is the number of 21-cm detections below/above the partition and $n$ is the total number of sources in the same region, ``rate'' expresses this as a percentage detection rate, $P(\leq k/n)$ is the binomial probability of this number of detections or less occurring by chance for an unbiased sample and $S(\leq k/n)$ is the significance of this (derived assuming Gaussian statistics).}
\end{center}
\end{table*}
In order to determine the significance of this UV segregation, in Table \ref{probs} we summarise the binomial probabilities of the observed distributions occurring by chance, given that a 21-cm detection and non-detection are equally probable at a given UV luminosity. From this, we see that below each UV luminosity cut-off there is no bias, with the likelihood of a detection staying close to 50\%. On the other hand, above the cut-off the probabilities of the observed distributions resulting from an unbiased sample are small, dropping dramatically at $L_{\rm UV}>10^{23}$ W Hz$^{-1}$. Since all luminosities above the cut-off are included, the most luminous sources could well dominate the upper partitions in Table \ref{probs}, which does appear to be evident from the vertical histogram of Fig. \ref{lum-z}. As is also illustrated by the histogram, above $L_{\rm UV}\gapp10^{20}$ W Hz$^{-1}$\ the number of non-detections outweighs the number of detections in each bin, which may suggest that at all values the ultra-violet luminosity introduces a bias against 21-cm absorption. However, this could also be explained by other effects which could make non-detections more numerous, e.g. a larger proportion of type-1 objects (presuming that unified schemes were key in determining whether 21-cm absorption could be detected, see below). What is clear, however, is that 21-cm absorption has yet to be detected at $L_{\rm UV}>10^{23}$ W Hz$^{-1}$\ and that the probability of this distribution occurring by chance is very small (Table~\ref{probs}). A quantitative estimate of the critical UV luminosity may be obtained by examining the detection proportions above and below certain values of $L_{\rm UV}$. We define the statistic \begin{equation} T = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})(N_1^{-1}+N_2^{-1})}}, \label{MrT} \end{equation} where $\hat{p}_1=X_1/N_1$ and $\hat{p}_2=X_2/N_2$ are the two measured proportions and $\hat{p}=(X_1+X_2)/(N_1+N_2)$ is the total proportion. This has the standard normal distribution under the null hypothesis that the proportions ($\hat{p}_1$ and $\hat{p}_2$) are the same, i.e. that the UV luminosity does not affect the 21-cm detection rate.
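As a concrete illustration, the following short script (a sketch only: the helper names are ours and the standard \texttt{scipy} routines are assumed to be available) reproduces the binomial probabilities of Table \ref{probs} and evaluates Equ. \ref{MrT} for the $L_{\rm UV}=10^{23}$ W Hz$^{-1}$\ partition, i.e. $X_1/N_1=31/67$ and $X_2/N_2=0/17$:
\begin{verbatim}
# Sketch: binomial probabilities (cf. the UV luminosity partition
# table) and the two-proportion statistic T for L_UV = 1e23 W/Hz.
from math import sqrt
from scipy.stats import binom, norm

def p_binomial(k, n):
    # P(<= k detections out of n) when a detection and a
    # non-detection are equally probable (p = 0.5)
    return binom.cdf(k, n, 0.5)

def sigma_equiv(P):
    # two-sided Gaussian significance equivalent to probability P
    return norm.isf(P / 2.0)

P_hi = p_binomial(0, 17)                # 0/17 detections above the cut
print(P_hi, sigma_equiv(P_hi))          # ~7.6e-6, ~4.5 sigma

X1, N1, X2, N2 = 31, 67, 0, 17          # detections below/above the cut
p1, p2 = X1 / N1, X2 / N2
p = (X1 + X2) / (N1 + N2)
T = (p1 - p2) / sqrt(p * (1 - p) * (1.0 / N1 + 1.0 / N2))
print(T)                                # ~3.5, a >3 sigma difference
\end{verbatim}
Converting the binomial probability to a two-sided Gaussian equivalent recovers the $\approx4.5\sigma$ significance quoted in Table \ref{probs}, while $T\approx3.5$ for this partition is consistent with the $\geq3\sigma$ rejection of the null hypothesis described below.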
Testing the statistic in steps of $\Delta\log_{10}L_{\rm UV}=0.1$, we reject the null hypothesis for all UV luminosities of $\log_{10}L_{\rm UV}\geq22.5$, where the difference between the two proportions is significant at $\geq3\sigma$ (Fig. \ref{null}, left).
\begin{figure*}
\centering \includegraphics[angle=0,scale=0.80]{lumCut_detFrac.ps}
\caption{The significance of the T-statistic of the difference in the proportions, $\hat{p}_1$ and $\hat{p}_2$, for various critical luminosities. The left panel shows this for the ultra-violet and the right panel for the rest-frame 1420 MHz continuum luminosity. The ordinate label on the right-hand side of each panel shows the detection fraction above (upward arrows) and below (downward) the cut-off.}
\label{null}
\end{figure*}
Additionally, as found above, the sources below the $L_{\rm UV}$ cut-off always exhibit a $\approx50$\% detection rate (Table \ref{probs}), which is the natural rate in the absence of high UV luminosities. \subsubsection{Radio luminosity} \label{rl} Since, in the optically thin regime, the column density of the neutral gas is related to the 21-cm absorption strength via $N_{\rm HI}\propto (T_{\rm s}/f)\int \tau\, dv$, where $T_{\rm s}$ is the spin temperature of the absorbing gas and $f$ the covering factor of the background flux, a possible cause of the non-detections at high redshift could be elevated spin temperatures, a trait which may have been observed in intervening absorption systems \citep{kc02}. However, for these, the mean value at $z_{\rm abs} \gapp1$ may only be double that at $z_{\rm abs} \lapp1$ ($T_{\rm spin}/{f}=2400$~K, cf. $1200$~K, \citealt{ctd+09}), a factor which can be accounted for by the different line-of-sight geometry effects introduced by a flat expanding Universe \citep{cw06}. This could negate the need for significantly higher spin temperatures in order to explain the lower incidence of detections at high redshift. In the case of associated absorption, with $z_{\rm abs}\approx z_{\rm em}$ the same geometry effects cannot be responsible for such a discrepancy between the low and high redshift samples, although the generally higher radio luminosities of the latter could be raising the spin temperature through a high population of the upper hyperfine level. We have already suggested that the more intense ultra-violet fluxes could be responsible for the deficit in detections at high redshift (Sect. \ref{lum}), which could be causing large ionisation fractions and/or a raising of the spin temperature \citep{fie59,be69} and so in Fig. \ref{UV-radio} we show the ultra-violet luminosity (source-frame $\lambda\approx1216$~\AA) versus that of the radio (source-frame $\lambda\approx21$~cm).
\begin{figure}[hbt]
\centering \includegraphics[angle=270,scale=0.75]{UV-radio.eps}
\caption{The ultra-violet versus the radio luminosity for the sample. Again, the filled symbols represent the 21-cm detections and the unfilled symbols the non-detections, with stars signifying quasars and circles galaxies (see Sect. \ref{type}).}
\label{UV-radio}
\end{figure}
There is a strong apparent correlation between the two luminosities ($5.27\sigma$), with a large dispersion, particularly in the UV luminosity. Any relationship will, however, be largely driven by both quantities having a strong redshift dependence.
However, we are not sensitive to sources in the bottom right corner of Fig. \ref{UV-radio}, where the optical flux is too low to obtain a redshift (all sources in this sample are identified as having associated absorption). Also, while all of the 21-cm detections are limited to $L_{\rm UV}\lapp10^{23}$ W Hz$^{-1}$, they do cover the same range of radio luminosities as the non-detections, thus suggesting that the radio luminosities are not so critical in the detection of 21-cm absorption. As a check, we apply Equ. \ref{MrT} to the radio luminosities and find no significant relationship between the detection rate and the radio continuum luminosity (Fig. \ref{null}, right). This is in contrast to the same statistic applied to the ultra-violet luminosities, and so it appears unlikely that the radio power is the dominant effect in raising the spin temperature of the gas above the detection thresholds. \subsection{Source structure and environment} \subsubsection{Host galaxy environments} \label{hge} In considering the presence or otherwise of absorbing neutral gas, one needs to consider the wider picture of the quasar host galaxy and its environment. A great deal is known about the host galaxies of luminous AGN at low redshift. Imaging studies, particularly with HST, of $z\lapp0.4$ quasars \citep{tdhr96,bkss97,dmk+03,fkd+04} indicate that early-type or spheroidal hosts are much more likely for high-luminosity quasars, and that almost all luminous radio-loud quasars and radio galaxies are to be found in elliptical galaxies. Studies at higher redshift, often with ground-based adaptive optics \citep{hcm+99,kdm+01,rhcl01,pir+06}, show a slightly more complex picture. The luminous quasars tend to follow the same trend, with elliptical/early-type hosts predominant, although it is more difficult to gain a confident picture of the host morphology at these redshifts. Gravitationally-lensed quasars can provide some extra resolution, from which \citet{pir+06} find that quasar hosts at $1<z<4.5$ span a range of morphologies from early-type to disky/late-type galaxies. Determining the expected H{\sc \,i}\ content of typical quasar host galaxies is difficult, as 21-cm absorption is often the only way to probe the neutral gas at such redshifts. For low redshifts, 21-cm emission studies are possible, with much being gleaned from blind surveys: The HIPASS survey had detection rates of 6\% for RC3 ellipticals and 13\% for S0s \citep{som02}\footnote{Although 30--50\% of these were confused, with more than one optical galaxy in the beam.} and from the ALFALFA survey, \citet{dgg+07} detected H{\sc \,i}\ in just 2--3\% of bright early-type galaxies in the Virgo cluster, with only one of these (M86, an S0) having $M_{\rm B}<-20$ ($L_{\rm B}\gapp2\times10^{22}$ W Hz$^{-1}$). \citet{gdg+09}, however, examined early-type galaxies in low-density environments and found a higher detection fraction of H{\sc \,i}\ emission for the more luminous objects, although neither of the two ellipticals with $M_{\rm B}<-20$ was detected. Targeted searches with interferometers \citep{oms+02,oms+07,mdo+06,nyl09} show different results. The detection fraction tends to be much larger than in the blind surveys, with a mix of ordered, relaxed disks and more complex interacting systems seen.
However, these have mostly been limited to fairly local galaxies, although a small number of more powerful (yet still nearby compared to our sample, $z<0.03$) radio galaxies have been observed to have H{\sc \,i}\ disks as well as tails, resulting from an interaction, which often coincide with H{\sc \,i}\ absorption. One such example is NGC612 \citep{emo+08}, an S0 galaxy with a large neutral disk of gas, likely originating in an interaction or merger. However, hosts of luminous AGN are very rare in the local Universe, making a clear understanding of their H{\sc \,i}\ properties elusive. The above can provide a qualitative description of why we see the cut-off in the absorption distribution at high luminosities, where the hosts tend to be the larger elliptical galaxies. For instance, the low-redshift luminous AGN imaged by the HST all have elliptical hosts \citep{bkss97,dmk+03,fkd+04} and $M_{\rm V}\ifmmode\stackrel{<}{_{\sim}}\else$\stackrel{<}{_{\sim}}$\fi-23$, which corresponds to a $V$-band luminosity of $L_{\rm V}\gapp10^{22}$ W Hz$^{-1}$\ -- similar to the brightest sources in our sample. Regarding the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sources of our sample, of the five imaged, two have resolved host galaxies, both of which are ellipticals (J1540+1447; \citealt{uso+00} and J2250+1419; \citealt{dmk+03})\footnote{Of the remaining three, J0201-1132 and J1829+4844 are only marginally resolved (\citealt{kf00,lms+99}, respectively) and in J0927+3902 no host is seen \citep{csg+98a}.}. To summarise, at lower luminosities the observed morphological mix could present a range of H{\sc \,i}\ column densities, whereas at higher luminosities the hosts are expected to be gas-poor ellipticals, resulting in a lack of 21-cm absorption. This could give a mix of 21-cm detections at low redshift and exclusive non-detections at high redshift (Fig. \ref{lum-z}). However, the bulk of the $z\geq0.1$ sample has not been observed with a resolution sufficient to determine the host galaxy morphologies, so that, while we do have a good understanding of how the 21-cm detections are distributed with $L_{\rm UV}$, how these are distributed with morphology for this sample is at present undetermined. \subsubsection{Radio source sizes} \label{rss} In addition to any possible evolution in the host galaxy environments, selection effects may arise from the large range of beam sizes and radio source sizes over what constitutes a heterogeneous sample (all radio sources at $z\geq0.1$ searched for 21-cm absorption). For example, a $10''$ diameter beam (a typical lower value for the GMRT at 90-cm, \citealt{cww+08}) covers a linear extent of 18 kpc at $z=0.1$, $\approx86$ kpc (the maximum) at $z=1.6$ and 63 kpc at $z=5.2$. That is, the area subtended by a given telescope beam can vary by a factor of $\approx20$ due to the redshift range alone, an effect which is compounded by the published searches being performed with numerous instruments/configurations, each with its own beam size. This can, in principle, affect the sensitivity to 21-cm absorption. For instance, absorption can be missed in nearby Seyfert galaxies by lower resolution observations, while being revealed on VLBA/MERLIN scales (e.g. \citealt{mwpg03}). Therefore, the radio source sizes and the bias these may have on the effective coverage of the background emission must be considered.
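These beam extents are simple to verify; a minimal sketch (in Python with \textsc{astropy}, assuming a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm m}=0.3$ -- the exact values will depend on the adopted parameters):
\begin{verbatim}
# Linear extent subtended by a 10-arcsec beam at various redshifts,
# assuming H0 = 70 km/s/Mpc and Omega_m = 0.3 (illustrative only).
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
for z in (0.1, 1.6, 5.2):
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).value / 60.0
    print(z, round(10.0 * kpc_per_arcsec, 1))  # ~18, ~84 and ~62 kpc
\end{verbatim}
The proper-distance scale peaks near $z\approx1.6$, which is why the subtended extent reaches its maximum there.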
This consideration, however, is fraught with uncertainties, as the heterogeneity of the sample, combined with the very different scales probed, will contribute to diluting out any strong trends in the linear extents of the sources. Further compounding this is the fact that the radio emission can exhibit very different structure at different frequencies and, although the sizes used are from the nearest available frequencies, VLBI and VLBA continuum observations are typically at $\gapp2$ GHz, significantly higher than that of the redshifted 21-cm line. The only way to fully address this would be through dedicated mapping of the 21-cm absorption (e.g. \citealt{lbs00,mwpg03}), although the available VLBA bands would only allow this at redshifts of $1.27 \leq z \leq 1.38$ and $3.15 \leq z \leq 3.55$, thus covering only six of the sample (all of which are non-detections in any case). \begin{figure} \centering \includegraphics[angle=270,scale=0.70]{tau-size.eps} \caption{The scaled velocity integrated optical depth ($1.823\times10^{18}\int \tau dv$) of the 21-cm absorption versus the linear extent of the radio source size (from the references listed in Tables \ref{dets} and \ref{non-dets}). The symbols are as per Fig. \ref{lum-z}, where the downwards arrows signify the upper limits to the non-detections. The five detections not classified as compact objects (CSO, CSS, GPS or HFP, defined in Sect. \ref{drco}) are circled.} \label{tau-size} \end{figure} Nevertheless, applying these source sizes, in Fig. \ref{tau-size} we see the same ``column density'' (actually the velocity integrated optical depth)--linear size anti-correlation reported for the GPS and CSS sources by \citet{pcv03,gss+06}: A Kendall's $\tau$-test on the detections gives a two-sided probability $P(\tau) = 1.82\times10^{-5}$ of the correlation arising by chance, which is significant at $4.28\sigma$ assuming Gaussian statistics. Including the 21-cm non-detections, through the {\sc asurv} survival analysis package \citep{ifn86}, decreases this to $P(\tau) = 0.017$ ($2.39\sigma$). If we consider just the GPS and CSS sources \citep{pcv03}\footnote{Although from Fig. \ref{tau-size} it appears that {\em all} of the detections follow the same $\int \tau dv$--linear size correlation as is seen for the compact objects only \citep{pcv03,gss+06}, this is based upon the small number of non-compact objects (five).}, we obtain $P(\tau) = 0.006$ ($2.75\sigma$), which nonetheless indicates that the inclusion of the non-detections significantly worsens the correlation. This suggests that the 21-cm non-detections may be subject to an additional effect. This could be due to smaller absorption cross-sections further reducing the covering factor, generally higher spin temperatures and/or lower neutral hydrogen column densities, all of which could be due to both orientation and UV luminosity effects. Although the decrease in the ``column density'' is most likely due to a decrease in the covering factor with increasing linear extent, there is also the possibility that the 21-cm absorption towards the larger sources is more susceptible to dilution by the extended 21-cm emission. However, in spite of the four orders of magnitude span in size, we find no significant difference in the detection rates between the smaller ($<1$ kpc, 16 out of 42) and the larger ($\geq1$ kpc, 11 out of 36) sources (Fig.~\ref{tau-size}), indicating that this is not a strong effect.
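These statistics can be sketched in the same manner as before; a minimal example (in Python; the optical depths and linear sizes below are made-up stand-ins for the measured values, since, unlike {\sc asurv}, a standard Kendall's $\tau$-test cannot incorporate the censored non-detections):
\begin{verbatim}
# Illustrative only: made-up optical depths and linear sizes stand
# in for the measured values (the real analysis also folds in the
# non-detections via the ASURV survival-analysis package).
import numpy as np
from math import sqrt
from scipy.stats import kendalltau

size   = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0])   # kpc
tau_dv = np.array([25.0, 14.0, 8.0, 3.0, 1.5, 0.4])   # km/s
tau, p_tau = kendalltau(size, tau_dv)  # tau = -1 here: anti-correlation

# Detection rates of the smaller (16/42) versus larger (11/36) sources:
p1, p2 = 16 / 42, 11 / 36
p = (16 + 11) / (42 + 36)              # pooled proportion
T = (p1 - p2) / sqrt(p * (1 - p) * (1 / 42 + 1 / 36))  # ~0.7 sigma
\end{verbatim}
The $T\approx0.7$ recovered for the 16/42 versus 11/36 split quantifies the absence of any significant difference in the detection rates noted above.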
\subsection{AGN spectral type and source classification} \label{type} If the 21-cm non-detections are due to orientation effects, it would suggest that all of the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sources are type-1 objects and therefore the gas need only be ionised along our line-of-sight, which is direct to the AGN. To examine this question, we make use of the spectral classifications obtained by \citet{cww+08}. These were compiled by exhaustively searching the literature for published emission-line fluxes or spectra. Sources with broad permitted lines were classified as type-1 and those with only narrow lines as type-2. Our classifications generally agree with those of \citet{vpt+03}, except for 2316+0405, which we assign as type-2 \citep{mil81}, although \citet{vpt+03} have this classified as a broad line radio galaxy. Using these classifications, it is apparent that all the high UV luminosity sources are indeed type-1 (Fig. \ref{lum-z}). However, they appear to be distinct from the population of lower UV luminosity type-1 objects, some of which have been detected in 21-cm absorption at, somewhat surprisingly, the same detection rate (50\%) as for the type-2 objects. That is, the overall bias towards type-1 non-detections is caused by the inclusion of the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ non-detections. Therefore, the ultra-violet luminosity of the object appears to have much more bearing on whether 21-cm absorption can be detected and, at moderate UV luminosities, the AGN type provides no indicator of whether a high column of cold neutral gas is likely to intercept our sight-line. Nevertheless, as has been noted in the literature (e.g. \citealt{pcv03,gs06a}), 21-cm absorption is more likely to be detected in a radio galaxy than in a quasar, even if there are even odds between the two AGN types. We designate each object in our sample as either a quasar or a galaxy, using the classifications from \citet{cww+08}. To obtain these, optical imaging and spectroscopy from the literature were used to distinguish sources dominated by the nuclear source (the ``quasars'') from those dominated by the extended stellar light of the host galaxy (the ``galaxies''), the latter being either weaker AGN or relatively obscured type-2 AGN. \begin{table} \begin{center} \caption{The mean ultra-violet luminosities [W Hz$^{-1}$] for the galaxy and quasar sub-samples. $\sigma$ gives the standard deviation. \label{uv-lum}} \begin{tabular}{l cc cc } \tableline\tableline & \multicolumn{2}{c}{GALAXIES} & \multicolumn{2}{c}{QUASARS}\\ & Detections & All & Detections & All \\ \tableline $\overline{\log_{10} L_{\rm UV}}$ & 20.3 & 20.4 & 21.6 & 22.7 \\ $\sigma$ of $\log_{10} L_{\rm UV}$ & 1.1 & 1.1 & 1.3 & 1.5 \\ Sample size & 25 & 48 & 6 & 36 \\ \tableline \end{tabular} \end{center} \end{table} From Table~\ref{uv-lum}, where we show the average UV luminosities for each class, we see that there is little difference in the luminosities between the detections and the whole sample for the galaxies (as well as a 50\% detection rate). However, the quasars in which 21-cm absorption has been detected are generally an order of magnitude brighter than the galaxies, with the whole quasar sample being yet another order of magnitude brighter (with only a 17\% detection rate).
Again, over and above an underlying 50\% due to orientation effects, this strongly suggests that it is the ultra-violet luminosity which is the key criterion in the detection of H{\sc \,i}\ in these objects and that any detection bias against the quasars is due to their generally higher ultra-violet output. \subsection{Detection rates in compact objects} \label{drco} Until the 725--1200 MHz survey of \citet{vpt+03}, there were few detections of associated 21-cm absorption at $z\geq0.1$. Of the \citet{vpt+03} detections, most (17 out of 19) are gigahertz peaked spectrum (GPS) and compact steep spectrum (CSS) sources, with the general consensus being that these exhibit higher 21-cm detection rates than other radio sources, due to their gas-rich host galaxies (\citealt{con96,ode98,gss+06} and references therein). CSSs and GPSs are believed to be intrinsically small ($\lapp10$ and $\lapp1$ kpc, respectively) and may be the young precursors of the typically large radio sources, themselves evolving from compact symmetric objects (CSOs, \citealt{ffd+95}). Although the high occurrence of broad forbidden lines may suggest that these are primarily type-1 objects, in which the compact appearance is due to the radio lobes being directed along our sight-line, the diminished sizes are not believed to be due to projection effects \citep{ffs+90}: Although there are radio lobes present, the jets may be embedded in a dusty interstellar medium, so that the AGN is believed to be subject to significant extinction (e.g. \citealt{btm+03}), resulting in strong radio emission as the trapped jets interact with the rich, dense cocoon in this early evolutionary stage. The possibility that CSSs and GPSs are compact type-1 objects would be consistent with the 21-cm absorption being, on average, blue-shifted with respect to the optical redshift (\citealt{vpt+03}, see Sect. \ref{adto}). Furthermore, the efficient coverage of the confined radio core could contribute to a high incidence of 21-cm absorption, which probably occurs in an outflow. However, of the 17 CSS/GPS detections of \citet{vpt+03} only six are classified as type-1 objects, cf. nine type-2s (and two unclassified, Table~\ref{dets}), which again may suggest that these objects have random orientations, with the possibility of absorption arising in either an outflow or the disk. If 21-cm absorption favours compact objects (and if this were a more important effect than the UV luminosity), we may expect a very low number of CSSs/GPSs in the exclusively non-detected $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sample. However, three of the eight low redshift $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ objects of \citet{vpt+03} are classified as CSS/GPS, and so these are not immune to the 21-cm absorption being undetected at high UV luminosity. Being from the Parkes Quarter-Jansky Flat-spectrum Sample, the high redshift sources are, by definition, flat spectrum (with $\alpha>-0.5$), although of these eight the radio SEDs for 1351--018 and 1535+004 are suggestive of GPSs or high frequency peaker galaxies (HFPs)\footnote{The turnover frequencies of $\gapp5$ GHz may suggest newly born radio sources \citep{dal03} and from the anti-correlation between turnover frequency and source size \citep{ffs+90,ob97}, we also expect these to be extremely compact.
For the GPS/HFP suspects of the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ targets of \citet{cww+08}, the Very Large Array's FIRST (Faint Images of the Radio Sky at Twenty Centimetres, \citealt{wbhg97}) survey gives deconvolved source sizes of $<0.99''\times0.39''$ for 1351--018 and $<1.18''\times0.76''$ for 1535+004 at an observed frequency of 1.4 GHz. At redshifts of $z=3.707$ and $3.497$, these sizes correspond to $<7\times3$ kpc and $<9\times6$ kpc, respectively, whereas at a turnover frequency of $\sim10$ GHz, we may expect a source size of $\lapp1$ kpc \citep{ob97}.}. For the remainder of the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sample, however, being flat spectrum sources \citep{dsm85,jws+02,vpt+03} may {\em also} suggest that many of these are oriented end-on with respect to us, consistent with their type-1 status (Sect.~\ref{type}). Although the vast majority of the detections of \citet{vpt+03} are CSS/GPS, 22 of the 38 non-detections are also classified as such (giving a CSS/GPS detection rate of 44\%). Furthermore, \citet{pcv03} and \citet{gs06a} both find a $\approx50\%$ detection rate in their CSS/GPS samples, as well as the 33\% rate from the (albeit small) HFP sample of \citet{omd06}. Summarising this in Fig. \ref{compact-hist} (top), \begin{figure} \centering \includegraphics[angle=270,scale=0.55]{compact-hist.eps} \caption{The incidence of detections (hatched histogram) and non-detections (unfilled histogram) for compact objects (CSO, CSS, GPS and HFP) compared to the others. Top -- the whole sample. Bottom -- those with $L_{\rm UV}\leq10^{23}$ W Hz$^{-1}$.} \label{compact-hist} \end{figure} we find the overall detection rate to be 47\%, cf. 30\% for the ``others'', not classified as CSO/CSS/GPS/HFP, and 20\% for those unclassified, thus indicating higher detection rates in compact objects. This distribution, however, becomes more uniform (51\% -- compact, 43\% -- others and 40\% -- unclassified) between the classes when considering the $L_{\rm UV}\leq10^{23}$ W Hz$^{-1}$\ sources only (Fig. \ref{compact-hist}, bottom). That is, the compact source detection rate {\em may not} be significantly higher than that of the others in the moderate UV luminosity sample and, again, it is the inclusion of $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sources which introduces a bias\footnote{Although a larger number of the non-compact objects have luminosities of $L_{\rm UV}\leq10^{23}$ W Hz$^{-1}$, note that there are several CSSs/GPSs close to the UV cut-off (Table \ref{non-dets}).}. \begin{table} \begin{center} \caption{The mean ultra-violet luminosities [W Hz$^{-1}$] and galaxy/quasar distribution for the sample based upon radio classification. \label{radio-class}} \begin{tabular}{l c c c } \tableline\tableline & Compact & Other & Unclassified \\ \tableline $\overline{\log_{10} L_{\rm UV}}$ & 20.9 & 21.7 & 22.4\\ $\sigma$ of $\log_{10} L_{\rm UV}$ & 1.4 & 1.7 & 2.0 \\ No. galaxies & 35 & 15 & 4 \\ No. quasars & 15 & 15 & 6 \\ Mean redshift $\pm1\sigma$ & $0.60\pm0.96$ & $0.67\pm0.78$ & $2.5\pm1.4$\\ \tableline \end{tabular} \end{center} \end{table} This possibility is supported by the average UV luminosities of the radio classes, which are appreciably lower for the compact objects (Table \ref{radio-class}), consistent with these being young sources in which the AGN activity (radio and ultra-violet) has yet to reach its full strength (Fig. \ref{UV-radio}).
Therefore, it is possible that compact objects exhibit these higher 21-cm detection rates due mainly to their generally low UV luminosities, with the line strength in these being dominated by the effect of the projection of the radio lobes on the covering factor (Sect. \ref{rss}). \section{The location of the absorbing gas} \subsection{H{\sc \,i}\ 21-cm line kinematics} \subsubsection{Absorption in the galactic disk} For the moderate UV luminous sample, which is not expected to be dominated by elliptical hosts (Sect. \ref{hge}), the 50\% detection rate for {\em both} type-1 and type-2 objects at $L_{\rm UV}\lapp10^{23}$ W Hz$^{-1}$\ strongly suggests that the absorption {\em does not} occur in the obscuring torus associated with the AGN. The next logical candidate is therefore the large-scale galactic disk, where most of the gas would be expected to reside, as is seen in absorption studies of low redshift starburst and Seyfert galaxies. In particular, \citet{gbo+99} find that the 21-cm line strength is correlated with the galaxy inclination in $z\leq0.04$ Seyferts. Furthermore, although they suggest that the incidence of 21-cm absorption is broadly consistent with unified schemes of AGN, \citet{mot+01} acknowledge that it is likely that some absorption is also occurring beyond the sub-pc scale of the circumnuclear torus. Although we may expect the gas to be nearly coplanar on all scales (e.g. \citealt{ckb08}), if the large-scale disk is the source of the absorption, the fact that both types of AGN have the same odds of exhibiting 21-cm absorption would suggest that the orientation of the large-scale disk with respect to the small-scale obscuring torus is random. At redshifts of $z\geq0.1$, the disk orientations cannot generally be determined, although from lower redshift Seyfert studies we can show the inclination of the galaxy disk for various Seyfert types (Fig. \ref{inc-z}). \begin{figure*} \centering \includegraphics[angle=270,scale=0.70]{pa-inc-z.eps} \caption{The difference in the position angles of the radio and disk axes (top) and the inclinations of the galaxies (bottom) hosting low redshift Seyferts. The triangles/hatched histogram represent the type-1 objects and the squares/unfilled histogram the type-2, with the colours of the symbols indicating the source reference \citep{kee80,sksa97,nw99,cur99p}.} \label{inc-z} \end{figure*} From the figure we see the full range of possible offsets between the position angles of the radio jets and the position angle of the host galaxy (top panel). Were the obscuring torus and main galaxy disk coplanar, we would expect the points on the ordinate to be concentrated close to 0$^{\circ}$\ for both AGN types, but, as noted by \citet{sksa97,nw99}, the distribution tends to be quite random. Furthermore, if the torus is coplanar with the host disk, we would expect type-1 Seyferts to occupy galaxies of low inclination and type-2s to occupy those of high inclination. However, from the vertical histogram (Fig. \ref{inc-z}, bottom panel) it is apparent that neither Seyfert type has a preferred disk inclination, with both types exhibiting a mean inclination of 48$^{\circ}$\ ($\sigma\approx16$$^{\circ}$, from 83 exclusive\footnote{Sources common to more than one sample have only been counted once.} type-1 objects and 58 exclusive type-2s). Interestingly, although the large-scale disk exhibits a random orientation with respect to the circumnuclear torus, the sub-kpc molecular ring is generally aligned (the hollow symbols in Fig.
\ref{inc-z}): Although the numbers are much smaller, the mean inclination of the molecular ring in the six type-1 objects is $<28$$^{\circ}$, cf. $>49$$^{\circ}$\ for the ten type-2 objects ($\sigma\approx19$$^{\circ}$, with the limits being due to limits in the inclination estimates, \citealt{cur99p}). A Kolmogorov-Smirnov test gives a $<6$\% probability that the inclinations of the type-1 and type-2 molecular rings are drawn from the same population, in contrast to 38\% for the galactic disk inclinations. This alignment between the molecular ring and torus may be expected, despite the random larger scale orientations, as these rings generally only reach $\sim1/100$th the extent of the atomic gas (\citealt{ms87,plr+91,babr92,is92,tgb+94,kkto96,cjrb98,ivha98})\footnote{Although the molecular gas beyond the ring can be seen to extend much farther (e.g. \citealt{ys91,ckb08}).} and may be funneling the material to the smaller scale torus (see \citealt{cur99} and references therein)\footnote{\label{foot7}http://nedwww.ipac.caltech.edu/level5/Curran/frames.html}. \subsubsection{Absorption due to outflows} \label{adto} Aside from the disk, as noted above, absorption may also be due to in-falling gas or outflows. That is, gas located along the polar axes would, in type-1 objects, lie between us and the AGN, rendering it detectable in absorption. Since this gas {\em may} exhibit a wider profile (FWHM) than gas tracing the rotation of the disk\,\footnote{For example, presuming the systemic velocities are sufficiently accurate to determine these large blue-shifted offsets, outflows with widths of $\gapp1000$ \hbox{${\rm km\ s}^{-1}$}\ are seen in some low redshift radio galaxies \citep{mto05,mhs+07}, in addition to the broad components observed by \citet{moe+03,mot+05}, in which the blue-shifts from the peak absorption component are strongly suggestive of outflows.}, while having a larger offset from the systemic velocity of the galaxy ($\Delta v$), we may expect type-1 objects to be grouped separately from type-2s in a plot of FWHM versus $\Delta v$, at least in terms of the abscissa. \begin{figure*} \centering \includegraphics[angle=270,scale=0.70]{FWHM-detlaV-long.eps} \caption{The profile width versus the offset from the systemic velocity for the 21-cm detections at $z\geq0.1$. The triangles/hatched histogram represent the type-1 AGN and the squares/unfilled histogram the type-2. {\bf +} signifies that there is no AGN classification available. We have plotted each individually resolved absorption component and flagged the primary (filled symbols) and secondary (unfilled) absorption lines, where the primary is the line with the largest optical depth and the secondaries are the remaining shallower lines (see \citealt{vpt+03}). For the sake of clarity, the plot has been truncated to $|\Delta v|\leq500$ \hbox{${\rm km\ s}^{-1}$}, although there are five cases with $\Delta v<-500$ \hbox{${\rm km\ s}^{-1}$}: 1413+135 at $-705$ \hbox{${\rm km\ s}^{-1}$}, J1815+6127 at $-1258$ \hbox{${\rm km\ s}^{-1}$}, J1821+3942 at $-869$ \hbox{${\rm km\ s}^{-1}$}\ (primary lines), as well as the $-742$ \hbox{${\rm km\ s}^{-1}$}\ secondary line in the latter object. All of the above lines arise in type-1 objects, the one type-2 case being J1944+5448 at $-1420$ \hbox{${\rm km\ s}^{-1}$}\ (primary line).
On the redshifted end there are just three cases at $\Delta v > +500$ \hbox{${\rm km\ s}^{-1}$}\ -- the two unclassified AGN J0431+2037 at $+636$ \hbox{${\rm km\ s}^{-1}$}\ and 0902+343 at $+970$ \hbox{${\rm km\ s}^{-1}$}, as well as the type-1 object 1549--79 at $+665$ \hbox{${\rm km\ s}^{-1}$}.} \label{FWHM-detlaV} \end{figure*} Showing this in Fig. \ref{FWHM-detlaV}, we see no clear distinction between the AGN types along either axis. For the FWHM, the type-1 absorbers do exhibit slightly wider profiles, which would suggest that the outflows are subject to large velocity differentials, as has been seen in several low redshift cases (e.g. \citealt{moe+03,mot+05,mto05,mhs+07}). If absorption were occurring in the circumnuclear torus, we may also expect very wide profiles for the type-2 AGN, possibly much wider than those of the galactic disk itself\,\footnote{For example, in the type-2 Seyfert NGC 4258, the sub-parsec disk is found to have a rotation speed of $900$ \hbox{${\rm km\ s}^{-1}$}\ \citep{hbp94} and in the Circinus galaxy, also a type-2 Seyfert, the \WAT\ masers also trace a disk which rotates much more rapidly than the galactic disk (\citealt{gbe+03}, cf. \citealt{ckb08}).}. As it is, we can make no distinction between the FWHM of the type-1 and type-2 objects and, unlike in emission, the absorption profile widths will ultimately be subject to the covering factor and the size of the continuum source, making any distinction between disk and outflow absorption difficult. Again, for the velocity offsets there is no real difference between the two AGN types and both exhibit a slight bias towards blue-shifted absorption\footnote{This is most obvious for the type-2s in Fig. \ref{FWHM-detlaV}, but as stated in the caption, there are also four (three primary and one secondary) type-1 absorbers with $\Delta v<-500$ \hbox{${\rm km\ s}^{-1}$}.}. Although uncertainties in $\Delta v$ of $\sim10^2$ \hbox{${\rm km\ s}^{-1}$}\ due to the optical emission lines are to be expected, many studies show a tendency for the absorption to be blue-shifted with respect to the systemic velocity \citep{vpt+03,moe+03,mot+05,mto05,mhs+07}. If these offsets were artifacts of poorly constrained optical redshifts, we would expect similar numbers of red-shifted components and, as stated previously, in low redshift radio galaxies wide blue-shifted tails are seen in the profiles, where the main feature is close to the systemic velocity (\citealt{moe+03,mot+05}). Since many of the 21-cm detections at $z\geq0.1$ are from \citet{vpt+03} [Table~\ref{dets}], it is not surprising that we also see this skew towards blue-shifted absorption (Fig. \ref{FWHM-detlaV}). Therefore, in Table~\ref{stats}, where we show the average offsets for the AGN classes, we also show the contribution from the remainder of the literature. \begin{table*} \begin{center} \caption{The means ($\overline{\Delta v}$) and standard deviations ($\sigma$) for the absorption offset from the systemic velocity [\hbox{${\rm km\ s}^{-1}$}].
\label{stats}} \begin{tabular}{l ccc ccc ccc ccc ccc} \tableline\tableline & \multicolumn{3}{c}{TYPE-1} & \multicolumn{3}{c}{TYPE-2} & \multicolumn{3}{c}{WHOLE SAMPLE} & \multicolumn{3}{c}{\citet{vpt+03}} & \multicolumn{3}{c}{OTHERS} \\ & $\overline{\Delta v}$ & $\sigma$ & $n$& $\overline{\Delta v}$ & $\sigma$ & $n$& $\overline{\Delta v}$ & $\sigma$ & $n$& $\overline{\Delta v}$ & $\sigma$ & $n$ & $\overline{\Delta v}$ & $\sigma$ & $n$\\ \tableline Primary & -240 & 540 & 10 & -150 & 380 & 16 & -130 & 470 & 32 & -220 & 470 & 19 & 10 & 420 & 13 \\ Secondary & -120 & 300 & 8 & -110 & 150 & 10 & -70 & 280 & 19 & -60 & 380 & 9 & -90 & 150 & 10 \\ Both & -190 & 460 & 18 & -140 & 310 & 26 & -110 & 410 & 51 & -170 & 450 & 28 & -30 & 330 & 23\\ \tableline \end{tabular} \end{center} \end{table*} As seen from the table, although the standard deviations are large, the additional results confirm that on average the absorption is blue-shifted, which could indicate outflows or some other non-symmetric mechanism as the origin. The statistics also confirm the larger spread in the velocity offsets of the type-1 AGN, which is not wholly evident from Fig.~\ref{FWHM-detlaV}, due to the three (primary) type-1 absorbers offset at $\Delta v < -500$~\hbox{${\rm km\ s}^{-1}$}. If it were just these three at the blue-shift extremes, we could at least state that some type-1s show more of a bias for absorption in rapidly outflowing gas, although there is the type-2 case (J1944+5448) with $\Delta v = -1420$~\hbox{${\rm km\ s}^{-1}$}. This, however, could be the consequence of a poorly constrained optical redshift, or a rapid outflow of gas located well clear of the jet axis, as well as the possibility that this is unassociated gas. \subsection{Extinction effects} So far, at least for the intermediate UV luminosity sample ($L_{\rm UV}\lapp10^{23}$ W Hz$^{-1}$), we have found no difference in the incidence of 21-cm absorption between the two AGN types, which are also indistinguishable through the absorption line profiles of the detections. In order to determine whether there is a difference in the extinction of the quasar light, in Fig. \ref{colourcolour} we show the $V-R$ and $R-K$ colours (where available) for the published $z\geq0.1$ searches as classified by AGN type, where, apart from four \begin{figure}[hbt] \centering \includegraphics[angle=0,scale=0.75]{colourcolour.ps} \caption{The $R-K$ colour versus the $V-R$ colour for the sample. As per Fig. \ref{lum-z}, the filled symbols represent the 21-cm detections and the unfilled symbols the non-detections, with the triangles representing type-1 objects and squares type-2s ({\bf +} and {\sf x} designating a non-specific AGN type for a detection and non-detection, respectively).} \label{colourcolour} \end{figure} type-1 outliers, we see no discernible difference between the two types\footnote{The outlier with the high extinction in the direction of the reddening vector is the extremely red quasar J0414$+$0534 (see Figs.~\ref{mag-comps}~and~\ref{N-colour}).}. This suggests that the circumnuclear torus does not introduce a measurable degree of optical extinction, although contamination from the host galaxy starlight could lessen any apparent reddening. \begin{figure*} \centering \includegraphics[angle=0,scale=0.40]{mag-comps.eps} \caption{Comparison of the $V-K$ colours with the $B-K$ and $R-K$ colours for the $z_{\rm em} < 3$ sample (to avoid contamination of the $B$ and $V$ bands by Lyman-\AL\ absorption). 
From the fits shown we obtain $V-K = (0.90\pm0.01)\times(B-K) - (0.62\pm0.29)$ (significant at $5.86\sigma$) and $V-K = (1.26\pm0.02)\times(R-K) - (0.37\pm0.29)$ (significant at $5.25\sigma$), which are used to convert $B-K$ and $R-K$ to $V-K$, where $V$ is unavailable. The symbols are as per Fig.~\ref{lum-z}.} \label{mag-comps} \end{figure*} \begin{figure*} \centering \includegraphics[angle=270,scale=0.70]{N-Vcorrected.eps} \caption{The scaled velocity integrated optical depth of the H{\sc \,i}\ line ($1.823\times10^{18}\int \tau dv$) versus optical--near-infrared colour for the sample. The symbols are as per Fig. \ref{lum-z}, with the hatched histogram representing the 21-cm detections and the unfilled histogram the non-detections. Where the $V$ magnitudes are not available (Tables 6 \& 7 of \citealt{cww+08}), we have estimated these according to the fits derived in Fig. \ref{mag-comps}. The statistics are summarised in Table \ref{red}. The outlier at $V-K = 10.26$ is due to J0414$+$0534, which has an intervening gravitational lens, which may be responsible for some of the extreme reddening (see \citealt{cdbw07} and references therein).} \label{N-colour} \end{figure*} \begin{table} \begin{center} \caption{The statistics from Fig. \ref{N-colour}. \label{red}} \begin{tabular}{l c rc cc c } \tableline\tableline Sample & $n$& \multicolumn{2}{c}{$\log_{10}(N_{\rm HI}\,f/T_s)$} & \multicolumn{2}{c}{$V-K$} & $S(\tau)$ \\ & & Mean & $\sigma$ & Mean & $\sigma$ & \\ \tableline Whole & 58 & $<18.3$ & 0.5 & 3.9 & 1.6 & $3.63\sigma$ \\ Detections & 26 & $18.5$ & 0.6 & 4.6 & 1.6 & $2.03\sigma$ \\ Type-1 detections & 7 & 18.5 & 0.8 & 4.8 & 2.6 & $0.75\sigma$ \\ ~~~~~non-detections & 19 & $<18.0$ & 0.4 & 2.7 & 1.3 & --\\ ~~~~~~~exc. UV lum. & 6 & $<17.9$ & 0.3 & 3.6 & 0.7 & -- \\ Type-2 detections & 15 & 18.6 & 0.6 & 4.4 & 1.0 & $1.19\sigma$ \\ ~~~~~non-detections & 8 & $<18.2$ & 0.5 & 4.3 & 0.7 & --\\ \tableline \end{tabular} \end{center} \end{table} In order to further examine the reddening, after correcting for the unavailable $V$ magnitudes (see Fig. \ref{mag-comps}), in Fig. \ref{N-colour} we show the 21-cm line strengths and limits against the optical--near-infrared colour of the source. From the statistics (Table \ref{red}), aside from the incidence of the mix of detections and non-detections (as shown by $n$)\footnote{The numbers give the impression that 21-cm absorption is more likely to arise in type-2 objects (cf. Fig. \ref{lum-z}). However, these are subject to the available magnitudes and many of the $K$ magnitudes are unavailable for the non-detections (Table \ref{non-dets}).}, the only significant difference between the AGN types is in the $V-K$ colours of the non-detections: Although the type-1 detections are slightly redder than the type-2 ($V-K\approx4.8$, cf. $4.4$), the type-1 non-detections could be considerably less red than those of the type-2s ($V-K = 2.6$, cf. $4.3$). Naturally, this will be skewed by the inclusion of the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sources (Sect.~\ref{lum}) and omitting these raises the value to $V-K = 3.6$ (the ``exc. UV lum.'' row), which, considering the $1\sigma$ spreads, is indistinguishable from the type-2 values. Note, finally, that a mean line strength of $\log_{10}(N_{\rm HI}\,f/T_s)\sim18.5$ is exhibited for the detections of both AGN types.
On this issue, over the whole sample we find a correlation between the 21-cm line strength and the $V-K$ colour (significant at $3.63\sigma$, Table \ref{red})\footnote{The upper limits in the 21-cm line strengths (final column) are incorporated via the {\sc asurv} package.}. This may suggest that the reddening is due to dust associated with the intervening neutral gas, rather than intrinsic to the AGN spectrum (\citealt{wfp+95,srkr96}; although see \citealt{fww00,wwf01}). Circumstantial evidence for this association was previously presented by \citet{cmr+98}, although we show, for the first time, a correlation between the line strength and colour. However, in light of our other findings, we know that the high UV luminous sources have low 21-cm line strengths, and these being located at the blue end of Fig.~\ref{N-colour}\footnote{All but two of the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sources have $V-K<3.5$.} must drive much of the correlation. Furthermore, the overall correlation is quite fragile, with the significance dropping quickly with decreasing sample size (Table \ref{red}). This could be a reflection of the heterogeneous nature of the sample\footnote{And is thus not apparent in Fig. \ref{colourcolour}.}, some of which will also be subject to contamination from starlight, although we have previously found that the molecular, rather than atomic, gas content appears to dominate the degree of reddening \citep{cwm+06}. \section{Conclusions and Interpretation} \subsection{Ultra-violet luminosities} In a previous paper \citep{cww+08} we found that the ultra-violet luminosity plays more of a r\^{o}le than the AGN type in the detection of 21-cm absorption in $z\geq0.1$ radio galaxies {\em and} quasars. Specifically, to date, 21-cm absorption has never been detected in a host galaxy when $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$. Although these high UV luminosities occur exclusively in type-1 objects, for the moderately UV luminous ($L_{\rm UV}\lapp10^{23}$ W Hz$^{-1}$) sample there is a 50\% probability of detection in {\em either} AGN type, with any apparent bias against type-1 objects being caused by the 17 high UV luminosity objects. Expanding upon these results, in this paper we show: \begin{enumerate} \item That the bias against 21-cm absorption in quasars, compared to radio galaxies, also appears to be due to UV luminosity effects: For the radio galaxies, the 21-cm detections and non-detections both arise in objects with a mean $\overline{L_{\rm UV}}\approx2\times10^{20}$ W Hz$^{-1}$, whereas the 21-cm detected quasars have $\overline{L_{\rm UV}}\approx4\times10^{21}$ W Hz$^{-1}$, with the non-detected quasars having $\overline{L_{\rm UV}}\approx5\times10^{22}$ W Hz$^{-1}$. \item Although there is this decrease in the 21-cm detection rate with increasing $L_{\rm UV}$, it is possible that the exclusive 21-cm non-detections at $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ are due to the fact that highly luminous sources are believed to trace the (neutral) gas-poor elliptical galaxies. However, only two of the 17 $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sources are {\em known} to be located in ellipticals (although all of them could be), whereas all are known to have high ultra-violet luminosities. Therefore, it is not clear whether the lack of 21-cm absorption is due to a low abundance of neutral gas in the host or excitation effects caused by the high luminosities, or indeed how these two scenarios may be related.
\item With the exclusion of the $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ sources, the skew towards the detection of 21-cm absorption in compact objects (CSO/CSS/GPS/HFP) becomes much less significant. Again, this indicates a bias introduced by the high UV luminosity sample and perhaps suggests that it is more meaningful to discuss the 21-cm absorption incidence in terms of the rest frame ultra-violet luminosities rather than by AGN type or radio SED classifications. \end{enumerate} \subsection{Detection rates at moderate UV luminosities} Regarding the $L_{\rm UV}\lapp10^{23}$ W Hz$^{-1}$\ sources, in addition to both AGN types having a 50\% detection rate, there is no evidence for the expected larger degree of reddening in the type-2 objects, which would be caused by the presence of a dusty obscuration. We do find, however, that the optical--near-infrared colour appears to be correlated with the 21-cm absorption line strength over the {\em whole} sample. These points would therefore appear to contradict the notion that the 21-cm optical depth in the hosts of high redshift galaxies and quasars is determined by the orientation of the central obscuring torus. Although, like the literature, we find a higher incidence of 21-cm absorption in galaxies than in quasars, unlike the literature, we believe this to be a consequence of their lower ultra-violet luminosities, rather than their AGN classification\footnote{As discussed in \citet{cww+08}, the exclusivity of type-1 objects at $L_{\rm UV}\gapp10^{23}$ W Hz$^{-1}$\ is not likely to be coincidental. They do, however, seem to be quite distinct from their lower UV luminosity counterparts, which exhibit the same 21-cm detection rate as the type-2 objects.}. Furthermore, the fact that the reddening is also independent of AGN type suggests that this is also unrelated to the torus, and the correlation between the reddening and the absorption line strength suggests that the bulk H{\sc \,i}\ and dust share a similar (non-nuclear) distribution. This could explain why only a fraction of H{\sc \,i}\ absorbing AGN are detected in \WAT-maser emission \citep{tph+02}: \WAT-masers are believed to arise close to the black hole in the central obscuration \citep{hbp94} and thus trace type-2 objects. Therefore, if H{\sc \,i}\ absorption also arose in the torus, one would expect a high detection rate of \WAT-masers in AGN detected in H{\sc \,i}\ absorption. However, \citet{tph+02} find H{\sc \,i}\ and \WAT\ common to only 8 out of 19 objects searched and this 42\% rate is very close to the overall H{\sc \,i}\ detection rate in AGN (see \citealt{cww+08}). This therefore suggests that the lines-of-sight through the masing disk and the H{\sc \,i}\ absorbing clouds are randomly oriented with respect to one another. In the moderate UV luminous sample, not dominated by elliptical hosts, we therefore argue that the absorption may be occurring in the main galactic disk, which must be randomly oriented with respect to the torus. In attempting to verify this: \begin{enumerate} \item We find no real difference in the full-width half maxima of the 21-cm absorption profiles between the two AGN types. This may suggest that these are subject to geometric effects (covering factors and radio source sizes) and thus cannot give the full kinematical picture, although, again, absorption due to a randomly oriented disk could account for this.
\item We also find no discernible difference between the two AGN types in the offset between the centroid of the absorption and the systemic velocity of the host. Through a sample nearly double in size, however, we can confirm the findings of \citet{vpt+03}, that on average the offsets are blue-shifted. That is, there {\em may be} evidence for outflowing gas, although, again, we see no distinction between the two AGN types. \end{enumerate} If the galactic disk were aligned with the obscuring torus, the 50\% detection rate for {\em both} AGN types suggests that absorption must occur in both galactic disks {\em and} outflows: The outflowing gas is expected to be directed along the radio jets (e.g. \citealt{bbr84,sch88,cbg+96,cbov98}), which are coincident with the axis of the torus, thus giving rise to the absorption in the type-1 objects. The fact that we cannot discriminate between the disk and outflow absorption would suggest rapid, wide outflows of cold neutral gas, perhaps enveloping the wide ionisation cones observed in low redshift AGN (see table 1.2 of \citealt{cur99})$^{\ref{foot7}}$. A 90\deg\ wide outflow of cool neutral gas expanding at $\approx200$ \hbox{${\rm km\ s}^{-1}$}, in which the molecular gas mass rivals that in the disk ($M_{\rm H_2} \sim10^{9}$M$_\odot$), is known in the Circinus galaxy \citep{crjb98}, a nearby type-2 Seyfert (see also \citealt{moe+03,mot+05} for further examples). However, it would therefore remain a mystery as to why only 50\% of each AGN type (of $L_{\rm UV}\lapp10^{23}$ W Hz$^{-1}$) are detected in 21-cm absorption, although the bulk absorption occurring in the galactic disk, which is randomly oriented with respect to the obscuring torus, could account for this.\\ This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has also made use of NASA's Astrophysics Data System Bibliographic Service and {\sc asurv} Rev 1.2 \citep{lif92a}, which implements the methods presented in \citet{ifn86}.
\section{Introduction} \label{sec:intro} Although helioseismology has long proven to be an invaluable tool for calibrating models of the solar interior, general asteroseismological analysis was previously limited to a few extremely bright or evolved stars \citep{chaplin2013}. With the wide-field, high-precision \textit{Kepler} and \textit{K2} missions, astronomers are now able to perform asteroseismology on tens of thousands of stars \citep{chaplin2011,Yu2018} across a broad range of temperatures and evolutionary states. Even in its final months, \textit{K2} has continued to provide insight into the pulsations of its target stars. Despite the high Galactic latitude of the original \textit{Kepler} field, 85 candidate pulsating B-type stars have been identified there \citep{Balona2011, mcnamara2012}. They exhibit a range of nonradial pulsations (NRPs) with periods consistent with $\beta$ Cephei variables, slowly pulsating B stars (SPB), and hybrids between those two classes. The precision light curves available with \textit{Kepler} are revealing many high-frequency, low-amplitude modes, with excellent frequency resolution, that are not detectable from the ground. Asteroseismology of B stars with NRPs is currently being used to improve stellar structure and evolutionary models for hot stars (e.g., \citealt{saesen2010}). Their various pulsation frequencies probe different layers of their interiors. Doing so, however, requires accurate boundary conditions at the stellar surface (e.g., \citealt{huber2012}). Spectroscopy allows accurate measurements of effective temperature, $T_{\rm eff}$, and surface gravity, $\log g$, that are essential in constraining stellar radii, ages, and evolutionary spin-down rates. Knowledge of the projected rotational velocity, $V \sin i$, is key for studying the angular momentum of NRPs. One hurdle for the asteroseismic analysis of B-type pulsators in the \textit{Kepler} field is the lack of accurate physical parameters for these stars. The \textit{Kepler} Input Catalog (KIC) uses the SDSS $g - r$ color as a temperature indicator, but the Rayleigh-Jeans slope of the hot star spectral energy distributions means that the $g - r$ color is largely insensitive to temperature for B-type stars. The KIC photometric $\log g$ measurements are likewise poor since the $g-r$ index does not sample the Balmer jump, which is strongly dependent on atmospheric pressure and thus $\log g$. \cite{Balona2011} used spectroscopic line profile fitting to measure $T_{\rm eff}$ of 30 B stars in the \textit{Kepler} field and found substantial differences from the predicted $T_{\rm eff}$ of the same stars in the KIC \citep{brown2011}. \cite{pinsonneault2012} published revised temperature scales for the KIC, but only for stars with $4,000 {\; \rm K} \le T_{\rm eff} \le 7,000$ K, which is substantially cooler than the stars considered in this work. We present here measurements of $T_{\rm eff}$, $\log g$, and $V \sin i$ of 25 candidate $\beta$ Cephei, SPB, and hybrid pulsating B stars in the \textit{Kepler} field with $8 \le V \le 16$. Section 2 details our observations and data reduction of the spectra. In Section 3, we describe our measurements of $T_{\rm eff}$, $\log g$, and $V \sin i$ of these stars using the Tlusty BSTAR2006 grid and Kurucz ATLAS9 model atmospheres. Comparing $T_{\rm eff}$ and $\log g$ to the evolutionary tracks of \cite{ekstrom2012}, we also measure the mass, radius, and age.
Section 4 compares our results with the KIC and the \textit{Gaia} Data Release 2 (DR2) as well as other published works. Calculated distances from \cite{Jones2018} using the \textit{Gaia} parallaxes are also included in order to estimate extinctions ($A_V$) for these stars. \section{Observations} \label{sec:observations} We observed each target using the KPNO 4m Mayall telescope with the RC spectrograph from 2014 May 9--13. We used the grating BL 380 in second order, a $\rm CaSO_4$ order sorting filter, a 1.5 arcsec slit, and the T2KA CCD to achieve resolving power $R = \lambda / \Delta \lambda \approx 7,200$. With a central wavelength of 4340 \AA, this setup allowed us to observe the range 4,060--4,620 \AA, covering several useful helium and hydrogen lines. We reduced the raw spectra with the \textsc{doslit} package of IRAF. All spectra were wavelength calibrated using an FeAr arc lamp. \section{Spectral Modeling} \label{sec:modeling} Two grids of synthetic spectra were used in our modeling process to measure $T_{\rm eff}$, $\log g$, and $V \sin i$. First, we used a grid of line blanketed, plane-parallel, local thermodynamic equilibrium (LTE) models generated using the ATLAS9 code \citep{kurucz1994} for stars with $T_{\rm eff} < 15,000$ K. The non-LTE (NLTE) Tlusty BSTAR2006 \citep{lanz2007} model spectra were used for stars with $T_{\rm eff} > 15,000$ K. We adopted solar-metallicity grids (Z/Z$_\odot$ = 1) and a microturbulent velocity of $V_t = 2$ km s$^{-1}$. Before fitting, we estimated $T_{\rm eff}$ and $\log g$ of the stars based on the strength and shapes of the Balmer and helium lines. We measured the projected rotational velocity ($V \sin i$) by using custom IDL codes to compare the observed profiles of \ion{He}{2} $\lambda$4026, \ion{He}{1} $\lambda\lambda$4387, 4471, and \ion{Mg}{2} $\lambda$4481 to limb darkened, rotationally broadened, and instrumentally broadened model profiles using steps of 10 km s$^{-1}$. For each step, we computed $\Sigma$(O-C)$^2$, the sum of the squares of the residuals, and took the minimum of a parabolic fit as the value of $V \sin i$. The error in $V \sin i$ was determined by allowing a 5\% tolerance in $\Sigma$(O-C)$^2$. Table \ref{tab:vsini} lists the measurements of $V \sin i$ for all of the helium lines as well as their weighted averages. We then modeled the H$\gamma$ lines for $T_{\rm eff}$ and $\log g$ using models broadened according to our measured $V \sin i$ at each point of our generated ATLAS9 or BSTAR2006 grid. Once we found the closest match within the grid, we used linear interpolation between the grid points to find the best fit for $T_{\rm eff}$ and $\log g$. The errors in $T_{\rm eff}$ and $\log g$ were determined by allowing a 5\% tolerance in $\Sigma$(O-C)$^2$. Our $T_{\rm eff}$ and $\log g$ measurements are recorded in Table \ref{tab:mm}. \section{Discussion} \label{sec:discussion} As expected, we find large discrepancies between the photometrically derived $T_{\rm eff}$ and $\log g$ and our measurements. In columns 2 and 3 of Table \ref{tab:Param_comp}, we show the derived $T_{\rm eff}$ and $\log g$ from the KIC for our observed stars \citep{brown2011}. We include in column 4 of Table \ref{tab:Param_comp} the $T_{\rm eff}$ from the recently released DR2 \citep{Andrae2018}. Columns 7 and 8 of this table give our measurements using spectroscopic fitting as well as the uncertainties for these values. We also include in columns 5 and 6 of Table \ref{tab:Param_comp} some revised measurements from \cite{Balona2011} and \cite{Papics2017}.
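As a concrete illustration of the $V \sin i$ step of Section \ref{sec:modeling}, the parabolic minimisation reduces to a few lines of code. The following is a minimal sketch (in Python rather than our IDL codes, with made-up $\Sigma$(O-C)$^2$ values standing in for a real line-profile comparison):
\begin{verbatim}
import numpy as np

# Made-up Sigma(O-C)^2 values from comparing one helium line with
# models broadened in 10 km/s steps (illustrative numbers only).
grid = np.arange(80.0, 180.0, 10.0)     # trial V sin i values [km/s]
chi2 = np.array([9.8, 7.1, 5.2, 4.1, 3.8,
                 4.3, 5.6, 7.7, 10.6, 14.3])

a, b, c = np.polyfit(grid, chi2, 2)     # chi2 ~ a*v**2 + b*v + c
vsini = -b / (2.0 * a)                  # vertex = adopted V sin i
chi2_min = c - b**2 / (4.0 * a)
err = np.sqrt(0.05 * chi2_min / a)      # 5% tolerance half-width
print(vsini, err)
\end{verbatim}
The tolerance half-width follows from evaluating $\chi^2(v) = \chi^2_{\rm min} + a\,(v - V\sin i)^2$ at $1.05\,\chi^2_{\rm min}$.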
\cite{Balona2011} used metal-line blanketed LTE models to model their stars following the methods described by \cite{Ostensen2010}. The biggest reason for the discrepancy between our results and those of \cite{Balona2011} is that they assume $V \sin i \sim$ 0 km s$^{\rm -1}$ for all of their measurements. As a result, they significantly overestimate $T_{\rm eff}$ and $\log g$ for stars with large $V \sin i$, since rotation leads to wider, shallower hydrogen lines, which peak in strength around 10,000 K. \cite{Papics2017} used the BSTAR2006 synthetic spectra \citep{lanz2007} to measure the fundamental parameters of KIC 3459297. They also find $V \sin i$ = 109 $\pm$ 14 km s$^{\rm -1}$ for this star, which agrees with our measurements. Figure \ref{fig:BvH} compares our results for $T_{\rm eff}$ and $\log g$ with those from \cite{Balona2011} and \cite{Papics2017}. To emphasize the dependency on $V \sin i$, the sizes of the symbols are proportional to our measured value of $V \sin i$. Using our measured $T_{\rm eff}$ and $\log g$, we can compare model spectral energy distributions (SEDs) to photometric data to calculate the radii of our stars using \begin{equation} F_\nu / \mathfrak{F}_\nu = ( R_\star / r )^2 \end{equation} where $F_\nu$ is the apparent monochromatic flux, $\mathfrak{F}_\nu$ is the absolute flux at the surface of the star, and $r$ is the distance to the star. $F_\nu$ was calculated by converting the J, H, and K band magnitudes from the Two Micron All-Sky Survey (2MASS; \citealt{Skrutskie2006}) to fluxes using the zero-points from \cite{Cohen2003}. $\mathfrak{F}_\nu$ was determined using model SEDs with our measured $T_{\rm eff}$ and $\log g$. We assumed no interstellar extinction ($A_V$) during this process, as these stars are above the Galactic plane, where we would expect low $A_V$ values, and extinction would have a negligible effect in the J, H, and K bands. We used the BSTAR2006 models \citep{lanz2007} for stars with $T_{\rm eff} > 15,000$ K and ATLAS models \citep{Castelli2004} for stars with $T_{\rm eff} < 15,000$ K. The distances were calculated by \cite{Jones2018} by converting the parallaxes measured in DR2 using Bayesian statistics. Our error bars were calculated by propagating the errors from $r$, $T_{\rm eff}$, and $\log g$. We then used our measured $\log g$ and $R_\star$ to calculate the masses ($M_\star$) of our stars. KIC 11293898 likely has an underestimated mass, for reasons we discuss later in this work. Using the non-rotating evolutionary tracks of \cite{ekstrom2012}, we calculated an approximate age for the stars by interpolating between tracks using $T_{\rm eff}$ and $R_\star$. In Table \ref{tab:mm}, we include our measured $R_\star$, $M_\star$ and age ($\tau_\star$), as well as the calculated bolometric luminosity ($L_{\rm bol}$) and the distances from \cite{Jones2018}. We compare our $T_{\rm eff}$ and $\log g$ to the non-rotating evolutionary tracks of \cite{ekstrom2012} in Figure \ref{fig:HR}. Our measured $M_\star$ and $R_\star$ are consistent with their positions along these evolutionary tracks for most stars. The apparent magnitudes (V), bolometric absolute magnitudes (M$_{bol}$), bolometric corrections (BC), absolute magnitudes ($M_V$), and calculated $A_V$ are all listed in Table \ref{tab:photo}. The $V$ values in column 6 are from the SIMBAD database with an assumed uncertainty of 0.1 mag, and the BC values in column 3 are interpolated from \citet{Flower1996} and \citet{Torres2010}.
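To make these photometric steps concrete, the following is a minimal numerical sketch (in Python; the flux, surface flux, distance, and magnitudes are illustrative values only, {\em not} measurements from this work) of the inversion of the flux-ratio relation above for $R_\star$, and of the distance-modulus estimate of $A_V$ given immediately below:
\begin{verbatim}
import numpy as np

PC_M   = 3.0857e16   # metres per parsec
RSUN_M = 6.957e8     # solar radius in metres

# Illustrative inputs only (not measurements from this work):
F_nu  = 2.5e-26      # apparent monochromatic flux [W m^-2 Hz^-1]
SF_nu = 6.0e-6       # model flux at the stellar surface, same units
r_pc  = 1500.0       # Bayesian distance from the DR2 parallax [pc]

# Invert F_nu / SF_nu = (R_star / r)^2 for the stellar radius:
R_star = r_pc * PC_M * np.sqrt(F_nu / SF_nu) / RSUN_M  # ~4.3 R_sun

# Distance-modulus estimate of the extinction:
V, M_V = 11.0, -0.9                     # illustrative magnitudes
A_V = V - M_V + 5.0 - 5.0 * np.log10(r_pc)             # ~1.0 mag
\end{verbatim}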
Using $r$, $V$, and our measurements, the extinctions are then calculated using \begin{equation} A_V=V - M_V + 5 - 5 \log r \end{equation} We find that $A_V$ is consistent with the 3D dust map provided by \cite{Green2018}. As mentioned above, our calculated mass for KIC 11293898 is extremely low compared to its measured effective temperature. Using the non-rotating evolutionary tracks from \cite{ekstrom2012}, we can estimate $M_\star$ and $R_\star$ based on $T_{\rm eff}$ and $\log g$. We find $R_\star$ = 5.16 $R_\odot$ and $M_\star$ = 5.98 $M_\odot$ for KIC 11293898. Using this $R_\star$, we find $r$ = 9752 pc, which is well over twice the value calculated by \cite{Jones2018}. Our goal with this publication is to improve the measurements of fundamental parameters for pulsating B-type stars in the \textit{Kepler} survey. Using model fitting with the Tlusty BSTAR2006 grid and Kurucz ATLAS9 model atmospheres, as well as the evolutionary tracks from \cite{ekstrom2012}, we measured $T_{\rm eff}$, $\log g$, $V \sin i$, $M_{\star}$, $\tau_{\star}$, $R_{\star}$, and $L_{\rm bol}$ for 25 pulsating B-type stars. We find that the $T_{\rm eff}$ and $\log g$ measurements from the KIC and DR2 are unreliable for these hot stars and that improved stellar parameters are required to continue asteroseismic analysis of these stars. \acknowledgments M.\ V.\ McSwain was supported by NSF grant No.\ AST-1109247. S.\ W.\ was supported by the National Science Foundation under REU site grant No.\ PHY-1359195. J.\ L.-B.\ was supported by a Sigma Xi Grant-in-Aid of Research. A.\ B.\ had support from Kutztown University. This work is also supported by an institutional grant from Lehigh University. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission directorate. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. \vspace{5mm} \facilities{Mayall (RC spectrograph)} \software{IRAF, IDL} \input{Hanes18.bbl} \input{KICVsini.tex} \input{KICPhysical_Parameters.v5.tex} \input{KIC_Param_comp.tex} \input{KICphoto.v5.tex} \newpage \begin{figure} \gridline{\fig{Balona_Temp_comp-cropped.pdf}{0.40\textwidth}{(a)} \fig{Balona_logg_comp-cropped.pdf}{0.40\textwidth}{(b)}} \caption{Comparison of our results with \citet{Balona2011} and \citet{Papics2017} for temperature (a) and surface gravity (b). The horizontal and vertical error bars in these figures are the calculated errors from this work and from \citet{Balona2011} and \citet{Papics2017}, respectively. The symbol size is proportional to our measured $V \sin i$ to highlight the discrepancies between our results and those of other works.
}
\label{fig:BvH}
\end{figure}

\begin{figure}
\plotone{KIC18_HR-cropped.pdf}
\caption{HR diagram of the observed stars using the evolutionary tracks from \cite{ekstrom2012}. The values associated with each track are in solar masses. All stars to the left of the black dotted line ($T_{\rm eff} > 15,000$ K) were modeled using the BSTAR2006 grid of model spectra. Those to the right ($T_{\rm eff} < 15,000$ K) were modeled using the ATLAS9 models.}
\label{fig:HR}
\end{figure}

\end{document}
\section{Introduction}

The nature of very low surface brightness galaxies with large effective radii has been intensely debated in past years. Such objects were discovered in the 1980s \citep{Binggeli,impey,Dalcanton}, but with new detailed observations they have recently been dubbed ``Ultra Diffuse Galaxies'' \citep[UDGs;][]{vd15}. The debate has focussed on the possible differences between UDGs and the general galaxy population with the same luminosity (i.e. dwarf galaxies\footnote{Historically, the term dwarf galaxy has referred to those galaxies that have a low total luminosity and a low central surface brightness in the $\mu_0$-magnitude plane \citep[see an extended discussion in][]{Binggeli1994}. This terminology has been independent of a galaxy's extension and does not include the compact dwarf ellipticals (like M32). In this sense, an alternative name for UDGs would be ``large dwarfs'', as already suggested in the 1980s \citep[][]{Sandage1984}.}) and, in particular, on the amount of dark matter these galaxies may possess. Are UDGs ``normal'' dwarf galaxies with relatively little star formation activity in their central regions? Or are UDGs a new type of galaxy with either surprisingly large amounts of dark matter for their stellar mass \citep[see e.g.][]{vd16} or, on the contrary, very little dark matter \citep[see e.g.][]{2018Natur.555..629V,2019MNRAS.486.1192T,Emsellem2019,oliver2020}? The vast majority of works point to UDGs having the properties of dwarf galaxies \citep[see e.g.][]{beasley2016,javier,venhola2017,ruizlara2018,pavel2019,fensch2019} but with flatter light distributions \citep{chamba,nacho2020}. This flatter light distribution could be caused by tidal interactions \citep{collins2013,Rong2019}, higher internal angular momentum \citep{amorisco2016,pavel2020}, or outflows \citep{cintio2017}.

However, some UDGs do not fit nicely into the category of ``normal'' dwarf galaxies, as their dark matter content has been suggested to be very high. In particular, one of the better-known examples of such an extreme UDG is Dragonfly 44 \citep[DF44;][]{vd16}. This iconic galaxy, associated with the Coma galaxy cluster, has been claimed to have a dark matter halo comparable with that measured for the Milky Way \citep{vd16}. The first study of DF44 \citep[][henceforth vD16]{vd16} measured a high stellar velocity dispersion $\sigma$=47$^{+8}_{-6}$ km s$^{-1}$ within the effective radius of the galaxy. By comparing to theoretical NFW profiles, vD16 suggested that DF44 harbours a dark matter halo as massive as 10$^{12}$ M$_{\odot}$. Considering the DF44 luminosity M$_V$=-16.2 mag and a stellar mass $M_{*}=3 \times 10^8 M_{\odot}$, the authors of that work claimed that DF44 resembles a `failed' Milky Way. This conclusion was further supported by the claim of an extensive number of globular clusters (GCs) in the vicinity of the galaxy \citep[][henceforth vD17]{vd17}. In the absence of a dynamical estimate of the dark matter halo mass based on the kinematics of the GCs around the galaxy, the GC number count (N$_{GC}$) can be used as a good proxy for the halo mass of regular galaxies \citep{Spitler2009,harris2013,Hudson2014} and UDGs \citep{beasley2016,beasley2016b,peng2016,harris2017,lim2018,amorisco2018,udgngc3,udgngc4}. Using HST data, vD17 measured N$_{GC}$=74$^{+18}_{-18}$ GCs around DF44, which implies a mass for the dark matter halo of M$_{halo}$=5$\times$10$^{11}$ M$_{\odot}$ using the GC system mass -- halo mass relations. This number is lower than the original claim by the same group, N$_{GC}$=94$^{+25}_{-20}$ (vD16), which was based on ground-based data.
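The scaling behind this proxy is simple enough to sketch numerically (our illustration, using the \citet{harris2017} normalisation and the \citet{jordan2007} mean GC mass adopted later in this paper; the exact vD17 value rests on their own calibration):
\begin{verbatim}
# GC-count halo-mass proxy: M_GCs / M_halo = 3.9e-5
# (Harris et al. 2017), mean GC mass <M_GC> = 2e5 Msun
# (Jordan et al. 2007), both adopted later in this paper.
ETA = 3.9e-5
M_GC_MEAN = 2.0e5   # Msun

def halo_mass(n_gc):
    return n_gc * M_GC_MEAN / ETA   # Msun

print(f"{halo_mass(74):.1e}")   # ~3.8e11 Msun for N_GC = 74
\end{verbatim}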
Later, \citet[][henceforth vD19]{vd19}, using spatially resolved stellar kinematics, reported a smaller velocity dispersion for DF44 ($\sigma$=33$^{+3}_{-3}$ km s$^{-1}$, vD19) compared to the previous estimate ($\sigma$=47$^{+8}_{-6}$ km s$^{-1}$, vD16). Using the new velocity dispersion, vD19 decreased by almost an order of magnitude the amount of dark matter inferred for the halo of DF44, from M$_{halo}$=10$^{12}$ M$_{\odot}$ and M/L(<r$_{1/2}$)=48$^{+21}_{-14}$ to M$_{halo}$=1.6$\times$10$^{11}$ M$_{\odot}$ and M/L(<r$_{1/2}$)=26$^{+7}_{-6}$. However, the large number of GCs ($\sim$75) remains in strong tension with the number expected from the stellar mass of the object (we would expect $\sim$20 GCs for the stellar mass of DF44 if this galaxy follows the stellar mass -- halo mass relation).

In this paper, we revisit the GC population of DF44 by exploring its spatial distribution, luminosity function, number count, and average colour, and we find a significantly lower number of GCs, making this galaxy appear more consistent with the general dwarf galaxy population. In this work, the distance to DF44 (100 Mpc), its absolute magnitude (M$_V$=-16.2 mag), and its velocity dispersion ($\sigma=33^{+3}_{-3}$ km s$^{-1}$) are taken from vD17 and vD19. All magnitudes and colours are expressed in the AB magnitude system (unless explicitly stated otherwise). Throughout this paper, we assume a cosmological model with $\Omega_{M}=0.3$, $\Omega_{\Lambda}=0.7$, and H$_{0}$=70 km s$^{-1}$ Mpc$^{-1}$.

\section{Data}

DF44 imaging data were retrieved from the Hubble Space Telescope archive (HST Proposal 14643, PI: van Dokkum). This is the same dataset used by vD17. The data comprise three orbits in \textsl{V$_{606}$}, with exposures ranging from 2200-2400s, and one orbit in \textsl{I$_{814}$} with a 2200s exposure time. The \textsl{V$_{606}$} images were median combined using \textit{SWarp} \citep{swarp}, and the total exposure time in this band is 7280s. The depths of the images for point sources (5$\sigma$) are \textsl{V$_{606}$}=28.4 mag and \textsl{I$_{814}$}=26.8 mag. The field of view of the WFC3 is 162\arcsec$\times$162\arcsec\ and the pixel size is 0.0396\arcsec. Instrumental zero-points (AB mag) were calculated using the \textsl{PHOTFLAM} and \textsl{PHOTPLAM} values as given in the data analysis handbook of the instrument \citep{wfc3}. The calculated zero-point values for \textsl{V$_{606}$} and \textsl{I$_{814}$} are 26.10 mag and 25.14 mag, respectively. Fig. \ref{figure-imagedf44} shows a colour composite image of DF44 combining the \textsl{V$_{606}$} and \textsl{I$_{814}$} filters.

\begin{figure*}
\centering
\includegraphics[trim=90 70 70 70, clip,width=0.9\linewidth]{DF44+main+combined.png}
\caption{Colour composite image of DF44 combining the \textsl{V$_{606}$} and \textsl{I$_{814}$} filters. The black and white background corresponds to the \textsl{V$_{606}$} filter. The surface brightness limit of the image is $\sim$28.5 mag/arcsec$^2$ (3$\sigma$; 3\arcsec$\times$3\arcsec).}
\label{figure-imagedf44}
\end{figure*}

\section{Analysis}
\subsection{Structural parameters of the DF44 galaxy}

In order to extract the structural properties of DF44, we used the code IMFIT \citep{2015ApJ...799..226E}. We assumed, as vD17 did, that the galaxy is well described by a S\'ersic model. The model was convolved with the point spread function (PSF) of the HST image.
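For reference, the component being fitted can be sketched in a few lines of Python (our illustration using astropy rather than the actual IMFIT run; the position-angle convention and the kpc-to-pixel conversion are our assumptions, and the parameter values are the best-fit numbers quoted in the next paragraph):
\begin{verbatim}
import numpy as np
from astropy.modeling.models import Sersic2D

# 1 arcsec ~ 0.485 kpc at 100 Mpc; pixel scale 0.0396"/pix.
r_eff_pix = 3.9 / 0.485 / 0.0396   # Re = 3.9 kpc in pixels

model = Sersic2D(amplitude=1.0, r_eff=r_eff_pix, n=0.72,
                 x_0=0.0, y_0=0.0,
                 ellip=1.0 - 0.66,               # q = 0.66
                 theta=np.deg2rad(90.0 - 26.4))  # PA convention assumed
y, x = np.mgrid[-300:300, -300:300]
image = model(x, y)
# In the actual fit this image is PSF-convolved (e.g. with
# astropy.convolution.convolve) before comparison to the data.
\end{verbatim}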
To conduct a proper fit, we masked the background galaxies and the foreground stars in the images. Fig. \ref{df44lightprofile} shows the observed light profile of the galaxy in the \textsl{V$_{606}$} band and the corresponding S\'ersic fit. The structural properties of the galaxy are R$_e$=3.9$\pm$0.7 kpc and S\'ersic index n=0.72$\pm$0.14. The axis ratio is q=0.66$\pm$0.01 and the position angle is PA=-26.4$\pm$0.7 (measured from North to East, counterclockwise).

The various published values of the effective radius, R$_e$, of DF44 have been estimated using different data and different assumptions about the shape of its surface brightness profile. For instance, \citet{vd15} used CFHT imaging and assumed an exponential surface brightness profile for DF44 to get R$_e$=4.6$^{+1.5}_{-0.8}$ kpc. Using Keck deep imaging, \citet{vd15b} found R$_e$=4.3$\pm$0.3 kpc using a single S\'ersic component with n=0.89$\pm$0.06, and R$_e$=4.1 kpc when two components are used for fitting DF44. Later, using Gemini deep data, \citet{vd16} measured R$_e$=4.3$\pm$0.2 kpc fitting a S\'ersic model with n=0.85. Using the same data but this time calculating R$_e$ from the growth curve, \citet{chamba} got R$_e$=3.3$\pm$0.3 kpc. Finally, vD17, using the same HST dataset as the one we have used here, found R$_{e,vD17}$=4.7 kpc and n$_{vD17}$=0.94\footnote{Uncertainties on R$_{e}$ and n are not provided in vD17.}. A possible explanation for the different values of R$_e$ measured here and in vD17 involves the estimation of the local background around the galaxy, a slight change in which affects the determination of n and, ultimately, the value of R$_e$.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{DF44_sersic_fit_606.png}
\caption{Surface brightness profile of DF44 in the \textsl{V$_{606}$} band. Together with the observed profile (black solid line), we also include the best fit (red curve) using a S\'ersic model (R$_e$=3.9$\pm$0.7 kpc and n=0.72$\pm$0.14). The dashed lines indicate the $\pm$1$\sigma$ uncertainty on the surface brightness profile. The lower panel shows the residuals from the fit.}
\label{df44lightprofile}
\end{figure}

\subsection{Detection of globular cluster candidates}

The detection of the GC candidates around DF44 was performed in the deepest image available: \textsl{V$_{606}$}. \textit{SExtractor} \citep{sex} was used to extract all the sources from the \textsl{V$_{606}$} image and to perform aperture photometry. First, a background model was made using a 32$\times$32 pixel median filter and subtracted from the final frame. This background subtraction removes the diffuse light of DF44 and improves the detection efficiency of compact objects in the vicinity of the galaxy (bottom panel in Fig. \ref{df44subtracted}). Three different techniques were used to explore the effect of the diffuse light subtraction on the catalogues of point-like sources retrieved from the images, namely: i. unsharp masking; ii. median filtering and subtracting; and iii. fitting and subtracting a S\'ersic model of the galaxy. These three methods produce the same catalogues of point-like sources down to \textsl{V$_{606}$}=28.5 mag. We chose the median-filtering approach to ensure that the removal of the diffuse light was done homogeneously throughout the entire image.

\begin{figure}
\centering
\includegraphics[trim=70 60 70 60, clip, width=\linewidth]{DF44+.png}
\includegraphics[trim=70 60 70 60, clip, width=\linewidth]{DF44+sub.png}
\caption{\textit{Top}: DF44 as seen by HST using the \textsl{V$_{606}$} filter.
\textit{Bottom}: Median-filtered (32$\times$32 pixels) removal of the extended light distribution of the galaxy to highlight the presence of point-like sources in the image.}
\label{df44subtracted}
\end{figure}

To characterise the objects in the image, we used \textit{SExtractor}. Most of the \textit{SExtractor} parameters were left at their default settings, except for BACK\_SIZE, DEBLEND\_NTHRESH, and DEBLEND\_MINCONT, whose values are shown in Table \ref{sextractortable}. To take into account the local variation of the background, including residuals from the subtraction of the DF44 diffuse light, BACK\_SIZE=32 was used (instead of the default value BACK\_SIZE=64). Moreover, to avoid extracting small features of background galaxies as sources, we adjusted DEBLEND\_NTHRESH=2 and DEBLEND\_MINCONT=0.02 (instead of the default values DEBLEND\_NTHRESH=32 and DEBLEND\_MINCONT=0.005). The same configuration parameters were used to extract sources in the \textsl{I$_{814}$} image.

\begin{table}
\centering
\caption{\textit{SExtractor} parameters that were applied for source detection and aperture photometry.}
\begin{tabular}{ c c }
\hline
Parameter & Value \\
\hline
DETECT\_MINAREA & 3.0 \\
DETECT\_THRESH & 1.5 \\
ANALYSIS\_THRESH & 1.5 \\
DEBLEND\_NTHRESH & 2 \\
DEBLEND\_MINCONT & 0.02 \\
BACK\_TYPE & GLOBAL \\
BACK\_SIZE & 32 \\
BACK\_FILTERSIZE & 3 \\
PHOT\_APERTURE & 4,8,30 \\
\hline
\end{tabular}
\label{sextractortable}
\end{table}

The total magnitudes of the different sources were measured using aperture photometry. We estimated the aperture magnitudes of all the targets using \textit{SExtractor} with PHOT\_APERTURES=4 pixels (diameter). This was done in both bands (\textsl{V$_{606}$} and \textsl{I$_{814}$}), and the magnitudes were aperture-corrected. The aperture correction values are 0.54 and 0.65 mag for \textsl{V$_{606}$} and \textsl{I$_{814}$}, respectively. To estimate these aperture corrections, we used a few bright, non-saturated stars with magnitudes in the \textsl{V$_{606}$} band between 24 and 25.5 mag. For those stars, we calculated the photometry in two different apertures: PHOT\_APERTURES=4 pixels and 30 pixels. The above aperture corrections mostly correspond to the amount of light between 4 and 30 pixels. However, beyond 30 pixels there is still some light that is hard to measure using stars of that signal-to-noise. For this reason, we add another 0.07 mag based on a prescription from the WFC3 instrument handbook\footnote{\url{http://documents.stsci.edu/hst/wfc3/documents/handbooks/}}. After this, the \textsl{V$_{606}$}-\textsl{I$_{814}$} colours were calculated using the aperture-corrected magnitudes.

At the distance of the Coma cluster (100 Mpc), globular clusters are not resolved \citep{peng2011} and appear as compact as foreground stars. Therefore, to select GCs around DF44, as a first step we measured the compactness of the detected objects and identified the compact sources. We take the difference between the two aperture magnitudes (of 4 and 8 pixels) to represent the compactness of the objects. In this paper, we denote this magnitude difference (or compactness parameter) as $\Delta$m$_{4-8}$. Compact objects, compared to more extended objects, display smaller values of $\Delta$m$_{4-8}$.
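The compactness measurement itself reduces to two aperture-photometry calls per source. A minimal sketch (our illustration with photutils, not the \textit{SExtractor} run actually used; \texttt{image} and \texttt{positions} are assumed inputs holding the background-subtracted frame and the source centroids):
\begin{verbatim}
import numpy as np
from photutils.aperture import (CircularAperture,
                                aperture_photometry)

def compactness(image, positions, zp=26.10):
    """Delta m_{4-8} = m(4 pix) - m(8 pix); the apertures
    are diameters, hence radii of 2 and 4 pixels."""
    f4 = aperture_photometry(
        image, CircularAperture(positions, r=2.0))['aperture_sum']
    f8 = aperture_photometry(
        image, CircularAperture(positions, r=4.0))['aperture_sum']
    m4 = -2.5 * np.log10(np.asarray(f4)) + zp
    m8 = -2.5 * np.log10(np.asarray(f8)) + zp
    return m4 - m8   # smaller -> more compact
\end{verbatim}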
To understand how the compactness parameter behaves across the field of view and for different magnitudes, we simulated more than 7500 artificial stars using TinyTim\footnote{\url{http://www.stsci.edu/software/tinytim/}} v7.0. This was done because of the lack of bright stars in the \textsl{V$_{606}$} frame. The artificial stars range in magnitude between 24 and 30. We simulated 128 artificial stars in each of 60 magnitude bins, randomly distributed on the detector. The TinyTim PSF models take into account many variables, such as the filter, the optical and detector responses as a function of position on the detector and wavelength, focus, aberrations, geometric distortions, and charge diffusion. Moreover, long HST exposures introduce a small displacement (jitter) and a change in focus during the exposure (breathing). For the simulation, we started with jitter=5 mas and applied different values of de-focus between 1 and 10 $\mu$m. The final value of de-focus=6 $\mu$m was chosen to match the compactness of the artificial stars with that of the few bright stars observed in the field of view. After producing the artificial stars with TinyTim, Poisson noise was added to each mock star. We finally added them to the main frame (\textsl{V$_{606}$}).

We explored whether our compactness values can be affected by the sub-pixel location of the PSFs and by the drizzling algorithm that is used to create the final science images. We found that the sub-pixel location of real PSFs has a small effect on the compactness parameter ($\sim$0.01 mag). The effect of the drizzling algorithm is the following: when comparing the compactness of real sources in both drizzled and non-drizzled images, we found that the difference in compactness is compatible with zero on average. The scatter between the drizzled and non-drizzled compactness measurements is around 0.1 mag. We conclude that our compactness selection is robust in this sense.

We found that the compactness parameter $\Delta$m$_{4-8}$ of the artificial stars varies between 0.3 and 0.4, with an average value of 0.36. This value is consistent with the average compactness of the bright stars in the data. Next, we selected as compact sources those objects with compactness within 3$\sigma$ of the mean values of the artificial stars (in each magnitude bin). Fig. \ref{simulation} shows $\Delta$m$_{4-8}$ of all the observed sources and the mock stars as a function of their \textsl{V$_{606}$} magnitude. In this work, we have used the region enclosed by the dark blue contour as the selection area for defining our main sample of compact sources (sample S1).

\begin{figure*}
\includegraphics[width=0.99\linewidth]{selecting-point-sources+.png}
\caption{Magnitude (\textsl{V$_{606}$}) versus compactness ($\Delta$m$_{4-8}$) of the sources. This map is used to select the GC candidates in this work. To facilitate the comparison with the compactness parameter used in vD17, we include their values on the upper X-axis. Mock point-like sources are shown with black dots, while real observed sources are shown with red dots. The compactness parameter $\Delta$m$_{4-8}$ of the brightest artificial stars varies from 0.3 to 0.4, depending on the position on the detector. For the fainter stars, uncertainties in the photometry play an important role and increase the scatter in $\Delta$m$_{4-8}$. Our main sample S1 corresponds to observed objects within the region indicated by the dark blue lines. We have also explored a less restrictive sample of objects, S2 (enclosed by the light blue vertical lines, which corresponds to the selection criteria given in vD17).}
\label{simulation}
\end{figure*}

As seen in Fig. \ref{completeness}, sample S1 is more than 90\% complete down to magnitude \textsl{V$_{606}$}=28.2 mag and more than 80\% complete down to magnitude \textsl{V$_{606}$}=28.5 mag.
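The per-bin 3$\sigma$ cut, and the completeness estimated from the same mock stars, can be sketched as follows (our illustration; \texttt{mag}, \texttt{dm48}, \texttt{mock\_mag}, and \texttt{mock\_dm48} are assumed arrays of measured magnitudes and compactness values):
\begin{verbatim}
import numpy as np

def select_compact(mag, dm48, mock_mag, mock_dm48, bins):
    """Keep sources within 3 sigma of the mock-star mean
    compactness in each magnitude bin (the S1-style cut)."""
    keep = np.zeros(mag.size, dtype=bool)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mm = (mock_mag >= lo) & (mock_mag < hi)
        mu, sig = mock_dm48[mm].mean(), mock_dm48[mm].std()
        ms = (mag >= lo) & (mag < hi)
        keep[ms] = np.abs(dm48[ms] - mu) < 3.0 * sig
    return keep

# Completeness per bin: the fraction of injected mock stars
# that are recovered and survive the same cut.
\end{verbatim}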
As a further test of our selection criteria, we measured the ellipticities of the objects characterised as compact and extended. We find that the average ellipticity of the brightest compact and extended sources (brighter than $V_{606}$=26.0 mag) is 0.06 ($\sigma$=0.04) and 0.33 ($\sigma$=0.2), respectively.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{completeness.png}
\caption{The completeness of the selection of the compact sources in S1. This sample is more than 90 percent complete down to magnitude \textsl{V$_{606}$}=28.2 mag. The black line corresponds to the completeness of all the mock objects injected in the image, while the blue line is the fraction that is identified as compact sources with our compactness criteria.}
\label{completeness}
\end{figure}

To be able to compare our sample with that of vD17, we selected another sample, S2, following their selection criterion 0.5$<$c$_{4-8}$$<$1.0. The compactness parameter c$_{4-8}$ is the flux ratio between two different apertures, of 4 and 8 pixels (according to vD17). The corresponding region in the \textsl{V$_{606}$} versus compactness map is also shown in Fig. \ref{simulation}. As can be seen in the figure, the S1 sample excludes a fraction of the sources that are included in S2.

\subsection{The distribution of globular clusters around DF44}

Once the selection of the compact sources in the image has been performed, we can explore how they are distributed around DF44. The surface density radial distribution of the compact sources is shown in Fig. \ref{radialdistributiongcs}. Here, we only show sources with \textsl{V$_{606}$}$<$28.0 mag. We plot the distributions of the compact sources selected in our main sample S1 and in the equivalent sample following vD17, S2. At a radial distance equivalent to 2R$_e$, both distributions detect a similar number of objects. However, due to the more relaxed selection criteria of vD17, beyond that distance the background density is much more important in S2 than in S1.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{radial-dist-28.png}
\caption{Surface density radial distribution of all the extracted sources around DF44 (grey), of the compact sources selected in this work, S1 (dark blue), and of the compact sources selected following vD17, S2 (light blue). We only show sources brighter than \textsl{V$_{606}$}=28.0 mag. The sample selected in this paper, S1, is less affected by background contamination than the one following the criteria of vD17.}
\label{radialdistributiongcs}
\end{figure}

To further analyse the spatial distribution of the compact sources around DF44, we explored the half-number radius R$_{GC}$ (i.e. the radius that contains half of the GC candidates around DF44) and the S\'ersic index of the distributions. The low number (just a few dozen) of GC candidates suggests the need to use a likelihood estimation for these quantities. The likelihood equations for a S\'ersic distribution are provided in the Appendix. Interestingly, for the R$_{GC}$ parameter, the maximum likelihood value can be estimated analytically following a simple prescription (see Eq. \ref{rgcequation}). We calculate the likelihood exploring the parameter ranges 1 $<$ R$_{GC}$ $<$ 7 kpc and 0.05 $<$ n $<$ 4.2. We have calculated the likelihood maps for these two quantities for both sample S1 (the preferred one in this paper) and sample S2 (the one created following vD17). In addition, we have assumed two different axis ratio distributions for the GC candidates.
One assumes that the GCs follow a circular distribution (i.e. q=1), while the second models a distribution with the same position angle and axis ratio as the underlying host galaxy (i.e. q=0.66). The contour maps are shown in Fig. \ref{figlikelihood}.

\begin{figure*}
\centering
\begin{tabular}{l l}
\textbf{i. S1 sample, GC axis ratio = 1} & \textbf{ii. S1 sample, GC axis ratio = 0.66} \\\\
\includegraphics[width=0.4\linewidth]{MLE-params-DF44-q=1-median.png} & \includegraphics[width=0.4\linewidth]{MLE-params-DF44-median.png} \\\\\\
\textbf{iii. S2 sample, GC axis ratio = 1} & \textbf{iv. S2 sample, GC axis ratio = 0.66} \\\\
\includegraphics[width=0.4\linewidth]{MLE-params-DF44-VD-q=1-median.png} & \includegraphics[width=0.4\linewidth]{MLE-params-DF44-VD-median.png}
\end{tabular}
\caption{Likelihood maps showing contours of constant confidence level. The maps shown correspond to the two different samples, S1 (\textit{top}) and S2 (\textit{bottom}), and to two different axis ratios for the spatial distribution of the GCs: q=1 (circular distribution, \textit{left}) and q=0.66 (same axis ratio as the light of the galaxy, \textit{right}). The labels indicate the most likely values for the R$_{GC}$ and $n$ parameters of the GC distributions, together with their 1$\sigma$ uncertainties.}
\label{figlikelihood}
\end{figure*}

In order to produce the likelihood maps, we first estimated the background contamination corresponding to samples S1 and S2. The backgrounds for S1 and S2 were estimated at radial distances from DF44 between 5R$_e$ and 10R$_e$, and the values are 0.017 kpc$^{-2}$ and 0.034 kpc$^{-2}$, respectively. This corresponds to $\sim$5, $\sim$3, $\sim$11, and $\sim$7 contaminant objects inside a radius of 10 kpc (2.5R$_e$) for S1 (q=1 and q=0.66) and S2 (q=1 and q=0.66), respectively. We also explored apertures from 2 to 3 R$_e$ to measure the background contamination, and our results do not change. We randomly remove this number of compact sources inside 10 kpc when estimating the likelihood maps shown in Fig. \ref{figlikelihood}. This was done following a Gaussian probability distribution whose peak is at the mean background value and whose sigma is given by the uncertainty in measuring the background (approximately 13\% and 9\% for S1 and S2). We conducted 4000 simulations. The most likely values for R$_{GC}$ and n of the GC candidate distribution, together with their uncertainties, are provided in Table \ref{tablelikelihood}. In Fig. \ref{radialdistributionmle}, we show the GC surface density radial distribution together with the most likely solution and the remaining solutions within the 1$\sigma$ uncertainty for sample S1.

The parameters describing the distribution of GCs are more affected by the assumed axis ratio than by the sample used (either S1 or S2). As expected, due to the lower background contamination, the parameters for sample S1 are better determined than those for S2. Interestingly, the half-number radius of the GC candidates\footnote{Throughout this paper, R$_{GC}$ refers to the R$_{GC}$ measured using the S1 sample and q=0.66 (the same as the host galaxy) unless otherwise explicitly stated.} (R$_{GC}$=3.1$^{+0.8}_{-0.7}$ kpc) is compatible within the error bars with the effective radius of the light distribution of DF44 (R$_e$=3.9$\pm$0.7 kpc). In other words, R$_{GC}$/R$_e$=0.8$^{+0.3}_{-0.2}$.

\begin{table}
\centering
\caption{Most likely values for the S\'ersic index $n$ and half-number radius R$_{GC}$ of the GC distributions in this work and in vD17.
We include the parameters derived using different samples and/or assumptions about the GC distribution. Note that vD17 derived the GC surface density by stacking two UDGs in the Coma cluster, DF44 and DFX1. Therefore, the values of the S\'ersic index and the GC half-number radius quoted from that work correspond to this stacked distribution.}
\begin{tabular}{ c c c c c }
\hline
Reference & $q$ (axis ratio) & sample & $n$ & R$_{GC}$ [kpc] \\
\hline
This work & 1.0 & S1 & $0.51^{+0.74}_{-0.31}$ & $2.6^{+0.8}_{-0.7}$ \\ \\
 & 1.0 & S2 & $0.72^{+1.39}_{-0.37}$ & $3.1^{+1.2}_{-1.0}$ \\ \\
 & 0.66 & S1 & $0.39^{+0.59}_{-0.16}$ & $3.1^{+0.8}_{-0.7}$ \\ \\
 & 0.66 & S2 & $0.58^{+0.80}_{-0.28}$ & $3.4^{+1.1}_{-0.9}$ \\
\hline
vD17 & - & - & $3.1^{+0.6}_{-0.9}$ & $10.34^{+6.1}_{-3.3}$ \\ \\
 & - & - & $1.0$ (fixed) & $6.58^{+0.94}_{-0.94}$ \\
\hline
\end{tabular}
\label{tablelikelihood}
\end{table}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{radial-dist-28--ell.png}
\caption{Observed GC surface densities for the different samples and axis ratio distributions. The blue lines correspond to the S\'ersic distributions with the most likely values according to the analysis of the likelihood maps. In grey, the surface density profiles of all the S\'ersic models compatible (within 1$\sigma$) with the GC distribution. The radial distances are shown in kpc (upper x-axis) and normalised to the effective radius R$_e$ of the galaxy light (lower x-axis).}
\label{radialdistributionmle}
\end{figure}

In the left panel of Fig. \ref{gcdistribution}, we plot the GC candidates of sample S1 (yellow circles) and the extra GC candidates that are contained in sample S2 (following the vD17 selection criteria). Except for a few GC candidates, the objects are well within the light distribution of DF44 and follow a similar distribution. In the right panel of Fig. \ref{gcdistribution}, we plot the ellipse containing half the number of GCs according to our analysis for sample S1 (yellow solid line) and the DF44 half-light ellipse (yellow dashed line). We also plot the ellipse containing half the number of GCs according to vD17 (red solid line). The half-number radius R$_{GC,vD17}$=1.5R$_{e,vD17}$=7.05 kpc used by vD17 is not supported by the distribution of the GC candidates we detect. As we show later in the text, it is this significant difference in the estimation of R$_{GC}$ between vD17 and the present work that leads to the strong disagreement between the total number of GCs found by vD17 and by ourselves.

\begin{figure*}
\centering
\includegraphics[trim=70 70 70 70,clip,width=\columnwidth]{DF44+main+gcs+.png}
\includegraphics[trim=70 70 70 70,clip,width=\columnwidth]{DF44+main+.png}
\caption{\textit{Left}: The galaxy DF44 together with the GC candidates that compose sample S1 (yellow circles) and the extra globular clusters added in sample S2 (red circles). All of these are GC candidates with magnitudes brighter than \textsl{V$_{606}$}=28.0 mag that lie within a circle of radius 10 kpc. \textit{Right}: The galaxy effective radius R$_e$ (yellow dashed line) and the GC half-number radius R$_{GC}$ (yellow solid line) estimated in this work. The red ellipse represents the GC half-number radius used in vD17.
The significant difference in the values of R$_{GC}$ between the two works results in very different values of N$_{GC}$.}
\label{gcdistribution}
\end{figure*}

\subsection{Globular cluster luminosity function}

With the GC candidates selected and their spatial distribution characterised, the next step is to calculate the total number of GCs (N$_{GC}$) around DF44. To compute N$_{GC}$, we require knowledge of the GC luminosity function (GCLF) of DF44. This function can be approximated by a Gaussian distribution \citep{1993AJ....105.1358S}. The area of the distribution represents the total number of GCs. The peak of the distribution can be used as a standard candle and is only weakly dependent on the host galaxy mass \citep[e.g.][]{gclf1,rejkuba2012}. This peak is located at M$_V$=-7.5 mag (Johnson V filter, in the Vega system), which corresponds to m$_V$=27.5 mag at the distance of the Coma Cluster \citep{peng2011}. In order to transform the previous numbers from \textsl{V}(Vega) to \textsl{V$_{606}$}(AB), we use the following assumptions. First, we assume that the GCs of DF44 are old ($\sim$12 Gyr) and well described by simple stellar population models. From the MILES models \citep{vazdekis2016}, we get (\textsl{V}-\textsl{I})(Vega)$\sim$0.9 mag for a wide range of metallicities (-2$<$Z$<$-0.5). Next, we use the transformation provided by \citet{harris2018} to get the magnitude of the peak of the GCLF in \textsl{V$_{606}$}(Vega):
\begin{equation}
V_{606}(Vega) = V - 0.263 \times (V - I) + 0.091
\end{equation}
Finally, we transform that value to the AB system by using the handbook of the WFC3 instrument \citep{wfc3}:
\begin{equation}
V_{606}(AB) = V_{606}(Vega) + 0.093
\end{equation}
This gives us a magnitude for the peak of the GCLF in the AB system of <\textsl{V$_{606}$}>$\sim$27.45 mag.

In order to estimate N$_{GC}$, we consider only the GC candidates inside R$<$3R$_{GC}$. We provide in Table \ref{gccandidates} the catalogue of GC candidates for S1 and q=0.66. We use a radial distance of 3R$_{GC}$ as this radius encloses (for the S\'ersic index value n$\lesssim$0.7 describing the distribution of compact sources around DF44) 99\% of the expected objects \citep{2001MNRAS.326..869T}. We plot the derived luminosity functions for the different cases (S1 and S2 samples with q=1 and q=0.66) in Fig. \ref{lfunctiongc}. Together with the observed luminosity function (red dashed line), we show the background contribution to this function (blue dashed line). The background contamination for each luminosity function was estimated using the compact sources identified further than 3R$_{GC}$ from DF44 in the image. This allows us to have a statistically significant correction of the background, particularly for the most luminous sources. The observed luminosity function corrected by the background is shown in grey in Fig. \ref{lfunctiongc}. This corrected luminosity function has also been corrected for incompleteness down to magnitude \textsl{V$_{606}$}=28.5 mag.

\begin{table*}
\centering
\caption{Catalogue of GC candidates in the S1 sample with q=0.66. The columns represent R.A. (a), Declination (b), distance to the DF44 centre computed from the previous coordinates (c), \textsl{V$_{606}$} magnitude (d) and its error (e), compactness parameter (f) and its error (g), ellipticity (h), and \textsl{V$_{606}$}-\textsl{I$_{814}$} colour (i) and its error (j).
Those GC candidates without a colour estimation correspond to objects that were not detected in the \textsl{I$_{814}$} image.} \begin{tabular}{ c c c c c c c c c c } \hline RA & DEC & R & \textsl{V$_{606}$} & $e\_\textsl{V$_{606}$}$ & $\Delta m_{4-8}$ & $e\_\Delta m_{4-8}$ & ell & \textsl{V$_{606}$}-\textsl{I$_{814}$} & $e\_\textsl{V$_{606}$}-\textsl{I$_{814}$}$\\ $h$ $m$ $s$ & $d$ $m$ $s$ & \arcsec & mag & mag & mag & mag & - & mag & mag \\ (a) & (b) & (c) & (d) & (e) & (f) & (g) & (h) & (i) & (j) \\ \hline 13h 00m 58.33s & +26d 58m 25.83s & 10.32 & 23.92 & 0.002 & 0.38 & 0.004 & 0.03 & 0.33 & 0.01 \\ 13h 00m 57.70s & +26d 58m 34.80s & 3.882 & 25.74 & 0.013 & 0.43 & 0.023 & 0.02 & 0.41 & 0.03 \\ 13h 00m 58.22s & +26d 58m 36.85s & 4.596 & 25.98 & 0.017 & 0.36 & 0.029 & 0.05 & 0.44 & 0.04 \\ 13h 00m 58.18s & +26d 58m 35.30s & 3.482 & 26.49 & 0.027 & 0.46 & 0.045 & 0.07 & 0.39 & 0.08\\ 13h 00m 57.72s & +26d 58m 33.23s & 3.787 & 26.71 & 0.033 & 0.42 & 0.056 & 0.03 & 0.41 & 0.08\\ 13h 00m 57.59s & +26d 58m 31.58s & 6.165 & 26.95 & 0.042 & 0.54 & 0.066 & 0.09 & 0.19 & 0.13\\ 13h 00m 57.62s & +26d 58m 35.70s & 5.232 & 27.27 & 0.056 & 0.52 & 0.089 & 0.18 & 0.21 & 0.17\\ 13h 00m 57.72s & +26d 58m 41.95s & 8.249 & 27.39 & 0.063 & 0.48 & 0.102 & 0.15 & 0.47 & 0.17\\ 13h 00m 58.40s & +26d 58m 30.18s & 7.932 & 27.47 & 0.067 & 0.45 & 0.111 & 0.29 & 0.21 & 0.21 \\ 13h 00m 57.98s & +26d 58m 33.36s & 1.202 & 27.55 & 0.073 & 0.47 & 0.119 & 0.14 & 0.55 & 0.17 \\ 13h 00m 57.69s & +26d 58m 38.17s & 5.369 & 27.57 & 0.074 & 0.46 & 0.122 & 0.04 & 0.81 & 0.15 \\ 13h 00m 58.18s & +26d 58m 36.07s & 3.659 & 27.58 & 0.074 & 0.14 & 0.149 & 0.34 & --- & --- \\ 13h 00m 58.22s & +26d 58m 45.75s & 11.91 & 27.67 & 0.080 & 0.48 & 0.131 & 0.34 & --- & --- \\ 13h 00m 58.05s & +26d 58m 31.70s & 3.152 & 27.82 & 0.093 & 0.58 & 0.143 & 0.11 & --- & --- \\ 13h 00m 57.95s & +26d 58m 38.69s & 4.190 & 27.85 & 0.096 & 0.37 & 0.166 & 0.09 & --- & --- \\ 13h 00m 57.94s & +26d 58m 29.01s & 5.483 & 27.89 & 0.099 & 0.47 & 0.163 & 0.31 & 0.59 & 0.23 \\ 13h 00m 58.45s & +26d 58m 26.84s & 10.64 & 27.89 & 0.099 & 0.57 & 0.154 & 0.40 & --- & --- \\ 13h 00m 58.99s & +26d 58m 40.10s & 16.50 & 27.94 & 0.104 & 0.58 & 0.161 & 0.04 & --- & --- \\ 13h 00m 58.00s & +26d 58m 35.82s & 1.452 & 27.95 & 0.104 & 0.56 & 0.163 & 0.25 & --- & --- \\ 13h 00m 57.95s & +26d 58m 46.26s & 11.76 & 28.00 & 0.110 & 0.56 & 0.171 & 0.07 & --- & --- \\ 13h 00m 57.73s & +26d 58m 27.15s & 8.092 & 28.14 & 0.124 & 0.21 & 0.240 & 0.30 & --- & --- \\ 13h 00m 58.17s & +26d 58m 24.90s & 10.11 & 28.14 & 0.125 & 0.66 & 0.184 & 0.16 & --- & --- \\ 13h 00m 58.20s & +26d 58m 26.91s & 8.453 & 28.16 & 0.127 & 0.53 & 0.201 & 0.22 & --- & --- \\ 13h 00m 58.37s & +26d 58m 19.74s & 16.04 & 28.20 & 0.133 & 0.66 & 0.196 & 0.21 & --- & --- \\ 13h 00m 58.14s & +26d 58m 37.86s & 4.337 & 28.27 & 0.142 & 0.15 & 0.284 & 0.19 & --- & --- \\ 13h 00m 58.90s & +26d 58m 35.30s & 14.15 & 28.30 & 0.145 & 0.31 & 0.260 & 0.15 & --- & --- \\ 13h 00m 57.60s & +26d 58m 39.04s & 6.948 & 28.39 & 0.157 & 0.26 & 0.292 & 0.26 & --- & --- \\ 13h 00m 58.16s & +26d 58m 34.82s & 3.159 & 28.42 & 0.161 & 0.64 & 0.241 & 0.36 & --- & --- \\ \hline \end{tabular} \label{gccandidates} \end{table*} \begin{figure*} \centering \begin{tabular}{l l} \textbf{i. S1 sample, GC axis ratio = 1} & \textbf{ii. S1 sample, GC axis ratio = 0.66} \\\\ \includegraphics[width=0.4\linewidth]{gcs-res-df44+c.png} & \includegraphics[width=0.4\linewidth]{gcs-res-df44+e.png} \\\\\\ \textbf{iii. S2 sample, GC axis ratio = 1} & \textbf{iv. 
S2 sample, GC axis ratio = 0.66} \\\\
\includegraphics[width=0.4\linewidth]{gcs-res-df44_VD+c.png} & \includegraphics[width=0.4\linewidth]{gcs-res-df44_VD+e.png}
\end{tabular}
\caption{Globular cluster luminosity function (GCLF) for the S1 and S2 samples with different GC axis ratios. In red, the observed distribution of compact sources. The blue dashed line corresponds to the background sources, while in grey we show the background-corrected distribution of the number of compact sources. The black solid line indicates the GCLF corresponding to the calculated average number of GCs, while the green lines enclose the uncertainty in deriving N$_{GC}$ (see text for details).}
\label{lfunctiongc}
\end{figure*}

Once we have the GCLF corrected for background and incompleteness, we count all the GCs up to the location of the peak, <\textsl{V$_{606}$}>. This number is then multiplied by 2 to obtain the total number of GCs (N$_{GC}$). For the location of the peak and the width of the GC distribution ($\sigma_{GC}$), we use the values given by vD17 (i.e. <\textsl{V$_{606}$}>=27.7$_{-0.2}^{+0.2}$ mag and $\sigma_{GC}$=0.82$_{-0.15}^{+0.16}$). The total number of GCs is then twice the number of compact sources (background- and incompleteness-corrected) within R$<$3R$_{GC}$ that lie in the magnitude range <\textsl{V$_{606}$}>-3$\sigma_{GC}$$<$\textsl{V$_{606}$}$<$<\textsl{V$_{606}$}>. To estimate the uncertainties on N$_{GC}$, we take into account the uncertainties in the number of observed compact sources within 3R$_{GC}$ and in the background, the uncertainty in R$_{GC}$ estimated above, and the uncertainties in <\textsl{V$_{606}$}> and $\sigma_{GC}$ of the GCLF. In Fig. \ref{lfunctiongc} we show the Gaussian distributions corresponding to the number of GCs derived for each sample (black solid lines), together with the Gaussian distributions corresponding to the minimum and maximum numbers of GCs (within 1$\sigma$) compatible with the data (green lines). For all our samples and spatial configurations, the total number of GCs is very modest. In particular, for our preferred sample S1 and spatial configuration q=0.66, N$_{GC}$=21$^{+7}_{-9}$. This number is in stark contrast with the value reported by vD17: 74$\pm$18. We expand on this discrepancy in Section \ref{thisworkvsvd17}.

\subsection{The average colour of the population of GCs around DF44}

Another interesting exercise we can conduct on the GC population around DF44 is to estimate its average colour and compare it with that of other GC samples. Fig. \ref{gccolors} shows the average colour \textsl{g$_{475}$}-\textsl{z$_{850}$} of the GC population of DF44 compared to those of other Coma Cluster galaxies \citep{peng2011}. There is a clear trend between the average colour of the GCs and the total luminosity of their host galaxies. To estimate the average colour \textsl{g$_{475}$}-\textsl{z$_{850}$} of the GCs of DF44, we start from the observed colour \textsl{V$_{606}$}-\textsl{I$_{814}$} = 0.40$\pm$0.04 mag. This value is obtained using the brightest (i.e. least affected by the background) GC candidates; we only used GCs with \textsl{V$_{606}$}$<$27.0 mag. This colour is also consistent with that given in vD17.
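The conversion of this colour to \textsl{g$_{475}$}-\textsl{z$_{850}$}, detailed in the next paragraph, amounts to chaining two linear relations. As a minimal numerical cross-check (our sketch, with the coefficients taken from the transformations quoted below):
\begin{verbatim}
# Colour transformation chain (coefficients from the
# relations cited in the text):
v_i = 0.40                    # observed V606 - I814
g_i = v_i * 1.852 + 0.096     # -> g475 - I814
g_z = g_i * 1.023 + 0.128     # -> g475 - z850
print(round(g_z, 2))          # 0.98, as adopted in the text
\end{verbatim}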
The observed colour \textsl{V$_{606}$}-\textsl{I$_{814}$} was transformed to \textsl{g$_{475}$}-\textsl{z$_{850}$} using the following transformations: (i) \textsl{g$_{475}$}-\textsl{I$_{814}$}=\textsl{V$_{606}$}-\textsl{I$_{814}$}$\times$1.852+0.096 \citep{blak2010} and (ii) \textsl{g$_{475}$}-\textsl{z$_{850}$}=\textsl{g$_{475}$}-\textsl{I$_{814}$}$\times$1.023+0.128 \citep{beasley2016b}. Consequently, we end up with an average colour \textsl{g$_{475}$}-\textsl{z$_{850}$}=0.98$\pm$0.08 for the GCs around DF44. This colour corresponds to a metallicity of [M/H]$\sim$-0.9, using the photometric predictions of the MILES stellar population models \citep{vazdekis2016} for an old (12 Gyr) SSP. Fig. \ref{gccolors} shows that the average colour of the DF44 GCs is in excellent agreement with those of other galaxies of similar luminosity in the Coma Cluster.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{color-vmag.png}
\caption{The average colour of the GCs in DF44 (red square) compared with the colours of the GCs in Coma cluster galaxies \citep{peng2011}. The average colour of the DF44 GCs follows the observed trend for GCs in Coma galaxies, having a colour compatible with the GCs of galaxies of the same luminosity.}
\label{gccolors}
\end{figure}

\section{Discussion}
\subsection{The halo mass of DF44 based on the GC population}

We can use the total number of GCs to estimate the halo mass of DF44, considering that the total stellar mass contained in the GCs of a galaxy scales linearly with the galaxy halo mass \citep[see e.g.][]{harris2017}. This observation has been shown to be particularly useful for inferring the halo masses of UDGs \citep{beasley2016}. According to \citet{harris2017}, the two masses are linked through the following relation: M$_{GCs}$/M$_{halo}$=3.9$\times$10$^{-5}$. With the total number of GCs (N$_{GC}$=21$^{+7}_{-9}$) derived here, we have used the following approximation. We assume a GC mean mass $<M_{GC}>$=2$\times$10$^5$ M$_{\odot}$ \citep{jordan2007}, which is multiplied by N$_{GC}$ to get M$_{GCs}$. This gives a total halo mass of M$_{halo}$=1.1$^{+0.4}_{-0.5}$$\times$10$^{11}$ M$_{\odot}$. This estimate of the halo mass of DF44 is in agreement (within the error bars) with that derived from the velocity dispersion of the stars of the galaxy (see Table \ref{df44mass}), and removes the tension between the kinematic mass and the mass derived from the large number of DF44 GCs identified by vD17. Recently, a dwarf-like dark matter halo ($\sim$10$^{11}$ M$_{\odot}$) has been confirmed by \citet{akos}.

\begin{table*}
\centering
\caption{Summary of the DF44 halo mass estimates (M$_{halo}$) using different mass proxies from vD16, vD17, vD19, and this work. S$_{N}$ indicates the specific frequency of the GC system in the \textsl{V} band.
vD19 noted that, motivated by the new measurements of the velocity dispersion of the galaxy DF44, they uncovered an error in vD16, and that the corrected velocity dispersion from the old data is $\sigma=42^{+7}_{-7}$ km~s$^{-1}$.}
\begin{tabular}{ c c c c c c }
\hline
Ref & N$_{GC}$ & S$_{N}$ & $\sigma$ & M$_{halo}$ & Mass proxy \\
 & & & (km s$^{-1}$) & (M$_{\odot}$) & \\
\hline
vD16 & 94$^{+25}_{-20}$ & 31.1$^{+8.3}_{-6.6}$ & 47$^{+8}_{-6}$ & 1.0$\times$10$^{12}$ & kinematics \\ \\
vD17 & 74$^{+18}_{-18}$ & 24.5$^{+6.0}_{-6.0}$ & - & 5.0$\times$10$^{11}$ & GC number count\\ \\
vD19 & - & - & 33$^{+3}_{-3}$ & 1.6 $^{+5.0}_{-1.2}$$\times$10$^{11}$ & kinematics \\ \\
This work & 21$^{+7}_{-9}$ & 7.0$^{+2.3}_{-3.0}$ & - & 1.1$^{+0.4}_{-0.5}$$\times$10$^{11}$ & GC number count\\
\hline
\end{tabular}
\label{df44mass}
\end{table*}

\subsection{DF44 GCs in comparison with other GC systems}

As reported in Sect. 3.5, the average colour of the GC population of DF44 is in excellent agreement with the colour expected for galaxies of similar luminosity. Another aspect to address is whether both the number of GCs and their specific frequency fit well with those of other galaxies with similar characteristics. We explore this in Fig. \ref{df44incontext}. We have used the compilation by \citet{harris2013}, which gives the number of GCs and their specific frequencies against the luminosity of the galaxies in the \textsl{V} band, along with their central velocity dispersions. As stated before, for the total luminosity of DF44 (M$_V$=-16.2 mag) and its velocity dispersion ($\sigma$=33$^{+3}_{-3}$ km s$^{-1}$) we use the values reported by vD17 and vD19, respectively. Using these values, and with the number of GCs measured in this work, DF44 lies within the relations observed for other galaxies.

\begin{figure*}
\centering
\includegraphics[trim=0 20 50 20,clip,width=0.8\linewidth]{ngc-sn-vmag-sigma.png}
\caption{Total number of GCs (N$_{GC}$) and GC specific frequency in the \textsl{V} band ($S_N$) for galaxies from \citet{harris2013} (grey points). DF44 is also shown as a red square (this work), a green circle (vD17), and a blue triangle (vD16). The DF44 absolute magnitude (M$_V$=-16.2 mag) and velocity dispersion ($\sigma$=33 km s$^{-1}$) are from vD17 and vD19, respectively. Based on vD17, the total number of GCs of DF44 is higher than the average for galaxies with similar velocity dispersion (and therefore mass) and luminosity. In this work, we used the same dataset as vD17 and find N$_{GC}$=21$^{+7}_{-9}$ for DF44. With this new estimate, the N$_{GC}$ of DF44 is consistent with those of galaxies with similar luminosities and velocity dispersions.}
\label{df44incontext}
\end{figure*}

Another property we can compare with other galaxies is the ratio between the half-number radius of the GCs of DF44 and the effective radius R$_e$ of the host galaxy. We find that R$_{GC}$/R$_e$=0.8$^{+0.3}_{-0.2}$. This ratio is compared with those of other galaxies in Fig. \ref{gcratio}, using the sample given in \citet{forbes2017}. Unfortunately, this kind of work has not been conducted for galaxies with stellar masses similar to that of DF44 \citep[with the exception of DF17;][]{peng2016}. Both UDGs share the property that R$_{GC}$/R$_e$$<$2, something which is only observed in some nearby, relatively massive galaxies, including our own Milky Way.
It remains an interesting exercise to fill the gap between the massive galaxies and the dwarf population in terms of the ratio between these two radii, to explore whether or not UDGs have an anomalously small spatial extension of their GC systems.

\begin{figure*}
\includegraphics[trim=0 20 50 20,clip,width=0.8\linewidth]{re-rgc.png}
\caption{GC distribution around different types of galaxies with different effective radii and stellar masses. We show the compilation presented by \citet{forbes2017}, DF44 (this work), and DF17 \citep{peng2016}. It can be seen that DF44 shows R$_{GC}$/R$_e$$\sim$1. Similar values are observed in other nearby galaxies, such as the two S0 galaxies NGC 7332 and NGC 5473.}
\label{gcratio}
\end{figure*}

\subsection{Different estimates of the radial distribution of GCs lead to a much smaller N$_{GC}$ than in vD17}
\label{thisworkvsvd17}

The main result of this paper is that the number of GCs around DF44, N$_{GC}$=21$^{+7}_{-9}$, is compatible with the number of GCs expected for galaxies with the luminosity and velocity dispersion of DF44 (see the previous section). This result is in disagreement with the high number of GCs found by vD17 (N$_{GC}$=74$^{+18}_{-18}$). What is the origin of this discrepancy?

We have explored whether the selection of GC candidates around DF44 based on different compactness criteria can explain the different values of N$_{GC}$ obtained by us and by vD17. To do this, we estimated N$_{GC}$ using our sample S2, which uses the same selection criteria as vD17. We get N$_{GC}$=18$^{+23}_{-12}$ (assuming a projected circular distribution for the GCs; q=1) and N$_{GC}$=16$^{+22}_{-9}$ (assuming the GCs have the same axial distribution as DF44 itself; q=0.66). As expected, the larger background contamination in sample S2 increases the uncertainty in measuring N$_{GC}$ compared to sample S1. However, even considering the uncertainties, the maximum number of globular clusters is not larger than $\sim$40. This is around a factor of 2 smaller than the value reported by vD17.

vD17 indicate in their paper that their total number of GCs in DF44 is four times the number of observed GCs (contamination-corrected) within R$<$1.5R$_{e,vD17}$ and \textsl{V$_{606}$}$<$27.6 mag (i.e. N$_{GC,obs}$=18.5). It is interesting to note that this number is already larger (by around a factor of 2) than the number of observed GCs we find out to a similar radial distance and down to a similar magnitude as vD17 (see e.g. our grey histograms in Fig. \ref{lfunctiongc}). This discrepancy is potentially significantly affected by the different background contamination corrections the two works have applied. Unfortunately, we cannot perform a direct comparison between our GC detections and those of vD17, since these data are not provided in vD17.

The rationale for why vD17 multiplied their N$_{GC,obs}$ by a factor of 4 is as follows. First, vD17 multiply by 2 in order to correct for the number of GCs missed beyond the peak of the GCLF. We have also applied this correction to our N$_{GC}$ values reported above. However, since our N$_{GC,obs}$ is a factor of $\sim$2 smaller than that of vD17, the discrepancy remains. Second, vD17 multiply by another factor of 2 to account for GCs located beyond their spatial selection criterion of R=1.5R$_{e,vD17}$ (where R$_{e,vD17}$=4.7 kpc according to vD17). This spatial selection criterion, R=1.5R$_e$, corresponds to their R$_{GC}$ assuming that the GCs follow an exponentially declining distribution from the centre. R$_{GC}$ for vD17 is thus 7 kpc.
It is in this last correction where the two works fundamentally disagree. While vD17 assume an exponential distribution (as they do not measure the actual distribution of their GCs), we have measured this quantity. We find that the GCs follow a S\'ersic distribution with a S\'ersic index (n$\sim$0.5) significantly lower than that of an exponential. In addition, we have also measured R$_{GC}$ directly (without assuming an exponential distribution), finding a value of 2.5-3.5 kpc. In practical terms, this means that there is no significant number of GCs beyond 7 kpc. To summarise, according to our measurements there is no need to multiply by another factor of 2 to correct for missing GCs at large radii. Our estimate of the total number of GCs is around a factor of 4 smaller than that of vD17. Half of the difference is easily explained by the correction that vD17 made for the spatial distribution of GCs, which we find here to be unnecessary. The other half is due to a different background correction of their sample.

\section{Conclusion}

The existence of UDGs with relatively large numbers of GCs, when compared to the expectations based on their luminosity or dynamical mass, has remained a puzzle. The general properties of UDGs are consistent with them being the low surface brightness counterparts of regular dwarf galaxies. However, the existence of GC-rich UDGs has been an argument to support the idea that at least some of these galaxies could be intrinsically different from dwarfs. In this paper, we have explored this issue in detail by revisiting one of the most iconic UDGs, DF44 in the Coma cluster. DF44 has been claimed to have a dark matter halo similar in mass to that of the Milky Way, and could therefore be a potential candidate ``failed Milky Way'' (vD16). More generally, such a massive halo would make DF44 an extreme outlier in stellar mass -- halo mass relations \citep{beasley2016}. Through a detailed analysis of the GC candidates around this object, we have found a total number of GCs of only N$_{GC}$=21$^{+7}_{-9}$, which is a factor of 4 lower than previous measurements (vD17). We believe that the vD17 assumption of a large R$_{GC}$, based on results in the literature for non-UDG galaxies, led the authors to over-estimate N$_{GC}$. The significant reduction in the number of GCs found in our work is in good agreement with the expectation for objects with similar central velocity dispersions ($\sigma$=33 km s$^{-1}$; vD19). A smaller N$_{GC}$ resolves the strong tension with respect to previous claims about the amount of dark matter in the halos of these objects, based either on their number of GCs or on their dynamical mass. In addition, we have found that the colour and specific frequency of DF44's GCs are in agreement with those of galaxies of similar luminosity. Based on this analysis, we conclude that DF44 is compatible with being a ``regular'' low surface brightness dwarf galaxy, and that a new class of galaxies to account for this type of object is not yet required.

\section*{Acknowledgements}

We thank the referee for their detailed reading of our manuscript and for a number of excellent suggestions to improve the quality of the analysis. T.S., I.T., R.F.P., and J.H.K. acknowledge financial support from the European Union's Horizon 2020 research and innovation programme under Marie Sk\l odowska-Curie grant agreement No 721463 to the SUNDIAL ITN network. M.A.B., I.T., and J.H.K.
acknowledge support from the State Research Agency (AEI) of the Ministry of Science and Innovation and the European Regional Development Fund (FEDER) under the grants with reference PID2019-105602GB-I00 and AYA2016-77237-C3-1-P, from IAC projects P/300624 and P/300724, financed by the Ministry of Science and Innovation through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment through the Regional Budget of the Autonomous Community, and from the Fundaci\'on BBVA under its 2017 programme of assistance to scientific research groups, for the project ``Using machine-learning techniques to drag galaxies from the noise in deep imaging''. M.A.B. acknowledges support from the Severo Ochoa Excellence scheme (SEV-2015-0548).

\section*{Data Availability}

The data underlying this article were retrieved from the Hubble Space Telescope archive (HST Proposal 14643, PI: van Dokkum), which provides the frames reduced and drizzled by the standard HST pipeline. The software packages used in this work, \textit{SExtractor}, \textit{SWarp}, and \textit{TinyTim}, are publicly available. The catalogue of globular cluster candidates around DF44 generated in this research is available in the article and in its online supplementary material.

\bibliographystyle{mnras}
\section{Introduction}\label{sec:intro}

Massive stars have strong supersonic winds that sweep up and heat the gas and dust of the surrounding interstellar medium (ISM), generating cavities known as wind-blown stellar bubbles. When these stars have a high (supersonic) peculiar velocity with respect to the ISM, the geometry of the bubble becomes bow-shaped instead of spherical \citep{Weaver1977}. The heated ambient dust and gas emit mostly infrared (IR) radiation, which can be detected, thus allowing the identification and characterisation of these bowshocks (BSs) \citep[][and references therein]{NoriegaCrespo1997}.

Strong BSs are promising places for the acceleration of relativistic particles, which produce peculiar radiative features throughout the whole electromagnetic spectrum, especially at radio-cm frequencies and at energies above 1~keV \citep[i.e. X-rays and $\gamma$-rays;][]{delValle2012}. However, despite the large number of such objects observed to date \citep{Peri2012, Peri2015, Kobulnicky2016}, \object{BD$+43^{\circ}3654$} remains the only one in which non-thermal (NT) radio emission has been observed \citep{Benaglia2010}. Until now, no stellar BS has been detected either in X-rays \citep{Toala2016, Toala2017, DeBecker2017}, in high-energy $\gamma$-rays\footnote{During the review of this article, \cite{Sanchez2018} published a work with a possible association of two \textit{Fermi} sources with stellar BSs.} \citep{Schulz2014}, or in very high-energy $\gamma$-rays \citep{HESSColl2017}.

The radio emission at low frequencies is expected to be dominated by synchrotron radiation produced by the interaction of relativistic electrons with the local magnetic field \citep{Ginzburg1965}. The detection of NT radio emission is ubiquitous in systems involving shocks, such as colliding-wind massive binaries \citep[e.g.][]{Eichler1993}, supernova remnants \citep[e.g.][]{Torres2003}, and proto-stellar jets \citep[e.g.][]{Marti1993,RodriguezK2017}. This suggests that the shocks produced by strong stellar winds (SWs) are suitable for the acceleration of particles (electrons, protons, and heavier nuclei) up to relativistic energies. In this context, diffusive shock acceleration (DSA) \citep{Axford1977,Krymskii1977,Bell1978,Blandford1978} is the most likely mechanism. Furthermore, these relativistic particles are expected to radiate their energy at energies from radio to $\gamma$-rays.

A few models have been developed to address the NT emission from stellar BSs \citep{delValle2012, delValle2014, Pereira2016}, some of which over-predicted the high-energy radiation from these systems. In this work, we revisit the assumptions of previous emission models for stellar BSs and apply a new multi-zone emission model, presented in Sect.~\ref{sec:model}, to BSs. In Sect.~\ref{sec:results}, we present the results of applying this model and discuss them in order to assess future radio, X-ray, and $\gamma$-ray surveys searching for NT emission from stellar BSs. We also show synthetic radio-cm synchrotron emission maps that reproduce the available data and radio morphology of \object{BD$+43^{\circ}3654$}, and make updated predictions for this object in X-rays and $\gamma$-rays consistent with the latest observational constraints. The conclusions are summarised in Sect.~\ref{sec:conclusions}.
\section{Model}\label{sec:model}

Most of the NT radiation models presented in the literature rely on the one-zone approximation, which assumes that the emitter can be considered point-like, that is, homogeneous and of irrelevant size. However, the validity of such models has been questioned in view of the discrepancies between predictions and observations \citep{Toala2016}. Additionally, the structure of the BS can, in principle, be resolved with current radio interferometers and X-ray satellites, but it is not possible to correctly address the spatial distribution of the emission using one-zone models. A complete broadband radiative model of the BS needs to take into account the magnetohydrodynamics (MHD) of the stellar wind shock and its evolution, the acceleration of relativistic particles, and the emission of these particles considering inhomogeneous conditions throughout the BS. Here, we develop an extended model for the BS emitter in which we consider analytical prescriptions for the MHD, a DSA mechanism for the acceleration of particles, and a detailed numerical method for the calculation of the particle emission throughout the BS. The details of the model are specified below.

\subsection{Geometry}

In the reference frame of a moving star, the stellar BS is the result of the interaction of the ISM material, acting as a planar wind, with the spherical stellar wind. The shape and dynamics of stellar BSs have been studied by several authors \citep[e.g.][]{Dyson1975,Wilkin1996,Meyer2016,Christie2016}. The collision of the SW and the ISM forms an interaction region consisting of a forward shock that propagates through the ISM, a contact discontinuity (CD) between the two media, and a reverse shock (RS) that propagates through the unshocked SW. The CD is the surface where the mass flux is zero. The reverse shock is adiabatic and fast, whereas the forward shock is radiative and slow \citep[e.g.][]{VanBuren1993}. Since DSA works in the presence of strong adiabatic shock waves, we focus exclusively on the reverse shock.

The stagnation point is located at a distance $R_0$ from the star, on the symmetry axis of the BS (along the direction of the stellar motion), and is the point where the ram pressures of the SW and the ISM balance exactly. For a star with a mass-loss rate $\dot{M}_\mathrm{w}$, wind velocity $v_\mathrm{w}$, and spatial velocity $V_\star$, moving in a medium of density $\rho_\mathrm{ISM}$, the stagnation point is at
\begin{equation}
R_0 = \sqrt{\dot{M}_\mathrm{w} v_\mathrm{w} /(4 \pi \rho_\mathrm{ISM} {V_\star}^2)}.
\end{equation}
As $R_0 \gg R_\star$, it is valid to simply adopt $v_\mathrm{w} = v_\infty$. Any position on the CD can be determined by an angle $\theta$ from the BS symmetry axis; the corresponding shape was characterised by \citet{Wilkin1996} for the case of a cold ISM (i.e. with negligible thermal pressure): \mbox{$R(\theta) = R_0 \csc{\theta}\sqrt{3\,(1-\theta \cot{\theta})}$}. \citet{Christie2016} generalised this solution to the case of non-negligible thermal pressure in the ambient fluid, opening the possibility of studying BSs in warm or even hot fluids such as accretion disks. According to their results, the width of the shocked stellar wind region at $\theta$ is well approximated as $H(\theta) \approx 0.2\,R(\theta)$. Following \citet{delPalacio2016}, we develop a two-dimensional (2D) model assuming that the BS is an axisymmetric shell of negligible width.
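As an illustration of these two prescriptions, the following Python sketch (ours; the input values are arbitrary fiducial numbers, not parameters of any source modelled in this work) evaluates $R_0$ and the Wilkin shape:
\begin{verbatim}
import numpy as np

MSUN_YR = 1.989e33 / 3.154e7   # g s^-1 per Msun yr^-1
PC = 3.086e18                  # cm

def stagnation_radius(mdot_msun_yr, v_w, v_star, rho_ism):
    """R0 = sqrt(Mdot v_w / (4 pi rho_ISM V*^2)), cgs."""
    mdot = mdot_msun_yr * MSUN_YR
    return np.sqrt(mdot * v_w /
                   (4.0 * np.pi * rho_ism * v_star**2))

def wilkin_shape(theta, r0):
    """R = R0 csc(theta) sqrt(3 (1 - theta cot(theta)))."""
    return (r0 / np.sin(theta) *
            np.sqrt(3.0 * (1.0 - theta / np.tan(theta))))

rho = 1.0 * 1.67e-24          # ~1 H atom per cm^3
# Mdot = 1e-6 Msun/yr, v_w = 2300 km/s, V* = 40 km/s:
r0 = stagnation_radius(1e-6, 2.3e8, 4e6, rho)
theta = np.linspace(0.05, 2.35, 100)   # up to ~135 deg
print(r0 / PC)                         # ~2 pc
print(wilkin_shape(theta, r0)[0] / PC) # -> R0 at the apex
\end{verbatim}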
Given that the shocked gas flows at a fixed angle $\phi$ around the symmetry axis, we can restrict most of our analysis to a one-dimensional (1D) description of a fluid moving in the $XY$ plane. A schematic picture of the model is shown in Fig.~\ref{fig:model}. The position of a fluid element on the $XY$ plane is solely determined by $\theta$. As the magnetic field is not dynamically relevant, we consider that a fluid element, upon entering the RS, moves downstream from the BS apex. We assume that NT particles are accelerated once the fluid line enters the RS region, and that these particles flow together with the shocked fluid, which convects the ambient magnetic field.\footnote{We note that the flow is not likely to be completely laminar nor the magnetic field completely ordered, but for simplicity we neglect in this description any macroscopic effect of the turbulent component of the flow and the irregular component of the magnetic field.} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics[width=0.5\textwidth, angle=0]{./BS.png}} \caption[]{Sketch (not to scale) of the model considered in this work. Despite the axis representation, the $(0,0)$ position corresponds to the star location. In the star reference frame, the ISM moves with velocity equal to $V_\star$. The positions of the contact discontinuity (CD), forward shock (FS), and reverse shock (RS) are shown with solid lines. The solid line with arrows coming from the star and entering into the shocked region represents one of the streamlines that make up the emitter. The inclination angle $i$ and the position angle $\theta$ are also shown.} \label{fig:model} \end{figure} The BS radiation is formed by a sum of these 1D emitters (linear emitters hereafter) that are symmetrically distributed around the direction of motion of the star in a three-dimensional (3D) space: each discrete emission cell is first defined in the $XY$ plane, and the full 3D structure of the wind interaction zone is obtained via rotation around the $X$ axis, in the $\phi$ direction. The hydrodynamics (HD) and particle distribution have azimuthal symmetry. A dependence on the azimuthal angle arises only for processes that also depend on the line of sight (Sect.~\ref{sec:maps}). We assume that the flow along the BS is laminar, neglecting mixing of the fluid streamlines or mixing between shocked wind and medium. The particles are followed up to an angle $\theta\sim 135\degr$, or equivalently, until they travel a distance $\sim 5\,R_0$. According to our simulations, more than $50\%$ of the emission is produced within $\theta < 60\degr$ ($\sim R_0$), close to $90\%$ within $\theta < 120\degr$ ($\lesssim 4\,R_0$), and above $99\%$ within $\theta < 135\degr$. Therefore, $\theta\sim 135\degr$ is sufficient to capture most of the emission from the injected particles, while also capturing most of the wind kinetic luminosity available for NT particles. \subsection{Hydrodynamics} \label{subsec:hydrodynamics} The HD of the BS, in particular of the shocked SW, depends on the stellar mass-loss rate, $\dot{M}$, and the wind terminal velocity, $v_\infty$. A stationary approximation is valid as long as $V_\star \ll v_\infty$, as the shocked stellar wind leaves the BS before the environment can change significantly. In addition, the SW shock is adiabatic, that is, the shocked stellar wind leaves the BS before radiating its energy, which enhances its stability.
Such wind shocks are expected to be quite stable for supersonic (non-relativistic) stars \citep{Dgani1996}. We apply the analytical HD prescriptions given by \citet{Christie2016} to characterise the values of the relevant thermodynamical quantities in the shocked SW (which we assume to be co-spatial with the CD), and to obtain tangent and perpendicular vectors to the BS surface at each position of the CD. The thermodynamical quantities in the shocked SW rely on the assumption that the fluid behaves like an ideal gas with adiabatic coefficient $\gamma_\mathrm{ad} = 5/3$, and the application of the Rankine-Hugoniot jump conditions for strong shocks. The magnetic field is obtained by assuming that, at each position, its pressure is a fraction $\zeta_B$ of the thermal pressure: \begin{equation} B(\theta) = \left[\zeta_B \, 8\pi P(\theta) \right]^{1/2}, \quad P(\theta)=\frac{2}{1+\gamma_\mathrm{ad}} \, \rho_\mathrm{w}(R(\theta))\,v_{\mathrm{w},\perp}^{2}(\theta). \label{eq:B} \end{equation} The dependence of the thermodynamic quantities on $\theta$ is shown in Fig.~\ref{fig:termo_sw}. Near the apex, the incoming SW impacts on the BS surface perpendicularly, leading to a large jump in the gas pressure and temperature, while the fluid practically halts (i.e. quasi-stagnates). As $\theta$ increases, the shock becomes more oblique, the temperature and pressure are lower, and the tangential velocity becomes a substantial fraction of $v_\infty$ (in fact, the fluid can accelerate slightly further because of the pressure gradient). \begin{figure} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./termo_sw-eps-converted-to.pdf}} \caption[]{Thermodynamic quantities of the shocked SW: temperature (solid), magnetic field (long-dashed), density (short-dashed), pressure (dot-dashed), and tangential velocity (double dot-dashed). Quantities with sub-index zero are evaluated at the apex position (see text). The distance to the apex along the shock is given by \mbox{$\Delta s(\theta) = \int_ 0^\theta \mathrm{d} s(\theta')$}.} \label{fig:termo_sw} \end{figure} At large distances from the star, the stellar magnetic field is expected to be toroidal and its intensity to drop with the inverse of the distance to the star \citep{Weber1967}. An upper limit on the stellar surface magnetic field can therefore be estimated by assuming that the magnetic field in the BS comes solely from the adiabatic compression of the stellar magnetic field lines. Adopting an Alfv\'en radius $r_\mathrm{A} \sim R_\star$, we get \mbox{$B_\star = 0.25\, B(\theta)\, (R(\theta)/R_\star)\,(v_\infty/v_\mathrm{rot})$} \citep[][and references therein]{Eichler1993}. We note that it is possible that the magnetic fields are strongly amplified or even generated \textit{in situ} \citep[e.g.][and references therein]{Schure2012}, in which case the actual stellar surface field could be well below the upper limits we derive for $B_\star$. \subsection{Non-thermal particles} \label{sec:distribution} The BS produced by the SW consists of hypersonic, non-relativistic, adiabatic shocks, where NT particles of energy $E$ and charge $q$ are likely accelerated via DSA. Assuming diffusion in the Bohm regime, the acceleration timescale can be written as \mbox{$t_\mathrm{acc} \approx \eta_\mathrm{acc} E \left( B \, c \, q \right)^{-1}$~s}, with $\eta_\mathrm{acc}\gg 1$ being the acceleration efficiency \citep{Drury1983}.
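As a numerical illustration of Eq.~\ref{eq:B} and of the acceleration timescale, the short sketch below (ours) evaluates $B$ and $t_\mathrm{acc}$ at the apex for the generic scenario of Table~\ref{table:parameters}, assuming $\zeta_B = 0.1$ and the prescription $\eta_\mathrm{acc} = 2\pi(c/v_\perp)^2$ anticipated from Sect.~\ref{sec:numerical}:
\begin{verbatim}
# Sketch (ours) of Eq. (2) and the DSA timescale at the bow-shock apex.
# zeta_B = 0.1 is an assumption; eta_acc = 2 pi (c/v_perp)^2 (Sect. 2.4).
import numpy as np

C_CGS, Q_ESU = 2.998e10, 4.803e-10       # c [cm/s], elementary charge [esu]

def wind_density(mdot_gs, v_w, r):
    """Unshocked-wind density at distance r from the star [g cm^-3]."""
    return mdot_gs / (4.0 * np.pi * r**2 * v_w)

def B_shocked(zeta_B, rho_w, v_perp, gamma_ad=5.0 / 3.0):
    """B such that B^2/(8 pi) = zeta_B P, with P = 2/(gamma+1) rho v_perp^2."""
    P = 2.0 / (1.0 + gamma_ad) * rho_w * v_perp**2
    return np.sqrt(zeta_B * 8.0 * np.pi * P)

def t_acc(E_erg, B, v_perp):
    """t_acc = eta_acc E / (B c q), with eta_acc = 2 pi (c/v_perp)^2."""
    return 2.0 * np.pi * (C_CGS / v_perp)**2 * E_erg / (B * C_CGS * Q_ESU)

mdot = 1e-6 * 1.989e33 / 3.156e7         # 1e-6 Msun/yr in g/s
v_w, R0 = 2.0e8, 2.2e18                  # cm/s, cm (geometry sketch above)
B0 = B_shocked(0.1, wind_density(mdot, v_w, R0), v_w)
print(f"B(apex) ~ {B0 * 1e6:.0f} uG")
print(f"t_acc(1 TeV) ~ {t_acc(1.602, B0, v_w):.1e} s")
\end{verbatim}
This reproduces the $B \approx 20$~$\mu$G quoted for this scenario in Sect.~\ref{sec:one_vs_multi}.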
The energy distribution of the accelerated particles at the injection position is taken as \mbox{$Q(E)\propto E^{-p} \exp{(-E/E_\mathrm{cut})}$}, where $p$ is the spectral index of the particle energy distribution and $E_\mathrm{cut}$ is the cut-off energy, obtained by equating $t_\mathrm{acc}$ with the minimum of the cooling and escape timescales. The canonical spectral index for DSA in a strong shock is $p=2$. Electrons at each cell of the BS cool through various processes: adiabatic losses (work done expanding with the thermal fluid), Bremsstrahlung, synchrotron, and inverse Compton (IC) interactions with the ambient radiation fields, namely IR photons from dust emission and ultraviolet (UV) photons from the star. An example of the relevant timescales is shown in Fig.~\ref{fig:tiempos} for the scenario discussed in Sect.~\ref{sec:one_vs_multi}. Adiabatic losses are not dominant with respect to escape losses in this scenario because of the relatively smooth density evolution (see Fig.~\ref{fig:termo_sw}), although they are the dominant cooling process for electrons with $E_\mathrm{e} \lesssim 1$~TeV. For near-equipartition magnetic field values ($\zeta_B \sim 1$), radiative losses are relevant for electrons with $E_\mathrm{e} \gtrsim 100$~GeV, as synchrotron dominates their cooling. Otherwise, for modest magnetic field values ($\zeta_B \ll 1$) escape losses dominate: convection for electrons with $E_\mathrm{e} \lesssim 1$~TeV, and diffusion for $E_\mathrm{e} \gtrsim 1$~TeV. For protons, on the other hand, the cooling processes taken into account are proton-proton inelastic collisions (p-p) and adiabatic expansion. However, protons do not suffer significant energy losses (Fig.~\ref{fig:tiempos}) and escape completely dominates: convection for protons with $E_\mathrm{p} \lesssim 1$~TeV, and diffusion for $E_\mathrm{p} \gtrsim 1$~TeV. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./t_e_sw-eps-converted-to.pdf}} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./t_p_sw-eps-converted-to.pdf}} \caption[]{Characteristic cooling and acceleration times for electrons (top) and protons (bottom) at a position $\theta = 30\degr$ for the generic scenario presented in Table~\ref{table:parameters}. Solid lines are used for cooling processes whereas long dashed lines are used for escape processes. The dotted red line represents the cell convection time (see text in Sect.~\ref{sec:numerical}).} \label{fig:tiempos} \end{figure} When convection dominates the loss time, particles move along the BS with an energy distribution that keeps the same spectral index as the injected distribution. Adiabatic losses slightly soften the particle energy distribution as the particles stream along the emitter. Additionally, for high-energy electrons with $E_\mathrm{e} \gtrsim 1$~TeV a combination of synchrotron and IC cooling in the Thomson regime can also contribute to a spectral softening. Figure~\ref{fig:dist} shows the described behaviour for different linear emitters, as well as the total electron distribution and proton distribution at the BS, also calculated for the generic scenario presented in Sect.~\ref{sec:one_vs_multi}. The IC cooling timescale for each photon field was calculated using the formulae given by \citet{Khangulyan2014}, suitable for black-body-like spectra.
For the case of the stellar UV photon field, we consider that the star emits as a black body with a temperature $T_\star \sim 40\,000$~K and a dilution factor $\kappa_\star = \left[ R_\star/(2 R(\theta)) \right]^2$, where $R(\theta)$ is the distance from the star to the BS at the position $\theta$. The dilution factor is defined by \citet{Khangulyan2014} as the ratio of the radiation energy density in the emitter to the radiation energy density within a thermal gas. The interaction angle for the scattering process is calculated as the angle between the direction of motion of the emitting electron (which is the line of sight, as the emission is beamed in the direction of the emitting electron) and the radial direction of the stellar photon; naturally, this angle varies with $\theta$. For the case of the IR photon field produced by the dust, we assume isotropy within the NT emitter. Its spectrum is well-approximated with a Planck law of temperature $T_\mathrm{IR} \sim 100$~K \citep{Draine2011, Kobulnicky2017}. The observational evidence shows that the emitting dust usually surrounds the cavity produced by the shocked SW \citep{Kobulnicky2017}. Consequently, if the dust were optically thick, it would be appropriate to set $\kappa_\mathrm{IR}=1$, which was adopted by \citet{delValle2012}. However, the dust is not strictly optically thick.\footnote{The assumption that the dust emits as a black body leads to an overestimation of the observed IR emission by several orders of magnitude, as $\sigma T_\mathrm{IR}^4 R_0^2 \gg L_\mathrm{IR}$.} This issue has been addressed by \citet{DeBecker2017} through the introduction of a ``normalization factor'' (i.e. a grey body). This is equivalent to setting a proper ``dilution factor'' in the formalism given by \citet{Khangulyan2014}. Considering $U_\mathrm{BB} = 4 \,(\sigma/c)\, T_\mathrm{IR}^4$, the dilution factor of the IR photon field along the BS, $\kappa_\mathrm{IR}(\theta) = U_\mathrm{IR}(\theta)/U_\mathrm{BB}$, can be approximated as \begin{equation} \kappa_\mathrm{IR}(\theta) \approx \frac{L_\mathrm{IR}}{4 \pi \sigma T_\mathrm{IR}^4 R(\theta)^2}, \end{equation} where we have considered $U_\mathrm{IR} \approx L_\mathrm{IR}/[\pi \, R(\theta)^2 \, c]$. As the extended IR radiation is produced in a region of size $\sim R_0$ surrounding the NT emitter, the above expression for $U_\mathrm{IR}$ should be valid, at least, for a region of size $\sim R_0$ centred on the apex, and therefore for the brightest portion of the NT emitter. We note that far away from a point-like source (such as the star) the energy density is $U = L/(4\pi r^2\,c)$, so in the outer regions of the BS we probably overestimate this value. However, if the plasma is not completely optically thin ($\tau \lesssim 1$), part of the more internal emission should be reprocessed and isotropised, so we expect that the above expressions serve as a decent approximation even in the outer regions of the NT emitter.
Moreover, the choice of $R(\theta)$ in the denominator instead of simply $R_0$ is a phenomenological (not particularly physically motivated) approach to reproduce the observed decay of the IR brightness away from the apex of the BS.\footnote{With this prescription the energy density at a position $\theta$ in the BS is a factor $(R_0/R(\theta))^2$ smaller than at the apex.} We emphasise that the detailed modelling of the (extended) IR field has a negligible impact on the NT emission, considering that the latter is produced mostly at distances $\lesssim R_0$ from the apex (Sect.~2.1). As we show in Sect.~\ref{sec:results}, the inclusion of a dilution factor for the IR photon field is enough to account for the incompatibility between some of the previous predictions and the upper limits set by recent observations in the X-ray and $\gamma$-ray energy bands. We note that even though the energy density of the stellar UV field exceeds the energy density of the dust IR field, cooling due to IC-IR dominates over IC-star for electrons with energies $E_\mathrm{e} \gtrsim 200$~GeV. This happens because, for the latter, IC takes place much deeper in the Klein-Nishina regime. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./diste_sw_bin-eps-converted-to.pdf} } \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./distp_sw_bin-eps-converted-to.pdf} } \caption{Electron (top) and proton (bottom) energy distribution for the generic case presented in Table~\ref{table:parameters}. The colour scale represents five different sections (I1--I5) of the emitter that correspond to intervals of length $\Delta \theta = 0.15 \, \pi$, starting with $\theta \in [0,0.15\,\pi)$ for I1, and so on. The black dashed line is the total particle distribution (i.e. the sum of all curves).} \label{fig:dist} \end{figure} The normalisation of the evolved particle distribution depends on the power injected perpendicularly to the shock surface, $L_{\mathrm{w},\perp} \approx 50$\% of the total wind power, and on the fraction of that luminosity that goes to NT particles, $f_\mathrm{NT}$, which is a free parameter of the model. Given that there is no tight constraint on how the energy is distributed between electrons and protons, we consider two independent parameters $f_\mathrm{NT,e}$ and $f_\mathrm{NT,p}$. It is worth noting that ISM-termination shocks of massive stars may transfer up to $\sim 10\%$ of their energy into relativistic protons \citep{Aharonian2018}. \subsection{Numerical treatment} \label{sec:numerical} We apply the following procedure to obtain the NT particle distribution: \begin{enumerate} \item{We obtain the location of the CD in the $XY$-plane at discrete points $\left( x_i(\theta_i),y_i(\theta_i) \right)$. We characterise the position of the particles in their trajectories along the BS region through 1D cells (or linear-emitter segments) located at those points.} \item{We compute the thermodynamic variables at each position $i$ in the trajectory (Sect.~\ref{subsec:hydrodynamics}).} \item{The wind fluid elements reach the RS at different locations, from where they are convected along the BS. We simulate the different trajectories by taking different values of $i_\mathrm{min}$: the case in which $i_\mathrm{min} = 1$ corresponds to a line starting at the apex of the BS, whereas $i_\mathrm{min} = 2$ corresponds to a line that starts slightly further along the BS, and so on (Fig.~\ref{fig:lines}).
The axisymmetry allows us to compute the trajectories only for the 1D emitters with $y \geq 0$.} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{./linear-emitters_BS.png} \caption{Illustration of the model for the spatial distribution of NT particles in the BS region. On the left side we show different linear emitters, labelled $j=1,2,3$ according to the incoming stellar-wind fluid line. On the right side we represent three cells of a bunch of linear emitters, obtained by summing at each location the contributions from the different linear emitters (see Sect.~\ref{sec:numerical}).} \label{fig:lines} \end{figure} \item{We calculate the power available to accelerate NT particles (either protons or electrons) at each position $i$ as \begin{equation} \label{eq:L_inj} \Delta L_\mathrm{NT}(\theta_i) = f_\mathrm{NT}\;L_{\mathrm{w},\perp}(\theta_i)\;\frac{\Delta \Omega(\theta_i)}{4 \pi}, \end{equation} where $L_{\mathrm{w},\perp} = 0.5\;\dot{M}\;{v^2_{\mathrm{w},\perp}}$, and $\Delta \Omega = \sin \theta\; \Delta \theta\; \Delta \phi$ is the solid angle subtended by the cell. The quantity $\Delta L_\mathrm{NT}$ can be considered as a discrete version of a differential luminosity per surface element.} \item{Relativistic particles are injected in only one cell per linear emitter (that corresponding to $i_\mathrm{min}$), and each linear emitter is independent from the rest. For a given linear emitter, particles are injected at a location $\theta_i$ with an energy distribution \mbox{$Q(E,\theta_i) = Q_0(\theta_i) E^{-p} \exp{(-E/E_\mathrm{cut}(\theta_i))}$}. The normalisation constant $Q_0$ is set by the condition \mbox{$\int E \, Q(E,\theta_i) \, \mathrm{d}E = \Delta L_\mathrm{NT}(\theta_i)$}, and the particle maximum energy $E_\mathrm{cut}(\theta_i)$ by equating the acceleration time to the characteristic loss time, which takes into account both cooling and escape losses. The expressions used to calculate the different timescales are given below. The particle acceleration timescale is \begin{equation}\label{eq:t_ac} t_\mathrm{acc} = \eta_\mathrm{acc}(\theta) E_{e,p}/(B(\theta) \, c \, q) \; \mathrm{s}, \end{equation} \noindent where the acceleration efficiency is $\eta_\mathrm{acc}(\theta) = 2 \pi (c/v_\perp(\theta))^2$. The characteristic escape timescale of the particles is given by \begin{align}\label{eq:t_esc} t_\mathrm{esc} &= {\left(t_\mathrm{conv}^{-1} + t_\mathrm{diff}^{-1} \right)}^{-1} \, \mathrm{s}\\ t_\mathrm{conv} &= R(\theta) / v_\parallel(\theta) \, \mathrm{s} \\ t_\mathrm{diff} &= H(\theta)^2/(2 D_\mathrm{Bohm}) \, \mathrm{s}, \end{align} \noindent where we consider a characteristic convection timescale, and diffusion in the Bohm regime such that $D_\mathrm{Bohm} = r_\mathrm{g}c/3$, where $r_\mathrm{g} = E/(q\,B)$ is the gyroradius of the particle.
Denoting by $n_\mathrm{ssw}$ the particle number density in the shocked SW, the proton and electron energy losses are \begin{align}\label{eq:t_pp} t_\mathrm{pp} &= 10^{15} / n_\mathrm{ssw}(\theta) \, \mathrm{s} \\ t_\mathrm{adi} &= \frac{3}{v_\parallel(\theta)} \frac{\mathrm{d}s(\theta)}{\mathrm{d}(-\log{\rho(\theta)})} \, \mathrm{s}\\ t_\mathrm{cool,p} &= {\left(t_\mathrm{pp}^{-1} + t_\mathrm{adi}^{-1} \right)}^{-1} \, \mathrm{s} \end{align} and \begin{align}\label{eq:t_rad} t_\mathrm{br} &= 10^{15} / n_\mathrm{ssw}(\theta) \, \mathrm{s} \\ t_\mathrm{sy} &= \left[ 1.6 \times 10^{-3} B_\parallel(\theta)^2 E_e \right]^{-1} \, \mathrm{s} \\ t_\mathrm{IC} &= {\left(t_\mathrm{IC,\star}^{-1} + t_\mathrm{IC,IR}^{-1} \right)}^{-1} \, \mathrm{s} \\ t_\mathrm{cool,e} &= {\left(t_\mathrm{br}^{-1} + t_\mathrm{sy}^{-1} + t_\mathrm{IC}^{-1} + t_\mathrm{adi}^{-1} \right)}^{-1} \, \mathrm{s}, \end{align} respectively.} \item{At the injection cell, the steady-state particle distribution is approximated as \mbox{$N_0(E,i_\mathrm{min}) \approx Q(E) \times \min{\left(t_\mathrm{cell},t_\mathrm{cool} \right)}$}, where \mbox{$t_\mathrm{cell} = s_\mathrm{cell}(\theta)/v_\parallel(\theta)$} is the cell convection time (i.e. the time particles spend in each cell). By the time the relativistic particles reach the next cell in their trajectory, their energy has diminished from $E$ to $E'$, but the total number of particles must be conserved so that $N(E) \, \mathrm{d}E = N(E') \, \mathrm{d}E'$. From this condition we obtain the evolved version of the injected distribution as \begin{equation} \label{eq:Ne_i} N(E',i+1) = N(E,i)\frac{\lvert \dot{E}(E,i)\rvert}{\lvert \dot{E}(E,i+1)\rvert}, \end{equation} where $\dot{E}(E,i) = E/t_\mathrm{cool}(E,i)$ is the cooling rate for particles of energy $E$ at the position $\theta_i$. The energy $E'$ is given by the condition $t_\mathrm{cell} = \int_{E'}^{E} \mathrm{d}\tilde{E}/\dot{E}(\tilde{E},i)$ (a short numerical sketch of this cell-to-cell scheme is given at the beginning of Sect.~\ref{sec:nt_emission}).} \item{We repeat the same procedure varying $i_\mathrm{min}$, which represents different linear emitters in the $XY$-plane.} \item{We obtain the total steady-state particle energy distribution at each cell, $N_\mathrm{tot}(E,i)$, by summing the distributions $N(E,i)$ obtained for each linear emitter. The result is what we call a bunch of linear emitters, which represents a wedge of width $\Delta \phi$ from the total BS structure (calculated at a fixed $\phi$).} \item{We can obtain the evolved particle energy distribution for each position $(x_i,y_i,z_i)$ of the BS surface in a 3D geometry. We achieve this by distributing -- via a rotation -- the bunch of linear emitters around the axis given by the direction of motion of the star. By doing so, we end up with many bunches of linear emitters, each with a different value of the azimuthal angle $\phi$. The particle energy distribution is the same in all the bunches because of the azimuthal symmetry. We note that the normalisation of the particle energy distribution already takes into account the total number of bunches, as for $m_\phi$ bunches we take $\Delta \phi = 2\pi / m_\phi$ in Eq.~\ref{eq:L_inj}.} \end{enumerate} \subsection{Non-thermal emission}\label{sec:nt_emission} Once the distribution of particles at each cell $(x_i,y_i,z_i)$ is known, it is possible to calculate the emission at each cell by the previously mentioned radiative processes, and also to obtain the total emission from the modelled region.
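To illustrate steps 5 and 6 of the procedure of Sect.~\ref{sec:numerical}, the following simplified sketch (ours; the numbers are placeholders rather than fitted values, and only synchrotron cooling is retained so that $E'$ has a closed form) injects a power law in one cell and convects it downstream, applying the number-conserving transformation of Eq.~\ref{eq:Ne_i}:
\begin{verbatim}
# Sketch (ours) of steps 5-6: inject Q(E) ~ E^-p exp(-E/Ecut) in one cell and
# convect it downstream. Only synchrotron cooling is kept (closed-form E');
# all numbers are placeholders, not fitted values. Energies in erg.
import numpy as np

P_IDX, E_CUT, T_CELL = 2.0, 10.0, 2.0e10     # index, cut-off, cell time (assumed)
B_CELLS = [1.0e-4, 7.0e-5, 5.0e-5]           # B drops along the BS [G]

def edot(E, B):
    """|dE/dt| for synchrotron only, from t_sy = [1.6e-3 B^2 E]^-1."""
    return 1.6e-3 * B**2 * E**2

def evolved_energy(E, B, dt):
    """Solution of dE/dt = -b E^2 after dt, with b = 1.6e-3 B^2."""
    b = 1.6e-3 * B**2
    return E / (1.0 + b * E * dt)

E = np.logspace(-6, 1, 400)                          # energy grid, erg
Q = E**(-P_IDX) * np.exp(-E / E_CUT)                 # un-normalised injection
# step 6: N0 ~ Q min(t_cell, t_cool), then number-conserving cell-to-cell map
N = Q * np.minimum(T_CELL, E / edot(E, B_CELLS[0]))
for B_prev, B_next in zip(B_CELLS[:-1], B_CELLS[1:]):
    E_new = evolved_energy(E, B_prev, T_CELL)        # E -> E' (monotonic)
    # N(E') = N(E) |Edot(E, i)| / |Edot(E', i+1)|; re-grid the result onto E
    N = np.interp(E, E_new, N * edot(E, B_prev), right=0.0) / edot(E, B_next)
print(f"E = 10 erg cools to {evolved_energy(10.0, B_CELLS[0], T_CELL):.1f} erg "
      "in one cell; the tail above that value is emptied")
\end{verbatim}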
Radio, X-ray, and $\gamma$-ray absorption processes are not relevant given the conditions (size, density, and target fields) of the BS region. The relevant radiative processes are: IC \citep{Khangulyan2014}, synchrotron, p-p, and relativistic Bremsstrahlung \citep[see e.g.][and references therein]{Bosch-Ramon2009}. Except for the IC with the stellar UV photons, which depends on the star-emitter-observer geometry, the other radiative processes can be regarded as isotropic, given that the NT particle population is isotropic due to an irregular magnetic field component in the shocked gas. Nevertheless, the presence of an ordered $B$-component leads to some degree of anisotropy in the synchrotron emission. After calculating the emission at each cell, we can take into account absorption effects along the radiation path if necessary (e.g. free-free absorption of radio photons in the unshocked stellar wind). \subsection{Synthetic radio-emission maps}\label{sec:synthetic_maps} The total spectral energy distribution (SED) is obtained as the sum of the emission from all the bunches of linear emitters. However, the SED does not exhaust the information provided by the model, which can also be contrasted with data from spatially resolved radio observations. Therefore, synthetic radio emission maps provide valuable complementary information on the morphology predicted by our model, which can help to interpret observations such as the ones reported by \citet{Benaglia2010}. To produce synthetic radio maps at a given frequency, we first project the 3D emitting structure onto the plane of the sky, obtaining a 2D distribution of flux (Fig.~\ref{fig:radio_maps_generic}). We then cover this plane by centring at each location of the map an elliptic Gaussian that simulates the synthesised (clean) beam. If the observational synthesised beam has an angular size $a \times b$, each Gaussian has $\sigma_x=a/\sqrt{8\log2}$ and $\sigma_y=b/\sqrt{8\log2}$. At each pointing we sum the emission from every location weighted by a Gaussian function of the offset between its projected position and the pointing centre: $\exp{ \left[ -(\Delta x^2/2\sigma_x^2) - (\Delta y^2/2\sigma_y^2) \right] }$. The result obtained is the corresponding flux per beam at each position. \begin{figure*} \centering \includegraphics[width=0.48\textwidth, angle=270]{./maps_pre_post_conv_1GHz.png} \caption[]{Projected emission maps before (top) and after (bottom) convolution with a Gaussian beam with $\sigma_x = \sigma_y = 12\arcsec$, calculated for the parameters of the generic scenario given in Table~\ref{table:parameters} but for different inclinations $i$. The pink cross marks the position of the projected stagnation point, $R_0 \times \sin{i}$. The contours in the bottom plots are at 0.05, 0.1, 0.2, 0.4, 0.8, and 1.6~mJy~beam$^{-1}$.} \label{fig:radio_maps_generic} \end{figure*} As shown in Fig.~\ref{fig:radio_maps_generic}, for observing angles $i \sim 90\degr$ the typical coma-shaped structure of the BS arises \citep{Peri2012, Kobulnicky2016}. On the other hand, for observing angles $i \lesssim 45\degr$ the BS shape is more circular\footnote{This is consistent with the statement by \citet{Kobulnicky2016} that BSs with $i < 65\degr$ are unlikely to be identified as such.} and, additionally, the emission gets very diluted spatially, so it would be difficult to detect such BSs.
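The smoothing step can be implemented in a few lines; the sketch below (ours; the toy input map and all the names are illustrative) applies the Gaussian-beam weighting given above by brute force:
\begin{verbatim}
# Sketch (ours) of the beam-smoothing step; toy input map, illustrative names.
import numpy as np

def beam_sigma(fwhm_arcsec):
    """sigma = FWHM / sqrt(8 ln 2), as in the text."""
    return fwhm_arcsec / np.sqrt(8.0 * np.log(2.0))

def convolve_map(flux, pix_arcsec, fwhm_x, fwhm_y):
    """At each pointing, sum all pixels weighted by the elliptic Gaussian
    exp[-(dx^2/2sx^2) - (dy^2/2sy^2)]; returns flux per beam. O(N^4), demo."""
    sx, sy = beam_sigma(fwhm_x), beam_sigma(fwhm_y)
    ny, nx = flux.shape
    y, x = np.mgrid[0:ny, 0:nx] * pix_arcsec
    out = np.empty_like(flux)
    for j in range(ny):
        for i in range(nx):
            w = np.exp(-(x - x[j, i])**2 / (2 * sx**2)
                       - (y - y[j, i])**2 / (2 * sy**2))
            out[j, i] = np.sum(w * flux)
    return out

# toy projected map: a thin bright arc mimicking the coma shape at i ~ 90 deg
img = np.zeros((40, 40))
jj, ii = np.mgrid[0:40, 0:40]
img[np.abs(np.hypot(ii - 20.0, jj - 32.0) - 12.0) < 1.0] = 1.0  # mJy / pixel
smooth = convolve_map(img, pix_arcsec=3.0, fwhm_x=28.0, fwhm_y=28.0)
print(f"peak: {img.max():.2f} -> {smooth.max():.2f} (flux per beam)")
\end{verbatim}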
We also note that the position of $R_{0,\mathrm{proj}}$ lies closer to the star than the position of the emission maximum; in fact, the emission maximum is always coincident with $R_0$ (in angular units) for values of $i > 45\degr$. Therefore, when measuring the value of $R_0$ from observational radio emission maps, a factor $\sin{i}$ should not be included. \section{Results} \label{sec:results} First, we compute the particle energy distribution and the NT emission for a generic scenario using a one-zone model approximation and the extended emission model. We then present analytical estimates of the dependence of the NT emission on the different system parameters. Finally, we apply our full emission model to the object \object{BD$+43^{\circ}3654$}. \subsection{One-zone versus multi-zone model}\label{sec:one_vs_multi} The basics of the one-zone model approximation are addressed by \citet{delValle2012}. Here we review only two aspects: the characteristic convection timescale and the IR photon field model. Previous one-zone models estimate the convection timescale as $t_\mathrm{conv} \sim H_0/v_\mathrm{w}$; however, the shocked fluid takes time to re-accelerate once it impacts near the stagnation point (Fig.~\ref{fig:termo_sw}). Moreover, the emitting area is more extended, of size similar to $R_0$. We consider that a better estimate of the fluid convection time is $t_\mathrm{conv} \sim R_0/c_\mathrm{s}$, where $c_\mathrm{s}$ is the sound speed in the shocked SW, $c_\mathrm{s} \approx \sqrt{\gamma_\mathrm{ad} P/\rho} \approx v_\mathrm{w}/\sqrt{8}$. Regarding the modelling of the IR photon field, as discussed in Sect.~\ref{sec:model}, the assumption of a black-body-emitting surface embedding the emitter is not valid. We consider instead a radiation field approximated as a thermal (black-body-like) spectrum with an energy density of $U_\mathrm{IR} \approx L_\mathrm{IR}/(\pi {R_0}^2 c)$. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./SED_NT_onezone_generic-eps-converted-to.pdf}} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./SED_NT_extended_generic-eps-converted-to.pdf}} \caption[]{Comparison of the SEDs of the generic scenario with the parameters specified in Table~\ref{table:parameters} using a one-zone model (top) and a multi-zone model (bottom). The ten-year sensitivity curve of the \textit{Fermi} satellite is taken from \texttt{http://fermi.gsfc.nasa.gov}, and that of the 100-h CTA from \citet{Funk2013}.} \label{fig:sed_onezone_vs_extended} \end{figure} We assume that $10\%$ of the available injection luminosity goes into NT particles, equally distributed between protons and electrons, that is, $f_\mathrm{NT,e} = f_\mathrm{NT,p} = 0.05$. We also fix the magnetic field by adopting an intermediate value, $\zeta_B = 0.1$, which yields $B \approx 20$~$\mu$G near the apex of the BS; the full list of the selected parameters for this generic scenario is given in Table~\ref{table:parameters}. In Fig.~\ref{fig:sed_onezone_vs_extended} we show a comparison of the radiative outputs between the one-zone and the multi-zone models. The one-zone model emission estimates agree with the extended model estimates within a factor of two to three, the largest discrepancies being found in the emission peaks related to the particle maximum energy, and in the IC with the stellar UV photon field.
As the anisotropic IC depends on the interaction angle, which varies along the BS structure, we treated IC-star as isotropic in the one-zone model: adopting a single value of $i$ would be arbitrary, and its impact on the result is large. For the extended model, using a value of $i = 90\degr$ is representative of the produced emission within a factor of two, as shown in Fig.~\ref{fig:SEDs_IC_i}. \begin{table} \caption{Parameters of the generic system we modelled and the system \object{BD$+43^{\circ}3654$}.} \label{table:parameters} \centering \begin{tabular}{lccc} \hline \hline \textbf{Parameter} & \textbf{Generic} & \textbf{BD$+43^{\circ}3654$} & \textbf{Ref.} \\ \hline $d$ [kpc] & $1.0$ & $1.32$ & 1 \\ $i$ & 90$\degr$ & $75\degr$ & - \\ $R_{0,\mathrm{proj}}$ ['] & - & $3.2$ & 2 \\ $L_\star$ [erg s$^{-1}$] & $2\times10^{39}$ & $3.5\times10^{39}$ & 2 \\ $T_\star$ [K] & $40\,000$ & $40\,700$ & 1 \\ $R_\star$ [$R_\sun$] & $15.0$ & $19.0$ & 1 \\ $\dot{M}_\star$ [$M_\sun$ yr$^{-1}$] & $1\times 10^{-6}$ & $9\times 10^{-6}$ & 3,4 \\ $v_{\infty}$ [km s$^{-1}$] & $2000$ & $2300$ & 5,6 \\ \cline{1-4} $v_\star$ [km s$^{-1}$] & $30$ & $40$ & 6 \\ $T_\mathrm{IR}$ [K] & $100$ & $100$ & 1 \\ $n_\mathrm{ISM}$ [cm$^{-3}$] & $10$ & $15$ & 2,6 \\ $T_\mathrm{ISM}$ [K] & $\sim 0$ & $8000$ & 2 \\ \hline $L_{\mathrm{w},\perp}$ [erg s$^{-1}$] & $7\times10^{35}$ & $8.9 \times 10^{36}$ & - \\ $f_\mathrm{NT,p}$ & $0.05$ & $0.5$ & - \\ $f_\mathrm{NT,e}$ & $0.05$ & $0.004$, $0.16$ & - \\ $\zeta_B$ & $0.1$ & $0.01$, $1$ & 4 \\ $p$ & $2.0$ & $2.2$ & - \\ \hline \end{tabular} \tablefoot{Regarding BD$+43^{\circ}3654$, the values adopted for various parameters are intermediate between the values given by other authors. Details of the selection criteria can be found in the text.} \tablebib{(1) \citet{Kobulnicky2017}; (2) \citet{Kobulnicky2018}; (3) \citet{Peri2014}; (4) \citet{delValle2012}; (5) \citet{Benaglia2010}; (6) \citet{Brookes2016}.} \end{table} For both models the available luminosity for NT particles in the BS is \mbox{$L_{\mathrm{w},\perp} \approx 7 \times 10^{35}$~erg~s$^{-1}$}, from which only \mbox{$L_\mathrm{NT} \approx 3 \times 10^{34}$~erg~s$^{-1}$} goes to each particle species. We show the radiated luminosities for both models in Table \ref{table:luminosities}, distinguishing the different contributions. Electrons radiate only $\sim 1$\% of their power, while for protons the radiated energy fraction is negligible; the remaining power escapes with the particles as cosmic rays into the ISM.
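As a quick arithmetic check of the convection-time revision introduced at the beginning of this subsection, using the generic-scenario numbers ($R_0 \approx 2.2\times10^{18}$~cm, $H_0 \approx 0.2\,R_0$):
\begin{verbatim}
# Quick check (ours) of the two convection-time estimates, generic scenario.
import numpy as np

R0, v_w = 2.2e18, 2.0e8          # cm, cm/s (illustrative)
H0 = 0.2 * R0                    # shocked-layer width near the apex
c_s = v_w / np.sqrt(8.0)         # sound speed in the shocked SW (see text)

t_old = H0 / v_w                 # previous one-zone estimate
t_new = R0 / c_s                 # estimate adopted in this work
print(f"H0/v_w = {t_old:.1e} s,  R0/c_s = {t_new:.1e} s (x{t_new/t_old:.0f})")
\end{verbatim}
The revised estimate keeps the particles in the emitter roughly an order of magnitude longer, which directly rescales the one-zone emission.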
\begin{table} \caption{Luminosities produced by the different contributors of each model.} \label{table:luminosities} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{lcccc} \hline \hline \textbf{Luminosity} & \multicolumn{2}{c}{\textbf{One-zone model}} & \multicolumn{2}{c}{\textbf{Extended model}} \\ \hline & Value & \% of $L_\mathrm{T}$ & Value & \% of $L_\mathrm{T}$ \\ \hline $L_\mathrm{sy}$ [erg~s$^{-1}$] & $6.2\times 10^{31}$ & $44.5$ & $2.5 \times 10^{32}$ & $81.8$ \\ $L_\mathrm{Br}$ [erg~s$^{-1}$] & $1.5\times 10^{28}$ & $\ll 1$ & $2.0\times 10^{28}$ & $\ll 1$ \\ $L_\mathrm{IC,dust}$ [erg~s$^{-1}$] & $1.4\times 10^{31}$ & $9.9$ & $2.7 \times 10^{31}$ & $9.0$ \\ $L_\mathrm{IC,\star}$ [erg~s$^{-1}$] & $6.4\times 10^{31}$ & $45.6$ & $2.8\times 10^{31}$ & $9.2$ \\ $L_\mathrm{pp}$ [erg~s$^{-1}$] & $8.0\times 10^{26}$ & $\ll 1$ & $1.0\times10^{27}$ & $\ll 1$ \\ \hline $L_\mathrm{T}$ [erg~s$^{-1}$] & \multicolumn{2}{c}{$1.85\times 10^{32}$} & \multicolumn{2}{c}{$3.0\times 10^{32}$} \\ \hline \end{tabular}} \end{table} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./SED_IC_generic-eps-converted-to.pdf}} \caption[]{Comparison of $L_\mathrm{IC,star}$ for different values of the observing angle $i$. The simulations are calculated for the generic scenario with the parameters specified in Table~\ref{table:parameters}.} \label{fig:SEDs_IC_i} \end{figure} \subsection{Analytical estimates on emissivity scaling}\label{sec:analytical_estimates} As we have shown in Sect.~\ref{sec:one_vs_multi}, the one-zone approximation gives a good estimate of the emission obtained with a more complex model if one takes into account the modifications we propose. Thus, we can rely on the one-zone formalism to make simple estimates of how different system parameters affect the NT radiative output. Qualitatively, the NT radio luminosity depends on the ratio $t_\mathrm{conv}/t_\mathrm{sy}$, while the $\gamma$-ray luminosity depends on the ratio $t_\mathrm{conv}/t_\mathrm{IC}$. IC-star dominates the SED for photon energies $\epsilon \lesssim 1$~GeV, while IC-IR dominates the SED above $\gtrsim 10$~GeV. In the X-ray energy band, synchrotron dominates the NT emission, but a competitive or even dominant thermal contribution is possible\footnote{This statement is based on Eq.~24 from \citet{Christie2016}, used to estimate the thermal emission from the BS (not shown). However, the simple HD model we use for the BS is not reliable for the calculation of its thermal contribution.}. Quantitatively, on the one hand, $t_\mathrm{conv} \sim R_0/c_\mathrm{s} \propto \dot{M}^{0.5}\, v_\mathrm{w}^{-0.5}\, n_\mathrm{ISM}^{-0.5}\, v_\star^{-1}$ and $t_\mathrm{sy} \propto B^{-2} \propto \left( \dot{M}\, v_\mathrm{w}\, R_0^{-2} \right)^{-1} \propto n_\mathrm{ISM}^{-1}\,v_\star^{-2}$. Thus, $L_\mathrm{sy} \sim L_\mathrm{NT,e} \times (t_\mathrm{conv}/t_\mathrm{sy}) \propto \dot{M}^{1.5}\,v_\mathrm{w}^{1.5}\,n_\mathrm{ISM}^{0.5}\,v_\star$, where we considered $L_\mathrm{NT,e} \propto \dot{M} \, v_\mathrm{w}^2$. As expected, high-velocity stars moving in a dense medium are good candidates, although the most important quantities are intrinsic to the star, and are those related to the stellar wind: the denser and faster this wind is, the better are the chances that its BS will be detectable at radio frequencies. In conclusion, in the search for synchrotron-emitting stellar BSs, it is more important to take into account the individual properties of the star than its runaway conditions.
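As a numerical illustration of this synchrotron scaling (ours; prefactors are dropped, so only the ratio between the two hypothetical stars is meaningful):
\begin{verbatim}
# Illustration (ours) of L_sy ~ Mdot^1.5 v_w^1.5 n_ISM^0.5 v_star; prefactors
# are dropped, so only the ratio between the two hypothetical stars matters.
def L_sy_rel(mdot_msun_yr, v_w_kms, n_ism_cm3, v_star_kms):
    return mdot_msun_yr**1.5 * v_w_kms**1.5 * n_ism_cm3**0.5 * v_star_kms

strong_wind = L_sy_rel(9e-6, 2300.0, 1.0, 40.0)   # powerful wind, thin medium
dense_ism   = L_sy_rel(1e-6, 2000.0, 30.0, 40.0)  # weak wind, dense medium
print(f"radio output ratio ~ {strong_wind / dense_ism:.0f}")
\end{verbatim}
Despite a 30 times denser ambient medium for the second star, the star with the stronger wind remains several times brighter in radio.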
We note that the scaling derived above is not entirely valid for electrons with $E_\mathrm{e} > 100$~GeV if $\zeta_B\sim 1$, as in that case synchrotron losses might dominate over convection losses close to the apex, depending on the system parameters. On the other hand, the $\gamma$-ray emission at energies above 10~GeV is dominated by IC with the dust IR photon field, as already shown by \citet{delValle2012}. Assuming that $L_\mathrm{IR} \propto L_\star\,n_\mathrm{ISM}$ and $U_\mathrm{IR} \propto L_\mathrm{IR} R_0^{-2}$, which seems plausible and also has some empirical support \citep[][although with a high dispersion]{Kobulnicky2017}, we have $L_\mathrm{IC,IR} \propto L_\mathrm{NT,e} \times (t_\mathrm{conv}/t_\mathrm{IC,IR}) \propto \dot{M}^{1.5}\,v_\mathrm{w}^{0.5}\,n_\mathrm{ISM}^{1.5}\,v_\star\,L_\star$. Similar to the synchrotron emission, the most decisive factor determining $L_{\rm IC,IR}$ is the mass-loss rate of the stellar wind. The above scaling condition is valid if $t_\mathrm{IC} > t_\mathrm{conv}$ (Fig.~\ref{fig:tiempos}). The case for IC cooling dominated by stellar UV photons is similar: $t_\mathrm{IC,\star}^{-1} \propto U_\star \propto L_\star R_0^{-2}$, so that $L_\mathrm{IC,\star} \propto L_\mathrm{NT,e} (t_\mathrm{conv}/t_\mathrm{IC,\star}) \propto \dot{M}^{1.5}\,v_\mathrm{w}^{0.5} \,n_\mathrm{ISM}^{0.5}\,v_\star L_\star$. If we further assume that $L_\star \propto \dot{M}^{0.5} \, v_\mathrm{w}^{0.5}$ \citep[e.g. Fig.~10 from][]{Muijres2012}, then we obtain $L_\gamma \propto \dot{M}^{2} \, v_\mathrm{w} \, n_\mathrm{ISM}^{0.5} \,v_\star$. As the low-frequency radio emission is purely of synchrotron origin, we get \begin{equation}\label{eq:ratio_radio_gamma} \frac{L_\mathrm{radio}}{L_\gamma} \propto \frac{1}{n_\mathrm{ISM}}\, \left( \frac{v_\mathrm{w}}{\dot{M}} \right)^{0.5}\,, \end{equation} and therefore, the best radio emitting candidates are not necessarily the best $\gamma$-ray emitting candidates, as the former prefer faster over denser winds, and the latter have a stronger dependence on the ambient density. In both cases the dependence on $v_\star$ is the same. Finally, we recall that electrons radiate only a small fraction ($\sim 1$\%) of their power in the spatial scales considered in the model (Sect.~\ref{sec:one_vs_multi}), whereas protons escape almost with no energy losses. The luminosity injected in both relativistic electrons and protons that escape the modelled BS region is \mbox{$L_\mathrm{NT} \approx 3 \times 10^{34}$~erg~s$^{-1}$}. These cosmic rays could cool once they reach a region further downstream, where the BS structure becomes very turbulent or even closes due to the pressure of the ISM \citep[e.g.][]{Christie2016}. However, we do not expect the energy release in the back flow of the BS to be very significant, given the lack of detection of NT emission from stellar BSs. The diffusion and emission from cosmic rays accelerated at stellar BSs of stars moving inside molecular clouds has been studied by \citet{delValle2014}, although in such environments the higher density of target nuclei makes proton p-p interactions and electron relativistic Bremsstrahlung more efficient. Another possibility to take into account is that the laminar approximation of the shocked flow could be relaxed so that mixing between the shocked SW and the shocked ISM could occur.
Given that the ambient density in the shocked SW is $\sim 0.01$~cm$^{-3}$, whereas $n_\mathrm{ISM} \sim 1-10$~cm$^{-3}$, the relativistic Bremsstrahlung and p-p emission could be enhanced by two to three orders of magnitude if mixing is efficient \citep[see e.g.][and references therein]{Munar2013}. \subsection{Application of the model to \object{BD$+43^{\circ}3654$}}\label{sec:BD43} The massive O4If star \object{BD$+43^{\circ}3654$}, located at a distance of $d=1.32$~kpc, has a strong and fast wind that produces a stellar BS. This BS was first identified by \citet{Comeron2007} using infrared data, and it extends over $8'$ in the IR sky. This was the first stellar BS to be detected at radio wavelengths, by \citet{Benaglia2010}. In their work, \citet{Benaglia2010} presented Very Large Array (VLA) observations at 1.42 and 4.86~GHz (Fig.~\ref{fig:radio_maps}, top), from which a mean negative spectral index $\alpha \approx -0.5$ (with $S_\nu \propto \nu^\alpha$) was obtained. Their finding was indicative of NT processes taking place at the BS, in particular relativistic particle acceleration (most likely due to DSA at the shock) and synchrotron emission. This conclusion triggered a series of works on the modelling of the broadband BS emission \citep{delValle2012, delValle2014, Pereira2016} using a simplified one-zone approximation. In those works, the radio emission was assumed to be of synchrotron origin, and the radio observations were used as an input to characterise the NT electron energy distribution; the predicted high-energy emission came from IC up-scattering of ambient IR photons by relativistic electrons. Here, we revisit the predictions from the previous radiative one-zone numerical models by applying our more consistent multi-zone model. The data at radio frequencies allow us to characterise the relativistic electron population, which in turn can be used to model the broadband SED. The NT emission from IR to soft X-rays is completely outshone by the thermal emission from the star and/or the BS, so the SED of NT processes can only be tested in the high-energy (HE) domain, where the NT processes also dominate the spectrum. The electrons that produce the observed synchrotron radiation also interact with the ambient radiation fields, producing HE photons through IC scattering \citep{Benaglia2010}. The relevant HE processes are anisotropic IC with the stellar radiation field and isotropic IC with the dust IR radiation field. The role of secondary pairs in the radiative output is not expected to be relevant \citep{delValle2012}. The reported emission from \object{BD$+43^{\circ}3654$} by \citet{Benaglia2010} is consistent with a canonical NT spectral index $\alpha = -0.5$, which corresponds to an injection index of $p = -2\alpha+1 = 2$. However, it is possible that the observed emission is actually the sum of the synchrotron NT emission and a contribution of thermal emission, either from the stellar wind or the BS. The thermal emission has a positive spectral index and, therefore, in order to produce the observed spectrum with $\alpha = -0.5$, a softer (i.e. more negative) intrinsic NT spectral index would be required. We note, however, that the thermal contribution at low frequencies is likely to be small, so a large deviation from $p=2$ is not expected. Furthermore, \citet{Toala2016} derived an upper limit to the X-ray flux in the 0.4--4~keV band of $3.6\times 10^{-14}$~erg~cm$^{-2}$~s$^{-1}$ using \textit{XMM-Newton} observations.
That upper limit also seems to favour a softer spectrum (Fig.~\ref{fig:SEDs}), although the reported value is model-dependent, and it was derived for a power-law photon spectrum with index $\Gamma = 1.5$; such a spectrum would be appropriate for the synchrotron SED below 0.1~keV, where the slope is $\sim 0.5$, but not in the X-ray energy band according to our model.\footnote{Recall that in the SED $\epsilon^2 N_\mathrm{ph}(\epsilon)$ is plotted, so that the slope is $2 - \Gamma$.} According to all these remarks, we decided to adopt a softer injection index of $p=2.2$, although we note that this value is not tightly constrained. Following \citet{Brookes2016}, we assume that the star is moving in the warm ISM, with a temperature $T_\mathrm{ISM} = 8000$~K. \citet{Christie2016} addressed the effects of the non-zero ambient thermal pressure in terms of a parameter $\alpha_\mathrm{th} = P_\mathrm{th}/P_\mathrm{kin}$, which is $\sim kT_\mathrm{ISM}/(m_\mathrm{p} v_\mathrm{ISM}^2) \approx 4 \times 10^{-2}$ for this case. Thus, this introduces only a small correction in the geometry (for instance, $R_0 \propto (1+\alpha_\mathrm{th})^{-1}$), which we account for only for completeness. The system spatial velocity is not well determined. The velocity in the plane of the sky is $\approx 38.4$~km~s$^{-1}$ \citep[][and references therein]{Brookes2016}. \citet{Benaglia2010} adopted $-66$~km~s$^{-1}$ as the radial velocity of the star with respect to the surrounding ISM, which might not be accurate when accounting for Galactic rotation. In fact, if this were the case, the inclination angle should be $i \approx \arctan{(38.4/66.2)} \approx 30\degr$, whereas the observed emission map favours a high inclination angle $i > 60\degr$, which is nearly edge-on. As the radial velocity is certainly negative, we can assume $i < 90\degr$. In order to reproduce the observed emission maps, we choose $i \approx 75\degr$, which is consistent with $v_\star \approx 40$~km~s$^{-1}$. The angular separation to the apex of the BS is $R_{0,\mathrm{proj}} \approx 3.2'$ \citep{Kobulnicky2017}, although there is some uncertainty when obtaining $R_{0,\mathrm{proj}}$ from the extended emission, as discussed in Sect.~\ref{sec:synthetic_maps} for the case of radio emission; for instance, \citet{Peri2014} gives a value of $R_{0,\mathrm{proj}} = 3.5'$. In Table~\ref{table:parameters} we list the adopted system and model parameters. The wind velocity, $v_\mathrm{w}$, is an important parameter in the model as it affects the energy budget of NT particles, the acceleration efficiency, the position of $R_0$, and the convection timescale. In order to maintain a good agreement between our model and the observational constraints, we choose a value of \mbox{$v_\mathrm{w} \approx 2300$~km~s$^{-1}$}, as in \citet{Peri2014}. We note that \citet{Kobulnicky2018} assume a higher value of $v_\mathrm{w} = 3000$~km~s$^{-1}$, but, under the assumption of Bohm diffusion, that leads to a more efficient acceleration, which yields a synchrotron peak at X-rays well above the observational constraints (see Fig.~\ref{fig:SEDs}). Alternatively, a higher $v_\mathrm{w}$ can be reconciled with the data if Bohm diffusion is not a good approximation for the acceleration of particles at the BS, in practice turning $\eta_\mathrm{acc}$ into a free parameter of the model \citep[e.g.][]{DeBecker2017}. For the unshocked ISM, we consider a molecular weight $\mu_\mathrm{ISM} = 1.37$ \citep[e.g.][]{Kobulnicky2018}, which is only relevant to determine $R_0$.
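Two of the numbers quoted above can be verified with one-line estimates (ours):
\begin{verbatim}
# Quick check (ours) of two numbers quoted above for BD+43 3654.
import numpy as np

K_B, M_P = 1.381e-16, 1.673e-24                 # erg/K, g
alpha_th = K_B * 8000.0 / (M_P * (40.0e5)**2)   # ~ kT_ISM / (m_p v_ISM^2)
i_kin = np.degrees(np.arctan2(38.4, 66.2))      # v_tan = 38.4, v_rad = 66.2
print(f"alpha_th ~ {alpha_th:.2f}")             # ~0.04
print(f"naive inclination ~ {i_kin:.0f} deg")   # ~30 deg
\end{verbatim}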
The mean molecular weight in the shocked SW is needed in order to derive the total number of targets for relativistic Bremsstrahlung and p-p interactions; considering that the material is completely ionised, for typical abundances we get $\mu_\mathrm{ssw} = 0.62$. With the selected parameters, the equipartition magnetic field at the apex of the BS is $\sim 100\,\mu$G, somewhat smaller than the value of $\sim 300\,\mu$G estimated by \citet{delValle2012}. Recall that the magnetic field drops along the BS according to Fig.~\ref{fig:termo_sw}. We are left with only a few free parameters: the value of $B$ in the shocked SW, the fraction $f_\mathrm{NT,e}$ of energy converted to NT electrons, and the inclination angle. The latter only affects the IC-star SED within a factor of approximately two, as discussed in Sect.~\ref{sec:one_vs_multi}. As we will show in what follows, it is possible to constrain these parameters from the measured radio fluxes and morphological information from the resolved NT radio emission region. To this end, we explore two extreme scenarios below. \subsubsection{Scenario with low magnetic field} \label{subsec:lowB} We apply the model described in Sect.~\ref{sec:model} to a scenario with a low magnetic field (i.e. $\zeta_B \ll 1$). To reproduce the radio observations from \citet{Benaglia2010}, we fix the values of the following parameters: $i=75\degr$, $f_\mathrm{NT,e} = 0.16$, $\zeta_B= 0.01$, which leads to $B \sim 10$~$\mu$G and $B_\star < 85$~G. We also fix a high value of $f_\mathrm{NT,p} = 0.5$ to obtain an upper limit to the p-p luminosity. The calculated broadband SEDs are shown in Fig.~\ref{fig:SEDs}, along with some instrument sensitivities and observations from different epochs. The integrated luminosities for each process are \mbox{$L_\mathrm{sy} \approx 8 \times 10^{32}$~erg~s$^{-1}$}, \mbox{$L_\mathrm{Br} \approx 2 \times 10^{30}$~erg~s$^{-1}$}, \mbox{$L_\mathrm{IC,dust} \approx 5 \times 10^{32}$~erg~s$^{-1}$}, \mbox{$L_\mathrm{IC,\star} \approx 10^{33}$~erg~s$^{-1}$}, and \mbox{$L_\mathrm{pp} \approx 4 \times 10^{29}$~erg~s$^{-1}$}. In this case $n_\mathrm{ssw} \sim 0.04$~cm$^{-3}$ and $n_\mathrm{ISM} \sim 15$~cm$^{-3}$, so relativistic Bremsstrahlung and p-p emission could be enhanced by more than three orders of magnitude if mixing is efficient. This would be in tension with the observational upper limits to the $\gamma$-ray flux, therefore suggesting that efficient mixing is unlikely and that the laminar approximation of the fluid is consistent. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./SED_NT_lowB-eps-converted-to.pdf}} \resizebox{\hsize}{!}{\includegraphics[width=0.35\textwidth, angle=270]{./SED_NT_highB-eps-converted-to.pdf}} \caption[]{SED for a low (top) and a high magnetic field scenario (bottom). The red dots represent the VLA flux \citep{Benaglia2010}, the green arrow is the \textit{Suzaku} upper limit (UL) in 0.3--10~keV \citep{Terada2012}, the purple arrow the \textit{XMM-Newton} UL in 0.4--4~keV \citep{Toala2016}, the orange arrows the \textit{Fermi} UL in 0.1--300~GeV \citep{Schulz2014}. The grey and black solid lines are the instrument sensitivities for 100-h HESS and 100-h CTA, respectively \citep{Funk2013}.} \label{fig:SEDs} \end{figure} We predict that, in the most favourable case, the system might be detectable at HE by the \textit{Fermi} satellite. At very HE a detection should wait until the Cherenkov Telescope Array (CTA) becomes operational.
\subsubsection{Scenario with equipartition magnetic field} \label{subsec:highB} We apply the same model as in Sect.~\ref{subsec:lowB}, but with a different value of $\zeta_B$ in order to analyse a scenario with an extremely high magnetic field. We explore the case of $\zeta_B= 1$, which corresponds to equipartition between the magnetic field and the thermal pressure in the shocked SW (Eq.~\ref{eq:B}). Under such conditions the magnetic field would be dynamically relevant (i.e. the magnetic pressure would be a significant fraction of the total pressure in the post-shock region). Nonetheless, we do not alter our prescription of the flow properties, as we adopt a phenomenological model for them only to get a rough approximation of the gas properties in the shocked SW. Moreover, our intention is only to give a semi-qualitative description of this extreme scenario and to obtain rough estimates of some relevant physical parameters, not to model the emission precisely. Fixing $\zeta_B= 1$ yields $B \sim 100$~$\mu$G in the shocked SW, and, if the magnetic field in the BS is solely due to adiabatic compression of the stellar magnetic field lines, we get $B_\star < 850$~G on the stellar surface. Such high values of $B_\star$ are uncommon \citep{Neiner2015}, which suggests that in the energy equipartition scenario the magnetic field in the BS is more likely generated or at least amplified \textit{in situ}. Setting $f_\mathrm{NT,e} \approx 0.004$ leads to the spectral fit of the fluxes obtained by \citet{Benaglia2010} shown in Fig.~\ref{fig:SEDs}. There is a tension between the X-ray flux upper limits by \citet{Toala2016} and the synchrotron emission that extends to energies above 1~keV. This could be evidence either that the magnetic field in \object{BD$+43^{\circ}3654$} is not so extreme, or that the particle acceleration efficiency is overestimated. The integrated luminosities for each process for this case are \mbox{$L_\mathrm{sy} \approx 1.7 \times 10^{33}$~erg~s$^{-1}$}, \mbox{$L_\mathrm{Br} \approx 4.4 \times 10^{28}$~erg~s$^{-1}$}, \mbox{$L_\mathrm{IC,dust} \approx 1.3 \times 10^{31}$~erg~s$^{-1}$}, \mbox{$L_\mathrm{IC,\star} \approx 2.8 \times 10^{31}$~erg~s$^{-1}$}, and \mbox{$L_\mathrm{pp} \approx 4.3 \times 10^{29}$~erg~s$^{-1}$}. As the synchrotron cooling time is shorter than the IC cooling time, the bulk of the NT emission is produced in the form of low-energy ($< 1$~keV) photons, and much less luminosity goes into $\gamma$-ray production. The value of $L_\gamma$ obtained is $\sim 10^2$ times smaller than the one obtained in Sect.~\ref{subsec:lowB}. This is consistent since, roughly, $L_\mathrm{sy} \propto f_\mathrm{NT,e}\,\zeta_B$: for a value of $\zeta_B$ that is $\sim 10^2$ times larger, $f_\mathrm{NT,e}$ must be $\sim 10^2$ times smaller in order to fit the observations, which in turn explains why $L_\mathrm{IC} \propto f_\mathrm{NT,e}$ is a factor $\sim 10^2$ smaller. In this scenario \object{BD$+43^{\circ}3654$} is not detectable with the current or forthcoming $\gamma$-ray observatories. \subsubsection{Resolved emission} \label{sec:maps} The full information from the spatially resolved radio observations is only partially contained in the SEDs. Therefore, we compare the morphology predicted by our model with that observed by \citet{Benaglia2010} using VLA observations at two frequencies, 1.42~GHz and 4.86~GHz.
We use a synthesised beam of $12\arcsec \times 12\arcsec$ (i.e. $\sigma_x = \sigma_y = 12\arcsec$) in our synthetic maps, matching the synthesised beam of the VLA observations from \citet{Benaglia2010}, which corresponds to a resolution of $28\arcsec \times 28\arcsec$ (the relation between resolution and beam size is given in Sect.~\ref{sec:synthetic_maps}). \begin{figure*} \centering \hspace*{-1.5cm} \includegraphics[width=0.85\textwidth, angle=0]{./Radio_maps_Benaglia2010.png} \includegraphics[width=0.28\textwidth, angle=270]{./convmap_1G_BD.png} \includegraphics[width=0.28\textwidth, angle=270]{./convmap_5G_BD.png} \caption[]{Comparison between the observed radio emission maps taken from \citet{Benaglia2010} (top) and our synthetic maps (bottom). In the top right corner a synthesised beam of $12\arcsec \times 12\arcsec$ is shown (i.e. $\sigma_x = \sigma_y = 12\arcsec$). Left and right panels correspond to an observing frequency of 1.42~GHz and 4.86~GHz, respectively. The rms levels of the observed maps are 0.3~mJy~beam$^{-1}$ (left) and 0.2~mJy~beam$^{-1}$ (right). The black solid contours of the 1.42~GHz maps are at 0.9, 1.8, 3.0, and 4.5~mJy~beam$^{-1}$, the red dotted contour is at 0.3~mJy~beam$^{-1}$ (the observed map rms), and the green contour is at 6~mJy~beam$^{-1}$ (above the observed values). In the 4.86~GHz maps, the black contours are at 0.6, 1.2, 2.0, and 3.0~mJy~beam$^{-1}$, and the red dotted contour is at 0.2~mJy~beam$^{-1}$ (the observed map rms). A projection factor $\cos{(DEC)}$ was used for the $x$-coordinates in the synthetic map in order to relate the synthetic map units to sky positions.} \label{fig:radio_maps} \end{figure*} As shown in Sect.~\ref{sec:synthetic_maps}, the morphology of the emission map depends on the inclination angle $i$. Based on the observed morphology, we can argue that $45\degr < i < 90\degr$. Figure~\ref{fig:radio_maps} shows that there is a good agreement between the synthetic map and the observed map for $i \sim 75\degr$, both in morphology and emission levels. However, in the modelled map the emission is slightly more extended than in the observed one. This could be attributed, for instance, to a magnetic field that drops faster with distance from the apex than our model assumptions predict (Sect.~\ref{subsec:hydrodynamics}). A more detailed analysis of the magnetic field structure could be carried out by assuming frozen-in conditions for the magnetic field in each individual fluid line of the shocked SW. Also, MHD simulations of colliding-wind binaries performed by \citet{Falceta2012} suggest that the magnetic pressure is not a constant fraction of the thermal pressure throughout the shocked plasma. However, the implementation of detailed MHD models for the shocked fluid is beyond the scope of this work. \section{Conclusions}\label{sec:conclusions} We show that one-zone models can be reconciled with observations by properly accounting for the intensity of the IR dust photon field and for the motion of the plasma along the shocked region. The multi-zone model we developed improves the constraints on the different model parameters, namely the magnetic field intensity and the amount of energy deposited in NT particles. Our model reproduces fairly well the only radio observations available for the object \object{BD$+43^{\circ}3654$}.
However, the free parameters, namely the fraction of available energy that goes into accelerating NT electrons and the magnetic field intensity along the shocked SW, can only be constrained by current facilities (radio interferometers, X-ray and $\gamma$-ray satellites) with deep, high-sensitivity observations. Comparison between the synthetic and observed radio maps allows us to constrain the direction of motion of the star with respect to the observer. Discrepancies in the morphology could point to deviations in the system parameters and/or the model hypotheses, such as a highly non-uniform environment or a magnetic field pressure that does not remain a constant fraction of the thermal pressure. Estimating the magnetic field strength in the shocked region allows us to set upper limits for the magnetic field on the stellar surface, thereby inferring whether magnetic field amplification is taking place in the particle acceleration region. The results presented in this work provide useful guidance for future observational campaigns in the radio and $\gamma$-ray range. In particular, we show that the most relevant parameters for the radiative output are the mass-loss rate and velocity of the wind of the star, rather than the density of the ISM or the stellar spatial velocity. The NT luminosity (especially the $\gamma$-ray luminosity) is strongly dependent on the mass-loss rate, whereas the detailed shape of the SED is defined by the magnetic field strength, the ambient density, and the ratio $\dot{M}/v_\mathrm{w}$ according to Eq.~\ref{eq:ratio_radio_gamma}. Therefore, deep observations in soft X-rays ($0.3-10$~keV) could provide tighter constraints on the free parameters in our model, such as the injected particle energy distribution spectral index and the magnetic field strength, through comparison of the synchrotron and IC emission, thereby also constraining the acceleration efficiency. We show that moderate values of the stellar surface magnetic field ($B_\star<100$~G) are sufficient to account for the synchrotron emission even if there is no amplification of the magnetic field besides adiabatic compression; however, if the magnetic field in the BS region is high ($\gtrsim 100$~$\mu$G), then magnetic field amplification is likely to occur. Low $B$-values would imply significant $\gamma$-ray emission, whereas high $B$-values render the predicted $\gamma$-ray radiation undetectable with present or forthcoming instrumentation. \begin{acknowledgements} We thank the anonymous referee for the detailed and constructive comments. This work is supported by CONICET (PIP2014-0338) and ANPCyT (PICT-2017-2865). SdP acknowledges support from PIP 0102 (CONICET). V.B-R. acknowledges support from MICINN by the MDM-2014-0369 of ICCUB (Unidad de Excelencia 'Mar\'ia de Maeztu'). V.B-R. and G.E.R. acknowledge support by the Spanish Ministerio de Econom\'{\i}a y Competitividad (MINECO) under grant AYA2016-76012-C3-1-P. The acknowledgment extends to the whole GARRA group. \end{acknowledgements} \bibliographystyle{aa}
\section{Formulae for the phase space integration} The decay width of the process $Z^0(k) \to B^{(*)}_c(q_3) + b(q_2) + \bar c(q_1)$ is proportional to the following phase space, \begin{equation} d\Gamma \propto \frac{(2\pi)^4}{2k^0}\,\delta^4\Big(k - \sum_{f=1}^3 {q_f}\Big) \prod\limits_{f = 1}^3 \frac{{d^3} {\vec{q}_f}}{(2\pi)^3\, 2q_f^0} \end{equation} where $k=(k^0,\vec k)=(k^0,k^1,k^2,k^3)$ and $q_f=(q_f^0,\vec q_f)=(q_f^0,q_f^1,q_f^2,q_f^3)$. Furthermore, in the rest frame of the $Z^0$ boson ($k^0=m_Z$), we have \begin{widetext} \begin{eqnarray} \frac{d\Gamma}{ds_1 ds_2}&\propto& \frac{1}{(2\pi)^5(2m_Z)} d^4q_1 d^4q_2d^4q_3 \delta(q_1^2-m_c^2) \delta(q_2^2-m_b^2) \delta(q_3^2-m_{B_c}^2)\theta(q_1^0)\theta(q_2^0)\theta(q_3^0)\nonumber\\ &&\times \delta(s_1-(q_1+q_3)^2)\delta(s_2-(q_1+q_2)^2)\delta^4(k-q_1-q_2-q_3)\nonumber\\ &\propto& \frac{1}{2^6 \pi^5 m_Z} d^4q_2d^4q_3\delta((k-q_2-q_3)^2-m_c^2) \delta(q_2^2-m_b^2)\delta(q_3^2-m_{B_c}^2) \theta(k^0-q_2^0-q_3^0)\nonumber\\ &&\times \theta(q_2^0)\theta(q_3^0)\delta(s_1-(k-q_2)^2)\delta(s_2-(k-q_3)^2)\nonumber\\ &\propto& \frac{1}{2^8 \pi^5 m^3_Z}d^3\vec{q}_2d^3\vec{q}_3 \delta(q_2^{0^2}-\vec{q}_2^2-m_b^2) \delta(q_3^{0^2}-\vec{q}_3^2-m_{B_c}^2) \theta(k^0-q_2^0-q_3^0)\nonumber\\ &&\times \theta (q_2^0)\theta (q_3^0)\delta(s_1+m_{B_c}^2-m_c^2-2m_Zq_3^0+2q_2^0q_3^0-2\vec{q}_2 \cdot \vec{q}_3)\nonumber\\ &\propto& \frac{|\vec{q}_2|\cdot|\vec{q}_3|}{2^{10} \pi^5 m_Z^3} d\Omega_2 \sin\theta_{23}d\theta_{23} d{\phi_{23}}\theta(k^0-q_2^0-q_3^0)\theta(q_2^0)\theta(q_3^0)\nonumber\\ &&\times \delta(s_1+s_2-m_Z^2-m_c^2+2q_2^0q_3^0-2|\vec{q}_2|\cdot|\vec{q}_3|\cos\theta _{23}) \nonumber\\ &\propto& \frac{1}{2^8 \pi^3 m^3_Z}\theta (k^0-q_2^0-q_3^0)\theta (q_2^0)\theta (q_3^0)\theta(X) , \end{eqnarray} \end{widetext} where $q_2^0=\frac{m_Z^2+m_b^2-s_1}{2m_Z}$ and $q_3^0=\frac{m_Z^2+m_{B_c}^2-s_2}{2m_Z}$, $|\vec q_2|= \sqrt{q_2^{0^2}-m_b^2}$ and $|\vec q_3|=\sqrt{q_3^{0^2}-m_{B_c}^2}$. The step function $\theta(X)$ is determined by the requirement $|\cos\theta_{23}|\leq1$, where \begin{displaymath} \cos\theta_{23} =\frac{s_1 + s_2 - m_Z^2 - m_c^2 + 2q_2^0 q_3^0}{2\left|\vec{q}_2 \right| \left|\vec{q}_3\right|}. \end{displaymath} Using all the step functions above, we obtain the integration ranges: \begin{widetext} \begin{eqnarray} s_1^{\min } &=& m_c^2 + m_{B_c}^2 - \frac{\left(s_2 - m_Z^2 + m_{B_c}^2 \right) \left(s_2 - m_b^2 + m_c^2 \right) + \sqrt{\eta\left(s_2,m_Z^2,m_{B_c}^2\right)\eta\left(s_2,m_b^2,m_c^2 \right)}}{2s_2}\\ s_1^{\max } &=& m_c^2 + m_{B_c}^2 - \frac{\left( s_2 - m_Z^2 + m_{B_c}^2 \right) \left(s_2 - m_b^2 + m_c^2 \right) - \sqrt {\eta \left( s_2, m_Z^2, m_{B_c}^2\right) \eta\left(s_2,m_b^2,m_c^2 \right)}} {2s_2}\\ s_2^{\min} &=& \left(m_c + m_b \right)^2 \\ s_2^{\max} &=& \left(m_Z - m_{B_c} \right)^2 \end{eqnarray} \end{widetext} where $\eta(x,y,z)=(x-y-z)^2-4yz$. Furthermore, we can obtain the $\cos\theta_{23}$ distribution \begin{eqnarray} \frac{d\Gamma}{ds_1 d\cos\theta_{23}} \propto \frac{J}{2^7 \pi^3 m_Z^3}\theta(k^0 - q_2^0 - q_3^0) \theta (q_2^0)\theta (q_3^0)\theta (X)\nonumber\\ \end{eqnarray} where the extra Jacobian is \begin{equation} J=\frac{- \left| {{\vec q}_2} \right|\left| {\vec q}_3 \right|}{\left| {1 - \frac{q_2^0}{m_Z} + \frac{\left| {{{\vec q}_2}} \right|(m_Z^2 + m_{B_c}^2 - {s_2})}{m_Z \sqrt {m_{B_c}^4 - 2(m_Z^2 + {s_2})m_{B_c}^2 + (m_Z^2 -s_2 )^2}}\cos\theta _{23}}\right|} \end{equation} and there are two solutions for $s_2$, valid in different ranges of $\cos\theta_{23}$ and $s_1$, i.e.
\begin{widetext} \begin{eqnarray} s^{\pm}_2=&&\frac{1}{|\vec{q_2}|^2\cos^2\theta_{23}-(q_2^0-m_Z)^2} \Bigg\{[|\vec{q_2}|^2(m_{B_c}^2 +{m_Z}^2)\cos^2\theta_{23}-(q_2^0-m_Z) [m_Z(s_1 +q_2^0 m_Z)+q_2^0 m_{B_c}^2-m_c^2-{m_Z}^2]] \nonumber\\ &&\pm m_Z|\vec{q_2}|\cos\theta_{23}[m_{B_c}^4-2m_{B_c}^2(m_c^2+2(m_Z-q_2^0)^2 -s_1-2|\vec{q_2}|^2 \cos^2\theta_{23})+(m_c^2-s_1)^2]^\frac{1}{2}\Bigg\} , \end{eqnarray} \end{widetext} where $s^{+}_2$ is obtained when $\cos\theta_{23}\in [-1,0]$ and $s_1\in [s_{1\min}[{\cos\theta_{23}}], s_{1\min}[{\cos\theta_{23}=0}]]$, and $s^{-}_2$ is obtained when $\cos\theta_{23}\in[0,1]$ and $s_1\in[s_{1\min}[\cos\theta_{23}=0],s_{1\max}]$, or when $\cos\theta_{23}\in[-1,0]$ and $s_1\in[s_{1\min}[{\cos\theta_{23}}],s_{1\max}]$. The $\theta(X)$ function determines the boundary of $s_1$: \begin{widetext} \begin{eqnarray} s_{1\max }&=&(m_Z - m_b)^2 \\ s_{1\min}[\cos\theta_{23}]&=&\frac{m_b^{2} (\cos^2\theta_{23}-1) m_{B_c}^2 + m_Z^{2}(m_c^2 + m_{B_c}^2 \cos^2\theta_{23}) +m_{B_c} m_Z\sqrt{Y}} {(\cos^2\theta_{23}-1) m_{B_c}^2 +m_Z^2} \end{eqnarray} \end{widetext} with \begin{widetext} \begin{eqnarray} Y&=&4m_b^{2} m_{B_c}^2 \cos^4\theta_{23}- (m_b^4 +(6m_{B_c}^2 -2(m_c^2 +m_Z^2)) m_b^2 +((m_{B_c}-m_c)^2-m_Z^2) \nonumber\\ &&\times ((m_{B_c}+m_c)^2 -m_Z^2)) \cos^2\theta_{23} +\left(m_b^2 +m_{B_c}^2 -m_c^{2} -m_Z^2\right)^2. \end{eqnarray} \end{widetext} The distribution for $\cos\theta_{13}$ can be obtained in a similar way. \section{Amplitude of the process $Z^0(k)\rightarrow B^{(*)}_c(q_3) + b(q_2) +\bar c(q_1)$} The amplitude $M$ of the process $Z^0(k)\rightarrow B^{(*)}_c(q_3) + b(q_2) +\bar c(q_1)$ has the general structure \begin{equation} M = {\bar u_s}({q_2})A{v_{s'}}({q_1}) , \end{equation} where $A$ can be read from Eqs.~(\ref{A1})-(\ref{A4}). To derive an analytical expression for the process and to make its form as simple as possible, we adopt the `new trace amplitude approach' suggested in Refs.~\cite{chang1,tbc2}. A detailed description of the approach can be found in Refs.~\cite{chang1,tbc2}; here we only list our main results. After summing over the spin states, the square of the amplitude can be divided into four parts, \begin{equation} |M|^2 = |M_{1}|^2 + |M_{2}|^2 + |M_{3}|^2 + |M_{4}|^2, \end{equation} where, by introducing a light-like momentum $k_0$ and a space-like vector $k_1$ satisfying $k_1\cdot k_1=-1$ and $k_0\cdot k_1=0$, the four amplitudes $M_i$ can be written as \begin{eqnarray} M_1 &=& \frac{N}{{\sqrt 2 }} Tr\left[ {({\slashed{q}_1} - {m_c}){\slashed{k}_0}({\slashed{q}_2} + {m_b})A } \right] ,\nonumber\\ M_2 &=& \frac{N}{{\sqrt 2 }} Tr\left[ {({\slashed{q}_1} - {m_c}){\gamma _5}{\slashed{k}_0}({\slashed{q}_2} + {m_b})A } \right] , \nonumber\\ M_3 &=& \frac{N}{{\sqrt 2 }} Tr\left[ {({\slashed{q}_1} - {m_c}){\slashed{k}_0}{\slashed{k}_1}({\slashed{q}_2} + {m_b})A } \right]\nonumber \end{eqnarray} and \begin{equation} M_4 = \frac{N}{{\sqrt 2 }} Tr\left[ {({\slashed{q}_1} - {m_c}){\gamma _5}{\slashed{k}_1}{\slashed{k}_0}({\slashed{q}_2} + {m_b})A } \right] ,\nonumber \end{equation} where $N = 1/\sqrt {4({k_0}\cdot{q_1})({k_0}\cdot{q_2})}$ is the normalization constant.
$k_0$ and $k_1$ are arbitrary momenta; in order to write down the $M_i$ as explicitly and simply as possible: \\ 1) We set $k_0 = {q_2} - \alpha {q_1}$, where the coefficient $\alpha$ is determined by the requirement that $k_0$ be a lightlike vector: \begin{equation} \alpha = \frac{{q_1} \cdot {q_2} \pm \sqrt{({q_1} \cdot {q_2})^2 - m_b^2m_c^2}}{m_c^2} . \end{equation} 2) We set $k_1^\mu = i{N_0}{\varepsilon ^{\mu \nu \rho \sigma }}{q_{1\nu }}{k_{\rho }}{q_{2\sigma }}$, where $N_0$ ensures $k_1\cdot k_1=-1$. It is found that $\slashed{k}_1$ can be expressed as \begin{equation} \slashed{k}_1 = {N_0}{\gamma _5}\left[ ({q_1} \cdot {k})\,{\slashed{q}_2} + ({k} \cdot {q_2})\,{\slashed{q}_1} - ({q_1} \cdot {q_2})\,{\slashed{k}} - {\slashed{q}_1}{\slashed{k}}{\slashed{q}_2} \right]. \\ \end{equation} The resultant $M_i$ can then be simplified as: \begin{eqnarray} M_1 &=& {L_1} \times Tr [({\slashed{q}_1} - m_c)({\slashed{q}_2} + {m_b})A] ,\\ M_2 &=& {L_2} \times Tr [({\slashed{q}_1} - m_c){\gamma _5}(\slashed{q}_{2}+{m_b})A] ,\\ M_3 &=& M_{3'} - {N_0}[{m_b} ({q_1} \cdot {k}) + {m_c}({q_2} \cdot {k})]M_2 ,\\ M_4 &=& M_{4'} + {N_0}[{m_b} ({q_1} \cdot {k}) - {m_c}({q_2} \cdot {k})]M_1 , \end{eqnarray} where \begin{eqnarray} M_{3'} &=&\frac{N_0}{4L_2} Tr\left[ {({\slashed{q}_1} - {m_c}){\gamma _5}{\slashed{k}}({\slashed{q}_2} + {m_b})A } \right] , \\ M_{4'} &=&-\frac{N_0}{4L_1} Tr\left[ {({\slashed{q}_1} - {m_c}){\slashed{k}}({\slashed{q}_2} + {m_b})A } \right] . \end{eqnarray} Furthermore, the amplitudes $M_i$ can be expanded over some basic Lorentz structures: \begin{equation} M_i(n)=\sum^m_{j=1} A^i_j(n) B_j(n) \;\; (i=1,\ldots,4) \end{equation} and \begin{equation} M_{i'}(n)=\sum^m_{j=1} A^{i'}_j(n) B_j(n) \;\; (i'=3,4) \label{amat} \end{equation} where $m$ is the number of basic Lorentz structures $B_j(n)$, whose value depends on the $(c\bar{b})$-quarkonium state $n$: e.g. $m=3$ for $n=(c\bar{b})[^1S_0]_1$ and $m=12$ for $n=(c\bar{b})[^3S_1]_1$. As for $A^3_j(n)$ and $A^4_j(n)$, they can be expressed as \begin{eqnarray} A^3_j(n) &=& A^{3'}_j(n)-{N_0}[{m_b} ({q_1} \cdot {k}) + {m_c}({q_2} \cdot {k})] A^2_j(n) , \nonumber\\ A^4_j(n) &=& A^{4'}_j(n)+{N_0}[{m_b} ({q_1} \cdot {k}) - {m_c}({q_2} \cdot {k})] A^1_j(n) . \nonumber \end{eqnarray} The explicit expressions for $A^{1,2}_j(n)$ and $A^{3',4'}_j(n)$ of each state are listed in the following subsections. To shorten the notation, we set $T_b=\frac{1}{4}-\frac{1}{3}{\sin ^2}{\theta _w}$ and $T_c=\frac{1}{4}-\frac{2}{3}{\sin ^2}{\theta _w}$. We also define some dimensionless parameters \begin{displaymath} r_1=\frac{m_b}{m_Z},\;\; r_2=\frac{m_c}{m_Z},\;\; r_3=\frac{m_{B_c}}{m_Z} \end{displaymath} and \begin{eqnarray} &&x=q_3\cdot k/m_Z^2=\frac{1}{2m_Z^2}(m_{B_c}^2+m_Z^2-s_2),\nonumber\\ &&y=q_2\cdot k/m_Z^2=\frac{1}{2m_Z^2} (m_b^2+m_Z^2-s_1), \nonumber\\ &&z=q_1\cdot k/m_Z^2=\frac{1}{2m_Z^2}(m_Z^2+m_c^2-s_3), \nonumber\\ &&u=q_3\cdot q_2/m_Z^2 =\frac{1}{2m_Z^2}(s_3-m_{B_c}^2-m_b^2), \nonumber\\ &&v=q_3\cdot q_1/m_Z^2=\frac{1}{2m_Z^2}(s_1-m_{B_c}^2-m_c^2), \nonumber\\ &&w=q_1\cdot q_2/m_Z^2= \frac{1}{2m_Z^2}(s_2-m_b^2-m_c^2),\nonumber \end{eqnarray} where $s_1=(q_1+q_3)^2$, $s_2=(q_1+q_2)^2$, and $s_3=(q_2+q_3)^2$, which satisfy the relation $s_1+s_2+s_3=m_Z^2+m_c^2+m_b^2+m_{B_c}^2$.
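Note that these dimensionless variables satisfy one further relation, obtained by contracting the momentum conservation $k = q_1+q_2+q_3$ with $k/m_Z^2$ (equivalently, by summing the definitions of $x$, $y$, and $z$ and using $s_1+s_2+s_3=m_Z^2+m_c^2+m_b^2+m_{B_c}^2$): \begin{displaymath} x+y+z=\frac{3m_Z^2+m_b^2+m_c^2+m_{B_c}^2-(s_1+s_2+s_3)}{2m_Z^2}=1 . \end{displaymath}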
The shorthand notations for the denominators are \begin{eqnarray} &&d_1=\frac{1}{(q_2-k)^2-m_b^2}\frac{1}{(q_{31}+q_1)^2},\nonumber\\ &&d_2=\frac{1}{(k-q_{32})^2-m_b^2}\frac{1}{(q_{31}+q_1)^2},\nonumber\\ &&d_3=\frac{1}{(q_{32}+q_2)^2}\frac{1}{(q_{31}-k)^2-m_c^2},\nonumber\\ &&d_4=\frac{1}{(q_{32}+q_2)^2}\frac{1}{(q_3+q_2)^2-m_c^2}.\nonumber \end{eqnarray} Furthermore, the following relations are useful to shorten the expressions: \begin{displaymath} u+v+r_3^2=x,\;\; w+u+r_2^2=y, \;\; w+v+r_1^2=z. \end{displaymath} \subsection{Coefficients for the production of $B_c$} There are 3 basic Lorentz structures $B_j$ for the case of $B_c$ ($^1S_0$), which are \begin{displaymath} B_1=\frac{q_3\cdot\epsilon(k)}{m_Z} ,\; B_2= \frac{q_2\cdot\epsilon(k)}{m_Z},\; B_3=\frac{i}{m_Z^3}\varepsilon(k,q_3,q_2,\epsilon(k)), \end{displaymath} where $\varepsilon(k,q_3,q_2,\epsilon(k))=\varepsilon^{\mu\nu\rho\sigma} k_{\mu}q_{3\nu} q_{2\rho} \epsilon_\sigma(k)$. The values of the coefficients $A^{1}_j$ and $A^{3'}_j$ are \begin{widetext} \begin{eqnarray} A^1_1 &=&\frac{L_1 m_Z{}^{7/2} }{\sqrt{r_3}}\Bigg((r_1 (1-2 r_1 r_3)-(2 r_1- r_3) y)d_1+(r_1(r_3+2 r_3 u-2 r_1 x)-r_3^2 y)d_2 +(r_1 (2 r_2 (x-u)+r_3 (y-1))\nonumber\\ &&+r_2 (r_3 y-2 r_2 u))d_3+(r_1^3+2 (r_2-2 r_3) r_1^2 +(r_2^2-r_3^2-y) r_1+(r_2-2 r_3) (2 u-y))d_4\Bigg),\\ A^1_2 &=&\frac{L_1 m_Z{}^{7/2} }{\sqrt{r_3}}((r_3 (x-2 u)+r_3 (4 u-2 x-4 y+2))d_1+(r_3 x-2 r_1r_3^2)d_2 \nonumber\\ &&+ (2 r_2 r_3^2-r_3 x)d_3+(-2 r_1 r_3^2+2 r_2 r_3^2-2 u r_3-x r_3)d_4),\\ A^1_3 &=&-4 L_1 m_Z{}^{7/2} \sqrt{r_3} T_b(d_1+d_2+d_3+d_4),\\ A^{3'}_1 &=&\frac{m_Z{}^{9/2} N_0} {{L_2} r_3^{3/2}}(T_b (-2 y^2+y-r_1 r_3)r_3 d_1 +T_b (-2 x r_1^3 +(r_3 (4 x+4 y-3)-2 r_2 x) r_1^2 +(r_2 r_3-2 u+2 (r_3^2- \nonumber\\ && 2 r_2r_3 +2 u) y) r_1+r_3 (2 u+(-2 x-2 y+1) y))d_2 -T_c ((r_3-2 r_2 x) r_1^2+r_2 (r_3 (4 x+4 y-3) -2 r_2 x) r_1 \nonumber\\ && -2 r_3 y^2-2 r_2 u+(-4 r_3 r_2^2+2 r_3^2 r_2+4 u r_2+r_3) y)d_3+T_c (r_1 r_3 (1-2 x) +(2r_3(r_3 -3r_1)-2 u) y)r_3 d_4),\\ A^{3'}_2 &=&\frac{m_Z{}^{9/2} N_0} {L_2 \sqrt{r_3}}(T_b (2 x r_1^2+2 (-2 x r_3-2 y r_3+r_3+r_2x)r_1 +(-r_3^2+2 r_2 r_3-2 u+x) \times(2 y-1))d_1 \nonumber\\ &&-T_b (r_3^2+x (-2 x-2 y+1))d_2-T_c (r_3^2+x (2 y-1))d_3 +T_c (x r_1^2+2 (-2 x r_3-2 y r_3+r_3+r_2 x) r_1 \nonumber\\ &&+r_2^2 x+2 r_2 r_3 (2y-1) -(r_3^2+2 u) (x+2 y-1))d_4),\\ A^{3'}_3 &=&\frac{m_Z{}^{9/2} N_0 }{4 L_2 \sqrt{r_3}}((2 r_1 r_3+2 y-1)d_1+(2 r_1 r_3-2 x-2 y+1)d_2-(2r_2 r_3+2 y-1)d_3 + (2 r_3^2-4 r_2 r_3+2 u)d_4) \end{eqnarray} The values of the coefficients $A^{2}_j$ and $A^{4'}_j$ are \begin{eqnarray} A^2_1 &=&-\frac{4 L_2 m_Z{}^{7/2} }{\sqrt{r_3}}(T_b ((2r_1+ r_3) y-r_1)d_1+T_b ((5 r_1-r_2) r_3 y-r_1 (2 r_1 r_3^2+(4 r_1 (r_1-r_2)+4 u \nonumber\\ &&+1) r_3+2r_1 u-2 r_2 u-2 r_1 x))d_2+(r_1 r_2 r_3 T_c(6r_1-2r_2)+r_3 T_c r_1+(4 r_2 r_1+2 r_2 r_3 )T_c u \nonumber\\ &&-2 r_2 T_c x r_1-r_3 T_c y r_1-3 r_2 r_3 T_c y)d_3+(-r_3 T_c r_1^3+4 r_3^2 T_c r_1^2+r_3^3 T_c r_1 -4 r_2 r_3^2 T_c r_1 \nonumber\\ &&+r_2^2 r_3 T_c r_1+r_3 T_c y r_1+4 r_3^2 T_c u-2 r_2 r_3 T_c u-2 r_3^2 T_c y+r_2 r_3 T_c y)d_4),\\ A^2_2 &=&-\frac{4 L_2 m_Z{}^{7/2} }{\sqrt{r_3}}(T_b ((6r_1-2r_2)r_1 r_3 +(4 u-2 x-4 y+2) r_3 +2 r_1 u-2 r_2 u-3 r_1 x+r_2 x)d_1 \nonumber\\ &&+(r_2-r_1) T_b x d_2+(r_1-r_2) T_c x d_3-T_c (-2 r_3 ((r_1-r_2)^2-r_2 r_3) -2 (r_1-r_2) u-r_3 x)d_4),\\ A^2_3 &=&\frac{L_2 m_Z{}^{7/2} }{\sqrt{r_3}}( r_3 d_1+(r_2-r_1)d_2+(r_2-r_1)d_3-r_3 d_4),\\ A^{4'}_1 &=&\frac{m_Z{}^{9/2} N_0}{4 L_1 r_3{}^{3/2}}((r_1 r_3+y) (2 y-1)d_1 +(2 x r_1^3-(r_3+2 r_2 x) r_1^2+(r_2 r_3+2 u-4 u y) r_1
\nonumber\\ &&-r_3(2 u+(-2 x-2 y+1) y))d_2+((r_3-2 r_2 x) r_1^2+r_2 (2 r_2 x-r_3) r_1 \nonumber\\ &&+(2 y-1) (2 r_2 u-r_3 y))d_3+(r_1 r_3 (2 x-1)-(-4 r_1 r_3-2 u) y)d_4),\\ A^{4'}_2 &=&\frac{m_Z{}^{9/2} N_0}{4 L_1 \sqrt{r_3}}( (r_3 (2 r_1-2 r_2+r_3)+2 u-x) (2 y-1)d_1+(r_3^2+2 r_1 (x+2 y-1) r_3+x \nonumber\\ &&-2 x (x+y))d_2+(r_3^2-2 r_2 (x+2 y-1) r_3+x (2 y-1))d_3+(-x r_1^2+2 r_3 (2 x+2 y-1) r_1 \nonumber\\ &&+r_2^2 x+(r_3^2+2 u) (x+2 y-1)-2 r_2 r_3 (2 x+2y-1))d_4),\\ A^{4'}_3 &=&-\frac{ N_0 m_Z{}^{9/2}}{L_1 \sqrt{r_3}} (T_b (2 y-1) d_1+T_b (-2 x-2 y+1) d_2 +(T_c-2T_c y)d_3+T_c (r_1^2-r_2^2+r_3^2+2 u)d_4) \end{eqnarray} \end{widetext} \subsection{Coefficients for $B^*_c$} There are 12 basic Lorentz structures $B_j$ for the case of $B^*_c$ $(^3S_1)$, which are \begin{eqnarray} B_1 &=& \epsilon(k)\cdot\epsilon(q_3),\;\; B_2 =\frac{i}{m_Z^2}\varepsilon(k,q_3,\epsilon(k),\epsilon(q_3)),\nonumber\\ B_3 &=&\frac{i}{m_Z^2}\varepsilon(k,q_2,\epsilon(k),\epsilon(q_3)),\;\; B_4 = \frac{i}{m_Z^2}\varepsilon(q_3,q_2,\epsilon(k),\epsilon(q_3)),\nonumber\\ B_5 &=&\frac{k\cdot\epsilon(q_3) q_3\cdot\epsilon(k)}{m_Z^2},\;\; B_6 =\frac{k\cdot\epsilon(q_3) q_2\cdot\epsilon(k)}{m_Z^2},\nonumber\\ B_7 &=&\frac{q_2\cdot\epsilon(q_3) q_3\cdot\epsilon(k)}{m_Z^2},\;\; B_8 =\frac{q_2\cdot\epsilon(k) q_2\cdot\epsilon(q_3)}{m_Z^2}, \nonumber\\ B_9 &=&\frac{i}{m_Z^4} \varepsilon(k,q_3,q_2,\epsilon(k))k \cdot \epsilon(q_3),\nonumber\\ B_{10} &=& \frac{i}{m_Z^4} \varepsilon(k,q_3,q_2,\epsilon(q_3))q_3 \cdot \epsilon(k), \nonumber\\ B_{11} &=& \frac{i}{m_Z^4} \varepsilon(k,q_3,q_2,\epsilon(q_3))q_2 \cdot \epsilon(k), \nonumber\\ B_{12} &=& \frac{i}{m_Z^4} \varepsilon(k,q_3,q_2,\epsilon(k))q_2 \cdot \epsilon(q_3) . \nonumber \end{eqnarray} The values of the coefficients $A^{1}_j$ and $A^{3'}_j$ are \begin{widetext} \begin{eqnarray} A^1_1 &=&\frac{4 L_1 m_Z{}^{7/2} }{\sqrt{r_3}}(T_b (r_2 y+r_1 (x+y-1))r_3 d_1+T_b (-2 xr_1^2+(r_3 (x-1)-2 r_2 x) r_1 \nonumber\\ &&-2 u x+r_3^2 y+2 x y)d_2-T_c (2 x r_1^2+(2 r_2 x+r_3 (x-1)) r_1+2 u x+r_3^2 y-2 x y)d_3 \nonumber\\ &&-T_c (r_1^3-(r_2^2-r_3^2-2u+x+y) r_1+r_2 y)r_3 d_4),\\ A^1_2 &=&\frac{L_1 m_Z{}^{7/2} }{\sqrt{r_3}}(-r_1 r_3 d_1+(r_1 (-2r_1- r_3)-4 u+4y)d_2 +(2 r_1^2+r_3 r_1+4 u-4 y)d_3-r_1 r_3 d_4),\\ A^1_3 &=&\frac{L_1 m_Z{}^{7/2} }{\sqrt{r_3}}((r_1-r_2)r_3 d_1+(3 r_3^2-2 x)d_2-(r_3^2-2x)d_3- (r_1-r_2) r_3 d_4),\\ A^1_4 &=&-\frac{2 L_1 m_Z{}^{7/2} }{\sqrt{r_3}}(r_1 r_3 d_1+(r_1 r_3+x-1)d_2+(r_2 r_3-x+1)d_3+r_2 r_3 d_4),\\ A^1_5 &=&\frac{4 L_1 m_Z{}^{7/2} }{\sqrt{r_3}}(-r_1 r_3 T_b d_1+T_b (2 r_1^2+r_3 r_1+2 u-2 y)d_2 +T_c(2 r_1^2+r_3 r_1+2 u-2 y)d_3- r_1 r_3 T_c d_4),\\ A^1_6 &=&-4 L_1 m_Z{}^{7/2}( (3 r_1+r_2) \sqrt{r_3} T_b d_1+r_3^{3/2} T_b d_2-r_3^{3/2} T_c d_3+(r_1-r_2) \sqrt{r_3} T_c d_4),\\ A^1_7 &=&8 L_1 m_Z{}^{7/2}(-r_1 \sqrt{r_3} T_b d_2+r_2 \sqrt{r_3} T_c d_3+r_3^{3/2}T_c d_4),\\ A^1_8 &=&8 L_1 m_Z{}^{7/2} r_3^{3/2}(T_b d_1+T_c d_4),\\ A^1_9 &=&A^1_{10}=\frac{2 L_1 m_Z{}^{7/2}}{\sqrt{r_3}}(d_2-d_3),\\ A^1_{11} &=&A^1_{12}=0,\\ A^{3'}_1 &=&-\frac{m_Z{}^{9/2} N_0 }{4 L_2 \sqrt{r_3}}(r_3 ((2 x+2 y-1) r_1^2+(r_2-2 r_2 y) r_1+u+y-2 y (x+y))d_1 +(-2 x r_1^3+(r_3 (4 x+4 y-3)\nonumber\\ &&-2 r_2 x) r_1^2+(r_2 r_3-2 (u+(x-1) x)+2 (r_3^2-2 r_2r_3+2 u-x) y) r_1+2 r_2 x y+r_3 (-2 y^2-2 x y+y+u))d_2 \nonumber\\ &&-((r_3-2 r_2 x) r_1^2+(-2 x r_2^2+r_3 (4 x+4 y-3) r_2-2 x (x+y-1)) r_1 -2 r_2u-4 r_2^2 r_3 y+ 2 r_2 (r_3^2+2 u+x) y\nonumber\\ &&+r_3 (-2 y^2-2 x y+y+u))d_3 +r_3 (-u-r_1 (r_1+r_2 (2 x-1))+((r_1-r_2)^2+r_3^2+2 u) y)d_4),\\ A^{3'}_2 &=&\frac{2 m_Z{}^{9/2} N_0 }{L_2 
\sqrt{r_3}}(T_b (r_1 (x-1)+(r_1 -r_2) y)d_2 -r_3 T_b y d_1 +T_c (r_1 (x-1)+(r_1 -r_2) y)d_3-r_3 T_c (r_1^2+u)d_4),\\ A^{3'}_3 &=&\frac{m_Z{}^{9/2} N_0 \sqrt{r_3} }{L_2}(T_b (1-2 y)d_1+T_b (2 x+2 y-1)d_2+T_c(2 x+2 y-1)d_3 +2r_2 r_3 T_c d_4),\\ A^{3'}_4 &=&-\frac{m_Z{}^{9/2} N_0 \sqrt{r_3} }{L_2}(T_b (d_1+d_2)+T_c d_3+T_c (2 x+2 y-1) d_4),\\ A^{3'}_5 &=&\frac{m_Z{}^{9/2} N_0}{2 L_2 \sqrt{r_3}}(r_3 (r_1^2-y)d_1+(r_2 y-r_1(u+x+y-1))d_2 + (r_3 y-r_2 (u+y)+r_1 (x+y-1))d_3+r_3(r_1^2+u)d_4),\\ A^{3'}_6 &=&\frac{m_Z{}^{9/2} N_0 \sqrt{r_3}}{4 L_2}( - (2 u+2 y-1)d_1+(2 r_1 r_3-2 x-2 y+1)d_2 +(2r_2 r_3+2 y-1)d_3+2r_2 r_3 d_4),\\ A^{3'}_7 &=&-\frac{m_Z{}^{9/2} N_0 }{4 L_2 \sqrt{r_3}}(-r_3 d_1+(r_3-2 r_1 x)d_2+(r_3-2 r_2 x)d_3-r_3 (2 x+4 y-1)d_4),\\ A^{3'}_8 &=&\frac{m_Z{}^{9/2} N_0 \sqrt{r_3} }{2 L_2}(x+2 y-1)(d_1+d_4),\\ A^{3'}_9 &=&-\frac{2 m_Z{}^{9/2} N_0 \sqrt{r_3} }{L_2}T_c d_3,\\ A^{3'}_{10} &=&A^{3'}_{11}=\frac{2 m_Z{}^{9/2} N_0 }{L_2 \sqrt{r_3}}(r_1 T_b d_2- r_2 T_c d_3),\\ A^{3'}_{12} &=&-\frac{2 m_Z{}^{9/2} N_0 \sqrt{r_3} }{L_2}T_c d_4, \end{eqnarray} The values of the coefficients $A^{2}_j$ and $A^{4'}_j$ are \begin{eqnarray} A^2_1 &=&-\frac{L_2 m_Z{}^{7/2} }{\sqrt{r_3}}(r_3 (2 r_1^3-2 r_2 r_1^2+(2 u-x-3 y+1) r_1+r_2 y)d_1+(r_3 r_1(6r_1^2-2r_1 r_2) \nonumber\\ &&+2 (u-2 x) r_1^2+(2 r_2 (x-u)+r_3 (4 u-x-5 y+1)) r_1-2u x+r_2 r_3 y+2 x y)d_2 \nonumber\\ &&-((4 r_2 r_3-2 x) r_1^2+(2 r_2 r_3^2-(4 r_2^2+x+y-1) r_3+2 r_2 u)r_1-2 r_2^2 u+r_2 r_3 (4 u-3 y) \nonumber\\ &&+2 x (y-u))d_3+r_3(r_1^3-2 r_2 r_1^2+(r_2^2+r_3^2+2 u-x-y) r_1+r_2 (y-2 u))d_4),\\ A^2_2 &=&\frac{4 L_2 m_Z{}^{7/2} }{\sqrt{r_3}}(-r_1 r_3 T_b d_1+T_b (r_1(4 r_1-r_3)+4 (u-y))d_2 +T_c (r_1 (4 r_1-r_3)+4 (u-y))d_3-r_1 r_3 T_c d_4),\\ A^2_3 &=&\frac{4 L_2 m_Z{}^{7/2} }{\sqrt{r_3}}(- (r_1-r_2)r_3 T_b d_1+T_b (2x-2r_2-r_3)d_2 +T_c (2x-2r_2-r_3)d_3-(r_1-r_2) r_3 T_c d_4),\\ A^2_4 &=&\frac{8 L_2 m_Z{}^{7/2} }{\sqrt{r_3}}(T_b (x-1)d_2+T_c (x-1)d_3),\\ A^2_5 &=&-\frac{L_2 m_Z{}^{7/2}}{\sqrt{r_3}}( r_1 r_3 d_1+(r_1 (4 r_1- r_3)+2(u-y))(d_2-d_3)+ r_1 r_3 d_4),\\ A^2_6 &=&-L_2 m_Z{}^{7/2} (r_1-r_2) \sqrt{r_3}(d_1+d_2-d_3+d_4),\\ A^2_7 &=&2 L_2 m_Z{}^{7/2} r_1 \sqrt{r_3}(d_1+d_4),\\ A^2_8 &=&2 L_2 m_Z{}^{7/2} (r_1-r_2) \sqrt{r_3}(d_1+d_4),\\ A^2_9 &=&A^2_{10}=-\frac{8 L_2 m_Z{}^{7/2} }{\sqrt{r_3}}(T_b d_2+T_c d_3),\\ A^2_{11} &=&A^2_{12}=0,\\ A^{4'}_1 &=&\frac{m_Z^{9/2} N_0}{L_1 \sqrt{r_3}}(r_3 T_b (r_1^2-r_2 r_1+u+(-2 x-2 y+1) y)d_1 +T_b (r_3 r_1^2+(2 x (x+y-1)-r_2 r_3) r_1 +2 r_2 x y \nonumber\\ &&+r_3 (y+u-2 y^2-2 x y))d_2 +T_c ((r_2 r_3+2 x (x+y-1)) r_1-r_3 r_1^2+2 r_2 x y-r_3 (y+u-2 y^2-2 x y))d_3 \nonumber\\ &&+r_3 T_c ((y-1) r_1^2+r_2 r_1-u+(-r_2^2+r_3^2+2 u) y)d_4),\\ A^{4'}_2 &=&\frac{m_Z{}^{9/2} N_0 }{2 L_1 \sqrt{r_3}}( r_3 (y-r_1^2)d_1+(r_1 (r_1^2-r_2r_1+2 u+x-1)+(2 r_2-r_3) y)d_2 \nonumber\\ &&+ (r_2 r_1^2-(r_2^2+x-1) r_1+2 r_2 u-(2 r_2+r_3) y)d_3+r_3 (r_1^2-r_2 r_1+u)d_4),\\ A^{4'}_3 &=&\frac{m_Z{}^{9/2} N_0}{4 L_1 \sqrt{r_3}}(r_3 (2 r_1 (r_2-r_1)+2 y-1)d_1+ (-2 r_1r_3^2-2 y r_3+r_3+2 (r_1-r_2) x)d_2 \nonumber\\ &&+ (-2 r_2 r_3^2-2 y r_3+r_3+2 (r_2-r_1)x)d_3-r_3 (r_3^2-(r_1-r_2)^2)d_4),\\ A^{4'}_4 &=&\frac{m_Z{}^{9/2} N_0 }{4 L_1 \sqrt{r_3}}(r_3 d_1+(2 r_2-r_3+4 r_1 (x+y-1))d_2 +(2 r_1-r_3+4 r_2 (x+y-1))d_3+r_3 (2 x+2 y-1)d_4),\\ A^{4'}_5 &=&-\frac{2 m_Z{}^{9/2} N_0}{L_1 \sqrt{r_3}}( -r_3 T_b y d_1+T_b (r_1 (r_1^2-r_2r_1+u+x-1)+r_3 y)d_2+T_c (-r_2 r_1^2 \nonumber\\ &&+(r_2^2+x+y-1) r_1-r_2 u+(r_2+r_3)y)d_3-r_3 T_c (r_1^2-r_2 r_1+u)d_4),\\ A^{4'}_6 &=&\frac{m_Z{}^{9/2} N_0 \sqrt{r_3} }{L_1}(T_b (2 r_1 (r_1-r_2)+2 u+2 y-1)d_1+T_b(2 x+2 
y-1)d_2 +(T_c-2 T_c y)d_3-4r_1 r_2 T_c d_4),\\ A^{4'}_7 &=&\frac{m_Z{}^{9/2} N_0}{L_1 \sqrt{r_3}}( r_3 (-T_b d_1 +T_b d_2+T_c d_3) +(2 x+4 y-1)(2T_b r_1 d_2 -2T_c r_2 d_3 -T_c r_3 d_4)),\\ A^{4'}_8 &=&-\frac{2 m_Z{}^{9/2} N_0 \sqrt{r_3} }{L_1}(x+2 y-1)(T_b d_1+T_c d_4),\\ A^{4'}_9 &=&-\frac{m_Z{}^{9/2} N_0 }{2 L_1 \sqrt{r_3}}(2r_1 d_2-(r_1-r_2)d_3),\\ A^{4'}_{10} &=&-\frac{m_Z{}^{9/2} N_0 }{2 L_1 \sqrt{r_3}}(r_1 d_2+r_2 d_3),\\ A^{4'}_{11} &=&\frac{m_Z{}^{9/2} N_0 \sqrt{r_3}}{2 L_1}d_1,\\ A^{4'}_{12} &=&\frac{m_Z{}^{9/2} N_0 \sqrt{r_3}}{2 L_1}d_4 . \end{eqnarray} \end{widetext}
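As a practical numerical cross-check of the phase-space limits derived in the first section of this appendix, the following short script verifies that $\cos\theta_{23}$, as given there, reaches $-1$ at $s_1^{\min}$ and $+1$ at $s_1^{\max}$. It is only a sketch: the quark masses and the nonrelativistic choice $m_{B_c}=m_b+m_c$ below are illustrative assumptions, not necessarily the values adopted in the numerical results of this paper.
\begin{verbatim}
import math

# Illustrative masses in GeV (assumed values, not taken from this paper)
mZ, mb, mc = 91.1876, 4.9, 1.5
mBc = mb + mc                          # assume m_{B_c} ~ m_b + m_c

def eta(x, y, z):                      # eta(x,y,z) = (x-y-z)^2 - 4yz
    return (x - y - z)**2 - 4.0*y*z

def s1_limits(s2):                     # s1^{min}, s1^{max} at fixed s2
    a = (s2 - mZ**2 + mBc**2)*(s2 - mb**2 + mc**2)
    b = math.sqrt(eta(s2, mZ**2, mBc**2)*eta(s2, mb**2, mc**2))
    return (mc**2 + mBc**2 - (a + b)/(2.0*s2),
            mc**2 + mBc**2 - (a - b)/(2.0*s2))

def cos23(s1, s2):                     # cos(theta_23) from the text
    q20 = (mZ**2 + mb**2 - s1)/(2.0*mZ)
    q30 = (mZ**2 + mBc**2 - s2)/(2.0*mZ)
    p2 = math.sqrt(q20**2 - mb**2)     # |q_2|
    p3 = math.sqrt(q30**2 - mBc**2)    # |q_3|
    return (s1 + s2 - mZ**2 - mc**2 + 2.0*q20*q30)/(2.0*p2*p3)

s2 = 0.5*((mb + mc)**2 + (mZ - mBc)**2)   # a point inside [s2^min, s2^max]
s1min, s1max = s1_limits(s2)
print(cos23(s1min, s2), cos23(s1max, s2)) # prints values close to -1 and +1
\end{verbatim}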
\section{Introduction}\label{sec:introduction} The study of moduli spaces of vector bundles on curves has a long and beautiful history. The moduli spaces have been central objects in many branches of mathematics, including algebraic geometry, differential geometry, mathematical physics, number theory, representation theory, and topology. This paper discusses the positivity of a restriction of the normalized Poincar\'e bundle on the moduli space and explains two consequences in the study of derived categories and arithmetically Cohen-Macaulay (ACM) bundles. Let $X$ be a connected smooth projective curve of genus $g \ge 2$. Let $r$, $d$ be two positive integers such that $r \ge 2$ and $(r, d) = 1$. We assume that $0 < d < r$. For a line bundle $L$ of degree $d$ on $X$, let $\mathrm{M} (r, L)$ be the moduli space of rank $r$, determinant $L$ stable vector bundles. It is a smooth Fano variety of dimension $(r^{2}-1)(g-1)$, and $\mathrm{Pic} (\mathrm{M} (r, L))$ is generated by an ample divisor $\Theta$. Let $\mathcal{E} $ be a Poincar\'e bundle over $X \times \mathrm{M} (r, L)$. For each point $x \in X$, the restriction of $\mathcal{E} $ to $x \times \mathrm{M} (r, L) \cong \mathrm{M} (r, L)$ is denoted by $\mathcal{E} _{x}$. We assume that $\mathcal{E} $ is normalized, in the sense that $\det(\mathcal{E} _{x}) \cong \Theta^{\ell}$ where $\ell$ is the integer such that $0 < \ell < r$ and $\ell d \equiv 1 \; \mbox{mod}\; r$. We first show that $\mathcal{E} _{x}$ and a twist of its dual are both positive in the following sense. \begin{theoremletter}\label{thm:nefintro} The two vector bundles $\mathcal{E} _{x}$ and $\mathcal{E} _{x}^{*} \otimes \Theta$ are strictly nef. \end{theoremletter} We observe that this positivity statement has two interesting applications. \subsection{Derived category of $\mathrm{M} (r, L)$} Our original motivation for this work was to study the bounded derived category $\mathrm{D} ^{b}(\mathrm{M} (r, L))$ of coherent sheaves on $\mathrm{M} (r,L)$. The structure of $\mathrm{D} ^{b}(\mathrm{M} (r, L))$, particularly its semiorthogonal decomposition, has attracted many experts. When $r=2$ and $d=1$, Narasimhan (and independently Belmans-Galkin-Mukhopadhyay) conjectured that $\mathrm{D} ^{b}(\mathrm{M} (2,L))$ has the following semiorthogonal decomposition, which is known in the $g = 2$ case (\cite[Theorem 2.9]{BO95}). \begin{conjecture}\label{conj:semiorthogonaldecomp} The derived category of $\mathrm{M} (2,L)$ has a semiorthogonal decomposition \[ \mathrm{D} ^{b}(\mathrm{M} (2,L)) = \langle \mathrm{D} ^{b}(\mathrm{pt}), \mathrm{D} ^{b}(\mathrm{pt}), \cdots, \mathrm{D} ^{b}(X_{k}), \mathrm{D} ^{b}(X_{k}), \cdots, \mathrm{D} ^{b}(X_{g-2}), \mathrm{D} ^{b}(X_{g-2}), \mathrm{D} ^{b}(X_{g-1}) \rangle, \] where $1 \leq k \leq g-2$. Here $X_k$ denotes the $k$-th symmetric product of $X$. \end{conjecture} More generally, it has been conjectured that $\mathrm{D} ^{b}(X)$ embeds into $\mathrm{D} ^{b}(\mathrm{M} (r, L))$ for every $r \ge 2$. For the normalized Poincar\'e bundle $\mathcal{E} $ on $X \times \mathrm{M} (r, L)$, we may consider the Fourier-Mukai transform $\Phi_{\mathcal{E} } : \mathrm{D} ^{b}(X) \to \mathrm{D} ^{b}(\mathrm{M} (r, L))$ with kernel $\mathcal{E} $. The following has been a well-known conjecture among experts: \begin{conjecture}\label{conj:fullyfaithful} The functor $\Phi_{\mathcal{E} } : \mathrm{D} ^{b}(X) \to \mathrm{D} ^{b}(\mathrm{M} (r, L))$ is fully faithful. Therefore, $\mathrm{D} ^{b}(X)$ is embedded into $\mathrm{D} ^{b}(\mathrm{M} (r, L))$.
\end{conjecture} This is shown for $r = 2$ and $d = 1$ in \cite{Nar17, Nar18, FK18} and for $d = 1$ and $g \ge r + 3$ in \cite{BM19}. This paper proves that the same result holds for any pair $(r, d)$ if they are coprime and if $g$ is sufficiently large compared to $r$. \begin{theoremletter}\label{thm:mainthm} Let $r \ge 2$, $d < r$ be two coprime positive integers. If $g \ge r+3$, then $\Phi_{\mathcal{E} }$ is fully faithful. \end{theoremletter} \begin{remark}\label{rmk:derivedcategory} \begin{enumerate} \item Recently, the first author and Narasimhan proved that if $X$ is non-hyperelliptic and $g \ge 16$, then $\mathrm{D} ^{b}(X_{2})$ is embedded into $\mathrm{D} ^{b}(\mathrm{M} (2, L))$ (\cite{LN21}). \item For $r = 3$ and $d = 1$, Gomez and the first author provided an explicit conjectural semiorthogonal decomposition (\cite[Conjecture 1.9]{GL20}), and new motivic decompositions of $\mathrm{M} (r, L)$ for $r \le 3$, compatible with the conjecture, have been found. See \cite{BGM18, Lee18, GL20} and references therein for more details. \end{enumerate} \end{remark} \subsection{ACM bundles on $\mathrm{M} (r, L)$} Our second application is the discovery of a family of ACM bundles on $\mathrm{M} (r, L)$. For an $n$-dimensional projective variety $V$ with an ample line bundle $A$, a vector bundle $F$ is called \emph{ACM} if $\mathrm{H} ^{i}(V, F \otimes A^{j}) = 0$ for all $j \in \mathbb{Z} $ and $0 < i < n$. An ACM bundle $F$ is \emph{Ulrich} if $\mathrm{H} ^{0}(V, F \otimes A^{-1}) = 0$ and $\mathrm{H} ^{0}(V, F) = \mathrm{rank}\; F \cdot \deg V$. ACM bundles naturally appear in the theory of matrix factorizations (\cite{Eis80}) and correspond to maximal Cohen-Macaulay modules in commutative algebra (\cite{Yos90}). Ulrich bundles make it possible to compute the associated Chow forms, and Eisenbud and Schreyer conjectured that every projective variety admits an Ulrich sheaf (\cite{ES03}). However, since such strong cohomology vanishing is difficult both to expect and to verify, very few results are known for higher-dimensional varieties, even concerning the existence of ACM bundles (except for some trivial examples). As a second application of Theorem \ref{thm:nefintro}, we show the following theorem. \begin{theoremletter}\label{thm:ACMintro} Let $r \ge 2$, $0< d < r$ be two coprime positive integers. If $g \ge 3$, then $\mathcal{E} _x$ is an ACM bundle on $\mathrm{M} (r, L)$ with respect to $\Theta$. \end{theoremletter} Indeed, our theorem proves that $\mathcal{E} _{x}$ is ACM with respect to every ample line bundle on $\mathrm{M} (r, L)$ -- see Definition \ref{def:ACM} and Remark \ref{rmk:veryample}. Our proof does not cover the $g = 2$ case, which seems to require a new technique. We expect that $\mathcal{E} _{x}$ is still ACM in this case; it would be an interesting problem to verify this. Note that $\mathcal{E} _{x}$ is an ACM bundle when $g=r=2$ (\cite{CKL19, FK18}). \subsection{Sketch of proof} The key ingredient of the proof is a study of the birational geometry of the moduli space of parabolic bundles. The moduli space $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ (see Section \ref{sec:parabolic} for the definition and notation) of parabolic bundles depends on the choice of a stability condition $\mathbf{a} $, and the wall-crossing behaviour is well understood (Section \ref{sec:wallcrossing}).
Moreover, the birational geometry of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ is governed by the wall-crossing: every rational contraction that appears in Mori's program can be described in terms of wall-crossings or their degenerations -- the forgetful map (Example \ref{ex:forgetfulmap}) and generalized Hecke correspondences (Remark \ref{rmk:Hecke}). Consult \cite{MY20, MY21} for a more general framework. \subsubsection{The positivity of $\mathcal{E} _{x}$ and $\mathcal{E} _{x}^{*}\otimes \Theta$} The first observation is that $\mathbb{P} (\mathcal{E} _{x})$ is isomorphic to the moduli space $\mathrm{M} (r, L, r-1, \epsilon)$ for a sufficiently small parabolic weight $\epsilon > 0$. From the wall-crossing diagram, we can obtain two extremal rays of the nef cone of $\mathbb{P} (\mathcal{E} _{x})$. Theorem \ref{thm:nefintro} then follows from the intersection computation on $\mathbb{P} (\mathcal{E} _{x})$ in Section \ref{sec:nef}. \subsubsection{Embedding of the derived category} The Bondal-Orlov criterion (\cite[Theorem 1]{BO95}, Theorem \ref{thm:vanishing}) reduces Theorem \ref{thm:mainthm} to checking the vanishing of cohomology groups of the form $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \mathcal{E} _{y}^{*})$. We relate this vanishing to the vanishing of the cohomology of a certain line bundle on $\mathbb{P} (\mathcal{E} _{x}) \times_{\mathrm{M} (r, L)}\mathbb{P} (\mathcal{E} _{y}^{*})$, which is identified with $\mathrm{M} (r, L, (r-1, 1), (\epsilon, \epsilon))$. In Section \ref{sec:derivedcategory}, we study the birational geometry of the latter space. We compute the effective cone and show that the space is of Fano type. Then the vanishing follows from the Kawamata-Viehweg and Le Potier vanishing theorems and cohomology extensions. \subsubsection{ACM bundles} Since the ACM condition is a cohomology vanishing condition, we may apply the same technique. We replace the vanishing of the cohomology of $\mathcal{E} _{x} \otimes \Theta^{j}$ on $\mathrm{M} (r, L)$ with that of line bundles on $\mathbb{P} (\mathcal{E} _{x}) \cong \mathrm{M} (r, L, r-1, \epsilon)$. Then the above-mentioned technique can be applied in this setup, and we verify the vanishing in Section \ref{sec:ACM}. \subsection*{Conventions} \begin{itemize} \item We work over $\mathbb{C} $. \item In this paper, $X$ denotes a smooth connected projective curve of genus $g \ge 2$. \item The coarse moduli space of rank $r$, determinant $L$ semistable vector bundles on $X$ is denoted by $\mathrm{M} (r, L)$. The degree of $L$ is denoted by $d$. Unless stated otherwise, we assume that $(r, d) = 1$, so $\mathrm{M} (r, L)$ is a smooth projective variety. \item Let $\Theta$ be the ample generator of $\mathrm{Pic} (\mathrm{M} (r, L))$. \item Let $\mathcal{E} $ be the normalized Poincar\'e bundle on $X \times \mathrm{M} (r, L)$ such that for each $x \in X$, its restriction $\mathcal{E} _{x}$ to $x \times \mathrm{M} (r, L) \cong \mathrm{M} (r, L)$ has determinant $\Theta^{\ell}$. Here $\ell$ is the unique integer satisfying $\ell d \equiv 1 \;\mathrm{mod}\; r$ and $0 < \ell < r$. \item For a vector space $W$, $\mathbb{P} (W)$ is the projective space of one-dimensional quotients. \item For a variety $V$, the bounded derived category of coherent sheaves on $V$ is denoted by $\mathrm{D} ^b(V)$. \item Every algebraic stack is defined with respect to the fppf topology. \end{itemize} \subsection*{Acknowledgements} The authors thank M. S.
Narasimhan for drawing their attention to this problem, sharing his ideas, and providing valuable suggestions on this and related projects. In particular, the first author would like to express his deepest gratitude to him for his invaluable teachings and warm encouragement over many years. Part of this work was done while the first author was visiting the Indian Institute of Science (Bangalore), where he enjoyed wonderful working conditions. He thanks Gadadhar Misra for his kind hospitality during his stay at IISc. He also thanks Ludmil Katzarkov and the Simons Foundation for partially supporting this work via the Simons Investigator Award-HMS. \section{Parabolic bundles and their moduli spaces}\label{sec:parabolic} In this section, we introduce the notion of parabolic vector bundles and their moduli spaces. This paper only considers parabolic structures consisting of at most one flag at each parabolic point. Fix a smooth connected projective curve $X$ and a finite ordered set $\mathbf{p} := (p_{1}, p_{2}, \cdots, p_{k})$ of distinct closed points of $X$. \begin{definition}\label{def:parabolicbundle} A \emph{parabolic bundle} over a pointed curve $(X, \mathbf{p} )$ of rank $r$ is a collection of data $(E, V_{\bullet})$ where \begin{enumerate} \item $E$ is a rank $r$ vector bundle over $X$; \item $V_{\bullet} = (V_{1}, V_{2}, \cdots, V_{k})$ where $V_{i}$ is a subspace of $E|_{p_{i}}$. The dimension of $V_{i}$ is called the \emph{multiplicity} of $V_{i}$ and denoted by $m_{i}$. \end{enumerate} The sequence $\mathbf{m} = (m_{1}, m_{2}, \cdots, m_{k})$ is called the \emph{multiplicity} of the parabolic bundle $(E, V_{\bullet})$. \end{definition} \begin{definition}\label{def:modulistackofparabolicbundle} Let $\mathcal{M} _{(X, \mathbf{p} )}(r, d, \mathbf{m} )$ (resp. $\mathcal{M} _{(X, \mathbf{p} )}(r, L, \mathbf{m} )$) be the moduli stack of parabolic bundles $(E, V_{\bullet})$ over $(X, \mathbf{p} )$ of rank $r$, degree $d$ (resp. determinant $L$), and multiplicity $\mathbf{m} $. If there is no confusion, we use $\mathcal{M} (r, d, \mathbf{m} )$ (resp. $\mathcal{M} (r, L, \mathbf{m} )$). \end{definition} It is straightforward to see that $\mathcal{M} (r, L, \mathbf{m} )$ is a $\times \mathrm{Gr}(m_{i}, r)$-bundle over $\mathcal{M} (r, L)$, the stack of all vector bundles of rank $r$ and determinant $L$. In particular, this Artin stack is highly non-separated. To obtain a projective coarse moduli space that enables us to do projective birational geometry, we need to introduce a stability condition. For a parabolic bundle $(E, V_{\bullet})$, a \emph{parabolic subbundle} $(F, W_{\bullet})$ is a pair such that $F \subset E$ is a subbundle and $W_{i} = F|_{p_{i}}\cap V_{i}$. A \emph{parabolic quotient bundle} is defined as a parabolic bundle $(E/F, Y_{\bullet})$ such that $Y_{i} = \mathrm{im}\, (V_{i} \to (E/F)|_{p_{i}})$. A \emph{parabolic weight} $\mathbf{a} = (a_{1}, a_{2}, \cdots, a_{k})$ is a sequence of rational numbers such that $0 < a_{i} < 1$. Intuitively, we may regard $\mathbf{a} $ as extra weights for the parabolic flags. For a parabolic bundle $(E, V_{\bullet})$, its \emph{parabolic degree} is $\mathrm{pardeg} (E, V_{\bullet}) := \deg E + \sum_{1\le i \le k}m_{i}a_{i}$. The same parabolic weight induces parabolic degrees for the parabolic subbundles and parabolic quotient bundles of $(E, V_{\bullet})$. The \emph{parabolic slope} is $\mu(E, V_{\bullet}) := \mathrm{pardeg} (E, V_{\bullet})/\mathrm{rank}\, E$. \begin{definition}\label{def:stability} Fix a parabolic weight $\mathbf{a} $.
A parabolic bundle $(E, V_{\bullet})$ is \emph{$\mathbf{a} $-(semi)stable} if for every parabolic subbundle $(F, W_{\bullet})$, $\mu(F, W_{\bullet}) \;(\le ) < \mu(E, V_{\bullet})$. A parabolic weight $\mathbf{a} $ is \emph{general} if $\mathbf{a} $-semistability coincides with $\mathbf{a} $-stability. \end{definition} \begin{definition}\label{def:modulispaceparabolicbundle} Let $(X, \mathbf{p} )$ be a $k$-pointed curve of genus $g \ge 2$. Let $\mathcal{M} (r, d, \mathbf{m} , \mathbf{a} )$ (resp. $\mathcal{M} (r, L, \mathbf{m} , \mathbf{a} )$) be the moduli stack of rank $r$, degree $d$ (resp. determinant $L$), $\mathbf{a} $-semistable parabolic bundles over $(X, \mathbf{p} )$. Let $\mathrm{M} (r, d, \mathbf{m} , \mathbf{a} )$ (resp. $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$) be its good moduli space, which is a normal projective variety of dimension $r^{2}(g-1)+1 + \sum m_{i}(r-m_{i})$ (resp. $(r^{2}-1)(g-1) + \sum m_{i}(r-m_{i})$). When $\mathbf{a} $ is general, both $\mathrm{M} (r, d, \mathbf{m} , \mathbf{a} )$ and $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ are nonsingular. \end{definition} \begin{remark} When $g \le 1$, the moduli space behaves differently. For instance, if $g = 0$, depending on $\mathbf{a} $, $\mathcal{M} (r, L, \mathbf{m} , \mathbf{a} )$ may be empty. Consult \cite{MY21}. \end{remark} \begin{example}\label{ex:smallweight} The inequality $\mu(F, W_{\bullet}) \le \mu(E, V_{\bullet})$ defining the semistability can be understood as a perturbation of the inequality $\mu(F) \le \mu(E)$ for the semistability of the underlying bundle. If $(r, d = \deg L) = 1$ and each coefficient of $\mathbf{a} $ is sufficiently small and general, then the parabolic weight does not affect the stability. Therefore, a parabolic bundle $(E, V_{\bullet})$ is $\mathbf{a} $-stable if and only if the underlying bundle $E$ is stable. Thus, there is a forgetful morphism $\mathcal{M} (r, L, \mathbf{m} , \mathbf{a} ) \to \mathcal{M} (r, L)$ and one between the coarse moduli spaces \[ \pi : \mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ) \to \mathrm{M} (r, L) \] and $\pi$ is a $\times \mathrm{Gr}(m_{i}, r)$-fibration. Indeed, for a fixed Poincar\'e bundle $\mathcal{E} $ over $X \times\mathrm{M} (r, L)$, \[ \mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ) \cong \times_{\mathrm{M} (r, L)}\mathrm{Gr}(m_{i}, \mathcal{E} _{p_{i}}). \] \end{example} \begin{example}\label{ex:forgetfulmap} More generally, if $\mathbf{a} = (a_{i})$ is general and one $a_{i}$ is sufficiently small, then forgetting the corresponding flag does not affect the stability calculation. Thus, there is a forgetful morphism \[ \pi : \mathrm{M} _{(X, \mathbf{p} )}(r, L, \mathbf{m} , \mathbf{a} ) \to \mathrm{M} _{(X, \mathbf{p} ')}(r, L, \mathbf{m} ', \mathbf{a} ') \] where $\mathbf{p} ' = \mathbf{p} \setminus \{p_{i}\}$, $\mathbf{m} ' = \mathbf{m} \setminus \{m_{i}\}$, and $\mathbf{a} ' = \mathbf{a} \setminus \{a_{i}\}$. This is a $\mathrm{Gr}(m_{i}, r)$-fibration. \end{example} \begin{example}\label{ex:topandzerodimflags} Fix a $k$-pointed curve $(X, \mathbf{p} )$. Let $\mathbf{p} ' := \mathbf{p} \setminus \{p_{k}\}$. Let $\mathbf{m} ' = (m_{i})_{1 \le i \le k-1}$ and $\mathbf{a} ' = (a_{i})_{1 \le i \le k-1}$. Suppose that $m_{k} = 0$ or $r$. Then \[ \mathrm{M} _{(X, \mathbf{p} )}(r, L, \mathbf{m} , \mathbf{a} ) \cong \mathrm{M} _{(X, \mathbf{p} ')}(r, L, \mathbf{m} ', \mathbf{a} '). \] \end{example} When one of the parabolic weights is sufficiently close to one, there is another contraction morphism.
\begin{proposition}\label{prop:generalizedHeckemodification} We use the notation in Example \ref{ex:topandzerodimflags}. For a general parabolic weight $\mathbf{a} = (a_{i})$, assume that $a_{k}$ is sufficiently close to one. Then there exists a functorial morphism \[ \pi_{1} : \mathrm{M} _{(X, \mathbf{p} )}(r, L, \mathbf{m} , \mathbf{a} ) \to \mathrm{M} _{(X, \mathbf{p} ')}(r, L(-(r-m_{k})p_{k}), \mathbf{m} ', \mathbf{a} '). \] \end{proposition} \begin{proof} It is sufficient to construct a morphism \[ \mathcal{M} _{(X, \mathbf{p} )}(r, L, \mathbf{m} , \mathbf{a} ) \to \mathcal{M} _{(X, \mathbf{p} ')}(r, L(-(r-m_{k})p_{k}), \mathbf{m} ', \mathbf{a} ') \] between algebraic stacks. Let $\widetilde{\mathbf{m} } = (\widetilde{m}_{i})$ be a multiplicity such that $\widetilde{m}_{i} = m_{i}$ for $1 \le i \le k-1$ and $\widetilde{m}_{k} = r$. By Example \ref{ex:topandzerodimflags}, there is a functorial isomorphism $\mathcal{M} _{(X, \mathbf{p} )}(r, L(-(r-m_{k})p_{k}), \widetilde{\mathbf{m} }, \mathbf{a} ) \cong \mathcal{M} _{(X, \mathbf{p} ')}(r, L(-(r-m_{k})p_{k}), \mathbf{m} ', \mathbf{a} ')$. Thus it is sufficient to show that there is a morphism \[ \mathcal{M} (r, L, \mathbf{m} , \mathbf{a} ) \to \mathcal{M} (r, L(-(r-m_{k})p_{k}), \widetilde{\mathbf{m} }, \mathbf{a} ). \] For a stable bundle $(E, V_{\bullet}) \in \mathcal{M} (r, L, \mathbf{m} , \mathbf{a} )$, let $E'$ be the kernel of the composition $E \to E|_{p_{k}} \to E|_{p_{k}}/V_{k}$. Then for each $i \ne k$, $E'|_{p_{i}}$ can be identified with $E|_{p_{i}}$. Set $V_{i}' = V_{i}$ under this identification. On the other hand, the restriction $f : E'|_{p_{k}} \to E|_{p_{k}}$ is a linear map with image $V_{k}$. We set $V_{k}' := f^{-1}(V_{k}) = E'|_{p_{k}}$. Then we obtain a parabolic bundle $(E', V_{\bullet}') \in \mathcal{M} (r, L(-(r-m_{k})p_{k}), \widetilde{\mathbf{m} }, \mathbf{a} )$. Thus, we have a morphism \begin{equation}\label{eqn:generalizedHecke} \begin{split} \mathcal{M} (r, L, \mathbf{m} , \mathbf{a} ) &\to \mathcal{M} (r,L(-(r-m_{k})p_{k}), \widetilde{\mathbf{m} })\\ (E, V_{\bullet}) & \mapsto (E', V_{\bullet}'). \end{split} \end{equation} We claim that $(E', V_{\bullet}')$ is $\mathbf{a} $-semistable; granting this, the morphism in Equation \eqref{eqn:generalizedHecke} factors through $\mathcal{M} (r,L(-(r-m_{k})p_{k}), \widetilde{\mathbf{m} }, \mathbf{a} )$. Suppose the claim fails. Then there is a parabolic subbundle $(F', W_{\bullet}')$ of $(E', V_{\bullet}')$ such that $\mu(F', W_{\bullet}') > \mu(E', V_{\bullet}')$. Let $\mathrm{rank}\, F' = s$, $\deg F' = e$, and $n_{i} = \dim W_{i}'$. Note that $n_{k} = s$. Set $d = \deg L$. Then \begin{equation}\label{eqn:slopecomparison} \begin{split} \mu(E, V_{\bullet}) - \mu(E', V_{\bullet}') &= \frac{d + \sum m_{i}a_{i}}{r} - \frac{d - (r-m_{k}) + \sum_{i \ne k}m_{i}a_{i} + ra_{k}}{r}\\ &= \frac{(r-m_{k})(1-a_{k})}{r}. \end{split} \end{equation} In general, $F'$ is not a subbundle of $E$. But there is a subbundle $F$ of $E$ such that $F/F'$ is a sheaf supported on $p_{k}$ and $\dim (F/F')|_{p_{k}} = s - c$, where $c := \dim F|_{p_{k}} \cap V_{k}$. For the induced parabolic subbundle $(F, W_{\bullet})$ of $(E, V_{\bullet})$, \begin{equation}\label{eqn:slopecomparisonsubbundle} \begin{split} \mu(F, W_{\bullet}) - \mu(F', W_{\bullet}') &= \frac{e + (s-c) + \sum_{i\ne k}a_{i}n_{i} + a_{k}c}{s} - \frac{e + \sum_{i \ne k}a_{i}n_{i} + a_{k}s}{s}\\ &= \frac{(s-c)(1-a_{k})}{s}.
\end{split} \end{equation} By combining \eqref{eqn:slopecomparison} and \eqref{eqn:slopecomparisonsubbundle}, we have \[ \mu(E, V_{\bullet}) - \mu(F, W_{\bullet}) = \mu(E', V_{\bullet}') - \mu(F', W_{\bullet}') + (1-a_{k})\left(\frac{r-m_{k}}{r} - \frac{s - c}{s}\right). \] Note that $\mu(E', V_{\bullet}') - \mu(F', W_{\bullet}')$ is independent of $a_{k}$, as the coefficient of $a_{k}$ in each term is one. Thus, if $a_{k}$ is sufficiently close to one, then the last term is negligible. By assumption, $\mu(E', V_{\bullet}') - \mu(F', W_{\bullet}') < 0$, and hence the left-hand side is also negative. This violates the stability of $(E, V_{\bullet})$, and we obtain a contradiction. \end{proof} \begin{remark}\label{rmk:Hecke} The morphism in Proposition \ref{prop:generalizedHeckemodification} can be understood as a generalized Hecke correspondence. When $d = k = 1$ and $m = r-1$, up to taking a dual bundle, we obtain the classical Hecke correspondence in the sense of \cite[Section 4]{NR75}. A difference in the $d > 1$ case is that $\mathrm{M} (r, L, m, a)$ does not admit morphisms to both $\mathrm{M} (r, L)$ and $\mathrm{M} (r, L(-x))$, so we need a birational modification of $\mathrm{M} (r, L, m, a)$. This can be explained in terms of the parabolic wall-crossing described in Section \ref{sec:wallcrossing} below. \end{remark} \section{Wall-crossing}\label{sec:wallcrossing} In this section, we review how the moduli space $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ changes as $\mathbf{a} $ varies. \subsection{General theory} Let $k$ be the number of parabolic points. Recall that a parabolic weight is, under our restrictive setting that each parabolic point carries a single parabolic flag, a length-$k$ sequence of rational numbers $\mathbf{a} = (a_{i})$ with $0 < a_{i} < 1$. The closure of the set of parabolic weights is $[0, 1]^{k} \subset \mathbb{R} ^{k}$. There is a wall-chamber decomposition of $[0, 1]^{k}$. A parabolic bundle $(E, V_{\bullet}) \in \mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ is strictly semi-stable if and only if there is a maximal destabilizing subbundle $(F, W_{\bullet})$ such that $\mu(F, W_{\bullet}) = \mu(E, V_{\bullet})$. More explicitly, this is possible only if \begin{equation}\label{eqn:wallequation} \frac{e+\sum n_{i}a_{i}}{s} = \frac{d+\sum m_{i}a_{i}}{r} \end{equation} for some $0 < s < r$, $e \in \mathbb{Z} $, and $\mathbf{n} = (n_{i})$. Here $s$ is the rank, $e$ is the degree, and $\mathbf{n} $ is the multiplicity of $(F, W_{\bullet})$. So we require that $n_{i} \le \min\;\{s, m_{i}\}$. Let $\Delta(s, e, \mathbf{n} )$ be the set of weights that satisfy \eqref{eqn:wallequation}. Note that this is an intersection of a hyperplane and $[0, 1]^{k}$. We call $\Delta(s, e, \mathbf{n} )$ a \emph{wall} if it is nonempty. We also obtain \begin{equation}\label{eqn:wallduality} \Delta(s, e, \mathbf{n} ) = \Delta(r-s, d-e, \mathbf{m} - \mathbf{n} ). \end{equation} Note that $\Delta(s, e, \mathbf{n} ) = \Delta(ts, te, t\mathbf{n} )$ if $ts < r$ for some integer $t > 1$. We call such a wall a \emph{multiple wall}; otherwise, it is a \emph{simple wall}. A wall $\Delta(s, e, \mathbf{n} )$ is simple if and only if $\{s, e, n_{i}\}$ are coprime and $\{r-s, d-e, m_{i} - n_{i}\}$ are coprime. The stability changes only if the parabolic weight $\mathbf{a} $ lies on one of the walls. So for each open chamber $C \subset [0, 1]^{k}$ and any $\mathbf{a} , \mathbf{a} ' \in C$, $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ) \cong \mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ')$.
The stability coincides with the semistability if $\mathbf{a} \in (0, 1)^{k} \setminus \bigcup \Delta(s, e, \mathbf{n} )$. Let \[ \xymatrix{\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ^{-}) \ar@{<-->}[rr] \ar[rd]_{\pi_{-}}&& \mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ^{+}) \ar[ld]^{\pi_{+}}\\ &\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )} \] be a wall-crossing. Suppose that $\mathbf{a} $ is a general point of $\Delta(s, e, \mathbf{n} )$, and that $\mathbf{a} ^{-}$ and $\mathbf{a} ^{+}$ are two nearby weights in the opposite chambers. The contraction maps $\pi_{\pm}$ are birational surjections. Let $Y^{\pm}$ be the exceptional loci on $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ^{\pm})$ and let $Y := \pi_{\pm}(Y^{\pm})$. The subvarieties $Y^{\pm}$ are called the \emph{wall-crossing centers}. For $(E, V_{\bullet}) \in Y^{-}$, there is a unique maximal destabilizing subbundle $(E^{-}, V_{\bullet}^{-}) \in \mathrm{M} (s, e, \mathbf{n} , \mathbf{a} )$, which fits into an exact sequence \[ 0 \to (E^{-}, V_{\bullet}^{-}) \to (E, V_{\bullet}) \to (E^{+}, V_{\bullet}^{+}) \to 0 \] of parabolic bundles. The map $\pi_{-}$ restricts to a map $Y^{-} \to Y$, which sends $(E, V_{\bullet})$ to $((E^{-}, V_{\bullet}^{-}), (E^{+}, V_{\bullet}^{+}))$. Conversely, if $ x := ((E^{-}, V_{\bullet}^{-}), (E^{+}, V_{\bullet}^{+}))$ is a general point in $Y$, so that both $(E^{-}, V_{\bullet}^{-})$ and $(E^{+}, V_{\bullet}^{+})$ are stable, then the fiber $\pi_{-}^{-1}(x)$ is a projective space $\mathbb{P} \mathrm{Ext} ^{1}((E^{+}, V_{\bullet}^{+}), (E^{-}, V_{\bullet}^{-}))$ (\cite[Lemma 1.4]{Yok95}). If $(E, V_{\bullet})$ lies in a unique irreducible component of $Y^{-}$, the image of this component is isomorphic to $\mathrm{M} (s, e, \mathbf{n} , \mathbf{a} ) \times_{\mathrm{Pic}(X)}\mathrm{M} (r-s, d-e, \mathbf{m} - \mathbf{n} , \mathbf{a} )$. For our purposes, we need a lower bound on the codimension of $Y^{\pm}$. Observe that the parabolic bundles in $Y^{-}$ are stable with respect to $\mathbf{a} ^{-}$ but unstable with respect to $\mathbf{a} ^{+}$. Thus, $Y^{-}$ parametrizes parabolic bundles that are unstable with respect to some weight. The codimension of the unstable locus is estimated in \cite{Sun00}. For an outline of the proof, see also \cite[Section 3.2]{MY20}. \begin{theorem}[\protect{\cite[Proposition 5.1]{Sun00}}]\label{thm:codimunstable} In $\mathcal{M} (r, L, \mathbf{m} )$, the codimension of the unstable locus is at least $(r-1)(g-1) + 1$. \end{theorem} \begin{corollary}\label{cor:codimcenter} The codimension of the wall-crossing center is at least $(r-1)(g-1) + 1$. In particular, if $g \ge 2$, every wall-crossing is a flip. \end{corollary} We say a wall-crossing is \emph{simple} if: \begin{enumerate} \item the wall $\Delta(s, e, \mathbf{n} )$ is a simple wall; and \item $\mathbf{a} \in \Delta(s, e, \mathbf{n} )$ lies on a unique wall. \end{enumerate} A simple wall-crossing has an explicit description. The wall-crossing centers $Y^{\pm}$ are irreducible, and their image $Y \cong \mathrm{M} (s, e, \mathbf{n} , \mathbf{a} ) \times_{\mathrm{Pic}(X)}\mathrm{M} (r-s, d-e, \mathbf{m} - \mathbf{n} , \mathbf{a} )$ is a smooth variety. Let $(\mathcal{E} ^{-}, \mathcal{V} _{\bullet}^{-})$ (resp. $(\mathcal{E} ^{+}, \mathcal{V} _{\bullet}^{+})$) be the Poincar\'e family over $\mathrm{M} (s, e, \mathbf{n} , \mathbf{a} )$ (resp. $\mathrm{M} (r-s, d-e, \mathbf{m} - \mathbf{n} , \mathbf{a} )$).
The standard GIT construction and the descent method imply the existence of such Poincar\'e families (\cite[Chapter 5]{New78}, \cite[Section 4.6]{HL10}). Then $Y^{-} \cong \mathbb{P} R^{1}\pi_{- *}\mathcal{P}ar\mathcal{H}om ((\mathcal{E} ^{+}, \mathcal{V} _{\bullet}^{+}), (\mathcal{E} ^{-}, \mathcal{V} _{\bullet}^{-}))$ and $Y^{+} \cong \mathbb{P} R^{1}\pi_{+ *}\mathcal{P}ar\mathcal{H}om ((\mathcal{E} ^{-}, \mathcal{V} _{\bullet}^{-}), (\mathcal{E} ^{+}, \mathcal{V} _{\bullet}^{+}))$ (\cite[Section 1]{Yok95}). In particular, they are projective bundles over $Y$. Finally, it is well known that the blow-up of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ^{-})$ along $Y^{-}$ is isomorphic to the blow-up of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ^{+})$ along $Y^{+}$. \subsection{Main example} From now on, we focus on the particular case where $k = 2$ and $\mathbf{m} = (r-1, 1)$, which is our primary interest in this paper. We set $\mathbf{p} = (x, y)$ and use an appropriate modification of the notation, such as $\mathbf{m} = (m_{x}, m_{y})$ and $\mathbf{a} = (a_{x}, a_{y})$. Let $\Delta(s, e, \mathbf{n} )$ be a wall on $[0,1]^{2}$ and let $\mathbf{a} = (a_{x}, a_{y})$ be a general point on it. Let $(E, V_{\bullet}) \in Y \subset \mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ be a general polystable parabolic bundle on the wall-crossing center. Then $(E, V_{\bullet}) \cong (F_{1}, W_{1 \bullet}) \oplus (F_{2}, W_{2\bullet})$ and $\mu(E, V_{\bullet}) =\mu(F_{1}, W_{1\bullet}) = \mu(F_{2}, W_{2\bullet})$. There are two possibilities. First of all, it is possible that one of the $F_{i}$'s (say $F_{1}$) has the largest possible intersection with the flags of $E$. That means $\dim F_{1}|_{x} \cap V_{x} = \dim F_{1}|_{x} = s$ and $\dim F_{1}|_{y} \cap V_{y} = \dim V_{y} = 1$. Then we have an equality \[ \frac{e+sa_{x} + a_{y}}{s} = \frac{d + (r-1)a_{x} + a_{y}}{r}, \] or equivalently, $sa_{x} + (r-s)a_{y} = sd - re$. The slope of this line in the $(a_{x}, a_{y})$-plane is negative, so we call the wall a \emph{negative wall}. To intersect with the interior of $[0, 1]^{2}$, it is necessary that $0 < sd - re < r$. Since these walls are $\Delta(s, e, (s, 1)) = \Delta(r-s, d-e, (r-s-1, 0))$, they are simple walls. The second case is that $F_{1}$ has the maximal intersection with the flag at $x$ but does not meet the flag at $y$. In other words, $\dim F_{1}|_{x} \cap V_{x} = \dim F_{1}|_{x}$ and $\dim F_{1}|_{y} \cap V_{y} = 0$. Then we have \[ \frac{e+sa_{x}}{s} = \frac{d+(r-1)a_{x} + a_{y}}{r}, \] so $sa_{x} - sa_{y} = sd - re$. Thus, the slope of the wall $\Delta(s, e, (s, 0))$ is one, and we call it a \emph{positive wall}. The nonempty intersection with $(0, 1)^{2}$ is equivalent to $-s < sd - re < s$. Since $(r, d) = 1$, $sd - re \ne 0$ and there is no wall passing through the origin. See Figure \ref{fig:wallchamber} for an example of the wall-chamber decomposition.
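Since the walls are cut out by the two explicit linear equations above, the wall system in Figure \ref{fig:wallchamber} can also be generated mechanically. The following is a minimal sketch (the function name \texttt{walls} is our own illustrative choice, not part of any cited construction) that enumerates the negative and positive walls for coprime $(r, d)$:
\begin{verbatim}
from fractions import Fraction

def walls(r, d):
    """List the walls in [0,1]^2 for m = (r-1, 1).
    Negative walls: s*ax + (r-s)*ay = t with t = s*d - r*e, 0 < t < r.
    Positive walls: ax - ay = t/s with -s < t < s."""
    negative, positive = [], []
    for s in range(1, r):
        for e in range(0, s*d//r + 2):   # all e compatible with the bounds
            t = s*d - r*e
            if 0 < t < r:
                negative.append((s, e, t))
            if -s < t < s:
                positive.append((s, e, Fraction(t, s)))
    return negative, positive

negative, positive = walls(5, 2)
# negative -> [(1, 0, 2), (2, 0, 4), (3, 1, 1), (4, 1, 3)]
#   e.g. (3, 1, 1) is the wall 3*ax + 2*ay = 1
# positive -> slopes t/s = -1/2, 1/3, 3/4, -1/2 for (s, e) = (2,1), (3,1),
#   (4,1), (4,2); here (2,1) and (4,2) give the same line ax - ay = -1/2,
#   the multiple wall Delta(2,1,(2,0)) = Delta(4,2,(4,0)) drawn thicker
#   in the figure.
\end{verbatim}
The output reproduces exactly the seven lines of Figure \ref{fig:wallchamber}.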
\begin{figure}[!ht] \begin{tikzpicture}[scale=7] \draw[line width = 1pt] (0, 0) -- (1, 0); \draw[line width = 1pt] (1, 0) -- (1, 1); \draw[line width = 1pt] (0, 0) -- (0, 1); \draw[line width = 1pt] (0, 1) -- (1, 1); \draw[line width = 1pt] (0.33, 0) -- (0, 0.5); \draw[line width = 1pt] (1, 0.25) -- (0, 0.5); \draw[line width = 1pt] (0.75, 0) -- (0.5, 1); \draw[line width = 1pt] (1, 0.66) -- (0.5, 1); \draw[line width = 1pt] (0.75, 0) -- (1, 0.25); \draw[line width = 2pt] (0, 0.5) -- (0.5, 1); \draw[line width = 1pt] (0.33, 0) -- (1, 0.66); \node at (-0.05, -0.05) {$0$}; \node at (1.05, -0.05) {$1$}; \node at (-0.05, 1.05) {$1$}; \node at (-0.05, 0.5) {$\frac{1}{2}$}; \node at (0.33, -0.05) {$\frac{1}{3}$}; \node at (0.75, -0.05) {$\frac{3}{4}$}; \node at (1.05, 0.25) {$\frac{1}{4}$}; \node at (1.05, 0.66) {$\frac{2}{3}$}; \node at (0.5, 1.05) {$\frac{1}{2}$}; \node at (0.2, 0.2) {$\Delta(3, 1, (3, 1))$}; \node at (0.3, 0.5) {$\Delta(1, 0, (1, 1))$}; \node at (0.55, 0.65) {$\Delta(4, 1, (4, 1))$}; \node at (0.8, 0.8) {$\Delta(2, 0, (2, 1))$}; \node at (0.15, 0.75) {$\Delta(2, 1, (2, 0)) = \Delta(4, 2, (4, 0))$}; \node at (0.85, 0.5) {$\Delta(3, 1, (3, 0))$}; \node at (0.9, 0.15) {$\Delta(4, 1, (4, 0))$}; \end{tikzpicture} \caption{The wall-chamber decomposition for $r = 5$ and $d = 2$}\label{fig:wallchamber} \end{figure} \subsection{Mori's program} The wall-crossing picture can be incorporated into the projective birational geometry of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ in the nicest possible way. Let $\mathbf{a} $ be a general parabolic weight. Then every rational contraction of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ can be obtained in terms of wall-crossings, forgetful maps, and generalized Hecke correspondences. Recall that $\mathrm{Pic}^{G}(V)$ is the group of linearized line bundles on $V$. Let $\mathrm{N} ^{1, G}(V)_{\mathbb{R} }$ be the group of numerical classes of linearized $\mathbb{R} $-line bundles. \begin{lemma}\label{lem:GIT} Let $G$ be a reductive group. Let $V$ be a normal $\mathbb{Q} $-factorial projective variety equipped with a linearized $G$-action. Let $L \in \mathrm{Pic}^{G}(V)$ be a linearized ample divisor such that $V^{ss}(L) = V^{s}(L) \ne \emptyset$. Then there is a surjective linear map $\mathrm{N} ^{1, G}(V)_{\mathbb{R} } \to \mathrm{N} ^{1}(V/\!/ _{L}G)_{\mathbb{R} }$. \end{lemma} \begin{proof} Let $E \in \mathrm{N} ^{1,G}(V)_{\mathbb{Q} }$. Then $E$ is represented by a linearized $\mathbb{Q} $-line bundle, which we denote again by $E$. By taking some power, we may assume that $E$ is a genuine line bundle. The coincidence of the stability and the semistability implies that for each point $x \in V^{ss}(L)$, the stabilizer is a finite group. Thus, if we take some power again, we may assume that the stabilizer acts trivially on each fiber of $E$. Now by Kempf's descent lemma (\cite[Theorem 2.3]{DN89}), $E$ descends to a line bundle $E/\!/ _{L}G$ over $V/\!/ _{L}G$. This map can be linearly extended and completed, so we have the desired linear map $\mathrm{N} ^{1, G}(V)_{\mathbb{R} } \to \mathrm{N} ^{1}(V /\!/ _{L}G)_{\mathbb{R} }$. It is surjective, since for any line bundle $F$ on $V/\!/ _{L}G$, its pull-back $\widetilde{F}$ on $V^{ss}(L)$ is a line bundle. By the $\mathbb{Q} $-factoriality of $V$, after taking some power, it can be extended to a line bundle $\widetilde{F}$ on $V$ (not necessarily uniquely, depending on the codimension of $V \setminus V^{ss}(L)$). Since $V$ is normal, some power of $\widetilde{F}$ admits a linearization (\cite[Corollary 1.6]{MFK94}).
So we obtain an element in $\mathrm{N} ^{1, G}(V)_{\mathbb{R} }$. \end{proof} \begin{proposition}\label{prop:weightanddivisor} Let $[0, 1]^{k}$ be the closure of the set of parabolic weights. Let $\mathbf{a} \in (0, 1)^{k}$ be a general parabolic weight. Then there is a linear isomorphism between the cone over $[0, 1]^{k}$ and the effective cone $\mathrm{Eff}(\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ))$ of divisors. \end{proposition} \begin{proof} By the standard construction of the moduli space of parabolic bundles as an $\mathrm{SL}_{N}$ GIT quotient (\cite[Section 4]{MS80}), all of these moduli spaces can be constructed as GIT quotients of the same smooth variety $Z$ with various linearizations, and the parabolic weights depend linearly on the choice of linearization. In particular, there is a linear embedding $(0, 1)^{k} \to \mathrm{N} ^{1, \mathrm{SL}_{N}}(Z)_{\mathbb{R} }$. Lemma \ref{lem:GIT} then induces a linear map $(0, 1)^{k} \to \mathrm{N} ^{1}(Z/\!/ _{L}\mathrm{SL}_{N})_{\mathbb{R} } = \mathrm{N} ^{1}(\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ))_{\mathbb{R} }$. Since $Z$ is normal, the forgetful map $\mathrm{N} ^{1, \mathrm{SL}_{N}}(Z)_{\mathbb{R} } \to \mathrm{N} ^{1}(Z)_{\mathbb{R} }$ is surjective (\cite[Corollary 1.6]{MFK94}). Since the character group of $\mathrm{SL}_{N}$ is trivial, it is injective. So $\mathrm{N} ^{1, \mathrm{SL}_{N}}(Z)_{\mathbb{R} } = \mathrm{N} ^{1}(Z)_{\mathbb{R} }$. The map $\mathrm{N} ^{1}(Z)_{\mathbb{R} } \to \mathrm{N} ^{1}(Z/\!/ _{L}\mathrm{SL}_{N})_{\mathbb{R} }$ is bijective because the unstable locus has codimension $\ge 2$ (Theorem \ref{thm:codimunstable}). Therefore the map $(0, 1)^{k} \to \mathrm{N} ^{1}(\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ))_{\mathbb{R} }$ is also a linear embedding. Thus, there is an embedding of the cone over $(0, 1)^{k}$ into $\mathrm{Eff}(\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ))$, which maps $\mathbf{a} '$ to its associated line bundle $L_{\mathbf{a} '}$. Now we show that the cone over the closure $[0, 1]^{k}$ of $(0, 1)^{k}$ can be identified with $\mathrm{Eff}(\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ))$. Recall that to any effective divisor $D$ (or equivalently, a line bundle $\mathcal{O} (D)$) on a normal $\mathbb{Q} $-factorial projective variety $V$, we may associate a rational contraction $V \dashrightarrow V(D)$, where \[ V(D) := \mathrm{Proj}\; \bigoplus_{m \ge 0}\mathrm{H} ^{0}(V, \mathcal{O} (mD)). \] Conversely, any rational contraction of $V$ can be obtained in this way. If $D \in \mathrm{int}\;\mathrm{Eff}(V)$, then $V \dashrightarrow V(D)$ is a birational map, and if $D \in \partial \mathrm{Eff}(V)$, then $V \dashrightarrow V(D)$ is a contraction with positive-dimensional general fibers. Note that on the boundary $\partial [0, 1]^{k}$, one of the coordinates must be either zero or one. In the first case, we obtain the rational contraction $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ) \to \mathrm{M} (r, L, \mathbf{m} ', \mathbf{a} ')$ of Example \ref{ex:forgetfulmap}. In the latter case, we have the generalized Hecke modification of Proposition \ref{prop:generalizedHeckemodification}. All of these are contractions with positive-dimensional fibers, so they must be associated with divisors on the boundary of the effective cone. Since the effective cone is convex, this suffices to prove the result. \end{proof} \section{Nef vector bundles}\label{sec:nef} Let $\mathcal{E} $ be the normalized Poincar\'e bundle over $X \times \mathrm{M} (r, L)$.
Recall that for any $x \in X$, $\mathcal{E} _{x}$ is the vector bundle on $\mathrm{M} (r, L)$ obtained by restricting $\mathcal{E} $ to $x \times \mathrm{M} (r, L)$. In this section, we prove the nefness of $\mathcal{E} _{x}$. \begin{theorem}\label{thm:nef} The restricted Poincar\'e bundle $\mathcal{E} _{x}$ is a strictly nef vector bundle. \end{theorem} \begin{remark}\label{rmk:nef} The case $d = 1$ of Theorem \ref{thm:nef} is shown in \cite[Proposition 3.3]{Nar17} and \cite[Lemma 13]{BM19}. So we assume that $d > 1$. Consult Remark \ref{rmk:d=1} for the difference in the $d = 1$ case. \end{remark} We immediately obtain another strictly nef bundle. This proves Theorem \ref{thm:nefintro}. \begin{corollary}\label{cor:nef} The vector bundle $\mathcal{E} _{x}^{*}\otimes \Theta$ is strictly nef. \end{corollary} \begin{proof} Fix a line bundle $A$ of degree $1$ on $X$. Consider the vector bundle $\mathcal{E} ^* \otimes p^* A \otimes q^* \Theta$ on $X \times \mathrm{M} (r,L)$, where $p : X \times \mathrm{M} (r, L) \to X$ and $q : X \times \mathrm{M} (r, L) \to \mathrm{M} (r, L)$ are the two projections. From the isomorphism $\mathrm{M} (r,L) \cong \mathrm{M} (r,L^{*}) \cong \mathrm{M} (r, A^{r} \otimes L^{*})$, we see that $\mathcal{E} ^* \otimes p^* A\otimes q^* \Theta$ is the normalized Poincar\'e bundle on $X \times \mathrm{M} (r,A^{r} \otimes L^*) \cong X \times \mathrm{M} (r, L)$. The restriction of $\mathcal{E} ^* \otimes p^* A\otimes q^* \Theta$ to $x \times \mathrm{M} (r,L)$ is isomorphic to $\mathcal{E} _x^* \otimes \Theta$. From Theorem \ref{thm:nef}, we see that $\mathcal{E} _{x}^{*}\otimes \Theta$ is strictly nef. \end{proof} From now on, we prove the nefness of $\mathcal{E} _{x}$. By definition, we need to show that $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)$ is nef. Observe that $\mathbb{P} (\mathcal{E} _{x}) \cong \mathrm{M} (r, L, r-1, \epsilon)$ for some very small $\epsilon > 0$ (Example \ref{ex:smallweight}). We explicitly analyze the first wall-crossing of the moduli space $\mathrm{M} (r, L, r-1, \epsilon)$ upon increasing $\epsilon$ toward $1$. Recall that $\ell$ is a positive integer such that $\ell d \equiv 1 \;\mbox{mod} \;r$ and $0 < \ell < r$. \begin{lemma}\label{lem:1stwallcrossing} Let $a$ be the smallest parabolic weight on a wall. Then $a = 1/\ell$. Furthermore, a maximal destabilizing subbundle has rank $k\ell$ and degree $ke$ for some $k \in \mathbb{Z} $ and an integer $e$ satisfying $\ell d - re = 1$. \end{lemma} \begin{proof} Let $\Delta(s, e, n)$ be a wall. Note that $n$ is either $s$ or $s-1$. By Equation \eqref{eqn:wallduality}, exchanging $s$ with $r-s$ if necessary, we may assume that $n = s$. Then from $(e+sa)/s = (d+(r-1)a)/r$, we have $a = (sd - re)/s$. Since $(r, d) = 1$, we can find a unique integer $0 < s < r$ and $e \in \mathbb{Z} $ such that $sd - re = 1$, namely $s = \ell$. We claim that $a = (\ell d - re)/\ell = 1/\ell$ provides the first wall. Suppose that there is another wall $a' = (s'd - re')/s'$. Then $s'd - re' = k$ for some positive integer $k$; if $k = 1$, the uniqueness above gives $(s', e') = (\ell, e)$, so we may assume $k > 1$. So $s'd \equiv k \;\mathrm{mod}\; r$. On the other hand, $k\ell d \equiv k \;\mathrm{mod}\; r$. So if $k\ell < r$, then $s' = k\ell$ and $e' = ke$. Then $a' = k/(k\ell) = 1/\ell = a$. If $k\ell \ge r$, there is a unique positive integer $t$ such that $0 < s' = k\ell - tr < r$. Then $a' = k/s' = k/(k\ell-tr) > k/(k\ell) = 1/\ell = a$. The above numerical computation tells us that $\Delta(\ell, e, \ell) = \Delta(s', e', s')$ only if $(s', e')=(k\ell, ke)$. So we obtain the last assertion. For instance, for $r = 5$ and $d = 2$ (the case of Figure \ref{fig:wallchamber}), we have $\ell = 3$ and $e = 1$, so the first wall appears at $a = 1/3$.
\end{proof} We have the following diagram: \[ \xymatrix{&\mathbb{P} (\mathcal{E} _{x}) = \mathrm{M} (r, L, r-1, \epsilon) \ar[ld]_{\pi} \ar[rd]^{\pi_{-}}\\ \mathrm{M} (r, L) && \mathrm{M} (r, L, r-1, 1/\ell)} \] The first map $\pi$ is a projective bundle, and $\pi_{-}$ is a small contraction by Corollary \ref{cor:codimcenter}. Moreover, $\rho(\mathbb{P} (\mathcal{E} _{x})) = \rho(\mathrm{M} (r, L)) + 1 = 2$. Since $\pi_{-}$ is a small contraction, $1 \le \rho(\mathrm{M} (r, L, r-1, 1/\ell)) < \rho(\mathrm{M} (r, L, r-1, \epsilon)) = 2$, so $\rho(\mathrm{M} (r, L, r-1, 1/\ell)) = 1$. Let $A$ be an ample generator of $\mathrm{Pic}(\mathrm{M} (r, L, r-1, 1/\ell))$. Then $\pi^{*}\Theta$ and $\pi_{-}^{*}A$ generate $\mathrm{N} ^{1}(\mathbb{P} (\mathcal{E} _{x}))_{\mathbb{R} }$. \begin{definition}\label{def:twocurves} Fix a \emph{general} point $((E^{-}, V^{-}), (E^{+}, V^{+}))$ in the component $\mathrm{M} (\ell, e, \ell, 1/\ell) \times_{\mathrm{Pic}(X)}\mathrm{M} (r-\ell, d-e, r-\ell-1, 1/\ell)$ of the wall-crossing center in $\mathrm{M} (r, L, r-1, 1/\ell)$. The fiber $\pi_{-}^{-1}(((E^{-}, V^{-}), (E^{+}, V^{+})))$ is a projective space $\mathbb{P} \mathrm{Ext} ^{1}((E^{+}, V^{+}), (E^{-}, V^{-}))$. Let $C$ be a line class in it. \end{definition} \begin{lemma}\label{lem:intersection2} The intersection number $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1) \cdot C$ is zero. \end{lemma} \begin{proof} The image $\pi(\mathbb{P} \mathrm{Ext} ^{1}((E^{+}, V^{+}), (E^{-}, V^{-}))) = \mathbb{P} \mathrm{Ext} ^{1}(E^{+}, E^{-}) =: \mathbb{P} $ parametrizes isomorphism classes of extensions, and there is an exact sequence over $X \times \mathbb{P} $ \[ 0 \to p^{*}E^{-} \otimes q^{*}\mathcal{O} _{\mathbb{P} }(1) \to E \otimes q^{*}\mathcal{O} _{\mathbb{P} }(m) \to p^{*}E^{+} \to 0 \] (\cite[Lemma 2.3]{Ram73}, \cite[Example 2.1.12]{HL10}). Here $p : X \times \mathbb{P} \to X$ and $q : X \times \mathbb{P} \to \mathbb{P} $ are the two projections. If we restrict the exact sequence to $x \times C \cong x \times \mathbb{P} ^{1} \subset X \times \mathbb{P} $, we obtain \[ 0 \to E^{-}_{x} \otimes \mathcal{O} _{\mathbb{P} ^{1}}(1) \to E_{x} \otimes \mathcal{O} _{\mathbb{P} ^{1}}(m) \to E^{+}_{x} \to 0. \] Since $\mathcal{E} _{x}$ (and hence its restriction $E_{x}$) is normalized as $c_{1}(\mathcal{E} _{x}) = \Theta^{\ell}$ where $0 < \ell < r$, and $E^{-}_{x}$ and $E^{+}_{x}$ are constant, $\ell = c_{1}(E^{-}_{x} \otimes \mathcal{O} _{\mathbb{P} ^{1}}(1)) = c_{1}(E_{x} \otimes \mathcal{O} _{\mathbb{P} ^{1}}(m)) = \ell + rm$. Thus, we have $m = 0$. Then $E_{x}|_{\pi(C)}$ fits in $0 \to \mathcal{O} _{\mathbb{P} ^{1}}(1)^{\ell} \to E_{x} \to \mathcal{O} _{\mathbb{P} ^{1}}^{r-\ell} \to 0$. Since $\mathrm{Ext} ^{1}(\mathcal{O} _{\mathbb{P} ^{1}}^{r-\ell}, \mathcal{O} _{\mathbb{P} ^{1}}(1)^{\ell}) \cong \mathrm{H} ^{1}(\mathbb{P} ^{1}, \mathcal{O} _{\mathbb{P} ^{1}}(1))^{\ell(r-\ell)} = 0$, this is a split extension. Therefore $\pi^{-1}(\pi(C)) = \mathbb{P} (\mathcal{O} _{\mathbb{P} ^{1}}(1)^{\ell} \oplus \mathcal{O} _{\mathbb{P} ^{1}}^{r-\ell})$. The parabolic flag in $E_{x}$ is determined by that of $E^{+}_{x}$ and it is fixed over $C$. This implies that $C \cong \mathbb{P} (\mathcal{O} _{\mathbb{P} ^{1}}) \hookrightarrow \mathbb{P} (\mathcal{O} _{\mathbb{P} ^{1}}(1)^{\ell} \oplus \mathcal{O} _{\mathbb{P} ^{1}}^{r-\ell})$. Therefore $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)|_{C} = \mathcal{O} _{\mathbb{P} (\mathcal{O} _{\mathbb{P} ^{1}})}(1) = \mathcal{O} _{\mathbb{P} ^{1}}$ and $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1) \cdot C = 0$.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:nef}] From $\rho(\mathbb{P} (\mathcal{E} _{x})) = 2$, $\pi_{-}^{*}A \cdot C = 0$, and Lemma \ref{lem:intersection2}, we can conclude that $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)$ and $\pi_{-}^{*}A$ are proportional. $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)$ is a positive multiple of $\pi_{-}^{*}A$ because it intersects the line class in a fiber of $\pi : \mathbb{P} (\mathcal{E} _{x}) \to \mathrm{M} (r, L)$ positively. Therefore $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)$ is semi-ample and hence nef. By definition, $\mathcal{E} _{x}$ is nef. Moreover, $\mathcal{E} _{x}$ is strictly nef because $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)$ is not ample, as it intersects $C$ trivially. \end{proof} We immediately obtain the nef cones of $\mathbb{P} (\mathcal{E} _{x})$ and $\mathbb{P} (\mathcal{E} _{x}^{*})$. The bigness in the statements follows from Lemma \ref{lem:pullbackofample} and Corollary \ref{cor:pullbackofample}. \begin{corollary}\label{cor:nefcone} The nef cone of $\mathbb{P} (\mathcal{E} _{x}) = \mathrm{M} (r, L, r-1, \epsilon)$ is generated by $\pi^{*}\Theta$ and $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)$. If $d \ne 1$, $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1)$ is big. \end{corollary} \begin{corollary}\label{cor:nefconedual} The nef cone of $\mathbb{P} (\mathcal{E} _{x}^{*}) = \mathrm{M} (r, L, 1, \epsilon)$ is generated by $\pi^{*}\Theta$ and $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x}^{*})}(1)\otimes \pi^{*}\Theta$. If $d \ne r-1$, $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x}^{*})}(1) \otimes \pi^{*}\Theta$ is big. \end{corollary} \begin{remark}\label{rmk:d=1} It is worth pointing out a difference in the $d = 1$ case. The numerical computation in Lemma \ref{lem:1stwallcrossing} is still valid. But in this case, from $\ell d \equiv 1 \;\mathrm{mod}\; r$, we have $\ell = 1$ and thus $a = 1$. Therefore, the first wall-crossing is precisely the fibration $\mathrm{M} (r, L, r-1, \epsilon) \to \mathrm{M} (r, L(-x))$ in Proposition \ref{prop:generalizedHeckemodification}, that is, a contraction in the Hecke correspondence. \end{remark} \section{Vanishing of cohomology and embedding of the derived category}\label{sec:derivedcategory} The aim of this section is to prove Theorem \ref{thm:mainthm}. \subsection{Bondal-Orlov criterion} Let $\mathcal{E} $ be the normalized Poincar\'e bundle over $X \times \mathrm{M} (r, L)$. Let $p : X \times \mathrm{M} (r, L) \to X$, $q : X \times \mathrm{M} (r, L) \to \mathrm{M} (r, L)$ be the two projections. Consider the Fourier-Mukai transform \begin{eqnarray*} \Phi_{\mathcal{E} } : \mathrm{D} ^{b}(X) &\to& \mathrm{D} ^{b}(\mathrm{M} (r, L))\\ F^{\bullet} & \mapsto & Rq_{*}(\mathcal{E} \otimes^L Lp^{*} F^{\bullet}). \end{eqnarray*} The Bondal-Orlov criterion (\cite[Theorem 1.1]{BO95}) provides a necessary and sufficient condition for the full faithfulness of a Fourier-Mukai transform between two smooth algebraic varieties. The next theorem is the version of the criterion applied to $\Phi_{\mathcal{E} }$. \begin{theorem}[Bondal-Orlov criterion]\label{thm:vanishing} For each $x \in X$, let $\mathcal{E} _{x}$ be the restriction of the normalized Poincar\'e bundle to $x \times \mathrm{M} (r, L)$. Then $\Phi_{\mathcal{E} } : \mathrm{D} ^{b}(X) \to \mathrm{D} ^{b}(\mathrm{M} (r, L))$ is fully faithful if and only if the following conditions hold: \begin{enumerate} \item $\mathrm{H} ^{0}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \mathcal{E} _{x}^{*}) \cong \mathbb{C} $.
\item $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \mathcal{E} _{x}^{*}) = 0$ for $i \ge 2$. \item $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \mathcal{E} _{y}^{*}) = 0$ for all $x \ne y$ and all $i$. \end{enumerate} \end{theorem} The main result of this section is the vanishing of the relevant cohomology groups when $g \ge r+3$. Theorem \ref{thm:mainthm} then follows immediately. \begin{proof}[Proof of Theorem \ref{thm:mainthm}] Items (1) and (2) in Theorem \ref{thm:vanishing} are already proved in \cite[Section 3]{BM19} by extending the work of Narasimhan and Ramanan in \cite{NR75}. Item (3) is obtained by combining Corollary \ref{cor:vanishingforEx} and Proposition \ref{prop:highervanishing}. When $(r-1)(g-1) \ge r^{2}$, or equivalently, $g \ge r+3$, all degrees $i$ are covered by the above two statements. \end{proof} \begin{remark} For $d = 1$, Belmans and Mukhopadhyay proved the theorem for $g \ge r+3$ (\cite[Theorem 3]{BM19}). \end{remark} \begin{remark} We expect that the genus bound in Theorem \ref{thm:mainthm} is not essential. It would be a very interesting task to prove the statement for every rank and genus. \end{remark} \subsection{Vanishing of cohomology}\label{ssec:cohomologyvanishing} From now on, we investigate the cohomology of line bundles on $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ where there are two parabolic points $\mathbf{p} = (x, y)$ with multiplicities $\mathbf{m} = (r-1, 1)$. When the parabolic weight $\mathbf{a} $ is sufficiently small, $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ) \cong \mathbb{P} (\mathcal{E} _{x}) \times_{\mathrm{M} (r, L)}\mathbb{P} (\mathcal{E} _{y}^{*})$ by Example \ref{ex:smallweight}. By investigating the wall-crossing, we show the vanishing of some cohomology groups on $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$. The following lemma is obtained by essentially the same computation as in \cite[Proposition 3.1]{Nar17}. \begin{lemma}\label{lem:DetE} Let $\mathcal{E} $ be the normalized Poincar\'e bundle on $X \times \mathrm{M} (r, L)$. Then \[ \mathrm{Det}(\mathcal{E} ^{*}) := \det(Rq_{*}(\mathcal{E} ^{*}))^{-1} \cong \Theta^{\ell(1-g) - e}. \] \end{lemma} \begin{proof} For notational simplicity, let $\mathrm{M} := \mathrm{M} (r, L)$ and $\mathrm{M} ' := \mathrm{M} (r, L^{*})$. Then there is an isomorphism $\psi : \mathrm{M} \to \mathrm{M} '$. Since the isomorphism maps the unique ample generator $\Theta_{\mathrm{M} '}$ to $\Theta_{\mathrm{M} }$, by \cite[Proposition 2.1]{Nar17}, \[ \Theta_{\mathrm{M} } = \psi^{*}(\Theta_{\mathrm{M} '}) = (\mathrm{Det}(\mathcal{E} ^{*}))^{r} \otimes (\det(\mathcal{E} ^{*}|_{\{x\} \times \mathrm{M} }))^{-d+r(1-g)} = \mathrm{Det}(\mathcal{E} ^{*})^{r} \otimes \Theta_{\mathrm{M} }^{-\ell(-d+r(1-g))}. \] Thus, $\mathrm{Det}(\mathcal{E} ^{*}) = \Theta_{\mathrm{M} }^{\frac{1 + \ell (-d + r(1-g))}{r}} = \Theta_{\mathrm{M} }^{-e + \ell(1-g)}$, where the last equality uses $\ell d - re = 1$. \end{proof} Once we fix the parabolic points and the multiplicities, the moduli spaces $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ are all birational to one another, and for any general $\mathbf{a} $ and $\mathbf{a} '$, $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ and $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ')$ are connected by finitely many flips. In particular, their Picard groups are identified. For notational simplicity, we will suppress all pull-backs (by flips and regular contractions) in our notation.
For instance, when there is only one parabolic point $x$, there are two rational contractions $\pi : \mathrm{M} (r, L, r-1, \epsilon) \to \mathrm{M} (r, L)$ and $\pi_{1} : \mathrm{M} (r, L, r-1, \epsilon) \dashrightarrow \mathrm{M} (r, L, r-1, 1-\epsilon) \to \mathrm{M} (r, L(-x))$. When there is no risk of confusion, we use $A \otimes B$ instead of $\pi^{*}A \otimes \pi_{1}^{*}B$. We denote $\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(a)$ by $\mathcal{O} (a)$. We also set $\mathcal{O} (a, b) := p_{1}^{*}\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(a) \otimes p_{2}^{*}\mathcal{O} _{\mathbb{P} (\mathcal{E} _{y}^{*})}(b)$ where $p_{1} : \mathbb{P} (\mathcal{E} _{x})\times_{\mathrm{M} (r, L)}\mathbb{P} (\mathcal{E} _{y}^{*}) \to \mathbb{P} (\mathcal{E} _{x})$ and $p_{2} : \mathbb{P} (\mathcal{E} _{x})\times_{\mathrm{M} (r, L)}\mathbb{P} (\mathcal{E} _{y}^{*}) \to \mathbb{P} (\mathcal{E} _{y}^{*})$. \begin{lemma}\label{lem:pullbackofample} Let $k = (r, d-1)$. On $\mathrm{M} (r, L, r-1, a)$, \[ \Theta_{\mathrm{M} (r, L(-x))}^{k} = \mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(r) \otimes \Theta_{\mathrm{M} (r, L)}^{1-\ell}. \] \end{lemma} \begin{proof} The proof is a careful refinement of that of \cite[Proposition 3.3]{Nar17}. We may assume that $a$ is sufficiently small, so $\mathrm{M} (r, L, r-1, a) \cong \mathbb{P} (\mathcal{E} _{x})$. Let $p : X \times \mathbb{P} (\mathcal{E} _{x}) \to X$ and $q : X \times \mathbb{P} (\mathcal{E} _{x}) \to \mathbb{P} (\mathcal{E} _{x})$ be the two projections, and let $\pi : X \times \mathbb{P} (\mathcal{E} _{x}) \to X \times \mathrm{M} (r, L)$ be the induced map. Let $i_{x} : \mathbb{P} (\mathcal{E} _{x}) \cong x \times \mathbb{P} (\mathcal{E} _{x}) \hookrightarrow X \times \mathbb{P} (\mathcal{E} _{x})$ be the inclusion. Recall that there are two exact sequences that appear in the construction of the Hecke correspondence: \[ 0 \to H(\mathcal{E} ) \to \pi^{\#}(\mathcal{E} ) \to p^{*}\mathcal{O} _{x} \otimes q^{*}\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1) \to 0 \] and \begin{equation}\label{eqn:sesforpiE} 0 \to \pi^{\#}(\mathcal{E} ^{*}) \to K(\mathcal{E} ) \to i_{x *}(\mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(-1)\otimes T_{x}) \to 0. \end{equation} Here $\pi^{\#}\mathcal{E} $ is the pull-back of $\mathcal{E} $ to $X \times \mathbb{P} (\mathcal{E} _{x})$ and $T_{x}$ is the tangent space of $X$ at $x$. By \cite[Proposition 2.1]{Nar17}, \[ \Theta_{\mathrm{M} (r, L(-x))}^{k} = \Theta_{\mathrm{M} (r, L^{*}(x))}^{k} = \mathrm{Det}(K(\mathcal{E} ))^{r} \otimes (\det K(\mathcal{E} )|_{z \times \mathbb{P} (\mathcal{E} _{x})})^{1-d+r(1-g)} \] for any $z \in X$. From \eqref{eqn:sesforpiE}, we have $\mathrm{Det}(\pi^{\#}(\mathcal{E} ^{*})) \otimes \mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(1) = \mathrm{Det}(K(\mathcal{E} ))$.
Since $\mathrm{Det}(\pi^{\#}(\mathcal{E} ^{*})) = \pi^{\#}\mathrm{Det}(\mathcal{E} ^{*})$ and $\pi^{\#}(\mathcal{E} ^{*})|_{z \times \mathbb{P} (\mathcal{E} _{x})} \cong K(\mathcal{E} )|_{z \times \mathbb{P} (\mathcal{E} _{x})}$ for any $z \ne x$, \begin{equation} \begin{split} &\mathrm{Det}(K(\mathcal{E} ))^{r} \otimes (\det K(\mathcal{E} )|_{z \times \mathbb{P} (\mathcal{E} _{x})})^{1-d+r(1-g)}\\ &= \mathrm{Det}(K(\mathcal{E} ))^{r}\otimes (\det \pi^{\#}(\mathcal{E} ^{*})|_{z \times \mathbb{P} (\mathcal{E} _{x})})^{1-d+r(1-g)} = \mathrm{Det}(K(\mathcal{E} ))^{r} \otimes \Theta_{\mathrm{M} (r, L)}^{-\ell(1-d+r(1-g))}\\ &= \pi^{\#}(\mathrm{Det}(\mathcal{E} ^{*}))^{r} \otimes \mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(r) \otimes \Theta_{\mathrm{M} (r, L)}^{-\ell(1-d+r(1-g))}\\ &= \Theta_{\mathrm{M} (r, L)}^{r\ell(1-g)-re} \otimes \mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(r) \otimes \Theta_{\mathrm{M} (r, L)}^{-\ell(1-d+r(1-g))} = \mathcal{O} _{\mathbb{P} (\mathcal{E} _{x})}(r) \otimes \Theta_{\mathrm{M} (r, L)}^{1-\ell}. \end{split} \end{equation} The second and the fourth equalities follow from the normalization of $\mathcal{E} $ and Lemma \ref{lem:DetE}, respectively. \end{proof} \begin{corollary}\label{cor:pullbackofample} Let $k = (r, d-(r-1))$. Then \[ \Theta_{\mathrm{M} (r, L(-(r-1)y))}^{k} = \mathcal{O} _{\mathbb{P} (\mathcal{E} _{y}^{*})}(r) \otimes \Theta_{\mathrm{M} (r, L)}^{1+\ell}. \] \end{corollary} \begin{proof} Under the identification $\mathrm{M} (r, L) \cong \mathrm{M} (r, L^{*})$, the normalized Poincar\'e bundle over $\mathrm{M} (r, L^{*})$ is $\mathcal{E} ^{*}\otimes \Theta_{\mathrm{M} (r, L)}$, and $c_{1}(\mathcal{E} ^{*}\otimes \Theta_{\mathrm{M} (r, L)}) = c_{1}(\Theta_{\mathrm{M} (r, L)}^{r-\ell})$. So $\mathrm{M} (r, L, 1, \epsilon) \cong \mathrm{M} (r, L^{*}, r-1, \epsilon) \cong \mathbb{P} (\mathcal{E} _{y}^{*} \otimes \Theta_{\mathrm{M} (r, L)})$. When $a \to 1$, we obtain a contraction $\mathrm{M} (r, L^{*}, r-1, a) \to \mathrm{M} (r, L^{*}(-y)) \cong \mathrm{M} (r, L(y)) \cong \mathrm{M} (r, L(-(r-1)y))$. By Lemma \ref{lem:pullbackofample}, \[ \Theta_{\mathrm{M} (r, L(-(r-1)y))}^{k} = \Theta_{\mathrm{M} (r, L^{*})}^{1-(r-\ell)} \otimes\mathcal{O} _{\mathbb{P} (\mathcal{E} _{y}^{*} \otimes \Theta)}(r) = \Theta_{\mathrm{M} (r, L)}^{1 - (r-\ell)} \otimes \mathcal{O} _{\mathbb{P} (\mathcal{E} _{y}^{*})}(r) \otimes \Theta_{\mathrm{M} (r, L)}^{r} = \mathcal{O} _{\mathbb{P} (\mathcal{E} _{y}^{*})}(r) \otimes \Theta_{\mathrm{M} (r, L)}^{1+\ell}. \] \end{proof} From now on, $\mathbf{p} = (x, y)$ and $\mathbf{m} = (r-1, 1)$. By $\Theta$ we denote the pull-back of $\Theta$ under $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ) \dashrightarrow \mathrm{M} (r, L)$. \begin{lemma}\label{lem:canonicaldivisor} For a general weight $\mathbf{a} $, the dualizing bundle of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ is \[ \omega = \mathcal{O} (-r, -r) \otimes \Theta^{-2}. \] \end{lemma} \begin{proof} We may assume that $\mathbf{a} $ is sufficiently small and $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ) \cong \mathbb{P} (\mathcal{E} _{x}) \times_{\mathrm{M} (r, L)}\mathbb{P} (\mathcal{E} _{y}^{*})$. The formula follows from the relative Euler sequence applied to $\mathbb{P} (\mathcal{E} _{x}) \to \mathrm{M} (r, L)$ and to $\mathbb{P} (\mathcal{E} _{x}) \times_{\mathrm{M} (r, L)}\mathbb{P} (\mathcal{E} _{y}^{*}) \to \mathbb{P} (\mathcal{E} _{x})$. \end{proof} \begin{proposition}\label{prop:effectivecone} Let $\mathbf{a} $ be a general weight.
The effective cone of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ is generated by the four extremal rays \[ \Theta, \mathcal{O} (r, 0) \otimes \Theta^{1-\ell}, \mathcal{O} (0, r) \otimes \Theta^{1+\ell}, \mathcal{O} (r, r) \otimes \Theta. \] \end{proposition} \begin{proof} By Proposition \ref{prop:weightanddivisor}, it is sufficient to find the four divisors associated with the four extremal parabolic weights. When $\mathbf{a} = (a_{x}, a_{y}) = (0, 0)$, the associated rational contraction is $\mathrm{M} (r, L)$ and the associated divisor is a scalar multiple of $\Theta$. When $\mathbf{a} = (1/\ell, 0)$, by Section \ref{sec:nef}, the associated divisor is a multiple of $\mathcal{O} (1, 0)$. When $\mathbf{a} = (1, 0)$, the associated rational contraction is $\mathrm{M} (r, L(-x))$ and the associated divisor is a scalar multiple of $\mathcal{O} (r, 0) \otimes \Theta^{1-\ell}$ by Lemma \ref{lem:pullbackofample}. For $\mathbf{a} = (0, 1/(r-\ell))$, we have a multiple of $\mathcal{O} (0, 1) \otimes \Theta$. Finally, for $\mathbf{a} = (0, 1)$, a multiple of $\mathcal{O} (0, r) \otimes \Theta^{1+\ell}$ is associated. By an elementary computation, for each point $\mathbf{a} \in [0, 1]^{2}$, the associated divisor can be written as a multiple of $\Theta \otimes (\mathcal{O} (r, 0) \otimes \Theta^{-\ell})^{a_{x}} \otimes (\mathcal{O} (0, r) \otimes \Theta^{\ell})^{a_{y}}$. Thus, the last extremal ray, which is associated with $\mathbf{a} = (1, 1)$, is $\mathcal{O} (r, r) \otimes \Theta$. \end{proof} \begin{corollary}\label{cor:positivity} For some general parabolic weight $\mathbf{a} $, $\mathcal{O} (r+1, r+1) \otimes \Theta^{2}$ is nef and big on $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$. \end{corollary} \begin{proof} The statement is immediate from the fact that the line bundle lies in the interior of the effective cone and there is no divisorial contraction in the wall-crossing (Corollary \ref{cor:codimcenter}). \end{proof} Recall that a normal $\mathbb{Q} $-factorial variety $V$ is of Fano type if there is an effective $\mathbb{Q} $-divisor $\Delta$ such that $-(K_{V} + \Delta)$ is ample and $(V, \Delta)$ is a klt pair. \begin{corollary}\label{cor:Fanotype} For any general $\mathbf{a} $, the moduli space $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ is of Fano type. \end{corollary} \begin{proof} This follows from the fact that $\omega^{*} = \mathcal{O} (-K) = \mathcal{O} (r,r) \otimes \Theta^{2}$ lies in the interior of the effective cone. If we pick a general weight $\mathbf{a} '$ such that the nef cone of $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ')$ contains $-K$, then $-K$ is nef and big, so $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ')$ is a smooth weakly Fano variety, hence of Fano type. For a general $\mathbf{a} \in (0, 1)^{2}$, $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ is obtained from $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ')$ by applying finitely many flips. Therefore it is of Fano type by \cite[Theorem 1.1]{GOST15}. \end{proof} \begin{corollary}\label{cor:vanishing} For $0 < i < (r-1)(g-1)$, $\mathrm{H} ^{i}(\mathrm{M} (r, L, \mathbf{m} , (\epsilon, \epsilon)), \mathcal{O} (1, 1)) = 0$. \end{corollary} \begin{proof} Since $\mathcal{O} (1, 1) = \mathcal{O} (r+1, r+1)\otimes \Theta^{2} \otimes \omega$, for $\mathbf{a} $ as in Corollary \ref{cor:positivity}, $\mathrm{H} ^{i}(\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ), \mathcal{O} (1, 1)) = 0$ for $i > 0$ by the Kawamata-Viehweg vanishing theorem.
Since $\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} )$ and $\mathrm{M} (r, L, \mathbf{m} , (\epsilon, \epsilon))$ are connected by finitely many flips with flipping centers of codimension $\ge (r-1)(g-1) + 1$ (Corollary \ref{cor:codimcenter}), $\mathrm{H} ^{i}(\mathrm{M} (r, L, \mathbf{m} , (\epsilon, \epsilon)), \mathcal{O} (1, 1)) = \mathrm{H} ^{i}(\mathrm{M} (r, L, \mathbf{m} , \mathbf{a} ), \mathcal{O} (1, 1))$ for $i < (r-1)(g-1)$ (\cite[III. Lemma 3.1]{Gro05}, \cite[Theorem 3.8]{Har67}). \end{proof} \begin{corollary}\label{cor:vanishingforEx} For $i < (r-1)(g-1)$, $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x}\otimes \mathcal{E} _{y}^{*}) = 0$. \end{corollary} \begin{proof} For $0 < i < (r-1)(g-1)$, this follows from Corollary \ref{cor:vanishing} and the Leray spectral sequence. For $i = 0$, it follows from the stability of $\mathcal{E} _{x}$, $\mathcal{E} _{y}$ (\cite[Proposition 2.1]{LN05}), and the fact that $\mathcal{E} _{x} \ne \mathcal{E} _{y}$ if $x \ne y$ (\cite[Theorem]{LN05}). \end{proof} Finally, the vanishing of the higher cohomology groups is obtained from the Le Potier vanishing theorem (\cite[Theorem 7.3.5]{Laz04}). \begin{proposition}\label{prop:highervanishing} For $i \ge r^{2}$, $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x}\otimes \mathcal{E} _{y}^{*}) = 0$. \end{proposition} \begin{proof} Note that \[ \begin{split} \mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \mathcal{E} _{y}^{*}) &= \mathrm{H} ^{i}(\mathrm{M} (r, L), \omega_{\mathrm{M} (r, L)} \otimes \mathcal{E} _{x} \otimes \mathcal{E} _{y}^{*} \otimes \Theta^{2})\\ &= \mathrm{H} ^{i}(\mathrm{M} (r, L), \omega_{\mathrm{M} (r, L)} \otimes \mathcal{E} _{x} \otimes (\mathcal{E} _{y}^{*} \otimes \Theta) \otimes \Theta). \end{split} \] The bundle $\mathcal{E} _{x} \otimes (\mathcal{E} _{y}^{*} \otimes \Theta) \otimes \Theta$ is ample because it is a tensor product of nef and ample bundles (\cite[Theorem 6.2.12. (iv)]{Laz04}). Thus, the Le Potier vanishing theorem implies the desired vanishing. \end{proof} \section{ACM bundles on $\mathrm{M} (r,L)$}\label{sec:ACM} We now turn to our second application (Theorem \ref{thm:ACMintro}) of the nefness of $\mathcal{E} _{x}$. Let $V$ be an $n$-dimensional projective variety with an ample line bundle $A$. We recall the definition of ACM bundles. \begin{definition}\label{def:ACM} A vector bundle $\mathcal{E} $ on $V$ is an \emph{ACM bundle} with respect to $A$ if $\mathrm{H} ^i(V,\mathcal{E} \otimes A^{j})=0$ for every $1 \leq i \leq n-1$ and $j \in \mathbb{Z} $. An ACM bundle $\mathcal{E} $ is \emph{Ulrich} if $\mathrm{H} ^0(V, \mathcal{E} \otimes A^{-1})=0$ and $h^0(V,\mathcal{E} )=\mathrm{rank}\, \mathcal{E} \cdot \deg V = \mathrm{rank}\, \mathcal{E} \cdot (A)^{n}$. \end{definition} For a smooth Fano variety of Picard rank one, it is straightforward to verify that every line bundle is ACM. It is also clear that if $\mathcal{E} $ is ACM with respect to $A$, then $\mathcal{E} \otimes A^{k}$ is ACM with respect to $A$ for all $k \in \mathbb{Z} $. But finding a non-trivial example of an ACM bundle is not an easy task for higher dimensional varieties. In this section, we show that $\mathcal{E} _{x}$ is ACM if $g \ge 3$. \begin{remark}\label{rmk:veryample} Many authors assume $A$ to be very ample when they consider ACM bundles. Because the Picard number of $\mathrm{M} (r, L)$ is one, Theorem \ref{thm:ACMintro} implies that $\mathcal{E} _x$ is ACM for every very ample line bundle.
On $\mathrm{M} (r, L)$, $\Theta^{k}$ is known to be very ample when $k \ge r^{2}+r$ (\cite[Theorem A]{EP04}), but the optimal $k$ for very ampleness is unknown. \end{remark} Fix a point $x \in X$ and consider the bundle morphisms $\mathbb{P} (\mathcal{E} _{x}) \to \mathrm{M} (r, L)$ and $\mathbb{P} (\mathcal{E} _{x}^{*}) \to \mathrm{M} (r, L)$. From the relative Euler sequence, we have \begin{equation}\label{eqn:dualizingsheaves} \omega_{\mathbb{P} (\mathcal{E} _{x})} \cong \mathcal{O} (-r) \otimes \Theta^{\ell - 2}, \quad \omega_{\mathbb{P} (\mathcal{E} _{x}^{*})} \cong \mathcal{O} (-r) \otimes \Theta^{-\ell-2}. \end{equation} Here we use the notational convention in Section \ref{ssec:cohomologyvanishing}. We set $n := \dim \mathrm{M} (r, L) = (r^{2}-1)(g-1)$ and assume that $g \ge 2$. We state three vanishing results coming from different sources. \begin{lemma}\label{lem:Kodairavanishing} We have $\mathrm{H} ^i(\mathrm{M} (r, L),\mathcal{E} _x \otimes \Theta^{j}) = 0$ for $i \geq 1$, $j \geq \ell-1$. \end{lemma} \begin{proof} By \eqref{eqn:dualizingsheaves}, $ \mathcal{O} (1) \otimes \Theta^{j} \cong \omega_{\mathbb{P} (\mathcal{E} _x)} \otimes \mathcal{O} (r+1) \otimes \Theta^{2-\ell+j}$. By the Kodaira vanishing theorem and Corollary \ref{cor:nefcone}, we have \begin{equation}\label{eqn:Leraysequence} \mathrm{H} ^i(\mathrm{M} (r, L),\mathcal{E} _x \otimes \Theta^{j}) \cong \mathrm{H} ^i(\mathbb{P} (\mathcal{E} _x), \mathcal{O} (1) \otimes \Theta^{j}) \cong \mathrm{H} ^i(\mathbb{P} (\mathcal{E} _x), \omega_{\mathbb{P} (\mathcal{E} _x)} \otimes \mathcal{O} (r+1) \otimes \Theta^{2-\ell+j}) = 0 \end{equation} for $i \geq 1$, $j \geq \ell-1$. \end{proof} \begin{lemma}\label{lem:LePotiervanishing} We have $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \Theta^{j}) = 0$ for $i \ge r$, $j \ge -1$. \end{lemma} \begin{proof} Since $\mathcal{E} _{x} \otimes \Theta$ is ample (\cite[Theorem 6.2.12. (iv)]{Laz04}), the Le Potier vanishing theorem (\cite[Theorem 7.3.5]{Laz04}) immediately implies that \[ \mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \Theta^{j}) = \mathrm{H} ^{i}(\mathrm{M} (r, L), \omega_{\mathrm{M} (r, L)} \otimes \mathcal{E} _{x} \otimes \Theta^{j+2}) = 0 \] for $i \ge r$ and $j \ge -1$. \end{proof} \begin{lemma}\label{lem:wallcrossing} We have $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \Theta^{j}) = 0$ for $1 \le i \le (r-1)(g-1) -1$, $j > -1+(1-\ell)/r$. \end{lemma} \begin{proof} By \eqref{eqn:Leraysequence}, it is sufficient to show that $\mathrm{H} ^{i}(\mathbb{P} (\mathcal{E} _{x}), \omega \otimes \mathcal{O} (r+1) \otimes \Theta^{2-\ell +j}) = 0$. Since $\mathbb{P} (\mathcal{E} _{x})$ and $\mathrm{M} (r, L, r-1, a)$ for a general parabolic weight $a$ are connected by finitely many flips with wall-crossing centers of codimension $\ge (r-1)(g-1) + 1$, for $0 < i < (r-1)(g-1)$, it is sufficient to show the vanishing for some $a$ (\cite[III. Lemma 3.1]{Gro05}, \cite[Theorem 3.8]{Har67}). Proposition \ref{prop:effectivecone} implies that the effective cone of $\mathbb{P} (\mathcal{E} _{x}) = \mathrm{M} (r, L, r-1, \epsilon)$, which is identified with the boundary of $\mathrm{Eff}(\mathrm{M} (r, L, \mathbf{m} , (\epsilon, \epsilon)))$ given by $a_{y} = 0$, is generated by $\Theta$ and $\mathcal{O} (r) \otimes \Theta^{1-\ell}$. Thus, for a line bundle $F = \mathcal{O} (u) \otimes \Theta^{v}$, if $u > 0$ and $v/u > (1-\ell)/r$, then $F$ is big.
Thus, for some general parabolic weight $a$, by the Kawamata-Viehweg vanishing theorem, \[ \mathrm{H} ^{i}(\mathrm{M} (r, L, r-1, a), \omega \otimes \mathcal{O} (r+1) \otimes \Theta^{2-\ell+j}) = 0 \] for $i \ge 1$ and $(2-\ell +j)/(r+1) > (1-\ell)/r$, or equivalently, $j > -1 + (1-\ell)/r$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:ACMintro}] We divide the computation into several steps. \textsf{Step 1.} It is sufficient to show that $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \Theta^{j}) = 0$ for $1 \le i \le n-1$ and $j \ge -1$. By Serre duality, $\mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \Theta^{j}) \cong \mathrm{H} ^{n-i}(\mathrm{M} (r, L), \mathcal{E} _{x}^{*} \otimes \Theta \otimes \Theta^{-j-3})$. Since $\mathcal{E} _{x}^{*}\otimes \Theta$ is the restriction of the normalized Poincar\'e bundle over $\mathrm{M} (r, L^{*}(r)) \cong \mathrm{M} (r, L)$, the vanishing for $\mathcal{E} _{x} \otimes \Theta^{j}$ with $j \le -2$ follows from the vanishing for $\mathcal{E} _{x}^{*}\otimes \Theta \otimes \Theta^{j}$ with $j \ge -1$. \textsf{Step 2.} $\ell \ne 1$. It is straightforward to check that, if $g \ge 3$, then the vanishing results in Lemmas \ref{lem:Kodairavanishing}, \ref{lem:LePotiervanishing}, and \ref{lem:wallcrossing} imply that $\mathcal{E} _{x}$ is ACM. \textsf{Step 3.} $\ell = 1$. The above three lemmas cover all cohomology groups except those with $1 \le i \le r-1$ and $j = -1$. For the $\ell = 1$ case, there is a contraction map $\pi_{1} : \mathbb{P} (\mathcal{E} _{x}) = \mathrm{M} (r, L, r-1, \epsilon) \to \mathrm{M} (r, L(-x))$ (Remark \ref{rmk:d=1}). Then by \cite[Lemma 13]{BM19}, \[ \begin{split} \mathrm{H} ^{i}(\mathrm{M} (r, L), \mathcal{E} _{x} \otimes \Theta^{-1}) &\cong \mathrm{H} ^{i}(\mathbb{P} (\mathcal{E} _{x}), \mathcal{O} (1) \otimes \Theta^{-1}) = \mathrm{H} ^{i}(\mathbb{P} (\mathcal{E} _{x}), \omega_{\mathbb{P} (\mathcal{E} _{x})} \otimes \mathcal{O} (r+1))\\ &= \mathrm{H} ^{i}(\mathbb{P} (\mathcal{E} _{x}), \omega_{\mathbb{P} (\mathcal{E} _{x})} \otimes \pi_{1}^{*}\Theta_{\mathrm{M} (r, L(-x))}^{r+1}). \end{split} \] By Koll\'ar's vanishing theorem (\cite[Theorem 2.1]{Kol86}), $R^{i}\pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})}$ is torsion free for all $i$ and \[ \mathrm{H} ^{k}(\mathrm{M} (r, L(-x)), R^{i}\pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})} \otimes \Theta_{\mathrm{M} (r, L(-x))}^{r+1}) = 0 \] for all $k > 0$. Since the Leray spectral sequence degenerates, $\mathrm{H} ^{0}(\mathrm{M} (r, L(-x)), R^{i}\pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})} \otimes \Theta_{\mathrm{M} (r, L(-x))}^{r+1}) \cong \mathrm{H} ^{i}(\mathbb{P} (\mathcal{E} _{x}), \omega_{\mathbb{P} (\mathcal{E} _{x})} \otimes \pi_{1}^{*}\Theta_{\mathrm{M} (r, L(-x))}^{r+1})$. On the other hand, over the stable locus $\mathrm{M} (r, L(-x))^{s}$, $\pi_{1}$ is a $\mathbb{P} ^{r-1}$-fibration. By checking a general fiber, we can show that $R^{i}\pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})} = 0$ for $i \ne r - 1$. Thus, we obtain the desired vanishing for $1 \le i \le r-2$. For $i = r-1$, since $R^{r-1} \pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})}$ is a torsion free sheaf, we have an injective morphism $R^{r-1} \pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})} \hookrightarrow (R^{r-1} \pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})})^{\vee\vee}$. These two sheaves are isomorphic to $\omega_{\mathrm{M} (r, L(-x))}$ over an open subset whose complement has codimension $\ge 2$ (\cite[Exercise III.8.4]{Har77}), and the latter is reflexive.
Since $\mathrm{M} (r, L(-x))$ is locally factorial (\cite[Theorem A]{DN89}), $(R^{r-1} \pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})})^{\vee\vee} \cong \omega_{\mathrm{M} (r, L(-x))} \cong \Theta_{\mathrm{M} (r, L(-x))}^{-2r}$ (\cite[Theorem F]{DN89}). Now we have \[ \begin{split} \mathrm{H} ^{0}(\mathrm{M} (r, L(-x)), R^{r-1}\pi_{1 *}\omega_{\mathbb{P} (\mathcal{E} _{x})} \otimes \Theta_{\mathrm{M} (r, L(-x))}^{r+1}) &\hookrightarrow \mathrm{H} ^{0}(\mathrm{M} (r, L(-x)), \omega_{\mathrm{M} (r, L(-x))} \otimes \Theta_{\mathrm{M} (r, L(-x))}^{r+1})\\ &= \mathrm{H} ^{0}(\mathrm{M} (r, L(-x)), \Theta_{\mathrm{M} (r, L(-x))}^{-r+1}) = 0. \end{split} \] \end{proof} When $g = 2$, the only cohomology groups that are not covered by the above vanishing results are those with $i = r-1$ and $0 \le j \le r-3$. Thus, the above proof provides the following statement. \begin{corollary}\label{cor:ACMthetak} If $g \ge 2$, $\mathcal{E} _{x} \otimes \Theta^{-1}$ is ACM with respect to $\Theta^{k}$ for $k \ge r-1$. \end{corollary} \begin{remark}\label{rmk:g=2} \begin{enumerate} \item The vanishing result in \textsf{Step 3} is proved in \cite[Proposition 19]{BM19} with a different method, under the assumption that $g \ge 3$. Our approach is valid for $g = 2$ as well. \item When $g = r = 2$, $\mathrm{M} (r, L)$ is an intersection of two quadrics and $\mathcal{E} _{x}$ is a spinor bundle (\cite{CKL19, FK18}). From this description, it was shown that $\mathcal{E} _{x}$ is ACM for all $x \in X$. \end{enumerate} \end{remark} \begin{question}\label{que:g=2} Can we extend Theorem \ref{thm:ACMintro} to the $g = 2$ case? \end{question} \begin{remark}\label{rmk:Ulrich} The bundle $\mathcal{E} _{x}$ is not Ulrich in general. For instance, if $g = r = 2$, $h^0(\mathrm{M} (r, L),\mathcal{E} _x)= 4 < 8 = 2 \deg(\mathrm{M} (r, L))$. It is an interesting problem to construct Ulrich bundles on $\mathrm{M} (r,L)$. See \cite{CKL19} for an alternative construction of Ulrich bundles in the $g=r=2$ case. \end{remark} \bibliographystyle{alpha}
\section{Introduction} \label{Sec:Intro} Open quantum systems \cite{Breuer2002,Rotter2015a} form the bridge between the world of unitary, deterministic evolution of closed quantum systems \cite{Polkovnikov2011a} and the familiar experience of our macroscopic world. Recently, open quantum systems have received renewed interest in the context of quantum information processing and quantum circuits. The coupling to the environment can lead to decoherence in arrays of qubits, which limits the fidelity of quantum operations. A sufficiently high fidelity is essential for the performance of programmable quantum devices, in particular for ``quantum supremacy,'' which was reported to have been achieved recently~\cite{Arute2019a}. Quantum measurements, via their back-action on the measured system, can mimic the effect of an environment. In a sense, the environment also ``measures'' the system, but without ``recording'' the extracted information. Coupling to the environment is thus a special type of ``blind measurement'' \cite{Roy2020}. Designing specific measurement protocols can be regarded as engineering the environment to which the quantum system is coupled. Improving our understanding of quantum measurement processes is therefore of immediate practical importance. At the same time, controlling the decoherence induced by the coupling to the environment may also help to advance our fundamental understanding of quantum measurements \cite{Schlosshauer2005a, Wiseman2009}. This important role of measurements in the quantum-information context, as well as their relation to decoherence and entanglement spreading \cite{Calabrese2004, Horodecki2009, Laflorencie2016a}, has led to a flurry of activity on the subject, especially with regard to entanglement transitions using a quantum circuit description \cite{Li2018a, Chan2019a, Skinner2019a, Li2019a, Szyniszewski2019a, Szyniszewski2020a, Bao2020a, Gullans2020a, Jian2020a, Jian2020b, Choi2020a, Fan2020a, Chen2020a, Sang2020a, Zabalo2020a, Ippoliti2021a, Lavasani2021a, Ippoliti2021b}. The effect of local quantum measurements on entanglement has also been considered for systems described by a lattice Hamiltonian, in particular, for many-body localized systems \cite{Lunt2020a}, the quantum Ising chain \cite{Lang2020a, Rossini2020a, Biella2021a, Turkeshi2021}, non-interacting spinless fermionic models \cite{Cao2019a, Alberton2020a, Buchhold2021}, Hubbard-type interacting chains with short-range \cite{Fuji2020a} and long-range \cite{Minato2021a} interactions, and ultracold gases \cite{Goto2020a}. The question of whether the effect of measurements on quantum circuits is qualitatively the same for real many-body systems is the subject of ongoing studies, as we detail below. A key feature that has emerged in the pioneering studies of monitored systems is the interplay between the entangling effect of time evolution and the disentangling effect of the measurements, leading to an entanglement phase transition \cite{Li2018a, Chan2019a, Skinner2019a, Li2019a}. Various types of measurements have been considered: local projective measurements, local weak measurements that only slightly perturb the system (for their recent applications in other contexts, see, e.g., Refs. \cite{Roy2020, Snizhko2020a, Snizhko2020b, Kumar2020b, Gebhart2020a, Xu2020a, Ivanov2020a, Manousakis2020a, Munoz2020a, Monroe2021a, Wang2021a}), non-local measurements of several sites of the system, and global measurements that act on the many-body system as a whole.
The main diagnostic tool for the measurement-induced entanglement transition is the behavior of the entanglement entropy averaged over the measurement runs, but other indicators, such as mutual information or entanglement negativity, have also been used to explore the phenomenon. However, manifestations of the entanglement transition in more conventional density correlations have not yet been sufficiently explored. The entanglement phase transition has also been discussed for measurement-only dynamics, where non-local measurements produce both entangling (by non-locality) and disentangling (by projection) trends \cite{Ippoliti2021a, VanRegemortel2021a}. The measurement-induced entanglement transition was argued to be related to the ``purification transition'' \cite{Gullans2020a}, which can be employed for quantum-state preparation, control, and manipulation by means of quantum measurements (see, e.g., Ref. \cite{Roy2020} and references therein). In addition, the properties of the entanglement phase transition have been linked to the theory of error correction in quantum information processing \cite{Choi2020a, Fan2020a, Gullans2020b, Ippoliti2021a, Ippoliti2021b, Sang2020b, Li2021a}. In the presence of additional symmetries and constraints, a more sophisticated phase diagram may emerge, where the entanglement transition is accompanied by other types of phase transitions; see, e.g., Refs.~\cite{Sang2020a,Bao2021a}. A key open question concerns the degree of universality across the various types of measurement protocols, applied to different types of systems. In particular, it has been argued \cite{Bao2020a, Szyniszewski2020a} that the effect of continuous (weak, or ``generalized'') quantum measurements on quantum circuits is, by and large, analogous to the effect of rare projective measurements. A generalized phase diagram of hybrid quantum circuits in the plane of frequency vs. strength of measurements was analyzed in Refs.~\cite{Szyniszewski2019a, Szyniszewski2020a}, where a transition between entangling (for weak or infrequent measurements) and disentangling (for strong or frequent measurements) phases was established. At the same time, indications of a possible essential difference between the transitions at strong and weak measurements were reported in Ref.~\cite{Szyniszewski2019a}. Thus, it remains a challenging task to explore the universality of the entanglement transition for various measurement setups. Another important---and still open---question concerns the properties of the different phases around the transition. The scaling of entanglement entropy with the system size exactly at the entanglement transition has been argued to exhibit logarithmic behavior \cite{Li2020a,Chen2020a} familiar from models described by conformal field theory \cite{Calabrese2004, Calabrese2009, Laflorencie2016a}. On the entangling side of the transition, volume-law scaling was found in various hybrid unitary circuits, whereas in continuously monitored fermionic and spin systems both volume-law and logarithmic-law \cite{Fuji2020a, Alberton2020a, Buchhold2021, Turkeshi2021, Jian2021a} types of entanglement scaling were reported. Such entangled phases were argued \cite{Cao2019a} to be unstable to arbitrarily weak measurements in the case of continuously monitored non-interacting fermionic chains that are measured at all sites.
The logarithmic scaling corresponds to the emergence of a critical entangled steady state in the thermodynamic limit, while the properties of small-size systems would correspond to volume-law behavior \cite{Alberton2020a, Buchhold2021}. A critical phase characterized by the conformal scaling of observables was found in free models subject to non-unitary evolution governed by a non-Hermitian Hamiltonian \cite{Chen2020a,Gopalakrishnan2020a}. One difficulty in determining the degree of universality in this large array of different systems, however, is that numerical studies thus far have mostly focused on fine-tuned models and not on generic many-body systems. This means that, for instance, the role of interactions between particles, the integrability of the model, and the interplay between many-body and measurement-induced effects remain largely unclear. Another natural deficiency of numerical studies of the measurement-induced transitions in correlated many-body systems is the limited accessibility of large system sizes, which is especially crucial for exploring the predicted change of the behavior \cite{Alberton2020a} with increasing system size in the entangling phase. A particularly promising direction, therefore, is to use the versatile approach of matrix product states (MPS) \cite{Schollwock2011a, Paeckel2019a} to simulate the dynamics, as applied recently to quantum measurements \cite{Tang2020a, Goto2020a, VanRegemortel2021a}. Specifically, MPS are a class of variational Ans{\"a}tze that approximate the exponential complexity of generic many-body states through a polynomial number of parameters, at the price of being restricted to low- to moderately-entangled states. In the present paper, we propose an MPS-based approach that describes quantum measurements in a continuous way. The dynamics of the monitored system is represented by the combination of unitary (governed by a many-body Hamiltonian) and non-unitary (effective non-Hermitian Hamiltonian) evolution. The latter models a local coupling to an external bath, thus bridging the concepts of environment-induced decoherence and quantum measurements. The advantage of the protocol we formulate here is that it can be applied to any problem that permits an MPS description, hence including interacting models. Importantly, the MPS approach can be controllably used for interacting many-body models with system sizes considerably larger than those accessible to exact methods, which are restricted to $\approx 20$ lattice sites. The model we consider here is an interacting many-body system of hard-core bosons on a lattice, which constitutes a Luttinger liquid in the low-energy, continuum limit. We study the measurement-induced dynamics starting from the (moderately entangled) ground state of the system. Depending on the strength and frequency (probability) of the measurement process, the dynamics is either entangling or disentangling, in agreement with recent predictions based on the aforementioned quantum circuit descriptions. However, we also uncover a distinct feature of the system under study: in a certain parameter range near the transition between the entangling and disentangling phases, a clustering of particles occurs, such that extended regions of particles and holes emerge over time. This signifies that, while the overall phase diagram breaks down into two distinct phases according to the entanglement scaling, the properties of the phases and the transition between them, quantified in other observables, depend on the measurement implementation.
The paper is organized as follows. In Sec.~\ref{Sec:Model}, we specify the system Hamiltonian and introduce the local measurements through an additional non-Hermitian Hamiltonian. We also compare our implementation of measurements with the existing approaches. In Sec.~\ref{Sec:Numerics}, we introduce the observables that we numerically calculate using the MPS and describe the simulation results. The obtained results for the entanglement entropy and density correlations are discussed in terms of the ``phase diagrams'' in Sec.~\ref{Sec:Phase}. Finally, we summarize our findings in Sec.~\ref{Sec:Discuss}. \section{Model and method \label{Sec:Model}} \subsection{System} We consider a hard-core boson model on a lattice of length $L$ (sites $x = 1, 2, \ldots, L$) characterized by the following Hamiltonian: \begin{equation} \mathcal{H}_0 = \sum_x \Big[ -\frac{J}{2}\Big(\hat{b}_x^\dagger \hat{b}_{x+1} + \mathrm{H.c.} \Big) + \Delta \hat{n}_x \hat{n}_{x+1} \Big]. \label{eq:ham} \end{equation} Here $\hat{b}_x^\dagger$ and $\hat{b}_x$ are the bosonic creation and annihilation operators acting on the $\{0,1\}$-manifold of local occupations, and $\hat{n}_x \equiv \hat{b}_x^\dagger \hat{b}_x$ is the local density operator. We set $J \equiv 1$ and $\hbar \equiv 1$. For $|\Delta| \leq 1$ the ground state is a Luttinger liquid; we will focus on this range of interaction below, choosing (unless otherwise specified) attractive interactions with $\Delta = -0.5$. (For stronger interactions, $|\Delta| > 1$, the ground state is ferromagnetic or antiferromagnetic.) The model \eqref{eq:ham} is known as the $t$-$V$ model in the spinless fermion language and as the XXZ Heisenberg chain in the spin-$1/2$ language. Throughout the paper, we consider the case of half-filling, i.e., we fix the number of particles to be $L/2$. \subsection{Measurement} The ``measurement'' is implemented as a quench that takes the form of a purely imaginary, and hence non-Hermitian, on-site potential: \begin{equation} \mathcal{H}_\mathrm{meas}^{(j)} = -{i\mkern1mu} M \sum_x p_x^{(j)}\, \mathrm{sgn} \Big( n_x - m_x^{(j)} \Big) \hat{n}_x. \label{eq:measHam} \end{equation} Here $n_x \equiv \langle \hat{n}_x \rangle$ is the expectation value of the on-site density, and $p_x^{(j)}$ is a binary random variable with values $\{ 0, 1 \}$ indicating whether the measurement is performed at the given time step $j$. Specifically, there is a probability $P$ that the site $x$ is measured: $p_x^{(j)}$ is 1 with probability $P$ and 0 otherwise (for simplicity, we take $P$ to be constant over the lattice). Further, $m_x^{(j)} \in [0, 1]$ is a uniformly distributed random variable, so that the factor $\mathrm{sgn} (n_x - m_x^{(j)})$ in Eq.~\eqref{eq:measHam} is akin to the Born rule, reflecting the probabilistic character of the quantum-measurement process. We break the time axis into measurement intervals, each of duration $T$, i.e., $t_{j+1} - t_j = T$. The ``measurement Hamiltonian'' \eqref{eq:measHam} acts during the time interval $t_j \leq t < t_{j+1}$. At each consecutive interval, new $p_x$'s and $m_x$'s are generated (according to the probabilities specified above), and the measurement is repeated. The parameter $M$ in Eq.~\eqref{eq:measHam} controls the measurement strength, so that the regime $MT \gg 1$ corresponds to a strong measurement, and $MT \ll 1$ to a weak measurement. The propagation with the imaginary potential does not conserve the wave-function norm; we therefore continuously restore the normalization, keeping the overall condition of half-filling.
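To illustrate the action of the measurement term, consider a single measured site [$p_x^{(j)} = 1$], neglecting $\mathcal{H}_0$ during the interval, and let the local state be $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$ with $s \equiv \mathrm{sgn} ( n_x - m_x^{(j)} )$. Propagating with Eq.~\eqref{eq:measHam} for a time $t$ and restoring the norm gives \begin{equation} e^{-{i\mkern1mu} \mathcal{H}_\mathrm{meas}^{(j)} t} |\psi\rangle \propto \alpha |0\rangle + e^{-s M t}\, \beta |1\rangle \ \longrightarrow\ \frac{\alpha |0\rangle + e^{-s M t}\, \beta |1\rangle}{\sqrt{|\alpha|^2 + e^{-2 s M t} |\beta|^2}}, \end{equation} so that for $MT \gg 1$ the site is asymptotically projected onto one of the basis states $|0\rangle$ or $|1\rangle$ (depending on the sign $s$), mimicking a projective measurement, whereas for $MT \ll 1$ the weights of the two components are only slightly modified, corresponding to a weak measurement.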
Let us emphasize the relation of our implementation of the measurement procedure to imaginary-time propagation, a numerical method that entails time propagation under the Hamiltonian dynamics with $t \rightarrow -{i\mkern1mu} \tau$. Under the evolution in $\tau$, the ground state decays the slowest. Hence, as $\tau \rightarrow \infty$, the system asymptotically approaches the ground state, starting from an arbitrary initial state. The approach we formulate here corresponds to imaginary-time propagation of the terms local in space that describe the measurement, combined with the real-time dynamics of the original Hamiltonian $\mathcal{H}_0$. The imaginary-time propagation drives the local site occupancies toward complete filling or complete emptiness, which competes with the dynamics of the Hamiltonian \eqref{eq:ham} that drives the system towards a state with homogeneous density. The measurement procedure induces correlations throughout the system. These correlations travel with a bounded velocity, analogous to the Lieb-Robinson bound, but with the maximum velocity set by $M$ instead of $J$. We thus require the discretization time step of the numerical integrator to satisfy $\delta t \ll \min\{M^{-1}, 1\}$. We compute, using the density matrix renormalization group (DMRG), the ground state of the Hamiltonian \eqref{eq:ham}. Time evolution is implemented using the time-dependent variational principle (TDVP) \cite{Haegeman2016a}, where we use the same hybrid approach as in Ref.~\cite{Doggen2020a}, combining the one-site and two-site implementations of the TDVP. Both methods belong to the MPS class of algorithms, which are restricted to a variational subspace of the full Hilbert space targeting low-entanglement states. The MPS framework outlined above allows for the simulation of the crossover from weak to strong measurements for arbitrary interacting lattice Hamiltonians. We opt to start the dynamics not from an unentangled product state, as is conventional, but instead from the ground state of the Hamiltonian \eqref{eq:ham}. This has the following key advantage. The ground state is only moderately entangled, with a characteristic logarithmic scaling of entanglement entropy \cite{Laflorencie2016a}. In the absence of measurements, the system remains in the ground state under unitary time evolution. Thus, any increase or reduction in entanglement is solely due to the measurement (and its interplay with $\mathcal{H}_0$). This should be contrasted with the case of an initial high-energy product state that rapidly entangles, reaching a volume-law entangled state under the dynamics of $\mathcal{H}_0$ in the absence of any measurement. From a technical perspective, our choice of the initial condition speeds up the simulation of the dynamics, since a relatively modest size of the variational manifold suffices. \subsection{Protocol} We simulate the dynamics of the monitored chain governed by $\mathcal{H}_0 + \mathcal{H}_\mathrm{meas}$, using various choices of the measurement strength $M$, the probability $P$ of measuring each site, and the system size $L$, in a time window $t \in [0, 50]$. We start from the ground state of $\mathcal{H}_0$ with $\Delta = -1/2$. We choose the measurement interval $T = 1$ and the time step $\delta t = 0.005$ for the TDVP integrator. At $t = 50$, we switch off the measurement Hamiltonian \eqref{eq:measHam} and continue evolving according to the Hamiltonian \eqref{eq:ham} over a short time interval $t \in [50, 60]$.
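For concreteness, the protocol of a single run can be summarized by the following minimal sketch. It is written in Python and, for transparency, uses exact state-vector propagation in the half-filling sector of a small chain in place of the MPS/TDVP machinery; a first-order splitting of the propagator over the time step is used, and all function and variable names are ours rather than those of any particular library.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh, expm_multiply

J, DELTA = 1.0, -0.5

def build(L):
    """H_0 of Eq. (1) and the densities n_x, restricted to half filling."""
    b = csr_matrix([[0.0, 1.0], [0.0, 0.0]])   # annihilation on {0, 1}
    n = csr_matrix([[0.0, 0.0], [0.0, 1.0]])   # number operator
    def embed(op, x):
        return kron(kron(identity(2**x), op),
                    identity(2**(L - x - 1)), format='csr')
    n_ops = [embed(n, x) for x in range(L)]
    H0 = csr_matrix((2**L, 2**L))
    for x in range(L - 1):
        hop = embed(b, x).T @ embed(b, x + 1)  # b_x^dag b_{x+1}
        H0 = H0 - (J / 2) * (hop + hop.T) + DELTA * (n_ops[x] @ n_ops[x + 1])
    sec = [s for s in range(2**L) if bin(s).count('1') == L // 2]
    cut = lambda A: A.tocsr()[sec, :][:, sec]
    return cut(H0), np.array([cut(op).diagonal().real for op in n_ops])

def run_trajectory(L=8, M=2.0, P=0.5, T=1.0, dt=0.005, t_max=10.0, seed=0):
    rng = np.random.default_rng(seed)
    H0, n_diag = build(L)                      # n_x is diagonal: shape (L, dim)
    _, v = eigsh(H0, k=1, which='SA')          # ground state (DMRG in the text)
    psi = v[:, 0].astype(complex)
    for j in range(int(round(t_max / T))):     # measurement intervals
        p = rng.random(L) < P                  # which sites are measured
        m = rng.random(L)                      # random thresholds m_x
        for _ in range(int(round(T / dt))):
            dens = n_diag @ np.abs(psi)**2     # instantaneous <n_x>
            V = M * np.where(p, np.sign(dens - m), 0.0)
            # one step of exp(-i (H_0 - i sum_x V_x n_x) dt), split:
            psi = expm_multiply(-1j * dt * H0, psi)
            psi *= np.exp(-dt * (V @ n_diag))  # non-Hermitian damping
            psi /= np.linalg.norm(psi)         # continuously restore the norm
    return psi
\end{verbatim}
Note that the sign factor is evaluated with the instantaneous density $n_x(t)$ and can therefore flip within a measurement interval, in line with the rare sign flips discussed in Sec.~\ref{Sec:Numerics}. In the actual simulations, the exact propagation step is replaced by the hybrid one-site/two-site TDVP update, and the dynamics is continued without $\mathcal{H}_\mathrm{meas}$ for $t \in [50, 60]$.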
The size of the variational manifold is controlled by a numerical parameter called the bond dimension $\chi$ \cite{Schollwock2011a}. For finding the ground state we use $\chi = 500$, and for the dynamics we mainly use $\chi = 64$ (but for some values of $L$, $P$, and $M$ we benchmark our results using larger bond dimensions; see below). For each set of parameters, we repeat the procedure at least $R = 40$ times. At each of the measurement steps $t = j = 0, 1, \ldots$, we generate new measurement outcomes through the evolution of $n_x(t)$ and by drawing new random numbers $p_x^{(j)}$ and $m_x^{(j)}$. Hence, depending on the dynamics and on random chance, a particular site can be measured as holding a particle, measured as holding a hole, or remain unmeasured, and these outcomes can change during the time evolution. The dynamics under $\mathcal{H}_0$ alone tends to ``delocalize'' the system (a random product state will evolve towards a volume-law entangled state that is homogeneous in density), while the dynamics governed by $\mathcal{H}_\mathrm{meas}$ rather tends to localize particles at the measured sites. We therefore expect a competition between the two mechanisms that can potentially lead to a transition between delocalized (entangling) and localized (disentangling) types of behavior. \subsection{Comparison to existing approaches} Our protocol is aimed at mimicking the coupling to a measurement apparatus. The latter can be (on average) described by the Lindblad formalism using matrix product states (cf.~Refs.~\cite{Tang2020a, Goto2020a}). This can be viewed as a continuum analog of discrete quantum circuit models that have been studied recently (see Sec.~\ref{Sec:Intro}). Similarly, we are capable of following individual quantum trajectories without averaging over the measurement outcomes encoded in the sequence of random variables $m_x^{(j)}$. However, in contrast to random hybrid circuits, the unitary part of the evolution in our scheme is governed by the physical Hamiltonian of the system, and, hence, the ``unitaries'' applied to the sites at consecutive time steps are not random but rather are determined by the same fixed (time-independent) Hamiltonian $\mathcal{H}_0$. In addition, at each time step, we maintain half-filling, which provides a global constraint on the hybrid evolution of the system. The way the local measurements are implemented is somewhat similar to the method employed in Refs.~\cite{Cao2019a, Alberton2020a} to describe non-interacting spin chains. The advantage of the current approach is that it permits the investigation of generic many-body systems on a lattice. A key difference between our approach and the MPS-based approaches of Refs.~\cite{Tang2020a, Goto2020a} is that in our protocol the ``measurement'' and the time evolution occur simultaneously. Hence, there is a direct interplay between both aspects in the continuum time domain. Our approach also bears some similarity to that of Ref.~\cite{Fuji2020a}, where the quantum trajectory approach \cite{Daley2014a} was applied. In Ref.~\cite{Fuji2020a}, exact diagonalization was employed, which is restricted to relatively small systems of $L \approx 20$. A continuous approach applied to random quantum circuits was also recently proposed in Ref.~\cite{Szyniszewski2020a}, where it was implemented for system sizes $L \le 20$. There is also some similarity in spirit to the ``quantum jump'' algorithm used in the context of Monte Carlo simulations, as developed by Dalibard \textit{et al.}~\cite{Dalibard1992a}.
In that algorithm, a stochastic element is introduced to model the coupling of trapped dilute gases to an external electromagnetic field. \section{Numerical results \label{Sec:Numerics}} In this section, we introduce the physical observables used to probe the entanglement transition and present the numerical results. \subsection{Signatures of measurement-induced transition} \subsubsection{Entanglement entropy} A measure that is useful as a diagnostic tool is the von Neumann entanglement entropy $S$ \cite{Laflorencie2016a}. An ergodic, thermalizing system is characterized by volume-law scaling of the entanglement, whereas a localized system shows area-law scaling. In the case of a one-dimensional system, the volume law corresponds to a scaling $\propto L$, and the area law corresponds to just a constant. The von Neumann entropy of entanglement for a bipartition into subsystems $A$ and $B$ is given by: \begin{equation} S = -\mathrm{Tr}(\rho_A \ln \rho_A), \quad \rho_A \equiv \mathrm{Tr}_B |\Psi \rangle \langle \Psi |, \label{S} \end{equation} where $\rho_A$ is the reduced density matrix of subsystem $A$. The initial state we consider here---the ground state of a Luttinger liquid---has the feature that it is neither volume-law- nor area-law-entangled, with $S$ showing an intermediate behavior: logarithmic growth with system size. This allows us to distinguish the entangling phase from the disentangling one by comparing the entropy after a sufficiently long time to the initial entropy. (Of course, strictly speaking, this requires the $L\to \infty$ limit, whereas in practice we are limited to large but finite $L$. The large-$L$ requirement becomes particularly stringent close to the transition.) Below we calculate the time evolution of the entanglement entropy for individual quantum trajectories, as well as the entropy averaged over $R$ runs. Throughout this work, we measure the entropy $S$ in units of $\ln 2$, which corresponds to the replacement $ \ln \to \log_2$ in Eq.~\eqref{S}. \subsubsection{Particle density and clustering} \label{sec:cluster} Another useful---and experimentally accessible---measure is the particle density. In the limit of $M \gg 1$, ``chokepoints'' of high or low density (particles and holes) are generated that serve as blockades to correlations. It turns out to be instructive to consider the \emph{cluster size}, which is a commonly used diagnostic tool in percolation transitions. Here we define a cluster as a set of consecutive sites with a density within $0.2$ of one of the extreme values ($0$ and $1$). We then compute the maximum cluster length for each realization (a minimal code sketch of both diagnostics is given below). \subsection{Simulations} \label{Sec:Simulations} \subsubsection{Strong frequent measurement: $M \gg 1$, $P = 1$} \label{sec:strong_frequent} First, we consider the case where the measurement is strong, $M = 10$, and each site is always measured, $P=1$. The results for the entanglement entropy and particle density are shown in Fig.~\ref{fig:M10P1}. The entropy approaches zero on a time scale of order unity and remains very close to zero for the whole duration of the measurement run, with occasional spikes in the entropy representing rare fluctuations (``glitches''). The small, time-independent value of the entropy clearly indicates that the system is characterized by the area-law scaling of entanglement, corresponding to the disentangling phase.
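As a minimal illustration of the two diagnostics defined above, the following NumPy sketch computes the entropy of Eq.~\eqref{S} in units of $\ln 2$ from the Schmidt values of a bipartition, and the maximum cluster length of Sec.~\ref{sec:cluster} from a density profile. The array-based interface and the choice to count particle and hole clusters separately are our assumptions here; the threshold $0.2$ is taken from the definition above.
\begin{verbatim}
import numpy as np

def entanglement_entropy(schmidt_values):
    # Eq. (S) in units of ln 2; the eigenvalues of rho_A are the
    # squared Schmidt values of the bipartition.
    p = np.asarray(schmidt_values) ** 2
    p = p[p > 1e-15]                 # drop numerical zeros
    return -np.sum(p * np.log2(p))

def max_cluster_length(density, threshold=0.2):
    # Longest run of consecutive sites whose density is within
    # `threshold` of an extreme value (0: holes, 1: particles).
    density = np.asarray(density)
    best = 0
    for target in (0.0, 1.0):
        run = 0
        for near in np.abs(density - target) <= threshold:
            run = run + 1 if near else 0
            best = max(best, run)
    return best
\end{verbatim}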
In terms of the density, a random configuration of particles and holes is established at the very beginning of each run, $t\sim 1$, after which we observe the quantum Zeno effect \cite{Li2018a}. The density profile in the $x$-$t$ plane forms stripes of occupied and unoccupied states. Whether a given site is occupied or not is essentially determined by the random variable $m^{(j=1)}_x$ at the first step (so that there are no correlations between different sites). The pattern established at $t \sim 1$ remains almost unchanged at later times. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig1.pdf} \caption{Time evolution of the entropy $S$ (top panel), showing the average over $R=40$ realizations for $L = 50$, in the limit of strong frequent ($M=10,P=1$) measurement. The dashed red line corresponds to the entropy for an arbitrarily chosen single realization, with the corresponding density evolution in the bottom panel. The dashed line in the bottom panel indicates the time at which the measurement Hamiltonian \eqref{eq:measHam} is switched off.} \label{fig:M10P1} \end{figure} However, the particles are not exactly frozen: since $M$ is finite, the dynamics driven by $\mathcal{H}_0$ slightly perturbs the product state, leading to a nonzero probability of a flip in the sign of $\mathcal{H}_\mathrm{meas}$, as can be seen in Fig.~\ref{fig:M10P1}. Nonetheless, long-range correlations are strongly suppressed and we observe an area-law state with essentially zero entanglement between distant parts of the system. Substantial entanglement appears only between neighboring sites and only during the rare processes of particle hopping. When such flips occur close to the center of the system, where the bipartition is taken, they affect the entropy. This leads to rare peaks in the individual traces of $S(t)$, which result in a small finite value of the average entropy. Only in the limit $M \rightarrow \infty$, which corresponds to a projective measurement, does the quantum Zeno state become fully robust (i.e., strictly time-independent). It is worth noting that the observation of rare glitches in the regime of strong measurements ($M$ large but finite) appears to be in contrast with Ref.~\cite{Biella2021a}, where the quantum Ising chain was studied and a transition to a \emph{robust} quantum Zeno phase at a \emph{finite} measurement strength was reported. At the same time, we will see below that, for weaker measurements, our setting leads to a phenomenon of clusterization, which bears a certain similarity to the emergence of a quantum Zeno phase. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig2.pdf} \caption{Same as in Fig.~\ref{fig:M10P1}, but for a strong infrequent ($M=10,P=0.1$) measurement. The initial density profile (local half-filling represented by the white color) develops into a random red-blue pattern on the time scale $t\sim 5$. } \label{fig:M10P01} \end{figure} After switching off the measurement Hamiltonian, the density quickly settles to a homogeneous state (see the density plot above the dashed line in the lower panel of Fig.~\ref{fig:M10P1}). The striped density pattern (which appears as an exact product state) at time $t=50$ induces light-cone structures in the density evolution for $t>50$. Since the energy of the measurement-stabilized striped phase is high, the entropy for $t>50$ (not shown in Fig.
\ref{fig:M10P1}) grows towards a volume-law value that is much higher than the initial one (which corresponded to the weakly entangled ground state of the Luttinger \mbox{liquid}, $S \propto \ln L$). \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig3.pdf} \caption{Same as in Fig.~\ref{fig:M10P1}, but for a weak frequent ($M=0.1,P=1$) measurement. Measurement-induced perturbations of the density propagate ballistically through the system and are reflected at the boundaries.} \label{fig:M01P1} \end{figure} \subsubsection{Strong infrequent measurement: $M \gg 1$, $P \ll 1$} \label{sec:strong_infrequent} Next, we consider the case where the measurement is strong, as in the previous case, but the probability of measurement is much lower than unity. The result is shown in Fig.~\ref{fig:M10P01} for $M=10$ and $P=0.1$. In this case, the rare measurements do create locally polarized sites, but they are not sufficient to suppress entanglement across the system. On the contrary, the entanglement, while noisy, rapidly grows, reaching values substantially larger than the initial one. The system is thus in the entangling phase. The entanglement growth stems from the quenching of the system by the measurements, which distort the initial homogeneous ground state by introducing rare polarized regions. These regions then develop in time according to the unitary dynamics governed by the many-body Hamiltonian \eqref{eq:ham}. As a result of this dynamics, the density becomes strongly inhomogeneous at later times; the inhomogeneity is further enhanced by subsequent rare strong measurements. Note that, owing to the maintenance of global half-filling, local strong measurements affect neighboring sites: this can be clearly seen, e.g., at $t\sim 1$ around $x=10$ and $x=37$ in the lower panel of Fig.~\ref{fig:M10P01}, where the projection on the globally-half-filled state induces an excess (reddish) density at the neighboring sites. We thus see that, under the global constraint in a realistic system, a local strong measurement can induce correlations, similarly to non-local measurements \cite{Roy2020, Ippoliti2021a}. The zigzag fluctuation pattern in the average entropy on the scale of a single time step corresponds to the decrease in the entropy caused by the strong measurement in the vicinity of the bipartition cut. Eventually, the average entropy saturates, as the occasional measurement-induced decreases and the entangling dynamics of the unitary Hamiltonian balance out; see Fig.~\ref{fig:M10P01}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig4.pdf} \caption{Same as in Fig.~\ref{fig:M10P1}, but for a weak infrequent ($M=0.1,P=0.1$) measurement.} \label{fig:M01P01} \end{figure} \subsubsection{Weak frequent measurement: $M \ll 1$, $P=1$} \label{sec:weak_frequent} We now consider the case where the measurement is frequent but weak, see Fig.~\ref{fig:M01P1}, where $P = 1$ and $M = 0.1$. In this case, the characteristic timescale for projecting onto a particle or hole state under the dynamics of $\mathcal{H}_\mathrm{meas}$ is substantially larger than the timescale $T=1$ associated with the duration of the measurement. This leads to a situation where the initial state, which has homogeneous density (except close to the edges of the system, in view of open boundary conditions), is only weakly perturbed by the measurement. As a result, we observe particle and hole fluctuations induced by the measurement, which traverse the system ballistically.
As seen in the upper panel of Fig.~\ref{fig:M01P1}, the entanglement entropy rapidly grows with time and becomes considerably larger than its initial value, which is a signature of the entangling phase (see Sec.~\ref{Sec:Entscaling}). The physics of the entanglement growth can be understood as follows. The process of weak measurement continuously heats the system---the imaginary-time propagation does not conserve energy---and the system tends toward a highly entangled state at high energy, in qualitative similarity to the case of strong infrequent measurements depicted in Fig.~\ref{fig:M10P01}. In the latter case, however, the heating (quenching) process is strongly inhomogeneous in space, whereas it is fairly uniform for a frequent, weak measurement. Consequently, the entropy curve shows a much smoother behavior. \subsubsection{Weak infrequent measurement: $M \ll 1$, $P \ll 1$} \label{sec:weakinfreq} \label{sec:weak_infrequent} Finally, we consider the case where the measurement is both weak, $M=0.1$, and infrequent, $P=0.1$, see Fig.~\ref{fig:M01P01}. This case is rather similar to the preceding one ($M \ll 1$ and $P=1$, Sec.~\ref{sec:weak_frequent}), except that the less frequent measurement naturally leads to less heating, so that the growth of the entropy is slower than in the case of frequent measurement. Nevertheless, the entanglement exhibits a clear growing trend, and the system is expected to eventually reach qualitatively the same, highly entangled high-energy state. Thus, at arbitrarily small values of $M$ and $P$, the entanglement will eventually grow to large values as the initial state at zero temperature is gradually heated by the (effective) coupling to the environment. Clearly, the time required for the entanglement to reach the saturation value becomes progressively longer when $M$ and $P$ are reduced. The density pattern (lower panel of Fig.~\ref{fig:M01P01}) is already distorted by the weak infrequent measurement, forming a structure of overlapping light-cone rays. Interestingly, the contrast appears to increase with time, which can be regarded as a result of the interplay between the unitary and measurement-induced dynamics: the measurements have a tendency to magnify density fluctuations. \section{Phase diagram \label{Sec:Phase}} \subsection{Entanglement entropy} We are now in a position to characterize the dynamics of our monitored system in the parameter plane spanned by the measurement strength $M$ and the measurement probability $P$. In Sec.~\ref{Sec:Simulations}, we considered four limiting regimes corresponding to ``corners'' of this phase diagram. In the limit of large $M$ and $P$, entanglement in the system is destroyed and an area-law phase appears. In the other three regimes, we found that entanglement \emph{grows} with respect to the initial zero-temperature state because of the addition of energy to the system by measurements. Hence, we expect to find a transition between the two phases (disentangling and entangling) in the $M$--$P$ plane. For hybrid quantum circuits, a phase diagram of this type was analyzed in Ref.~\cite{Szyniszewski2019a}, where the transition line connected two points, one located on the axis of weak constant measurement ($P=1$) and the other corresponding to the limit of rare projective measurements ($M=\infty$).
\subsubsection{Continuous measurements of all sites: constant $P=1$} In Fig.~\ref{fig:entropytime}a, we show the average entropy $S(t)$ for $P = 1$ and various choices of $M$, from very weak ($M=0.1$) to strong, nearly projective ($M=10$) measurements. To probe the dependence of the entropy on the system size $L$, we compare in this plot the data for $L = 50$ and $L=16$. For $L=16$ we choose $\chi = 256$, so that the dynamics is simulated exactly (the exact simulation with MPS requires $\chi = 2^{L/2}$). For large $M$, we see that the entropy decreases over time, as was discussed in Sec.~\ref{sec:strong_frequent}, which is consistent with the disentangling (area-law) phase. This is also confirmed by the fact that the long-time saturation value of $S$ is independent of $L$ (within the uncertainty that results from fluctuations of the average in our finite ensemble). Conversely, for small $M$, we see an increase of $S$ with respect to its initial value, as was discussed in Sec.~\ref{sec:weak_frequent}. Furthermore, the entropy $S$ increases with increasing system size $L$. These are hallmarks of the entangling phase. Our data indicate that the system is in the disentangling (area-law) phase for $M=10$, 3, 1, and 0.5, and in the entangling phase for $M=0.1$, 0.2, and 0.3. We thus estimate the position of the transition point on the $P=1$ axis as $M_c \approx 0.4$. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig5a.pdf} \includegraphics[width=\columnwidth]{Fig5b.pdf} \caption{\textbf{a)} Average entropy $S(t)$ as a function of time for various measurement strengths $M$ and fixed $P = 1$. The solid (dashed) lines show $L = 50$ ($L = 16$). Inset: $S(t=50)$ as a function of system size for $M = 0.2$, where exact results are shown for $L = 12, 14, 16$ and the colors indicate the bond dimension for $L = 32, 50$, with $\chi = 64$ (green), $\chi = 96$ (red), $\chi = 128$ (blue). \textbf{b)} As in panel \textbf{a)}, but for fixed $M = 10$ and varying $P$. The inset shows results for $P = 0.1$.} \label{fig:entropytime} \end{figure} Interestingly, for a certain range of values of $M$ (see the curves for $M = 0.2, 0.3, 0.5$ in Fig.~\ref{fig:entropytime}a), the entropy as a function of time exhibits a maximum before saturation. A similar effect was observed in Ref.~\cite{Goto2020a} where, however, a very different initial condition was chosen (a high-energy, strongly inhomogeneous state, as opposed to the low-energy, homogeneous one considered here). \subsubsection{Strong measurements with varying measurement frequency: constant $M=10$} In Fig.~\ref{fig:entropytime}b, the time dependence of the average entropy $S(t)$ is shown for strong measurements. Specifically, in this figure we fixed $M = 10$, while the measurement frequency $P$ was varied from $P=0.1$ to $P=1$. Qualitatively, the evolution is quite similar to that shown in Fig.~\ref{fig:entropytime}a. For small $P$, we find entangling behavior (Sec.~\ref{sec:strong_infrequent}), while for large $P$ it is disentangling (Sec.~\ref{sec:strong_frequent}). As in Fig.~\ref{fig:entropytime}a, we have two complementary criteria for the identification of the entangling phase: (i) the long-time value of the entropy is higher than its initial value; (ii) the entropy increases with system size ($L=50$ vs $L=16$).
Both criteria yield consistent results, allowing us to identify the points $P=1$, 0.8, 0.5, and 0.3 as belonging to the disentangling phase, and the points $P=0.1$, 0.15, and 0.2 as belonging to the entangling phase, with the critical value being close to the latter point, $P_c \approx 0.2$--0.25. \subsubsection{Overall phase diagram} In Fig.~\ref{fig:phasediag} we summarize the results for the entropy $S$, averaged over the time interval $[40,50]$ and over $R=40$ realizations, in the whole $P$--$M$ parameter plane. In analogy with the above estimates of critical points on the $P=1$ and $M=10$ lines, we have estimated the transition line in the $P$--$M$ plane, which is also shown in the figure. It is worth commenting on the bottom left corner of the phase diagram (rare weak measurements), where the entropy is above its initial value but smaller than in most of the entangling phase. The reason for this was discussed in Sec.~\ref{sec:weakinfreq}: the entropy grows, but its saturation (at a value corresponding to a strongly entangled state) takes a time much longer than the duration of the protocol. \subsubsection{Entropy scaling in the entangling phase} \label{Sec:Entscaling} An important question is the dependence of the large-$t$ saturation value of the entanglement entropy $S(L)$ on the system size $L$ in the entangling phase. Most works on entanglement transitions in quantum circuits indicate a volume-law scaling, $S(L) \propto L$, with a prefactor depending on the measurement strength. In the insets of Fig.~\ref{fig:entropytime} we show the behavior of the entropy at $t = 50$ as a function of system size, for two particular choices of parameters in the entangling phase ($M = 0.2$, $P = 1$ and $M = 10$, $P = 0.1$), which correspond to the largest values of $S$ in the upper and lower panels of Fig.~\ref{fig:entropytime}, respectively. For both these points in the parameter plane, the saturation of the entropy as a function of time is essentially reached by our largest time $t=50$. In order to better quantify the rate of the entanglement growth, we have calculated the values of $S(t=50)$ for these choices of $P$ and $M$ also for larger values of the bond dimension, $\chi=96$ and 128. For $L=32$ the obtained values of the entropy for $\chi = 64$ and $\chi = 128$ are close in value (within statistical error bars that are related to the finite number of realizations and are of the order of the symbol size), which is a signature of the saturation of $S(t)$ with $\chi$. At the same time, for $L=50$ (where the entropy is larger), the obtained value of $S$ for $\chi=128$ is substantially above that for $\chi=64$. This drift indicates that, for these points in the $P$--$M$ plane, the actual values of $S(t=50)$ are still somewhat above the $\chi=128$ results (shown by blue color in the insets), presumably by an amount of the order of the distance between the $\chi=128$ and $\chi=64$ points. (To find the saturated value more accurately, one would need a calculation with $\chi \approx 256$, which is in principle possible but requires very substantial computational time.) Keeping this in mind, we see that the values of the entanglement entropy at $L=50$ are broadly consistent with volume-law trends based on the data for $L=16$ and $L=32$. It is worth noting that the slopes of the $S(L)$ dependences that can be estimated in this way are somewhat smaller than those that would be found based only on the data for small systems (accessible to exact diagonalization).
This indicates that finite-size effects are sizeable, so that supplementing exact-diagonalization numerical studies with approximate approaches (such as the MPS method used in this work) that can be applied to larger systems is crucial. The volume-law behavior of the entanglement entropy $S$ in the entangling phase is also supported by the time dependence $S(t)$. In Fig.~\ref{fig:entropyscaling} we show the dependence $S(t)$ at $P=1$, $M=0.2$ for several system sizes (and for two values of the bond dimension $\chi$ for the largest sizes $L=32$ and 50). We see that, for lengths $L \ge 24$, a linear increase $S(t) \propto t$ is found, with an $L$-independent slope. This is expected in the volume-law phase \cite{Skinner2019a}: the entanglement increases as $S(t) \simeq s v_S t$ until it saturates at a time $t \simeq L / 2 v_S$ at a value $S(L) \simeq sL/2$, where $v_S$ is the entanglement propagation velocity. An insufficient bond dimension cuts off the linear increase of $S(t)$ and leads to saturation before finite-size effects kick in. The slope in Fig.~\ref{fig:entropyscaling} yields $sv_S \simeq 0.14$ for this point in the phase diagram. At the same time, the $L$-dependence of $S(L)$ in the inset of Fig.~\ref{fig:entropytime}a yields an estimate $s = 2(dS/dL) \simeq 0.15$\:--\:0.2 for the same values of the parameters $P$ and $M$. From these data, we estimate the entanglement propagation velocity, $v_S \equiv (sv_S)/s \simeq 0.7$\:--\:1.0. This velocity is close (or possibly identical) to the velocity of ballistic propagation of density fluctuations as observed, e.g., in Fig.~\ref{fig:M01P1}, which is consistent with predictions from Refs.~\cite{Calabrese2004, DeChiara2006a}. It was proposed in Refs.~\cite{Alberton2020a,Buchhold2021} that the volume-law scaling of $S(L)$ is only of transient character and crosses over to a $\ln L$ behavior for large $L$ at the transition. The system sizes that we can access are not sufficient to rigorously test the validity of this conjecture. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig6.pdf} \caption{Phase diagram, showing the averaged entropy $S$ for $t \in [40, 50]$ for $R=40$ realizations. The black squares indicate data points and the background color depicts interpolated values between the data points. The estimated phase boundary, corresponding to an approximate contour of constant entropy equal to the initial value, is shown as a dashed red line.} \label{fig:phasediag} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig7.pdf} \caption{Time dependence of the entanglement entropy $S(t)$ at $P=1$, $M=0.2$ for system sizes from $L=18$ to $L=50$. For the two largest system sizes, the data with bond dimensions $\chi=64$ and 128 are shown, where $\chi = 64$ is indicated by a dashed line. The straight black dashed line is a guide to the eye obtained through a linear fit of the $L = 50, \chi = 128$ data in the linear regime $t \in [15, 30]$, yielding a slope $\approx 0.14$.} \label{fig:entropyscaling} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig8.pdf} \caption{As in Fig.~\ref{fig:M10P1}, but for $M=0.5$, $P=1$. In the bottom panel, the realization with the largest maximum cluster size is chosen.
Inset: average maximum cluster length $C$ at $t = 50$ as a function of system size.} \label{fig:M05P1} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig9.pdf} \caption{Average maximum cluster length as a function of measurement probability $P$ and measurement strength $M$, evaluated at $t = 50$. Note that the region with high cluster size is located around a section of the phase boundary between entangling and disentangling phases (Fig.~\ref{fig:phasediag}), which is also shown in this plot.} \label{fig:phasediag_cluster} \end{figure} \subsection{Clusterization} \label{sec:clusterization} We now discuss the clustering of particles (and holes); see Sec.~\ref{sec:cluster}. Interestingly, we observe, close to the transition from the disentangling to the entangling phase, the emergence of large domains resulting from the interplay of $\mathcal{H}_0$ and the measurement protocol. As an illustration, we show in Fig.~\ref{fig:M05P1} the dynamics for a single realization with $M = 0.5$ and $P = 1$. As discussed above, this point is on the area-law side of the transition, close to the phase boundary. For the single realization plotted in the lower panel---the one with the largest maximum cluster length---we observe the formation of large polarized domains at time $t \approx 35$. As the dotted line in the upper panel shows, the entanglement entropy for this realization practically vanishes at $t \ge 40$. Hence, the large polarized regions effectively block transport, analogously to what is observed in models with local constraints \cite{Doggen2021a}. To characterize the clustering quantitatively, we consider for every realization the maximum cluster length and then average it over all realizations. The resulting averaged maximum cluster length is denoted $C(t)$. In the inset of Fig.~\ref{fig:M05P1}, we show the system-size dependence of $C(t=50)$ for the same parameters $M = 0.5$, $P = 1$. A clear increase of $C$ with system size $L$ is observed. The data suggest sublinear growth $C \sim L^y$, with $y \approx 0.5$. The results for $C(t=50)$ for various values of $P$ and $M$ are shown in Fig.~\ref{fig:phasediag_cluster}. They exhibit a peak---which reveals the clusterization phenomenon---on the right side of the diagram, around the phase boundary between the entangling and disentangling phases. It should be emphasized that this clusterization is observed only near the portion of the phase boundary that corresponds to frequent measurements ($P$ close to unity). No such peak is observed in the opposite corner of the phase diagram. This indicates that at least some important aspects of the entanglement transition are not fully universal and, in particular, differ qualitatively between the regimes of weak and strong measurements. Physically, the clusterization can be viewed as a result of the enhancement of the attractive interaction by the coupling of the system to the environment via the measurements, cf.~Ref.~\cite{Buchhold2021}. As shown in Appendix \ref{appendix:clust-non-int}, this effect exists (in a weaker form) also in a non-interacting system ($\Delta=0$), where the measurements create an effective attractive interaction. As mentioned above, one can note a certain similarity between the clustering and the dynamical quantum Zeno transition \cite{Kumar2020,Biella2021a}.
Indeed, the rapid emergence of clusterization in the lower panel of Fig.~\ref{fig:M05P1} at $t\approx 35$ suggests a kind of dynamical phase transition induced by constantly monitoring the system at moderately weak measurement strength. \section{Summary and discussion \label{Sec:Discuss}} In conclusion, we have proposed an MPS-based method for simulation of the dynamics of quantum many-body systems under continuous monitoring. The monitoring process is modelled as a site- and time-dependent non-Hermitian term in the Hamiltonian. The measurement protocol is controlled by two key parameters: the probability $P$ that a given site is measured at a given time interval (with $0 < P \le 1$) and the measurement strength $M$ (with $M \ll 1$ corresponding to weak measurement and $M\gg 1$ to strong, nearly projective measurement). In contrast to recent approaches, our protocol starts from the ground state of the original Hamiltonian, so that the observed evolution of the initial state is entirely due to the effect of the measurements (including, of course, the interplay with the unitary dynamics). We have applied the method to a 1D interacting many-body system, the ground state of which is a Luttinger liquid with a moderate entanglement ($S \propto \ln L$). The local measurement induces two competing processes: sufficiently strong measurements tend to disentangle the system through quasi-projections. At the same time, the measurement also leads to effective local heating of the initial zero-temperature ground state, which can lead to stronger entanglement. If the measurement leads to an area-law (disentangling) phase, it reduces the entanglement entropy with respect to the initial state (in the limit of large $L$). On the other hand, if the measurement drives the system to the volume-law (entangling) phase, it enhances the entanglement entropy compared to the initial state. Note that the competing effects of measurement reported in Ref.~\cite{Ippoliti2021a} involve nonlocal measurements, as opposed to our protocol. Exploring systems with a length up to $L=50$, we have determined the phase diagram of the entanglement transition in the $P$--$M$ plane (Fig.~\ref{fig:phasediag}). For sufficiently strong and at the same time sufficiently frequent measurement (as in Fig.~\ref{fig:M10P1}), we find a disentangling phase: the entanglement entropy gets suppressed by measurement down to an $L$-independent value (area law). On the other hand, if the measurement is sufficiently weak and/or sufficiently rare (as in Figs.~\ref{fig:M10P01}, \ref{fig:M01P1}, and \ref{fig:M01P01}), the system is in the entangling phase. Our results for the entanglement entropy in this phase are consistent, for the system sizes studied, with the volume law ($S \propto L$). Our results for the phase diagram of the entanglement transition for the Hamiltonian system are qualitatively similar to earlier results for quantum circuits. Our findings thus indicate that such entangling-to-disentangling transitions occur generically in quantum many-body systems. Furthermore, we find that, close to the phase boundary in the range of frequent measurements ($P \approx 1$), the entanglement transition is accompanied by an increase of the size of clusters of particles and holes (Figs.~\ref{fig:M05P1} and \ref{fig:phasediag_cluster}). We interpret this phenomenon as an enhancement of the attractive interaction by the measurements. A similar phenomenon, although in a somewhat weaker form, is found also for a non-interacting system. 
The divergence of the cluster size close to the entanglement transition may be a useful experimental probe, since particle densities are generally easier to measure than the entanglement entropy. Indeed, a setup similar to the one described in this work may be readily prepared in experiments on ultracold atoms or trapped ions, as the local particle density can be measured using quantum gas microscopy \cite{Bakr2009a}. At the same time, the precise connection between the clusterization effect and the measurement transition remains to be clarified. Future work may focus on applying the method outlined here to various models of experimental relevance, such as the Hubbard model. An intriguing question to be explored is the role of the interaction entering the unitary Hamiltonian in the entanglement transitions of various many-body problems. An experimental implementation of the open quantum Ising chain on IBM's quantum hardware was realized very recently \cite{Kamakari2021a}. \section{Acknowledgments} We thank M.~Buchhold, S.~Diehl, A.~Romito, S.~Roy, K.~Snizhko and M.~Szyniszewski for useful discussions. Numerical simulations were performed using the TeNPy library (version 0.6.1) \cite{tenpy}. We acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG): Project No. 277101999 -- TRR 183 (Project C01) and Grants No. EG 96/13-1 and No. GO 1405/6-1, as well as from the Israel Science Foundation.
\section{Introduction} \label{sec_intro} Deep neural networks have achieved great success across a broad range of domains, such as computer vision, speech processing and natural language processing~\cite{2015_CVPR_He,2014_ICASSP_Wiesler,2017_AAAI_Tu}. While their deep and complex structure provides them with powerful representation capacity and appealing advantages in learning feature hierarchies, it also makes the learning difficult. In the literature, various heuristics and optimization algorithms have been studied to improve the efficiency of training, including weight initialization~\cite{1998_NN_Yann,2010_AISTATS_Glorot,2015_ICCV_He}, normalization of internal activations~\cite{2015_ICML_Ioffe}, and sophisticated optimization methods~\cite{2015_ICML_Grosse,2017_Corr_Yu}. Despite this progress, training deep neural networks while ensuring satisfactory performance remains largely an open problem, due to the non-convex nature of the optimization and ill-conditioning. Deep neural networks (DNNs) have a large number of local minima, because they usually suffer from the model identifiability problem. A model is said to be identifiable if a sufficiently large training set can rule out all but one setting of the model's parameters~\cite{Goodfellow-et-al-2016}. Neural networks are often not identifiable because we can obtain equivalent models by swapping their weights with each other, which is called \emph{weight space symmetry} \cite{1993_NC_Chen}. In addition, for the commonly used rectifier \cite{2010_ICML_Nair} or maxout networks \cite{Goodfellow_CoRR_2013}, we can also construct equivalent models by scaling the incoming weight of a neuron by a factor of $\alpha$ while scaling its outgoing weight by $1/ \alpha$. We refer to this as \emph{scaling-based weight space symmetry} \cite{2015_NIPS_Neyshabur}. These issues imply that there can be an extremely large or even uncountably infinite number of local minima for a neural network. Although it remains an open question whether the difficulty of optimizing neural networks originates from local minima, we observe that the \emph{scaling-based weight space symmetry} can make the Hessian matrix ill-conditioned, which is deemed the most prominent challenge in optimization~\cite{2010_AISTATS_Glorot,2016_CoRR_Salimans}. To alleviate the negative effect of \emph{scaling-based weight space symmetry}, we propose to constrain the incoming weights of each neuron to be unit-norm. This simple strategy ensures that the weight matrix in each layer has almost the same magnitude. Besides, it preserves the norm of the back-propagated information during linear transformations. Training neural networks with such constraints can be formulated as an optimization problem over the Oblique manifold~\cite{2006_ICASSP_Absil}. To address this optimization problem, we propose a projection based weight normalization method to improve both performance and efficiency. Our method executes standard gradient updates, followed by projecting the updated weights back to the Oblique manifold. We point out that the proposed method has a regularization property similar to weight decay \cite{1992_WD_Krogh}, and can be viewed as a regularization term with adaptive regularization factors. We further show that our method implicitly adjusts the learning rate and ensures the unit-norm property of each neuron's incoming weight, under the condition that batch normalization \cite{2015_ICML_Ioffe} is employed in the network.
We conduct comprehensive experiments on several widely-used image datasets, including CIFAR-10, CIFAR-100 \cite{2009_TR_Alex}, SVHN \cite{2011_NIPS_Netzer} and ImageNet \cite{2009_ImageNet}, for supervised learning over the state-of-the-art Convolutional Neural Networks (CNNs), such as Inception \cite{2014_CoRR_Szegedy}, VGG \cite{2014_CoRR_Simonyan} and residual networks \cite{2015_CVPR_He,2016_CoRR_Zagoruyko}. The experimental results show that our method can improve the performance of deep neural networks with different architectures without revising any experimental setups. We also consider semi-supervised learning on the permutation-invariant MNIST dataset by applying our method to the Ladder network \cite{2015_NIPS_Rasmus}. Our method outperforms the state-of-the-art results in this task: we achieve test errors of $2.52\%$, $1.06\%$, and $0.91\%$ with only 20, 50, and 100 labeled training samples, respectively. Code to reproduce our experimental results is available at: \textcolor[rgb]{0.00,0.50,1.00}{https://github.com/huangleiBuaa/NormProjection}. Our contributions are as follows. \begin{enumerate} \item We propose to optimize neural networks over the Oblique manifold, which can alleviate the ill-conditioned problem caused by \emph{scaling-based weight space symmetry}. \item We propose the projection based weight normalization method (PBWN), which serves as a simple, yet effective and efficient solution to optimization over the Oblique manifold in DNNs. We further show that PBWN has a regularization property similar to weight decay, and that it collaborates well with the commonly used batch normalization technique. \item We apply PBWN to the state-of-the-art CNNs over large scale datasets, and improve the performance of networks with different architectures without revising any experimental setups. Besides, the additional computational cost introduced by PBWN is negligible. \end{enumerate} \section{Optimization over Oblique Manifold in DNNs} Consider a learning problem with training data $\mathbb{D}=\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{M}$ using a feed-forward neural network $f(\mathbf{x})$ with $L$ layers, where $\mathbf{x}$ refers to the input and $\mathbf{y}$ to the corresponding target. The network is parameterized by a set of weights $\mathbb{W}=\{ \mathbf{W}_{l}, 1\leq l \leq {L} \}$ and biases $\mathcal{B}=\{ \mathbf{b}_{l}, 1 \leq l \leq {L} \}$, in which each layer is composed of a linear transformation and an element-wise nonlinearity: $\mathbf{h}_l=\varphi(\mathbf{W}_{l} \mathbf{h}_{l-1}+ \mathbf{b}_l) $. In this paper, we mainly focus on the rectifier activation function, which has the property $\varphi(\alpha x)=\alpha \varphi(x)$ for $\alpha > 0$, and drop the biases $\mathcal{B}$ to simplify the discussion. Given a loss function $\mathcal{L}(\mathbf{y}, f(\mathbf{x}; \mathbb{W}))$ that measures the mismatch between the desired output $\mathbf{y}$ and the predicted output $f(\mathbf{x}; \mathbb{W})$, we can train a neural network $f$ by minimizing the empirical loss as follows: \begin{eqnarray} \label{eqn:optimization_normal} \min_{\mathbb{W}} ~~\mathbb{E}_{(\mathbf{x},\mathbf{y})\in \mathbb{D}} [\mathcal{L}(\mathbf{y}, f(\mathbf{x}; \mathbb{W}))]. \end{eqnarray} In the above formulation, gradient information dominates how the network parameters are tuned.
The weight updating rule of each layer for one iteration is usually designed based on Stochastic Gradient Descent (SGD): \begin{eqnarray} \label{eqn:update_normal} \mathbf{W}^{*}_{l}=\mathbf{W}_{l} - \eta \frac{\partial \mathcal{L} }{\partial \mathbf{W}_{l}}, \end{eqnarray} where $\eta$ is the learning rate and the gradient of the loss function with respect to the parameters $\frac{\partial \mathcal{L} }{\partial \mathbf{W}_{l}}$ is approximated over a mini-batch $\mathbf{x}_{1\ldots m}$ of size $m$ by computing $\frac{\partial \mathcal{L} }{\partial \mathbf{W}_{l}}= \frac{1}{m} \Sigma_{i=1}^{m} \frac{\partial \mathcal{L}(\mathbf{y}_i, f(\mathbf{x}_i; \mathbb{W}))}{\partial \mathbf{W}_{l}}$. \subsection{Scaling-Based Weight Space Symmetry} In this part, we show how the scaling-based weight space symmetry can make the Hessian matrix ill-conditioned, a behaviour that makes training deep neural networks more challenging. We consider a very simple two-layer linear model with only one neuron per layer, and omit the rectifier nonlinearity to simplify the discussion, without loss of generality. Let $y=w_2 h_1$ and $h_1=w_1 x$ for the two layers, and define the loss function $\mathcal{L}(y)$. We further assume that $w_1$ and $w_2$ are of the same magnitude. Based on the \emph{scaling-based weight space symmetry}, we consider another two-layer linear model parameterized by $ \hat{w}_1= \alpha w_1$ and $\hat{w}_2= \frac{1}{\alpha} w_2$, where $\alpha>1$. Under this parameterization, we still obtain the same model output, $\hat{y}=y$, for the same input $x$. For these two models, we can compute the back-propagated gradients $\frac{\partial \mathcal{L} }{\partial y}$ and $\frac{\partial \mathcal{L} }{\partial \hat{y}}$, and we have $\frac{\partial \mathcal{L} }{\partial y}=\frac{\partial \mathcal{L} }{\partial \hat{y}}$ due to the fact that $\hat{y}=y$. By simple algebra, it is easy to obtain that $\frac{\partial \mathcal{L} }{\partial \hat{w}_2} = \alpha \frac{\partial \mathcal{L} }{\partial w_2}$ and $\frac{\partial \mathcal{L} }{\partial \hat{w}_1} =\frac{1} {\alpha} \frac{\partial \mathcal{L} }{\partial w_1}$. This implies that if $w_1$ and $w_2$ are of different magnitudes, their gradients $\frac{\partial \mathcal{L} }{\partial w_1}$ and $\frac{\partial \mathcal{L} }{\partial w_2}$ will differ inversely in magnitude. Consequently, as $\alpha$ becomes larger, it is more likely that the Hessian matrix will be ill-conditioned, as shown in Figure \ref{fig:motivation}. \begin{figure*}[t] \centering \hspace{-0.02\linewidth} \subfigure[normal parameterizations]{ \includegraphics[width=0.36\linewidth]{figures/figure1.pdf} } \subfigure[scaled parameterizations]{ \includegraphics[width=0.36\linewidth]{figures/figure2.pdf} } \caption{\small An illustrative example of how scaling-based weight space symmetry can cause an ill-conditioned problem. (a) The error landscape of $w_1$ and $w_2$ of the same magnitude; (b) the error landscape of $\hat{w}_1$ and $\hat{w}_2$, obtained by scaling with factors $\alpha$ and $\frac{1}{\alpha}$ respectively, of different magnitudes.} \label{fig:motivation} \end{figure*} \subsection{Formulation for Unit-Norm Constraint} To relieve the negative effect of \emph{scaling-based weight space symmetry}, in this paper we propose to constrain the incoming weights of each neuron\footnote{We can also constrain the outgoing weights to be unit-norm. However, it seems more intuitive to have unit-norm filters.} to be unit-norm.
Specifically, we reformulate the optimization problem of Eqn.~\ref{eqn:optimization_normal} as follows: \begin{eqnarray} \label{eqn:optimization_constrain} & \min_{\mathbb{W}} ~~\mathbb{E}_{(\mathbf{x},\mathbf{y})\in \mathbb{D}} [\mathcal{L}(\mathbf{y}, f(\mathbf{x}; \mathbb{W}))] \nonumber \\ & s.t.~~ \text{ddiag}(\mathbf{W}_l \mathbf{W}_l^T)=\mathbf{I}, ~l=1,2,...,L. \end{eqnarray} where $\text{ddiag}(\mathbf{M})$ denotes the operation that extracts the diagonal elements of a matrix $\mathbf{M}$ and sets the off-diagonal elements to 0. We drop the index of $\mathbf{W}_l$ to simplify notation. Indeed, the constraint on the weight matrix $\mathbf{W} \in \mathbb{R}^{n \times p}$ in each layer defines an embedded submanifold of $\mathbb{R}^{n \times p}$ called the Oblique manifold \cite{2006_ICASSP_Absil}: \begin{eqnarray} \mathcal{OB}(n,p)=\{\mathbf{W} \in \mathbb{R}^{n \times p}: \text{ddiag}(\mathbf{W} \mathbf{W}^T)=\mathbf{I} \} \end{eqnarray} Note that here we adopt $\mathcal{OB}(n,p)$ to denote the set of all $n \times p$ matrices with normalized rows, which differs from the standard notation with normalized columns \cite{2006_ICASSP_Absil,2008_Book_Absil}. We can apply Riemannian optimization methods~\cite{2008_Book_Absil} to solve Problem~\ref{eqn:optimization_constrain}. We calculate the Riemannian gradient $\widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}}$ in the tangent space of $\mathcal{OB}(n,p)$ at the current point $\mathbf{W}$ by: \begin{eqnarray} \label{eqn:gradient_Reim} \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}}=\frac{\partial \mathcal{L} }{\partial \mathbf{W}} - \text{ddiag}(\mathbf{W} \frac{\partial \mathcal{L} }{\partial \mathbf{W}}^T) \mathbf{W} \end{eqnarray} where $\frac{\partial \mathcal{L} }{\partial \mathbf{W}}$ is the ordinary gradient. Given the Riemannian gradient, we update the weight along the negative Riemannian gradient with $-\eta \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}}$ in the tangent space, where $\eta>0$ is the learning rate. We then use a \emph{retraction} as suggested by \cite{2006_ICASSP_Absil}, which maps tangent vectors to points on the manifold: \begin{eqnarray} \label{eqn:retract} \Upsilon_{\mathbf{W}}(-\eta \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}}) =(\text{ddiag}(\mathbf{M}))^{-1/2} (\mathbf{W}-\eta \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}} ) \end{eqnarray} where $\mathbf{M}=(\mathbf{W}-\eta \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}}) (\mathbf{W}-\eta \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}})^T$; the left multiplication by $(\text{ddiag}(\mathbf{M}))^{-1/2}$ renormalizes each row. Therefore, we obtain the new point $\mathbf{W}^*$ on the Oblique manifold as $\mathbf{W}^*=\Upsilon_{\mathbf{W}}(-\eta \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{W}}})$. We update the weight matrices iteratively until convergence. \section{Projection Based Normalization} The Riemannian optimization method provides a good solution to Problem \ref{eqn:optimization_constrain}. However, it also introduces extra, non-negligible computational cost. For instance, we have to calculate the Riemannian gradient by subtracting an extra term $\text{ddiag}(\mathbf{W} \frac{\partial \mathcal{L} }{\partial \mathbf{W}}^T) \mathbf{W}$ and then project the weight in the tangent space back to the Oblique manifold by multiplying by $(\text{ddiag}(\mathbf{M}))^{-1/2}$ in each iteration.
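For reference, the following minimal NumPy sketch (our own illustration, with the rows of $\mathbf{W}$ normalized as in $\mathcal{OB}(n,p)$) implements the Riemannian gradient of Eqn.~\ref{eqn:gradient_Reim} and the retraction of Eqn.~\ref{eqn:retract}:
\begin{verbatim}
import numpy as np

def riemannian_grad(W, G):
    # Riemannian gradient: ddiag(W G^T) has entries <w_i, g_i> on its
    # diagonal, so the correction term scales row i of W by <w_i, g_i>.
    return G - np.sum(W * G, axis=1, keepdims=True) * W

def retract(W, Rgrad, lr):
    # Retraction: take the step in the tangent space, then renormalize
    # each row, i.e., left-multiply by (ddiag(M))^{-1/2}.
    Wt = W - lr * Rgrad
    return Wt / np.linalg.norm(Wt, axis=1, keepdims=True)
\end{verbatim}
One iteration of Riemannian gradient descent over $\mathcal{OB}(n,p)$ then reads \texttt{W = retract(W, riemannian\_grad(W, G), lr)}, where \texttt{G} is the ordinary gradient.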
Is it possible to reduce the computational cost without performance loss, while still guaranteeing that the solution satisfies the unit-norm constraints? To make the following analysis clearer, let us first consider one neuron with its incoming weight $\mathbf{w}$ satisfying the unit-norm constraint $\mathbf{w}^T \mathbf{w}=1$. Based on Eqn. \ref{eqn:gradient_Reim}, its Riemannian gradient $\widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{w}}}$ can be obtained as follows: \begin{eqnarray} \label{eqn:gradient_Riem_perUnit} \widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{w}}}=\frac{\partial \mathcal{L} }{\partial \mathbf{w}} - (\mathbf{w}^T \frac{\partial \mathcal{L} }{\partial \mathbf{w}}) \mathbf{w}. \end{eqnarray} From Eqn. \ref{eqn:gradient_Riem_perUnit}, we can see that the Riemannian gradient adjusts the ordinary gradient by subtracting an extra term $(\mathbf{w}^T \frac{\partial \mathcal{L} }{\partial \mathbf{w}}) \mathbf{w}$. Besides, since $\|\mathbf{w}\|=1$, we have the following fact: \begin{eqnarray} \| (\mathbf{w}^T \frac{\partial \mathcal{L} }{\partial \mathbf{w}}) \mathbf{w} \| & \leq & |\mathbf{w}^T \frac{\partial \mathcal{L} }{\partial \mathbf{w}}| \| \mathbf{w} \| \nonumber \\ &\leq& \| \mathbf{w} \| \| \frac{\partial \mathcal{L} }{\partial \mathbf{w}} \| \| \mathbf{w} \| = \| \frac{\partial \mathcal{L} }{\partial \mathbf{w}} \|, \end{eqnarray} which means that $(\mathbf{w}^T \frac{\partial \mathcal{L} }{\partial \mathbf{w}}) \mathbf{w}$ is not a dominant term compared to $\frac{\partial \mathcal{L} }{\partial \mathbf{w}}$ in Eqn. \ref{eqn:gradient_Riem_perUnit}. We also observe this fact in our experiments. Therefore, we recommend simply using the ordinary gradient to solve Problem \ref{eqn:optimization_constrain} with much less computational cost, as follows: \begin{eqnarray} \label{eqn:update_norm_perUnit} \tilde{\mathbf{w}}=\mathbf{w}- \eta \frac{\partial \mathcal{L} }{\partial \mathbf{w}},\\ \label{eqn:Norm_projection} \mathbf{w}^*= \tilde{\mathbf{w}}/ \| \tilde{\mathbf{w}} \|. \end{eqnarray} Here, Eqn. \ref{eqn:Norm_projection} works by projecting the updated weight $\tilde{\mathbf{w}}$ back to the Oblique manifold, and we thus call this operation \emph{norm projection}. Indeed, the combined operation of Eqns. \ref{eqn:update_norm_perUnit} and \ref{eqn:Norm_projection} is equivalent to the retraction in Eqn. \ref{eqn:retract} when the Riemannian gradient $\widehat{\frac{\partial \mathcal{L} }{\partial \mathbf{w}}}$ is used. Note that when the weight update is based on the ordinary gradient as in Eqn. \ref{eqn:update_norm_perUnit}, the \emph{norm projection} of Eqn. \ref{eqn:Norm_projection} no longer moves the weight along the negative gradient direction, and thus disturbs the gradient information. We find that such a disturbance does not eventually harm the learning, as shown in Figure \ref{fig:exp_T} (a): using the ordinary gradient yields a training loss curve nearly identical to that of the Riemannian gradient. For more efficient computation, we can also execute the \emph{norm projection} operation of Eqn. \ref{eqn:Norm_projection} every $T$ iterations rather than in each iteration. We empirically find that this trick works well in practice. It should be pointed out that when executing the \emph{norm projection} operation with a large $T$, our method may lose some of the information learned in the weight matrix and may also suffer instability after the \emph{norm projection}, as shown in Figure \ref{fig:exp_T} (b).
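In code, the cheaper update of Eqns.~\ref{eqn:update_norm_perUnit} and \ref{eqn:Norm_projection}, including the interval-$T$ variant, reduces to the following sketch (same NumPy conventions as above; each row of $\mathbf{W}$ is the incoming weight of one neuron):
\begin{verbatim}
def pbwn_step(W, G, lr, t, T=1):
    # Ordinary SGD step, followed every T iterations by the norm
    # projection that renormalizes each neuron's incoming weight.
    W = W - lr * G
    if t % T == 0:
        W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W
\end{verbatim}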
From Figure \ref{fig:exp_T} (b), we can see that in the initial phase, executing \emph{norm projection} with a large interval results in a sudden increase of the loss. This is mainly because we change the scale of each filter, which results in different predictions for the same input. Fortunately, we can remedy this issue by combining our method with batch normalization \cite{2015_ICML_Ioffe}, as discussed in the next subsection. To summarize, we show our projection based weight normalization framework in Algorithm \ref{alg_forward}, in which an extra \emph{norm projection} is executed at a given interval. Note that the Riemannian optimization over the Oblique manifold described before can be viewed as a specific instance of our framework, obtained by using the Riemannian gradient, steepest gradient descent and interval $T=1$. \begin{algorithm}[] \caption{Projection based weight normalization framework for training DNNs.} \label{alg_forward} \begin{small} \begin{algorithmic}[1] \STATE \textbf{Input}: A neural network with learnable parameters $\mathbb{W}$, and the updating interval $T$. \STATE \textbf{Output}: A trained model with optimal $\mathbb{W}$. \STATE Initialize $\mathbb{W}$ using a regular initialization method, and set the iteration counter $t=0$. \WHILE {the training is not finished} \STATE Execute the forward step to obtain the loss $\mathcal{L}$. \STATE Execute the backward step to obtain the gradient information. \STATE Update $\mathbb{W}$ based on the chosen optimization algorithm. \STATE Update the iteration counter $t\leftarrow t+1$. \IF { $mod(t, T)==0$ } \STATE Perform \emph{norm projection} on each $\mathbf{w} \in \mathbb{W}$ according to (\ref{eqn:Norm_projection}). \ENDIF \ENDWHILE \end{algorithmic} \end{small} \end{algorithm} \begin{figure*}[t] \centering \hspace{-0.02\linewidth} \subfigure[Effect of norm projection]{ \includegraphics[width=0.36\linewidth]{figures/0MLP/0MNIST_debug_loss.pdf} } \subfigure[Effect of updating intervals]{ \includegraphics[width=0.36\linewidth]{figures/0MLP/0MNIST_PN_T_loss.pdf} } \caption{\small An illustrative experiment on MNIST, using a multi-layer perceptron (MLP) with layer sizes of 1024-750-250-250-10. We train the model by stochastic gradient descent with a mini-batch size of 256. We search the learning rate over $\{0.01,0.03,0.1,0.3,1\}$ and report the best performance of each method (all are under a learning rate of 0.3). `Normal' indicates the original network. `PBWN-Riem' and `PBWN' refer to the projection based weight normalization methods that apply \emph{norm projection} at each iteration based on the Riemannian and the ordinary gradient, respectively, while `PBWN-T$T$' performs \emph{norm projection} every $T$ iterations based on the ordinary gradient.} \label{fig:exp_T} \end{figure*} \subsection{Combined with Batch Normalization} Batch normalization is a popular technique that stabilizes the distribution of activations in each layer and thus accelerates convergence. It works by normalizing the pre-activation of each neuron to zero mean and unit variance over each mini-batch, and extra learnable scale and bias parameters are introduced to restore the representation power of the network. Specifically, for each neuron, batch normalization is formulated as follows: \begin{eqnarray} \label{eqn:BN} BN(\mathbf{x}; \mathbf{w})= \gamma \frac{\mathbf{w}^T \mathbf{x}- \mathbb{E}(\mathbf{w}^T \mathbf{x})}{\sqrt{Var (\mathbf{w}^T \mathbf{x})}}+\beta.
\end{eqnarray} One interesting property of batch normalization is that it is invariant to the scaling of each neuron's incoming weight, that is, \begin{eqnarray} BN(\mathbf{x}; \alpha \mathbf{w})=BN(\mathbf{x}; \mathbf{w}). \end{eqnarray} The \emph{norm projection} operation of Eqn. \ref{eqn:Norm_projection} can be viewed as a scaling with $\alpha=\frac{1}{\| \tilde{\mathbf{w}} \|}$. Therefore, when combined with batch normalization, the \emph{norm projection} can also keep the output unchanged during training in a rectifier network, that is, $\mathcal{L}(\mathbf{x}; \alpha \mathbf{w})=\mathcal{L}(\mathbf{x}; \mathbf{w})$. Thus, we can ensure that \emph{norm projection} does not drop any learned information in the weight matrix, even though we execute it outside the gradient descent steps. Another interesting point is that, when combined with batch normalization, \emph{norm projection} does affect the back-propagated information. Batch normalization has the property \begin{eqnarray} \frac{\partial BN(\mathbf{x}; \alpha \mathbf{w}) }{\partial (\alpha \mathbf{w})}=\frac{1}{\alpha} \frac{\partial BN(\mathbf{x}; \mathbf{w}) }{\partial \mathbf{w}}. \end{eqnarray} Therefore, we get \begin{eqnarray} \frac{\partial \mathcal{L} }{\partial (\alpha \mathbf{w})} =\frac{\partial \mathcal{L} }{\partial BN(\mathbf{x}; \alpha \mathbf{w}) } \frac{\partial BN(\mathbf{x}; \alpha \mathbf{w}) }{\partial (\alpha \mathbf{w})} =\frac{1}{\alpha } \frac{\partial \mathcal{L} }{\partial \mathbf{w}}. \end{eqnarray} This indicates that the \emph{norm projection} operation implicitly adjusts the learning rate by a factor of $\| \tilde{\mathbf{w}}\|$. To summarize, when combined with batch normalization in a rectifier network, the \emph{norm projection} operation enjoys the following characteristics: (1) it guarantees that the incoming weight $\mathbf{w}$ is unit-norm; (2) it keeps the output the same as before the operation during training; (3) it implicitly adjusts the learning rate by a factor of $\| \tilde{\mathbf{w}}\|$. These characteristics give our projection based weight normalization a stable optimization process. \subsection{Connecting to Weight Decay} Our projection based weight normalization has strong connections to weight decay~\cite{1992_WD_Krogh}, a simple yet effective technique for regularizing neural networks. The update formulation of weight decay is: \begin{eqnarray} \label{eqn:WD} \mathbf{w}^*= \mathbf{w}- \lambda \mathbf{w} - \eta \frac{\partial \mathcal{L} }{\partial \mathbf{w}}, \end{eqnarray} where $\lambda>0$ is a constant weight decay factor. Indeed, weight decay can be viewed as minimizing the loss function $\mathcal{L}(\mathbf{y}, f(\mathbf{x}; \theta))$ appended with a regularization term proportional to $\| \mathbf{w} \|^2$. From this perspective, weight decay imposes a soft constraint, while our method imposes a hard constraint $\| \mathbf{w}\|=1$ on each neuron's incoming weight. From another perspective, we can derive the weight updating formulation of our method from Eqns. \ref{eqn:update_norm_perUnit} and \ref{eqn:Norm_projection}: \begin{eqnarray} \label{eqn:weight} \mathbf{w}^*= \mathbf{w} - \frac{\lambda_{\eta,\mathbf{w}}-1 }{\lambda_{\eta,\mathbf{w}}} \mathbf{w} - \frac{\eta}{\lambda_{\eta,\mathbf{w}} }\frac{\partial \mathcal{L} }{\partial \mathbf{w}} \end{eqnarray} where $\lambda_{\eta,\mathbf{w}}= \| \mathbf{w} - \eta \frac{\partial \mathcal{L} }{\partial \mathbf{w}} \|$. We can see that Eqn.
\ref{eqn:weight} has a weight updating form similar to that of weight decay. In particular, we have a weight-specific decay rate and a weight-specific learning rate. Therefore, the solution to optimization over the Oblique manifold can be viewed as a regularization method with adaptive regularization factors. Note also that a weight matrix in $\mathcal{OB}(n,p)$ has only $(n-1)\times p$ degrees of freedom. \subsection{Computational Cost} \label{sec:computationCost} Let us consider a standard linear layer: $\mathbf{y}=\mathbf{W} \mathbf{x}$ with $\mathbf{W} \in \mathbb{R}^{n \times p}$ and a mini-batch input of size $m$. For each iteration, the computational cost of the standard linear layer (i.e., calculating $\mathbf{y}, \frac{\partial \mathcal{L} }{\partial \mathbf{x}}$ and $\frac{\partial \mathcal{L} }{\partial \mathbf{W}}$) is $6m\times n\times p$ FLOPs. The extra cost for Riemannian optimization is $6n\times p$ FLOPs. When using our \emph{norm projection} with the ordinary gradient, the extra cost is $3n\times p$ FLOPs. In particular, if we use an interval $T$, the extra cost is ${3n\times p}/ {T}$ FLOPs. The computational cost of \emph{norm projection} with updating interval $T$ is thus negligible compared to that of the standard linear layer. For a convolution layer with filters $\mathbf{W}_c \in \mathbb{R}^{n \times p \times F_h \times F_w}$, where $F_h$ and $F_w$ respectively indicate the height and width of the filter, we perform \emph{norm projection} over the unrolled $\mathbf{W} \in \mathbb{R}^{n \times (p\cdot F_h \cdot F_w)}$ (see the code sketch below). Assuming an input feature map of size $h \times w$, the cost of the convolution layer is $6m\times n\times p\times F_h\times F_w\times h\times w$ FLOPs. \emph{Norm projection} with updating interval $T$ has an extra cost of ${3n\times p\times F_h\times F_w}/{T}$ FLOPs, which is again negligible compared to the convolution operation. \begin{table}[t] \caption{Comparison of test errors ($\%$) on the Inception architecture over CIFAR-10 and CIFAR-100. The results are averaged over five random seeds.} \label{table:BN-Inception} \vskip 0.0in \begin{center} \begin{small} \begin{tabular}{lcc} \hline Methods & CIFAR-10 & CIFAR-100 \\ \hline Normal & 6.48 $\pm$ 0.14 & 25.71 $\pm$ 0.15 \\ WN & 6.20 $\pm$ 0.07 & 24.22 $\pm$ 0.53 \\ PBWN-Riem (ours) & 5.33 $\pm$ 0.19 & \textbf{22.46 $\pm$ 0.25} \\ PBWN (ours) & \textbf{5.22 $\pm$ 0.05} & 22.70 $\pm$ 0.65 \\ PBWN-Epoch (ours) & 5.46 $\pm$ 0.22 & 22.83 $\pm$ 0.87 \\ \hline \end{tabular} \end{small} \end{center} \vspace{-0.15in} \end{table} \begin{table}[t] \caption{Comparison of test errors ($\%$) on the VGG architecture over the CIFAR-10 and CIFAR-100 datasets. The results are averaged over five random seeds.} \label{table:VGG} \vskip 0.0in \begin{center} \begin{small} \begin{tabular}{lcc} \hline Methods & CIFAR-10 & CIFAR-100 \\ \hline Normal & 7.23 $\pm$ 0.29 & 27.80 $\pm$ 0.31 \\ WN & 7.40 $\pm$ 0.21 & 29.86 $\pm$ 0.38 \\ PBWN-Riem (ours) & \textbf{6.23 $\pm$ 0.10} & 27.49 $\pm$ 0.35 \\ PBWN (ours) & 6.31 $\pm$ 0.11 & 27.33 $\pm$ 0.21 \\ PBWN-Epoch (ours) & 6.27 $\pm$ 0.11 & \textbf{26.91 $\pm$ 0.25} \\ \hline \end{tabular} \end{small} \end{center} \vspace{-0.15in} \end{table} \section{Experiments} In this section, we first conduct extensive experiments for supervised learning on four widely-used image datasets, i.e., CIFAR-10, CIFAR-100, SVHN and ImageNet, and investigate the performance over various types of CNNs.
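Referring back to Sec.~\ref{sec:computationCost}, the unrolled projection for convolutional filters can be sketched as follows (the reshape convention is ours, with \texttt{np} denoting NumPy as in the earlier sketches):
\begin{verbatim}
def norm_project_conv(Wc):
    # Wc has shape (n, p, F_h, F_w): unroll each of the n filters
    # into a row of length p * F_h * F_w, renormalize the rows,
    # and restore the original shape.
    n = Wc.shape[0]
    W = Wc.reshape(n, -1)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W.reshape(Wc.shape)
\end{verbatim}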
We also consider semi-supervised learning tasks on the permutation invariant MNIST dataset by using the Ladder network \cite{2015_NIPS_Rasmus}. For all experiments, we adopt random weight initialization by default as described in \cite{1998_NN_Yann}, unless the weight initialization method is specified otherwise. \begin{table*}[t] \caption{Comparison of test errors ($\%$) on residual networks of varying depth over CIFAR-10; the results are averaged over five random seeds. `Res-$L$' indicates a residual network with $L$ layers, and `BaseLine*' indicates the results reported in \cite{2015_CVPR_He}, for which Res-20, 32, 44 and 56 are reported from one run, while Res-110 is averaged over 5 runs.} \label{table:resnet1} \vskip 0.0in \begin{center} \begin{small} \begin{tabular}{l|ccccc} \toprule & Res-20 & Res-32 & Res-44 & Res-56 & Res-110 \\ \hline BaseLine* & 8.75 &7.51 &7.17 & 6.97& 6.61 $\pm$ 0.16\\ BaseLine & 7.94 $\pm$ 0.16 &7.70 $\pm$ 0.26 &7.17 $\pm$ 0.25 & 7.21 $\pm$ 0.25 & 7.09 $\pm$ 0.24\\ WN & 8.12 $\pm$ 0.18 &7.25 $\pm$ 0.14 &6.86 $\pm$ 0.06 & 7.01 $\pm$ 0.52 & 7.56 $\pm$ 1.11\\ PBWN-Riem (ours) & 8.03 $\pm$ 0.17 &7.18 $\pm$ 0.18 &6.69 $\pm$ 0.15 & 6.42 $\pm$ 0.25 & 6.68 $\pm$ 0.31\\ PBWN (ours) & 8.08 $\pm$ 0.07 &7.09 $\pm$ 0.18 &6.89 $\pm$ 0.17 & 6.48 $\pm$ 0.17 & \textbf{6.27 $\pm$ 0.34}\\ PBWN-Epoch (ours) & \textbf{7.86 $\pm$ 0.25} &\textbf{6.99 $\pm$ 0.27 } &\textbf{6.59 $\pm$ 0.17} & \textbf{6.41 $\pm$ 0.13} & 6.39 $\pm$ 0.45\\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.15in \end{table*} \begin{table*}[t] \caption{Comparison of test errors ($\%$) on residual networks of varying depth over CIFAR-100. The results are averaged over five random seeds.} \label{table:resnet2} \vskip 0.0in \begin{center} \begin{small} \begin{tabular}{l|ccccc} \toprule & Res-20 & Res-32 & Res-44 & Res-56 & Res-110 \\ \hline BaseLine & 32.28 $\pm$ 0.16 &30.62 $\pm$ 0.35 &29.95 $\pm$ 0.66 & 29.07 $\pm$ 0.40 & 28.79 $\pm$ 0.63\\ WN & 31.90 $\pm$ 0.45 &30.63 $\pm$ 0.37 &29.57 $\pm$ 0.29 & 29.16 $\pm$ 0.45 & 28.38 $\pm$ 0.99\\ PBWN-Riem (ours) & 31.81 $\pm$ 0.28 &30.12 $\pm$ 0.36 &29.15 $\pm$ 0.18 & \textbf{28.13 $\pm$ 0.49} & \textbf{27.03 $\pm$ 0.33}\\ PBWN (ours) & 31.99 $\pm$ 0.14 &30.21 $\pm$ 0.20 &29.04 $\pm$ 0.43 & 28.23 $\pm$ 0.31 & 27.16 $\pm$ 0.57\\ PBWN-Epoch (ours) & \textbf{31.61 $\pm$ 0.40} &\textbf{29.85 $\pm$ 0.17 } &\textbf{28.83 $\pm$ 0.09 } & 28.17 $\pm$ 0.24 & 27.15 $\pm$ 0.58\\ \bottomrule \end{tabular} \end{small} \end{center} \end{table*} \subsection{The State-of-the-Art CNNs} In the following, we evaluate our method on the CIFAR (both CIFAR-10 and CIFAR-100) datasets over state-of-the-art CNNs, including Inception \cite{2014_CoRR_Szegedy}, VGG \cite{2014_CoRR_Simonyan} and residual networks \cite{2015_CVPR_He,2016_CoRR_Zagoruyko}. CIFAR-10 consists of 50,000 training images and 10,000 test images from 10 classes, while CIFAR-100 has the same split with 100 classes. Each input image consists of $32\times 32$ pixels. The dataset was preprocessed as described in \cite{2015_CVPR_He}, by subtracting the mean and dividing by the variance for each channel. We follow the simple data augmentation scheme described in \cite{2015_CVPR_He}: 4 pixels are padded on each side, and a $32 \times 32$ crop is randomly sampled from the padded image or its horizontal flip. We refer to the original networks as `Normal'.
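For concreteness, this pad-and-crop augmentation can be sketched in a few lines of NumPy (a minimal illustration assuming channel-first images; the actual experiments use the standard Torch pipeline of \cite{2015_CVPR_He}):
\begin{verbatim}
import numpy as np

def augment(img, pad=4, crop=32, rng=np.random.default_rng()):
    # img: (C, H, W); zero-pad each side, take a random crop,
    # and flip horizontally with probability 0.5.
    padded = np.pad(img, ((0, 0), (pad, pad), (pad, pad)))
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    out = padded[:, top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        out = out[:, :, ::-1]
    return out
\end{verbatim}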
For our projection based weight normalization method, we evaluate three setups: (1) `PBWN-Riem': performing \emph{norm projection} at each iteration based on Riemannian gradients; (2) `PBWN': performing \emph{norm projection} at each iteration based on ordinary gradients; (3) `PBWN-Epoch': performing \emph{norm projection} once per epoch based on ordinary gradients. We also choose the closely related Weight Normalization \cite{2016_CoRR_Salimans} (referred to as `WN') as a baseline. \subsubsection{Inception Architecture} We first evaluate our method on the Inception architecture~\cite{2014_CoRR_Szegedy} equipped with batch normalization (BN), inserted after each convolution layer. All models are trained by SGD with a mini-batch size of 64, considering the memory constraints of one GPU. We adopt a momentum of 0.9 and a weight decay of 0.0005. Regarding learning rate annealing, we start with a learning rate of 0.1, divide it by 5 at 50, 80 and 100 epochs, and terminate the training at 120 epochs, chosen empirically. The results are again obtained by averaging over five random seeds. Figures \ref{fig:exp_Cifar} (a) and (b) show the training loss with respect to epochs on the CIFAR-10 and CIFAR-100 datasets, respectively, and Table \ref{table:BN-Inception} lists the test errors. From Figure \ref{fig:exp_Cifar}, we observe that our models converge significantly faster than the baselines. In particular, `PBWN-Riem' and `PBWN' have nearly identical training curves, which means that there is no need to calculate the Riemannian gradient when performing \emph{norm projection} in an Inception network with BN. The test performance in Table \ref{table:BN-Inception} further demonstrates that our methods achieve significant improvements over the baselines, mainly owing to their desirable regularization ability. \subsubsection{VGG Architecture} We further investigate the performance on the VGG-E architecture \cite{2014_CoRR_Simonyan} with global average pooling and batch normalization inserted after each convolution layer. We initialize the model with \emph{He-Init} \cite{2015_ICCV_He}. The models are again trained by SGD with a mini-batch size of 128, a momentum of 0.9 and a weight decay of 0.0005. Here, we start with a learning rate of 0.1, divide it by 5 at 80 and 120 epochs, and terminate the training at 160 epochs. The averaged test errors after training are shown in Table \ref{table:VGG}, from which we draw the same conclusion as for the Inception architecture: our method significantly boosts the test performance over the baselines. \begin{figure*}[t] \centering \hspace{-0.02\linewidth} \subfigure[CIFAR-10]{ \includegraphics[width=0.36\linewidth]{figures/1Conv/4GoogleNet_Cifar10_train_PN.pdf} } \subfigure[CIFAR-100]{ \includegraphics[width=0.36\linewidth]{figures/1Conv/4GoogleNet_Cifar100_train_PN.pdf} } \caption{\small Comparison of the training loss with respect to epochs on Inception over the CIFAR datasets.} \label{fig:exp_Cifar} \vspace{-0.15in} \end{figure*} \subsubsection{Residual Network} In this experiment, we further apply our method to the well-known residual network architecture \cite{2015_CVPR_He}. We follow exactly the same experimental protocol as described in \cite{2015_CVPR_He} and adopt the publicly available Torch implementation\footnote{https://github.com/facebook/fb.resnet.torch} of the residual network.
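The essential difference between the three setups is where the projection enters the training loop, which can be sketched as follows (a minimal, self-contained NumPy illustration of the projection and its update interval $T$, not the actual Torch training code):
\begin{verbatim}
import numpy as np

def project_rows(W, eps=1e-12):
    # Norm projection: rescale each neuron's incoming
    # weight vector (one row of W) to unit norm.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / np.maximum(norms, eps)

rng = np.random.default_rng(0)
W = project_rows(rng.normal(size=(8, 16)))  # start on the Oblique manifold
lr, T = 0.1, 5    # T = 1 corresponds to PBWN; T = one epoch to PBWN-Epoch
for it in range(100):
    grad = rng.normal(size=W.shape)         # stand-in for dL/dW
    W -= lr * grad                          # ordinary SGD step
    if (it + 1) % T == 0:
        W = project_rows(W)                 # projection every T iterations
\end{verbatim}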
Tables \ref{table:resnet1} and \ref{table:resnet2} show the results of the different methods on CIFAR-10 and CIFAR-100, respectively, using the residual network architecture with varied depths $L=\{20, 32, 44, 56, 110 \}$. We find that our methods consistently achieve better performance across the different depths. In particular, the performance gains grow as the depth increases. Besides, we observe no significant difference among the different \emph{norm projection} setups, i.e., when using different gradient information or update intervals. Indeed, `PBWN-Epoch' works best in most cases. This further indicates that executing \emph{norm projection} at intervals is efficient without degrading performance. \subsubsection{Efficiency Analysis} We also investigate the wall-clock times of training the above networks, including Inception, VGG and the 110-layer residual network. The experiment is implemented in Torch and conducted on one Tesla K80 GPU. From the results reported in Table \ref{table:TimeCost}, we find that our `PBWN-Epoch' costs almost the same time as `Normal' on all architectures, which means that it introduces no noticeable extra time cost in practice, as analyzed in the previous sections. `PBWN' also requires little extra time, while `PBWN-Riem' needs non-negligible extra time. The results show that the \emph{norm projection} solution can faithfully improve the efficiency of the optimization with unit-norm constraints while achieving satisfying performance. \subsection{Large-Scale Classification Task} \paragraph{SVHN dataset} To study the performance of the proposed method more comprehensively, we consider the larger SVHN dataset \cite{2011_NIPS_Netzer} for digit recognition. SVHN consists of $32 \times 32$ color images of house numbers collected by Google Street View. It includes 73,257 training images and 26,032 test images. In addition, we append the extra set of 531,131 images to the training set. The experiment is based on the wide residual network (WRN), which achieves state-of-the-art results on this dataset. We use WRN-16-4 as in \cite{2016_CoRR_Zagoruyko} and follow the experimental setting provided there: (1) the input images are divided by 255 to scale them into the $[0,1]$ range; (2) during training, SGD is used with a momentum of 0.9 and dampening of 0, a weight decay of 0.0005 and a mini-batch size of 128. The initial learning rate is set to 0.01 and multiplied by 0.1 at 80 and 120 epochs, until the total of 160 epochs completes. Dropout is set to 0.4. Here, we only apply our method `PBWN-Epoch' to this WRN-16-4 architecture, i.e., we execute \emph{norm projection} once per epoch, considering the time cost for such a large dataset. The results are shown in Table \ref{table:WR}, together with several state-of-the-art methods from the literature. It is easy to see that WRN achieves the best performance among the baselines, and our method further improves WRN by simply executing the efficient \emph{norm projection} operation once per epoch.
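As a side note on the training configuration, the step schedule just described amounts to the following helper (an illustrative sketch, not part of the WRN code):
\begin{verbatim}
def lr_at(epoch, base_lr=0.01, drops=(80, 120), factor=0.1):
    # Step learning-rate schedule: multiply by `factor`
    # at each epoch listed in `drops`.
    lr = base_lr
    for d in drops:
        if epoch >= d:
            lr *= factor
    return lr
\end{verbatim}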
\begin{table}[t] \caption{Time costs (hours) of different methods spent on training the Inception, VGG and 110-layer residual networks.} \label{table:TimeCost} \vskip 0.0in \begin{center} \begin{small} \begin{tabular}{lccc} \hline Methods & Inception & VGG & Res-110 \\ \hline Normal & 20.96 & 4.20 & 5.96 \\ WN & 23.33 & 5.27 & 6.42 \\ PBWN-Riem & 23.92 & 5.01 & 7.49 \\ PBWN & 21.21 & 4.23 & 6.29 \\ PBWN-Epoch & 20.97 & 4.20 & 5.97 \\ \hline \end{tabular} \end{small} \end{center} \vspace{-0.15in} \end{table} \begin{table}[t] \caption{Comparison of test errors ($\%$) on the SVHN dataset. WRN* indicates our reproduced results.} \label{table:WR} \vskip 0.0in \begin{center} \begin{small} \begin{tabular}{lc} \hline Methods & test error \\ \hline DSN~\cite{2015_AISTATS_Lee} & 1.92 \\ RSD~\cite{2016_ECCV_Huang} & 1.75 \\ GPF~\cite{2015_AISTATS_Lee} & 1.69 \\ WRN~\cite{2016_CoRR_Zagoruyko} & 1.64 \\ \hline WRN* & 1.644 ($\pm$ 0.046) \\ WRN-PBWN-Epoch & \textbf{1.607} ($\pm$ 0.005) \\ \hline \end{tabular} \end{small} \end{center} \vspace{-0.1in} \end{table} \begin{table}[t] \caption{Comparison of test errors ($\%$) of the 34-layer residual network and its pre-activation version over the ImageNet-2012 dataset.} \label{table:ImageNet} \vspace{0.1in} \centering \begin{small} \begin{tabular}{c|cc|cc} \toprule & \multicolumn{2}{c|}{Residual} & \multicolumn{2}{c}{Pre-Residual} \\ method & Top-1 & Top-5 & Top-1 & Top-5 \\ \hline Normal & 28.62 & 9.69 & 28.81 & 9.78 \\ PBWN-Epoch & \textbf{27.88} & \textbf{9.23} &\textbf{28.2 } & \textbf{9.45} \\ \bottomrule \end{tabular} \end{small} \vspace{-0.1in} \end{table} \begin{table*}[t] \caption{Comparison of test errors ($\%$) for the semi-supervised setup on the permutation invariant MNIST dataset. We show the test error for a given number of labeled samples $\in \{20, 50, 100\}$. Ladder* indicates our implementation of the Ladder network \cite{2015_NIPS_Rasmus}.} \label{table:semi} \vskip 0.0in \begin{center} \begin{small} \begin{tabular}{l|ccc} \toprule method & \multicolumn{3}{c}{Test error ($\%$) for a given number of labeled samples} \\ & 20 & 50 & 100 \\ \hline CatGAN \cite{2016_ICLR_Springenberg} & - & - & 1.91 $\pm$ 0.1 \\ Skip Deep Generative Model \cite{2016_ICML_Maal} & - & - & 1.32 $\pm$ 0.07 \\ Auxiliary Deep Generative Model \cite{2016_ICML_Maal} & - & - & 0.96 $\pm$ 0.02 \\ Virtual Adversarial \cite{2017_CoRR_Miyato} & - &- & 1.36 \\ Ladder \cite{2015_NIPS_Rasmus} & - & 1.62 $\pm$ 0.65 & 1.06 $\pm$ 0.37 \\ Ladder+AMLP \cite{2016_ICML_Pezeshki} & - &- & 1.002 $\pm$ 0.038 \\ GAN with feature matching \cite{2016_NIPS_Goodfellow} & 16.77 $\pm$ 4.52 & 2.21 $\pm$ 1.36 & 0.93 $\pm$ 0.065 \\ Triple-GAN \cite{2017_Corr_Li} & 4.81 $\pm$ 4.95 & 1.56 $\pm$ 0.72 & \textbf{0.91 $\pm$ 0.58} \\ \hline Ladder* (our implementation) & 9.67 $\pm$ 10.1 & 3.53 $\pm$ 6.6 & 1.12 $\pm$ 0.59 \\ Ladder+PBWN (ours) & \textbf{2.52 $\pm$ 2.42} & \textbf{1.06 $\pm$ 0.48} & \textbf{0.91 $\pm$ 0.05} \\ \bottomrule \end{tabular} \end{small} \end{center} \vspace{-0.1in} \end{table*} \paragraph{ImageNet 2012} To further validate the effectiveness of our method on a large-scale dataset, we employ ImageNet 2012, consisting of 1,000 classes~\cite{2009_ImageNet}. We train the models on the official 1.28M training images and evaluate on the validation set of 50k images. We evaluate the classification performance based on the top-1 and top-5 errors.
Note that in this part we mainly focus on whether our proposed method is able to handle diverse and large-scale datasets and provide a relative benefit for conventional architectures, rather than on achieving state-of-the-art results. We use the 34-layer residual network \cite{2015_CVPR_He} and its pre-activation version~\cite{2016_CoRR_He} to perform the classification task. Stochastic gradient descent is again applied with a mini-batch size of 64, a momentum of 0.9 and a weight decay of 0.0001. We decay the learning rate exponentially to $1/100$ of its initial value over the 50 training epochs. We run with initial learning rates of $\{0.05, 0.1\}$ and report the best results in Table \ref{table:ImageNet}. We find that `PBWN-Epoch' achieves lower test errors compared to the original residual network and the pre-activation residual network. \subsection{Semi-supervised Learning for Permutation Invariant MNIST} In this section, we apply our proposed method to semi-supervised learning tasks using the Ladder network \cite{2015_NIPS_Rasmus} on the permutation invariant MNIST dataset. Three semi-supervised classification tasks are considered, with 20, 50 and 100 labeled examples, respectively. These labeled examples are sampled randomly with a balanced number per class. We re-implement the Ladder network in Torch, following the Theano implementation of \cite{2015_NIPS_Rasmus}. Specifically, we adopt the setup described in \cite{2015_NIPS_Rasmus} and \cite{2016_ICML_Pezeshki}: (1) the layer sizes of the model are 784-1000-500-250-250-250-10; (2) the models are trained by Adam optimization \cite{2014_CoRR_Kingma} with mini-batch sizes of 100 (the task with 100 labeled examples), 50 (the task with 50 labeled examples) and 20 (the task with 20 labeled examples); (3) all models are trained for 50,000 iterations with the initial learning rate, followed by 25,000 iterations over which the learning rate decays linearly to 0. We perform a simple hyper-parameter search with the learning rate in $\{0.002, 0.001, 0.0005 \}$ and the weight decay in $\{0.0002, 0.0001, 0\}$\footnote{The detailed experimental configuration to reproduce our results is available in our code at: https://github.com/huangleiBuaa/NormProjection}. In this case, all experiments are run with 10 random seeds. In Table \ref{table:semi}, we report the results of the Ladder network based on our implementation (denoted by Ladder*) and of our `PBWN', which performs \emph{norm projection} at each iteration. From Table \ref{table:semi}, we can see that our method significantly improves the performance of the original Ladder network and achieves new state-of-the-art results in the tasks with 20, 50, and 100 labeled examples. In particular, with 20 labeled examples our method achieves a $2.52\%$ test error. We conjecture that these appealing results stem mainly from the strong regularization ability of our method. \section{Related Work and Discussion} There exist a number of methods that regularize neural networks by bounding the magnitude of the weights. One commonly used method is weight decay~\cite{1992_WD_Krogh}, which can be considered as a solution to the loss function appended with a regularization term of the squared \emph{L2-norm} of the weight vector. Max-norm~\cite{2005_Nathan_2005,2014_JMLR_Nitish} constrains the norm of the incoming weights at each hidden unit to be bounded by a constant.
It can be viewed as a constrained optimization problem over a ball in the parameter space, while our method addresses the optimization problem over an Oblique manifold. Path normalization~\cite{2015_NIPS_Neyshabur} follows the idea of max-norm, but bounds the product of the weights along a path from the input to the output nodes, which can also be viewed as a regularizer, like weight decay \cite{1992_WD_Krogh}. Weight normalization~\cite{2016_CoRR_Salimans} decouples the length of each incoming weight vector from its direction. If the extra scaling parameter is not considered, weight normalization can be viewed as normalizing the incoming weight. However, it solves the problem via re-parameterization and cannot guarantee that the conditioning of the Hessian matrix with respect to the proxy parameters is improved; our method instead performs normalization via projection and optimizes over the original parameter space, which ensures improved conditioning of the Hessian matrix, as shown in Figure~\ref{fig:motivation}. We experimentally show that our method outperforms weight normalization~\cite{2016_CoRR_Salimans} in terms of both effectiveness and computational efficiency. There is a large amount of work introducing orthogonality of the weight matrix~\cite{2016_ICML_Arjovsky,2016_NIPS_Wisdom,2016_CoRR_Dorobantu,2017_ICML_Eugene,2017_Corr_Harandi,2016_Corr_Ozay,Huang_2017_arxiv} in deep neural networks to address the vanishing and exploding gradient problem. Solving the problem with such an orthogonality constraint is usually limited to the hidden-to-hidden transformation in recurrent neural networks~\cite{2016_ICML_Arjovsky,2016_NIPS_Wisdom,2016_CoRR_Dorobantu,2017_ICML_Eugene}. Some works also consider orthogonal weight matrices in feed-forward neural networks~\cite{2017_Corr_Harandi,2016_Corr_Ozay,Huang_2017_arxiv}, but their solutions introduce expensive computational costs. Normalizing the activations~\cite{2015_ICML_Ioffe,2016_CoRR_Ba,2017_ICLR_Ren} in deep neural networks has also been studied. Batch normalization~\cite{2015_ICML_Ioffe} is a well-known and effective technique to normalize the activations. It standardizes the pre-activation of each neuron to zero mean and unit variance over each mini-batch. Layer normalization~\cite{2016_CoRR_Ba} computes the zero-mean and unit-variance statistics over all hidden units in the same layer, targeting scenarios where the mini-batch size is limited. Division normalization~\cite{2017_ICLR_Ren} is proposed from a unified view of normalization, which includes batch and layer normalization as special cases. These methods normalize the activations and are thus data-dependent, while our method normalizes the weights and is therefore data-independent. Since our method is orthogonal to these methods, we provide analysis and experimental results showing that it can improve the performance of batch normalization when combined with it. Concurrently to our work, Cho and Lee~\cite{2017_Corr_Cho} propose to optimize over the Grassmann manifold, aiming to improve the performance of neural networks equipped with batch normalization~\cite{2015_ICML_Ioffe}.
Their work differs from ours in two aspects: (1) they only use the traditional Riemannian optimization method (`Riemannian gradient + exponential map'~\cite{2004_Math}) to solve the constrained optimization problem, which introduces a non-trivial computational cost, while we consider both a Riemannian optimization method (`Riemannian gradient + retraction'~\cite{2006_ICASSP_Absil}) and, further, propose a more general and efficient projection based weight normalization framework, which introduces negligible extra computational cost; (2) Ref.~\cite{2017_Corr_Cho} requires the gradient clipping technique~\cite{2013_ICML_Pascanu} to make the optimization stable and also needs a tailored revision of SGD with momentum. In contrast, our method is more general, requires no extra tailored revision, and also collaborates well with other techniques for training neural networks. \section{Conclusions} The scaling-based weight space symmetry can cause an ill-conditioning problem when optimizing deep neural networks. In this paper, we propose to address the problem by constraining the incoming weights of each neuron to be unit-norm. We provide the projection based weight normalization method, which serves as a simple yet effective and efficient solution to this constrained optimization problem. Our extensive experiments demonstrate that the proposed method greatly improves the performance of various state-of-the-art network architectures on large-scale datasets. We show that projection based weight normalization offers a promising direction for improving the performance of deep neural networks by alleviating the ill-conditioning problem. \bibliographystyle{plain}
\emph{Introduction - } Caloric effects in ferroic materials, where application/removal of external fields (magnetic, electric, or stress) can result in significant temperature changes, potentially allow for the development of clean and energy-efficient cooling technologies~\cite{Faehler_et_al:2011,Moya2014}. More recently, there has been growing interest in so-called multicaloric effects~\cite{stern-taulats2018,VOPSON20122067}, where more than one type of caloric effect can occur simultaneously, possibly allowing one to further optimize the total caloric response. The thermodynamic theory of multicaloric effects has been discussed in some detail~\cite{MENG2013567,Anand_2014,planes_multical}. However, most specific studies have focused on combining either electrocaloric or magnetocaloric effects with elastocaloric effects, thereby using applied stress or strain as an additional control parameter to enhance the overall caloric response~\cite{Lisenkov_et_al:2013,PhysRevB.94.214113,doi:10.1002/adma.201404725} and/or to reduce irreversibility problems~\cite{Liu2012,PhysRevB.91.224421,Liu2016,Gottschall2018}. Multicaloric effects in (single phase) materials combining magnetic and ferroelectric (FE) order, meanwhile, have remained relatively unexplored~\cite{Moya2014,stern-taulats2018}, perhaps due to challenges in finding suitable materials. Multiferroic materials with coexisting magnetic and FE orders have received much attention, not only because of a broad fundamental interest, but also due to promising technological applications~\cite{Spaldin/Cheong/Ramesh:2010,Spaldin/Ramesh:2019}. Often, however, their practical usefulness is hindered by low ordering temperatures or weak magnetoelectric (ME) coupling. Additionally, most magnetic ferroelectrics are in fact antiferromagnetic (AFM), which restricts their potential applications, since the AFM order does not couple to a homogeneous magnetic field. Here we show that an AFM multiferroic can, nevertheless, exhibit a very strong cross-caloric magnetic contribution to the electrocaloric effect (ECE)~\cite{multical_note}. Since caloric effects are generally largest near the relevant phase transitions, a strong cross-caloric effect can be expected near a so-called \emph{tetracritical point} (TCP)~\cite{LL_statphys}, where the critical temperatures of the two phase transitions coincide. Such a TCP has recently been predicted to occur in strained SrMnO$_3$~\cite{PhysRevMaterials.2.104409}; its existence can also be inferred from previous theoretical~\cite{PhysRevLett.104.207204} and experimental~\cite{Becher2015,acs.nanolett.5b04455,PhysRevB.97.235135} work. While perovskite-structure bulk SrMnO$_3$ is a cubic paraelectric G-type antiferromagnet~\cite{JPSJ.37.275}, it develops a FE distortion under tensile epitaxial strain~\cite{PhysRevLett.104.207204,Becher2015,acs.nanolett.5b04455,PhysRevB.97.235135}. Thereby, the FE critical temperature increases strongly with strain~\cite{PhysRevMaterials.2.104409}, while the AFM N\'eel temperature is less affected, resulting in an intersection of the FE and AFM phase boundaries at a certain strain value, and thus a TCP. Furthermore, since the Mn cation carries the magnetic moment and also takes part in the FE distortion, SrMnO$_3${} is expected to exhibit strong ME coupling, which is also implied by recent studies reporting a particularly strong spin-phonon coupling in this material~\cite{PhysRevB.84.104440,PhysRevB.89.064308}.
In this work, we explore ME coupling effects and the cross-caloric response in SrMnO$_3${} by constructing a Landau-type theoretical model considering all relevant magnetic and ferroelectric order parameters. We extract all parameters entering the free energy from \emph{first-principles}-based calculations, thus allowing for a realistic materials-specific description. We then apply the model to study ME coupling phenomena around the multiferroic TCP in SrMnO$_3$. We show that an applied electric field has a strong effect on the AFM order, shifting its critical temperature and increasing the corresponding order parameter, thereby drastically changing the entropy of the magnetic sub-system. This results in a huge magnetic cross-caloric contribution to the ECE, which is increased by about 60\,\% due to the ME coupling. \emph{Methods -} SrMnO$_3$ under epitaxial strain is predicted to show a number of different magnetic phases, including G-, C- and A-type AFM~\cite{Wollan/Koehler:1955}, and possibly also ferromagnetic (FM) order at large strains (near 5\%)~\cite{PhysRevLett.104.207204,PhysRevMaterials.2.104409}. Furthermore, in the cubic structure, there are three degenerate $\mathbf{q}$-vectors corresponding to each of the A-type and C-type AFM orders. When the cubic symmetry is broken, this degeneracy is lifted. Thus, we consider eight magnetic order parameters: FM [$\mathbf{q}=(0,0,0)$], G [$\mathbf{q}=(1,1,1)$], A [$\mathbf{q}=(0,0,1)$ or $\mathbf{q}=(0,1,0)$ or $\mathbf{q}=(1,0,0)$] and C [$\mathbf{q}=(1,1,0)$ or $\mathbf{q}=(1,0,1)$ or $\mathbf{q}=(0,1,1)$], where the reciprocal space vectors are given in units of $\pi$ divided by the real-space lattice constant along that direction. This includes all magnetic orders that have been reported to appear in SrMnO$_3$~\cite{PhysRevMaterials.2.104409}. Each of these magnetic order parameters can couple to the polar order $P$ that emerges under strain. Hence, we consider a Landau free energy of the form \begin{widetext} \begin{equation}\label{eq.LandauF} \mathcal{F}_q = \frac{1}{2} a_P(T,\eta) P^2 + \frac{b_P}{4} P^4 + \frac{1}{2} a_q(T,\eta) M_q^2 + \frac{b_q}{4} M_q^4 + \frac{\lambda_q(\eta)}{2} M_q^2P^2 - EP , \end{equation} \end{widetext} for each magnetic order parameter $M_\mathbf{q} = \frac{1}{N} \sum_i^N \mathrm{e}^{\mathrm{i} \mathbf{q} \cdot \mathbf{R}_i} \langle S_i \rangle$, where $\langle S_i \rangle$ is the thermodynamic average of the normalized spin at site $\mathbf{R}_i$, projected on the spin-quantization axis, and $N$ is the number of spins. The strain and temperature dependence enters in the second-order coefficients as $a_P = \alpha_P(T-T_{0}^{P}) + c_P \eta $ and $ a_q = \alpha_q(T-T_{0}^{q}) + c_q \eta$. At each strain $\eta$, temperature $T$, and electric field $E$, the free energy $\mathcal{F}_q$ is minimized with respect to $P$ and $M_q$, and the free energy is determined from ${\cal F} = \min_q {\cal F}_q$. The $q$ corresponding to the lowest free energy defines the equilibrium magnetic phase at that point in the phase diagram (a numerical sketch of this minimization is given below). All parameters in Eq.~\eqref{eq.LandauF} were determined from total energy calculations using density functional theory (DFT) and DFT-based effective Hamiltonian simulations~\cite{PhysRevMaterials.2.104409,suppl}. Specifically, the magnetic parameters were obtained by mapping DFT total energy calculations onto a Heisenberg Hamiltonian and extracting exchange interaction parameters.
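Concretely, the per-point minimization of Eq.~\eqref{eq.LandauF} can be sketched as follows (a minimal Python illustration with generic coefficients rather than the fitted SrMnO$_3$ parameters):
\begin{verbatim}
from scipy.optimize import minimize

def landau_f(x, aP, bP, aq, bq, lam, E):
    # F_q of Eq. (1) for one magnetic order parameter M_q.
    P, M = x
    return (0.5 * aP * P**2 + 0.25 * bP * P**4
            + 0.5 * aq * M**2 + 0.25 * bq * M**4
            + 0.5 * lam * M**2 * P**2 - E * P)

def equilibrium(aP, bP, aq, bq, lam=0.0, E=0.0):
    # Minimize over (P, M_q); several starts guard against
    # being trapped in the trivial stationary point (0, 0).
    best = None
    for x0 in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
        res = minimize(landau_f, x0, args=(aP, bP, aq, bq, lam, E))
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun

# Both orders condensed; stability requires lam**2 < bq*bP (or lam > 0).
(P, M), F = equilibrium(aP=-1.0, bP=1.0, aq=-1.0, bq=1.0, lam=-0.3)
\end{verbatim}
Repeating this minimization for each $q$ and taking the lowest $\mathcal{F}_q$ yields the phase diagram discussed below.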
By calculating the exchange interactions as functions of strain, the coupling between strain and magnetism is obtained, while exchange interactions evaluated in the presence of FE structural distortions allow the determination of the biquadratic magnetoelectric coupling coefficients $\lambda_q$. The purely ferroelectric parameters are determined from the strain-dependent transition temperature and saturation polarization obtained from first-principles-based effective Hamiltonians~\cite{PhysRevMaterials.2.104409}, and from DFT-calculated elastic/electrostrictive parameters. \emph{Results -} We first consider the case without ME coupling and zero applied field, i.e., $\lambda_q=0$ and $E=0$, and minimize the free energy in Eq.~\eqref{eq.LandauF} with respect to the various order parameters for different temperatures and strains in the range $0 \leq \eta \leq 5\%$. Identifying the phases with the lowest free energy for each strain and temperature results in the phase diagram shown in Fig.~\ref{fig.phasediag}, which agrees well with the one from our previous study using microscopic first-principles-based Hamiltonians~\cite{PhysRevMaterials.2.104409}. For small $T$ and $\eta$, there is a G-type AFM paraelectric (PE) phase, while at approximately 2\% strain there is a transition into a FE region and also a change to C-type [$\mathbf{q}=(1,0,1)$] AFM order. For large strain and low temperatures, an A-type [$\mathbf{q}=(0,0,1)$] AFM FE region appears. In the following, C- and A-type AFM always refer to the $\mathbf{q}$-vectors $(1,0,1)$ and $(0,0,1)$, since these are the only ones that appear in the phase diagram. We note that the ferromagnetism that has been predicted for large strains is only stabilized due to its coupling to the FE order~\cite{PhysRevMaterials.2.104409}, which is not yet included in our free energy at this point. Most notably, the phase diagram in Fig.~\ref{fig.phasediag} reveals a TCP, where the magnetic and FE critical temperatures coincide within the region with C-type AFM order, at $\eta_\text{tcp}=2.63\%$ and $T_\text{tcp}=162~\mathrm{K}$. \begin{figure}[bt] \centering \includegraphics[trim={0cm 0 0 0},clip,width=0.495\textwidth]{PD_uncoupled_coupled_inset.pdf} \caption{Ferroic phase diagram of SrMnO$_3$ at zero applied field obtained within our Landau theory for the case without ME coupling ($\lambda_q=0$). The inset shows the effect of non-zero ME coupling on the region around the TCP. The dashed lines in the inset indicate the FE-PE and the C-paramagnetic (PM) phase boundaries with $\lambda_q=0$.} \label{fig.phasediag} \end{figure} Next, we evaluate the strain-dependent ME coupling parameters $\lambda_q$ by computing the magnetic exchange interactions as functions of the FE displacements for different strains. As shown in the supplementary material~\cite{suppl}, the lowest-order biquadratic coupling in Eq.~\eqref{eq.LandauF} turns out to be insufficient to describe the variation of the exchange couplings for large polarization, which occurs in the region of the phase diagram with large strain and low temperatures. A satisfactory description of this region would require coupling terms of higher order in $P$, which in turn would necessitate additional higher-order terms to guarantee stable, physical solutions, and thus more parameters in the free energy. In the following, we therefore focus on the part of the phase diagram which is most interesting in the present context, i.e., the region around the TCP, where both order parameters are small~\cite{ME-coupling-note}.
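The extraction of the coupling coefficients from such data amounts to a fit of the exchange constants in $P^2$, which can be sketched as follows (with hypothetical sample values, not the computed SrMnO$_3$ data):
\begin{verbatim}
import numpy as np

# Exchange constant J (meV) sampled versus FE mode amplitude P (arb. units).
P = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
J = np.array([-6.10, -5.95, -5.52, -4.90, -4.02])

# Biquadratic ME coupling implies J(P) ~ J(0) + j2 * P**2 to lowest order;
# lambda_q is assembled from the j2 of the exchange paths entering M_q.
j2, J0 = np.polyfit(P**2, J, 1)
residual = J - (J0 + j2 * P**2)  # grows at large P, where the
                                 # biquadratic form becomes insufficient
\end{verbatim}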
For the C-type order relevant around the TCP, we find a negative ME coupling, which varies relatively weakly with strain. We point out that, previously, a positive ME coupling coefficient $\lambda_\mathrm{G}$ has been found for cubic Sr$_{1-x}$Ba$_x$MnO$_3$~\cite{PhysRevLett.107.137601,PhysRevLett.109.107601}, meaning that G-type AFM order and ferroelectricity couple unfavorably. This is indeed consistent with our results~\cite{suppl}. However, we also find that the coupling coefficients differ for different types of magnetic order and, furthermore, are strongly strain-dependent. The zero-field phase diagram for the region $2.2\% \leq \eta \leq 3.0\%$ and $100~\mathrm{K} \leq T \leq 300~\mathrm{K}$, now including the ME coupling, is shown in the inset of Fig.~\ref{fig.phasediag}. One drastic effect of the coupling is that it eliminates the A-type AFM region from the phase diagram. This is because $\lambda_A$ is found to be strongly positive; since A-type order appears only in the FE region, it is strongly disfavored by the coupling, while C-type order is favored. In contrast, the coupling does not alter the position of the TCP, since both $M_C$ and $P$, and thus the effect of the coupling term, vanish at this point. Away from the TCP, the upper of the two ordering temperatures also remains unaltered, while the lower one is increased by the negative ME coupling. This can also be seen from Figs.~\ref{fig.OrdPar_T}(a) and (b), which show the temperature dependence of the FE polarization $P$ and the C-AFM order parameter $M_\text{C}$, both with (black) and without (blue) ME coupling, for three different strain values. At $\eta = 2.80\%$ (where $T_\text{c}^{C}<T_\text{c}^{P}$), $T_\text{c}^{P}$ is unaffected, while the magnetic order is changed from A-type to C-type, with an increase in ordering temperature from $T_c^A = 170~\mathrm{K}$ to $T_\text{c}^{C} = 174~\mathrm{K}$. In addition, the polarization is unaffected by the coupling at temperatures above $T_\text{c}^{C}=174~\text{K}$, while the coupling enhances the polarization at lower temperatures, producing a kink in $P(T)$ at $T_\text{c}^{C}$. The analogous behavior, but with the roles of $P$ and $M_\text{C}$ exchanged, is observed at $\eta = 2.50\%$ (where $T_\text{c}^{C}>T_\text{c}^{P}$). Here, the coupling does not alter $T_\text{c}^{M}$, while it shifts $T_\text{c}^{P}$ from 127\,K to 139\,K, resulting in a kink in $M_\text{C}(T)$ at $T_\text{c}^{P} = 139~\text{K}$. At $\eta_\text{tcp} = 2.63\%$, on the other hand, the coinciding critical temperatures are unaltered by the coupling term. However, below $T_\text{tcp}=162~\text{K}$, both order parameters are enhanced compared to the case with $\lambda_q=0$. This behavior is consistent with the general phenomenological theory outlined in Ref.~[\onlinecite{planes_multical}], where it was also shown that both transitions remain second order if $\lambda_q^2 < b_q b_p$ (or if $\lambda_q > 0$). According to our results, this condition is fulfilled for every magnetic order and strain considered. \begin{figure}[hbt] \centering \includegraphics[trim={0cm 0 0 0},clip,width=0.495\textwidth]{ordpars_of_T_E_eta.pdf} \caption{(a) and (b): Order parameters (black, left axes) and susceptibilities (red, right axes) as functions of temperature for the three strains of 2.5\% (solid lines), 2.63\% (dashed lines) and 2.8\% (dash-dotted lines). (a) shows the electric polarization and the electric susceptibility, while (b) shows the magnetic order parameter and the ME susceptibility.
The order parameters for zero ME coupling are shown in blue. (c) and (d): Temperature dependence of the FE (c) and magnetic (d) order parameters for strains of 2.5\% (red solid lines), 2.63\% (green dashed lines) and 2.8\% (blue dash-dotted lines) with applied electric fields of 0, 50, 100, 150 and 200 kV/cm. The darker colors correspond to larger fields. The inset in (d) shows the magnetic transition temperature as a function of electric field, with the color coding corresponding to the main plot.} \label{fig.OrdPar_T} \end{figure} The zero-field electric susceptibility $\chi_E=\frac{\mathrm{d} P}{\mathrm{d} E}|_{E=0}$ (for the case with ME coupling) is also plotted in Fig.~\ref{fig.OrdPar_T}(a) (red, right $y$-axis). As expected, this susceptibility diverges at the FE transitions. Additionally, the magnetoelectric susceptibility \begin{equation} \chi_{ME} = \left. \frac{\mathrm{d} M_q}{\mathrm{d} E} \right|_{E=0} = \begin{cases} 0, & \text{if}\ M_q=0~\text{or}~P=0 \\ -\frac{\lambda_q P}{b_q M_q} \chi_E, & \text{if}\ M_q\neq0~\text{and}~P\neq0 \end{cases} \end{equation} which follows from differentiating the equilibrium conditions $\partial \mathcal{F}/\partial P = \partial \mathcal{F}/\partial M_q = 0$ with respect to $E$, is plotted in Fig.~\ref{fig.OrdPar_T}(b) (red, right $y$-axis). This quantity describes the magnetic response to an applied electric field and is non-zero only in the multiferroic regions of the phase diagram, i.e., where both the magnetic and FE order parameters are non-zero. The ME susceptibility then diverges at the lower of the two transition temperatures, either because $\chi_E$ diverges if the FE transition is the lower one, or because $M_q \rightarrow 0$ if the magnetic transition is the lower one. Thus, $\chi_{ME}$ diverges at $T_\text{c}^{P} = 139~\text{K}$ for $\eta = 2.50\%$, at $T_\text{c}^{M} = T_\text{c}^{P} = 162~\text{K}$ for $\eta_\mathrm{tcp} = 2.63\%$, and at $T_\text{c}^{M} = 174~\text{K}$ for $\eta = 2.80\%$. The divergence is particularly pronounced at $\eta_\mathrm{tcp}$, where $\chi_E$ diverges simultaneously as $M_q \rightarrow 0$, causing $\chi_\mathrm{ME}$ to diverge as $(T_\mathrm{c} - T)^{-1}$ instead of $(T_\mathrm{c} - T)^{-1/2}$ when the relevant critical temperature $T_\mathrm{c}$ is approached from below~\cite{critexpcomment}. We now discuss the effect of applying a finite electric field. In Figs.~\ref{fig.OrdPar_T}(c)-(d), the FE and magnetic order parameters are plotted as functions of temperature for the previously discussed strain values and various applied electric fields. As expected, an electric field induces a finite electric polarization at all temperatures, which, however, decreases towards high $T$; the field thus removes the second-order FE transition. The effect on the magnetic order parameter is markedly different. While the electric field also enhances $M_\text{C}$, owing to the negative sign of $\lambda_\text{C}$, the magnetic order parameter still shows a second-order transition and is identically zero above the corresponding transition temperature. The magnetic transition temperature is, however, field-dependent, and the inset of Fig.~\ref{fig.OrdPar_T}(d) shows $T_\text{c}^{C}$ as a function of the applied electric field. The increase in $T_\text{c}^{C}$ with $E$ appears close to linear, and an applied field of 100 kV/cm increases $T_\text{c}^{C}$ by 2.1 K for $\eta=2.5\%$, by 5.3 K for $\eta=2.63\%$, and by 3.5 K for $\eta=2.8\%$. The largest effect of the electric field on $T_\text{c}^{C}$ is thus found at $\eta_\mathrm{tcp}$. We note that SrMnO$_3$ is not a linear ME material.
Nevertheless, in order to get a better idea of the magnitude of the electric field effect on $M_\text{C}$, one can see from Fig.~\ref{fig.OrdPar_T}(d) that an electric field of $50~\mathrm{kV/cm}$ alters $M_\mathrm{C}$ by about 0.15 at the TCP. Considering a Mn magnetic moment of $3\mu_\mathrm{B}$, one can estimate an effective ME coefficient of $\alpha_\mathrm{eff} = \frac{\Delta M}{\Delta E} = 15 \cdot 10^{-3}~\mathrm{\Omega^{-1}}$, which is four orders of magnitude larger than that found in conventional linear magnetoelectrics such as Cr$_2$O$_3$~\cite{doi:10.1080/00150199408245099,PhysRevLett.101.117201}. Based on the electric field response of both the FE and magnetic order parameters, we can now address the ECE in SrMnO$_3$. From the results presented so far, it is apparent that, due to the negative ME coupling coefficient, an applied electric field has an ordering tendency on both the FE and the magnetic subsystem, and hence reduces the entropy of both. This results in a magnetic contribution to the ECE, referred to as cross-caloric~\cite{planes_multical}. The caloric response is quantified by the isothermal entropy change under field application or removal. From the free energy in Eq.~\eqref{eq.LandauF}, the entropy at a given temperature and field $E$ is $S(T,E)=-\left( \frac{\partial \mathcal{F} }{\partial T} \right)_E$, while the entropy change when increasing the field from 0 to $E$ is $\Delta S (T,E) = S(T,E) - S(T,0) = -\frac{1}{2} \alpha_P \left( P^2(T,E) - P^2(T,0)\right) -\frac{1}{2} \alpha_q \left( M_q^2(T,E) - M_q^2(T,0)\right)$, since only $a_P$ and $a_q$ depend explicitly on temperature. Here, the first term is the usual ECE, while the second term is the magnetic contribution, i.e., the cross-caloric response. \begin{figure}[tbh] \centering \includegraphics[trim={0cm 0 0 0},clip,width=0.495\textwidth]{dS_dT_of_T_eta.pdf} \caption{The ECE as a function of temperature. (a)-(c) show the isothermal entropy change as a field of 150~kV/cm is applied, for the three different strains of 2.5\%, 2.63\% and 2.8\%, respectively; (b) also contains an inset showing the field dependence at $T=140~\mathrm{K}$ (dashed) and $T=162~\mathrm{K}$ (solid). The total entropy change is decomposed into magnetic (red) and electric (blue) contributions. Additionally, the result obtained without ME coupling ($\lambda_q=0$) is shown (black line). (d) shows an estimate of the adiabatic temperature change corresponding to the total isothermal entropy change for strains of 2.5\%, 2.63\% and 2.8\%, and applied electric fields of 100, 150 or 200 kV/cm.} \label{fig.Calorics} \end{figure} Figs.~\ref{fig.Calorics}(a)-(c) show the isothermal entropy change in SrMnO$_3$ as a function of temperature for an applied field of 150\,kV/cm, at the three representative strain values discussed previously ($\eta=2.5\%$, $\eta=2.63\%$, and $\eta=2.8\%$). The total entropy change has been decomposed into magnetic and electric contributions, and the ECE obtained without ME coupling ($\lambda_q = 0$) is also plotted as a black line. The total caloric response exhibits features (peaks and/or kinks) at all critical temperatures (with or without field). Generally, the electric contribution is non-zero at all temperatures and peaks at the zero-field $T_\text{c}^{P}{}$. Below but near $T_\text{c}^{C}(E)$, it is enhanced compared to the case without ME coupling. For $\eta = 2.8\,\%$ this even leads to an additional small peak at $T_\text{c}^{C}(0)$. Hence, the ME coupling can enhance the ECE not only by adding the magnetic cross-caloric effect, but also by enhancing the electric part.
The magnetic contribution vanishes above $T_\text{c}^{C}(E)$, but rises sharply between $T_\text{c}^{C}(E)$ and $T_\text{c}^{C}(0)$, peaking at $T_\text{c}^{C}(0)$; it then slowly decreases again towards lower $T$, except for the case of $\eta=2.5\,\%$, where it actually peaks at $T_\text{c}^{P}{}$. This is related to the kink in $M_\text{C}(T)$ at this temperature for zero field [see Fig.~\ref{fig.OrdPar_T}(b)]. The inset in Fig.~\ref{fig.Calorics}(b) shows the magnetic and electric contributions to the entropy change at $\eta=2.63\%$ and temperatures of 140~K and 162~K, as functions of the applied electric field, illustrating an approximately linear increase of the magnitude of the entropy change with the field. Most strikingly, at all three strains the magnetic contribution reaches approximately 60\% of the electric contribution, or more than a third of the total entropy change. This result is of particular relevance, since it shows that the ME cross-caloric effect can significantly increase the caloric response available for solid state cooling. Furthermore, the effect is of similar size for the three different strains, indicating that a very careful tuning of the two critical temperatures to coincide is not necessary. It is also interesting to note that, in the case of $\eta = 2.8\%$, the largest total ECE is not obtained at the FE phase transition, but at the magnetic one. This is because the magnetic transition is the lower-temperature transition in this case, so that the two contributions add up, while at the FE transition the magnetic contribution vanishes. Another instructive quantity to characterize caloric effects is the adiabatic temperature change $\Delta T$, which can be estimated from the entropy change $\Delta S$ via the thermodynamic relation $\mathrm{d} T = -\frac{T}{C} \mathrm{d} S$, where $C$ is the specific heat at constant field, without the contributions of the FE or magnetic degrees of freedom~\cite{Gruenebohm_et_al:2018}. We use this relation to estimate $\Delta T \approx - \frac{T}{C} \Delta S$, assuming that $\Delta T \ll T$ and that $C$ varies negligibly over $\left[ T, T+\Delta T\right]$. For $C$, we use the temperature-dependent phonon specific heat, which we obtained for cubic SrMnO$_3$ from frozen phonon calculations in the harmonic approximation~\cite{TOGO20151}. This results in a double counting of the phonon modes responsible for ferroelectricity, which might lead to a slight underestimate of $\Delta T$. The resulting $\Delta T$ is plotted in Fig.~\ref{fig.Calorics}(d), for the same strains as in (a)-(c) and three different applied fields. The largest temperature changes, for $E=200~\mathrm{kV/cm}$, are about 5~K. This is of the order of magnitude needed to be technologically relevant and of similar size as the ECE found in high-performing electrocaloric materials for similar field strengths~\cite{Moya2014}. Although these are estimates, the temperature changes in Fig.~\ref{fig.Calorics}(d) show that multiferroic perovskite oxides can indeed be of potential technological relevance within the area of solid state cooling. \emph{Summary and conclusions -} We have used a Landau theory, allowing several magnetic order parameters to couple to a FE polarization, to study ME coupling phenomena around the TCP appearing in the strain-temperature phase diagram of SrMnO$_3$. Since all parameters entering the theory have been determined from \emph{first-principles} DFT-based calculations, realistic materials-specific predictions can be made without experimental input.
The ME coupling is found to be enhanced at the TCP, and a huge response of the magnetic order parameter to electric fields is observed. Investigating the ECE, we find a large cross-caloric contribution due to the electric-field-induced magnetic entropy change, resulting in an increase of about 60\% in the total caloric response. This provides a new route to greatly enhance caloric effects for solid state cooling applications, by using multiferroic materials with coupled magnetic and electric order parameters. It also provides a unique example where AFM order in a multiferroic material can be of great practical use. Recent work proving that highly strained multiferroic films of SrMnO$_3$ can be grown~\cite{PhysRevB.97.235135} is promising regarding the experimental verification of these results, while similar studies on Ba-doped systems~\cite{PhysRevLett.107.137601,PhysRevMaterials.2.054408,doi:10.1063/1.5090824} would also be of interest. Further insights could also be obtained by studies using other computational methods, e.g., based on microscopic models for coupled spin-lattice dynamics~\cite{PhysRevLett.99.227602,PhysRevB.99.104302}. \emph{Acknowledgments -} A.E. is grateful to Quintin Meier for discussions. This work was supported by the Swiss National Science Foundation (project code 200021E-162297) and the German Science Foundation under the priority program SPP 1599 (``Ferroic Cooling''). Computational work was performed on resources provided by the Swiss National Supercomputing Centre (CSCS). \section{First Principles Computational Methods} All density functional theory (DFT) calculations are performed as in Ref.~[\onlinecite{PhysRevMaterials.2.104409}], i.e., with VASP~\cite{KRESSE199615,PhysRevB.49.14251,PhysRevB.47.558} and projector augmented wave (PAW) pseudopotentials~\cite{PhysRevB.50.17953,PhysRevB.59.1758}. The exchange-correlation functional is described with the PBEsol version of the generalized gradient approximation (GGA)~\cite{PhysRevLett.100.136406}. A Coulomb repulsion~\cite{PhysRevB.57.1505} of $U_\mathrm{eff}=3~\mathrm{eV}$ is included on the Mn $d$-electrons. The phonon specific heat was obtained from frozen phonon calculations performed with the Phonopy software~\cite{TOGO20151}. \section{Landau Theory for {S\MakeLowercase{r}M\MakeLowercase{n}O$_3$}} We consider a Landau free energy of the form \begin{equation}\label{eq.LandauF} \mathcal{F}_q = \frac{1}{2} a_P(T,\eta) P^2 + \frac{b_P}{4} P^4 + \frac{1}{2} a_q(T,\eta) M_q^2 + \frac{b_q}{4} M_q^4 + \frac{\lambda_q(\eta)}{2} M_q^2P^2 - EP , \end{equation} for each magnetic order $q$, with the strain and temperature dependence entering as \begin{equation} a_q = \alpha_q (T - T_\text{0}^{q}) + c_q \eta \end{equation} \begin{equation} a_P = \alpha_P (T - T_\text{0}^{P}) + c_P \eta, \end{equation} where $T$ denotes temperature, $T_{0}^i$\footnote{We use the notation that $T_{c}^i$ is the strain and field dependent critical temperature of order parameter $i$, while $T_{0}^i$ is the value at zero strain and field.} is the critical temperature at zero strain and applied field for the order parameter $i$, $P$ is the electric polarization, $M_q = \frac{1}{N} \sum_j^N \mathrm{e}^{\mathrm{i} \mathbf{q} \cdot \mathbf{R}_j} \langle S_j \rangle$ is the magnetic order parameter for the order labeled $q$, with $\langle S_j \rangle$ denoting the thermodynamic average of the $j$th of the $N$ unitless, normalized spins $S_j$, and $\eta$ denotes the (biaxial tensile) strain.
$a_i$ and $b_i$ are the quadratic and quartic coefficients for the order parameter $i$, $c_i$ its coupling to strain, and $\lambda_q$ the coupling between $P$ and the magnetic order parameter labeled by $q$. We consider magnetic order parameters corresponding to ferromagnetism (F) as well as G-, C-, and A-type antiferromagnetism. In the following we describe in detail how all the parameters entering the Landau free energy in Eq.~\eqref{eq.LandauF} are determined. We first consider the uncoupled case ($\lambda_q=0$), in which ferroelectricity and magnetism can be treated separately; this is discussed in Secs.~\ref{sec.FE}-\ref{sec.mag}, respectively. The calculation of the coupling constants $\lambda_q$ is then discussed in Sec.~\ref{sec.coup}, after which the zero-field susceptibilities and the thermodynamics of the given Landau theory are discussed in Sec.~\ref{sec.susc_therm}. \subsection{Ferroelectricity}\label{sec.FE} The ferroelectricity, without coupling to magnetism and with no applied field, is described by a free energy \begin{equation}\label{P_F} F_P = \frac{1}{2}\left[ \alpha_P(T-T_\text{0}^{P}) + c_P \eta \right] P^2 + \frac{b_P}{4} P^4, \end{equation} with four parameters, $\alpha_P$, $T_\text{0}^{P}$, $c_P$ and $b_P$, to be determined. Some of these could be determined from total energy DFT calculations for fixed atomic displacements corresponding to the ferroelectric (FE) soft mode displacement amplitude $u$ (see Ref.~\onlinecite{PhysRevMaterials.2.104409}), by fitting a curve $E(u)=E_0 + a_2 u^2 + a_4 u^4$, where the polarization $P$ is proportional to $u$ via the Born effective charges $Z^*$. Such fittings are essentially presented in Ref.~[\onlinecite{PhysRevMaterials.2.104409}]. Instead of using such a fitting procedure here, we note that any four suitable pieces of information suffice to fix the four unknown parameters by solving a linear system of equations (a numerical sketch of this four-step determination is given below). The following four are chosen: two parameters, namely $T_\text{0}^{P}$ and the quotient $-c_P / \alpha_P$, are obtained by fitting $T_\text{c}^{P}(\eta) = T_\text{0}^{P} - \frac{c_P}{\alpha_P} \eta$ to the results of the \emph{first-principles}-based effective Hamiltonian calculations of Ref.~[\onlinecite{PhysRevMaterials.2.104409}], where the FE critical temperature is found to be essentially linear in strain. The parameter $c_P$ is determined separately by setting $c_P=B_\mathrm{1xx} + B_\mathrm{1yy}(1-2B_{12}/B_{11})$, where the parameters $B_\mathrm{1xx}$, $B_\mathrm{1yy}$, $B_{12}$ and $B_{11}$ are taken from Ref.~[\onlinecite{PhysRevMaterials.2.104409}]; $\alpha_P$ then follows from the fitted quotient. Finally, $b_P$ is set to reproduce the DFT saturation polarization at 5\% strain and zero temperature using $P^2(T=0,\eta=0.05)=(\alpha_PT_\text{0}^{P} - 0.05c_P)/b_P$, which follows from minimizing Eq.~\eqref{P_F}. With the resulting parameter values, we obtain the FE phase diagram shown in Fig.~\ref{fig.P}(a), which precisely reproduces the behavior of the effective Hamiltonian of Ref.~\onlinecite{PhysRevMaterials.2.104409} in terms of the critical temperatures as functions of strain and the saturation polarization at $\eta=5\%$. However, all parameters of the effective Hamiltonian were obtained from DFT calculations for the G-type antiferromagnetic (AFM) state, and thus the effect of the coupling between G-type AFM order and the ferroelectricity needs to be subtracted.
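As referenced above, the four-step determination can be sketched numerically as follows (with hypothetical input numbers, not the actual fitted SrMnO$_3$ values):
\begin{verbatim}
import numpy as np

# Step 1: linear fit of the FE critical temperature versus strain,
# Tc(eta) = T0 + slope*eta with slope = -cP/alphaP (hypothetical data).
eta = np.array([0.02, 0.03, 0.04, 0.05])
Tc  = np.array([60.0, 260.0, 460.0, 660.0])
slope, T0 = np.polyfit(eta, Tc, 1)

# Step 2: cP from the DFT elastic/electrostrictive constants.
cP = -4.0e3                  # hypothetical value
# Step 3: alphaP follows from the fitted quotient.
alphaP = -cP / slope
# Step 4: bP reproduces the T = 0 saturation polarization at eta = 0.05,
# using P^2 = -aP/bP from minimizing F_P.
Ps = 0.25                    # hypothetical saturation polarization
bP = (alphaP * T0 - cP * 0.05) / Ps**2
\end{verbatim}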
According to Eq.~\eqref{eq.LandauF}, this coupling contributes a term $\frac{\lambda_G}{2}M_\text{G}^2P^2$ to $F_P$ compared to Eq.~\eqref{P_F}, which shifts the zero-strain critical temperature by $\lambda_G M_\text{G}^2/\alpha_P$ and modifies $b_P$ by $-\lambda_G M_\text{G}^2/P_s^2(T=0,\eta=0.05)$, with $M_\text{G}^2=1$ being the fully saturated G-type AFM order parameter at $T=0$. The ferroelectric phase diagram with the corrected parameters $T_{0}^{P}$ and $b_P$ (using the value obtained for $\lambda_G$ as described in Sec.~\ref{sec.coup}) is shown in Fig.~\ref{fig.P}(b). The main effect of correcting for the magnetoelectric (ME) coupling is a shift of the critical temperature as a function of strain, and corresponding changes in the critical strain as well as in the saturation polarization values. \begin{figure}[hbt!] \centering \includegraphics[width=0.45\textwidth]{pol_eta_T_4.pdf} \includegraphics[width=0.45\textwidth]{pol_eta_T_5.pdf} \caption{FE soft mode displacement $u \propto P$ as function of strain and temperature from the Landau theory with parameters obtained before (a) and after (b) correcting for the ME coupling, as discussed in the text. The red crosses and lines show the critical temperatures from the effective Hamiltonian approach of Ref.~[\onlinecite{PhysRevMaterials.2.104409}] and the corresponding linear fit.} \label{fig.P} \end{figure} \subsection{Magnetic order}\label{sec.mag} The magnetic order, excluding the coupling to the polarization, is described by the free energy \begin{equation}\label{Fmag} F_q = \frac{1}{2}\left[ \alpha_q(T-T_\text{0}^{q}) + c_q \eta\right] M_q^2 + \frac{b_q}{4} M_q^4 . \end{equation} Each $F_q$ is minimized with respect to $M_q$, and the free energy is then given by $F_M = \min_{q} \left( \min_{M_q} F_q \right)$, where the $q$ corresponding to the minimal free energy describes the equilibrium magnetic phase at that point of the phase diagram. Each magnetic order parameter is characterized by the parameters $\alpha_q$, $T_\text{0}^{q}$, $c_q$ and $b_q$. These are determined via a mapping of the magnetic energies onto a Heisenberg Hamiltonian \begin{equation}\label{eq.Heis} E = -\frac{1}{2} \sum_{i,j} J_{ij} \mathbf{S}_i \cdot \mathbf{S}_j . \end{equation} Fig.~\ref{fig.supercell} illustrates the three nearest-neighbor exchange interactions $J_x$, $J_y$ and $J_z$ for the coupling between spins on Mn atoms located relative to each other in the $x$, $y$ and $z$ directions, respectively. In the cubic structure these are all equivalent: $J_x=J_y=J_z=J_1$. With biaxial tensile strain they split into two inequivalent interactions, in-plane (ip) $J_x=J_y=J_1^\mathrm{ip}$ and out-of-plane (op) $J_z=J_1^\mathrm{op}$. Considering also polar displacements can make all three interactions inequivalent. The second-nearest-neighbor interactions $J_2$ are likewise all equivalent in the cubic phase, while biaxial tensile strain results in two inequivalent couplings, in-plane and out-of-plane, as shown in Fig.~\ref{fig.supercell}. The second-nearest-neighbor interactions are, however, kept fixed at the values for the cubic structure, since they are small and do not change sign with strain or polar distortions. They must nevertheless be included, at least in the cubic structure, in order to stabilize C-type AFM order over A-type in some strain range, as predicted by the DFT calculations to which these parameters are fitted (a numerical sketch of this fit is given below).
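As referenced above, the extraction of the exchange parameters from DFT total energies amounts to a linear least-squares problem, sketched here with hypothetical energies (anticipating the saturated mean-field energies per formula unit derived in Eq.~\eqref{eq.Heis_E_of_ordpar_cube} below):
\begin{verbatim}
import numpy as np

# Columns: (E0, J1, J2); rows: FM, G-, C-, A-type order with M_q = 1.
A = np.array([[1.0, -3.0, -6.0],   # FM
              [1.0,  3.0, -6.0],   # G-type AFM
              [1.0,  1.0,  2.0],   # C-type AFM
              [1.0, -1.0,  2.0]])  # A-type AFM
# Hypothetical DFT total energies (eV/f.u.), G-type lowest as in SrMnO3.
E_dft = np.array([0.132, -0.108, -0.044, 0.036])
(E0, J1, J2), *_ = np.linalg.lstsq(A, E_dft, rcond=None)
# Here J1 < 0 (AFM nearest-neighbor coupling) and |J2| << |J1|.
\end{verbatim}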
The magnetic order parameters are defined as \begin{equation} \label{eq.Mq} M_\mathbf{q} = \frac{1}{N} \sum_j^N \mathrm{e}^{\mathrm{i} \mathbf{q} \cdot \mathbf{R}_j} \langle S_j \rangle , \end{equation} in terms of the $N=8$ spins corresponding to the Mn sites inside a $2\times 2\times 2$ supercell of the SrMnO$_3$ structure, as shown in Fig.~\ref{fig.supercell}. This cell is compatible with all relevant magnetic order parameters, including ferromagnetic [$\mathbf{q}=(0,0,0)$], G-type AFM [$\mathbf{q}=(1,1,1)$], C-type AFM [$\mathbf{q}=(1,1,0)$, $(1,0,1)$, or $(0,1,1)$], and A-type AFM [$\mathbf{q}=(0,0,1)$, $(0,1,0)$, or $(1,0,0)$] order. Here, all wave vectors are given in units of $\pi$ divided by the corresponding real-space lattice constant. Within cubic symmetry, the three $\mathbf{q}$-vectors corresponding to C-type or A-type AFM order, respectively, are equivalent and thus energetically degenerate. When we consider biaxial tensile strain, resulting in tetragonal symmetry, however, $(1,1,0)$ becomes different from $(1,0,1)$ and $(0,1,1)$. Similarly, for A-type order, $(0,0,1)$ becomes inequivalent to $(1,0,0)$ and $(0,1,0)$. Introducing a polarization as well results in additional symmetry breaking. Hence, we initially consider all eight $\mathbf{q}$-vectors listed above, that is, one each for the ferromagnetic and G-type AFM orders, and three each for A- and C-type antiferromagnetism. \begin{figure}[hbt!] \centering \includegraphics[width=0.5\textwidth]{simple_tet.pdf} \caption{$2\times2\times 2$ cubic (or tetragonal in the strained case) perovskite supercell, considering only the B-site (Mn) atoms, which form a simple cubic (tetragonal) lattice. } \label{fig.supercell} \end{figure} Specifically, for these 8 order parameters, Eq.~\eqref{eq.Mq} becomes: \begin{align}\label{eq.ordpar1} M_{000} & = \frac{1}{8}\left( \langle S_1\rangle + \langle S_2\rangle + \langle S_3\rangle + \langle S_4\rangle + \langle S_5\rangle + \langle S_6\rangle + \langle S_7\rangle + \langle S_8\rangle \right) \\ M_{111} & = \frac{1}{8}\left( \langle S_1\rangle - \langle S_2\rangle + \langle S_3\rangle - \langle S_4\rangle - \langle S_5\rangle + \langle S_6\rangle - \langle S_7\rangle + \langle S_8\rangle \right)\\ M_{110} & = \frac{1}{8}\left( \langle S_1\rangle - \langle S_2\rangle + \langle S_3\rangle - \langle S_4\rangle + \langle S_5\rangle - \langle S_6\rangle + \langle S_7\rangle - \langle S_8\rangle \right)\\ M_{101} & = \frac{1}{8}\left( \langle S_1\rangle + \langle S_2\rangle - \langle S_3\rangle - \langle S_4\rangle - \langle S_5\rangle - \langle S_6\rangle + \langle S_7\rangle + \langle S_8\rangle \right)\\ M_{011} & = \frac{1}{8}\left( \langle S_1\rangle - \langle S_2\rangle - \langle S_3\rangle + \langle S_4\rangle - \langle S_5\rangle + \langle S_6\rangle + \langle S_7\rangle - \langle S_8\rangle \right)\\ M_{001} & = \frac{1}{8}\left( \langle S_1\rangle + \langle S_2\rangle + \langle S_3\rangle + \langle S_4\rangle - \langle S_5\rangle - \langle S_6\rangle - \langle S_7\rangle - \langle S_8\rangle \right) \\ M_{010} & = \frac{1}{8}\left( \langle S_1\rangle - \langle S_2\rangle - \langle S_3\rangle + \langle S_4\rangle + \langle S_5\rangle - \langle S_6\rangle - \langle S_7\rangle + \langle S_8\rangle \right) \\ M_{100} & = \frac{1}{8}\left( \langle S_1\rangle + \langle S_2\rangle - \langle S_3\rangle - \langle S_4\rangle + \langle S_5\rangle + \langle S_6\rangle - \langle S_7\rangle - \langle S_8\rangle \right) \label{eq.ordpar8} \end{align} or $\mathbf{M} = V \mathbf{S}$ with \begin{equation}
\label{eq.Pmat} V = \frac{1}{8} \begin{pmatrix} 1 & ~~1 & ~~1 & ~~1 & ~~1 & ~~1 & ~~1 & ~~1 \\ 1 & -1 & ~~1 & -1 & -1 & ~~1 & -1 & ~~1 \\ 1 & -1 & ~~1 & -1 & ~~1 & -1 & ~~1 & -1 \\ 1 & ~~1 & -1 & -1 & -1 & -1 & ~~1 & ~~1 \\ 1 & -1 & -1 & ~~1 & -1 & ~~1 & ~~1 & -1 \\ 1 & ~~1 & ~~1 & ~~1 & -1 & -1 & -1 & -1 \\ 1 & -1 & -1 & ~~1 & ~~1 & -1 & -1 & ~~1 \\ 1 & ~~1 & -1 & -1 & ~~1 & ~~1 & -1 & -1 \\ \end{pmatrix} \quad \mathrm{and} \quad V^{-1}=8V^\mathrm{T}=\begin{pmatrix} 1 & ~~1 & ~~1 & ~~1 & ~~1 & ~~1 & ~~1 & ~~1 \\ 1 & -1 & -1 & ~~1 & -1 & ~~1 & -1 & ~~1 \\ 1 & ~~1 & ~~1 & -1 & -1 & ~~1 & -1 & -1 \\ 1 & -1 & -1 & -1 & ~~1 & ~~1 & ~~1 & -1 \\ 1 & -1 & ~~1 & -1 & -1 & -1 & ~~1 & ~~1 \\ 1 & ~~1 & -1 & -1 & ~~1 & -1 & -1 & ~~1 \\ 1 & -1 & ~~1 & ~~1 & ~~1 & -1 & -1 & -1 \\ 1 & ~~1 & -1 & ~~1 & -1 & -1 & ~~1 & -1 \\ \end{pmatrix} . \end{equation} Here $\langle S_i\rangle $ denotes the thermal average of spin $S_i$ on site $i$, projected on the spin quantization axis. Using $V^{-1}$, Eqs.~\eqref{eq.ordpar1}--\eqref{eq.ordpar8} can be inverted to \begin{align}\label{eq.spinordpar1} \langle S_1\rangle & = M_{000} + M_{111} + M_{110} + M_{101} + M_{011} + M_{001} + M_{010}+ M_{100} \\ \langle S_2\rangle & = M_{000} - M_{111} - M_{110} + M_{101} - M_{011} + M_{001} - M_{010}+ M_{100} \\ \langle S_3\rangle & = M_{000} + M_{111} + M_{110} - M_{101} - M_{011} + M_{001} - M_{010}- M_{100} \\ \langle S_4\rangle & = M_{000} - M_{111} - M_{110} - M_{101} + M_{011} + M_{001} + M_{010}- M_{100} \\ \langle S_5\rangle & = M_{000} - M_{111} + M_{110} - M_{101} - M_{011} - M_{001} + M_{010}+ M_{100} \\ \langle S_6\rangle & = M_{000} + M_{111} - M_{110} - M_{101} + M_{011} - M_{001} - M_{010}+ M_{100} \\ \langle S_7\rangle & = M_{000} - M_{111} + M_{110} + M_{101} + M_{011} - M_{001} - M_{010}- M_{100} \\ \langle S_8\rangle & = M_{000} + M_{111} - M_{110} + M_{101} - M_{011} - M_{001} + M_{010}- M_{100} . \label{eq.spinordpar8} \end{align} In the mean field approximation, the spins in Eq.~\eqref{eq.Heis} can be substituted with their thermal averages.
\begin{equation}\label{eq.Heis_mf} E = -\frac{1}{2} \sum_{i,j} J_{ij} \mathbf{S}_i \cdot \mathbf{S}_j \approx -\frac{1}{2} \sum_{i,j} J_{ij} \langle S_i\rangle \cdot \langle S_j\rangle \end{equation} Considering a structure with first and second nearest neighbor interactions, where $J_2$ is assumed to be the same in every direction, substituting Eqs.~\eqref{eq.spinordpar1}--\eqref{eq.spinordpar8} into Eq.~\eqref{eq.Heis_mf} yields \begin{align}\label{eq.Heis_E_of_ordpar} \frac{E}{8} = & - \frac{1}{4} J_x (\langle S_1\rangle \langle S_4\rangle + \langle S_2\rangle \langle S_3\rangle + \langle S_5\rangle \langle S_8\rangle + \langle S_6\rangle \langle S_7\rangle ) - \nonumber \\ & - \frac{1}{4} J_y (\langle S_1\rangle \langle S_2\rangle + \langle S_3\rangle \langle S_4\rangle + \langle S_5\rangle \langle S_6\rangle + \langle S_7\rangle \langle S_8\rangle ) - \nonumber \\ & - \frac{1}{4} J_z (\langle S_1\rangle \langle S_5\rangle + \langle S_2\rangle \langle S_6\rangle + \langle S_3\rangle \langle S_7\rangle + \langle S_4\rangle \langle S_8\rangle ) - \nonumber \\ & - \frac{1}{2} J_2 ( \langle S_1\rangle \langle S_3\rangle + \langle S_2\rangle \langle S_4\rangle + \langle S_5\rangle \langle S_7\rangle + \langle S_6\rangle \langle S_8\rangle + \nonumber \\ & + \langle S_1\rangle \langle S_6\rangle + \langle S_2\rangle \langle S_5\rangle + \langle S_4\rangle \langle S_7\rangle + \langle S_3\rangle \langle S_8\rangle + \nonumber \\ & + \langle S_1\rangle \langle S_8\rangle + \langle S_4\rangle \langle S_5\rangle + \langle S_2\rangle \langle S_7\rangle + \langle S_3\rangle \langle S_6\rangle ) = \nonumber \\ = & -(J_x + J_y + J_z + 6 J_2)M_{000}^2 + (J_x + J_y + J_z - 6J_2)M_{111}^2 + \nonumber \\ & + (J_x + J_y - J_z + 2 J_2)M_{110}^2 + (J_x - J_y + J_z + 2 J_2)M_{101}^2 + (-J_x + J_y + J_z + 2 J_2)M_{011}^2 - \nonumber \\ & -(J_x + J_y - J_z - 2 J_2)M_{001}^2 - (J_x - J_y + J_z - 2 J_2)M_{010}^2 - (-J_x + J_y + J_z - 2 J_2)M_{100}^2 , \end{align} where the division by eight normalizes the energy per perovskite unit cell. In the cubic structure, where also $J_1$ is the same in every direction, Eq.~\eqref{eq.Heis_E_of_ordpar} can be written \begin{equation}\label{eq.Heis_E_of_ordpar_cube} \frac{E}{8} = \underbrace{-3(J_1 + 2J_2) }_{-\frac{1}{2}\alpha_FT_{0}^F}M_F^2 + \underbrace{3(J_1 - 2J_2)}_{-\frac{1}{2}\alpha_GT_{0}^G}M_G^2 + \underbrace{(J_1 + 2J_2)}_{-\frac{1}{2}\alpha_CT_{0}^C}M_C^2 \underbrace{-(J_1 - 2J_2)}_{-\frac{1}{2}\alpha_AT_{0}^A}M_A^2 . \end{equation} Since the three A-type and the three C-type AFM orders are now degenerate, they were included only once each and labeled by the corresponding letter instead of the $\mathbf{q}$-vector. By identifying the energy of the Heisenberg Hamiltonian with the free energy at zero temperature, the quadratic coefficients of the Landau free energy can be expressed in terms of the Heisenberg exchange interactions, as indicated in Eq.~\eqref{eq.Heis_E_of_ordpar_cube}. The critical temperatures at zero strain, for each of the magnetic order parameters, are obtained using multi-sublattice mean field theory~\cite{PhysRevB.70.024427,Anderson196399}. For a unit cell with $N$ magnetic atoms, an exchange matrix $\mathcal{J}$ is constructed as \begin{equation} [\mathcal{J}]_{AB} = \sum_i J_{A_0B_i}, \end{equation} where $A$ and $B$ denote magnetic sublattices and $i$ runs over the sites of sublattice $B$ that interact with a fixed site $A_0$ of sublattice $A$. $\mathcal{J}$ is thus a symmetric $N \times N$ matrix.
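As an illustration, consider the cubic structure with the site labeling implied by the nearest and second nearest neighbor pairings in Eq.~\eqref{eq.Heis_E_of_ordpar}: site 1 has two nearest neighbors (one in the positive and one in the negative direction) on sublattice 4 coupled by $J_x$, two on sublattice 2 coupled by $J_y$ and two on sublattice 5 coupled by $J_z$, as well as four second nearest neighbors on each of the sublattices 3, 6 and 8, while the body-diagonal sublattice 7 is a third nearest neighbor and not included. The first row of the exchange matrix therefore reads
\begin{equation}
[\mathcal{J}]_{1B} = \left( 0,\ 2J_y,\ 4J_2,\ 2J_x,\ 2J_z,\ 4J_2,\ 0,\ 4J_2 \right),
\end{equation}
and the uniform eigenvector $(1,\dots,1)$, describing ferromagnetic order, has eigenvalue $2(J_x+J_y+J_z)+12J_2$, which in the cubic limit equals $6(J_1+2J_2)=\alpha_F T_{0}^F$, in agreement with the identification in Eq.~\eqref{eq.Heis_E_of_ordpar_cube}.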
The critical temperature is \begin{equation} \label{eq.mft_Tc} T_\mathrm{c} = \frac{J_0}{3k_\mathrm{B}}, \end{equation} where $k_\mathrm{B}$ is Boltzmann's constant and $J_0$ is the largest eigenvalue of $\mathcal{J}$. The corresponding eigenvector describes the magnetic order. By looking at the other eigenvalues, information regarding the hypothetical critical temperatures of magnetic orders other than the most stable one can be obtained. In this manner, the critical temperatures for the magnetic orders in the cubic structure are found to be \begin{equation} \label{eq.Tcs} T_{0}^F = -345.6~\mathrm{K} \quad , \quad T_{0}^G = 262.6~\mathrm{K} \quad , \quad T_{0}^C = 115.2~\mathrm{K} \quad , \quad T_{0}^A = -87.5~\mathrm{K}. \end{equation} A negative ordering temperature indicates that, in the cubic structure, the non-magnetic solution ($M_q=0$) is energetically favored over a non-zero value of the corresponding order parameter. The energy in Eq.~\eqref{eq.Heis_mf}, normalized per unit cell, can be written \begin{equation} \frac{E}{8} = -\frac{1}{8} \frac{1}{2} \mathbf{S}^T \mathcal{J} \mathbf{S} = -\frac{1}{8} \frac{1}{2} \mathbf{M}^T (V^{-1})^T \mathcal{J} V^{-1} \mathbf{M} = - \frac{1}{2} \mathbf{M}^T \underbrace{V \mathcal{J} V^{-1}}_{D} \mathbf{M} = - \frac{1}{2} \mathbf{M}^T D \mathbf{M} , \end{equation} where we used $(V^{-1})^T = 8V$, see Eq.~\eqref{eq.Pmat}, and $\mathbf{S}$ denotes an $N$-dimensional vector of $N$ spins, instead of a three-dimensional spin vector. Since no cross-coupling terms between different order parameters occur in Eq.~\eqref{eq.Heis_E_of_ordpar}, it is clear that $D$ must be diagonal and thus the change of variables defined by Eqs.~\eqref{eq.ordpar1}--\eqref{eq.ordpar8} diagonalizes the spin Hamiltonian in Eq.~\eqref{eq.Heis_mf}. Thus, the matrix $D$ contains the eigenvalues of $\mathcal{J}$, i.e., $3k_\mathrm{B}$ times the corresponding ordering temperature (see Eq.~\eqref{eq.mft_Tc}), and, up to a factor $-\frac{1}{2}$, these are the coefficients of the squared order parameters in Eq.~\eqref{eq.Heis_E_of_ordpar}. From this it follows that $\alpha_q = 3k_\mathrm{B}$ for each $q$. One can also show that if two magnetic orders have equal $\alpha_q$ and there is a phase boundary as a function of strain between them, the phase boundary will be vertical in the strain-temperature phase diagram (i.e., independent of temperature). Hence, all magnetic phase boundaries are vertical in the model used here, as is also seen in Fig.~\ref{fig.Tc_eta}. For a given order parameter, excluding any coupling to other order parameters, the value which minimizes the free energy is \begin{equation} M_q^2 = -\frac{a_q}{b_q} , \end{equation} where $a_q = \alpha_q(T-T_{0}^{q}) + c_q \eta$ denotes the quadratic coefficient in Eq.~\eqref{Fmag}. According to the definitions in Eqs.~\eqref{eq.ordpar1}--\eqref{eq.ordpar8}, normalization of the spins implies that the zero temperature values of the order parameters will also be normalized to unity. Therefore, the quartic coefficients are set to the strain dependent values of \begin{equation} b_q = -a_q (T=0). \end{equation} The strain dependence of $b_q$ ensures that the zero temperature normalization of the spins is correct at all strains. With all the necessary parameters determined, the magnetic phase diagram, excluding coupling to the FE polarization, can be obtained. Looking at the definitions of the magnetic order parameters in Eqs.~\eqref{eq.ordpar1}--\eqref{eq.ordpar8}, one can deduce that the different magnetic order parameters exclude each other, i.e., one being unity implies that the others are zero.
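The vertical phase boundaries noted above can be made explicit as follows. Writing $a_q = \alpha_q\left[T-T_\mathrm{c}^q(\eta)\right]$ with $T_\mathrm{c}^q(\eta)=T_{0}^{q}-c_q\eta/\alpha_q$, the quartic coefficient becomes $b_q=-a_q(T=0)=\alpha_q T_\mathrm{c}^q(\eta)$, and the minimal free energy of an ordered phase is
\begin{equation}
\min_{M_q} F_q = -\frac{a_q^2}{4b_q} = -\frac{\alpha_q\left[T-T_\mathrm{c}^q(\eta)\right]^2}{4T_\mathrm{c}^q(\eta)},
\end{equation}
which, at fixed $T<T_\mathrm{c}^q$, decreases monotonically with increasing $T_\mathrm{c}^q$. Since all $\alpha_q$ are equal, the stable phase at every temperature is simply the one with the largest $T_\mathrm{c}^q(\eta)$, and the boundary between two magnetic phases is the vertical line where their $T_\mathrm{c}^q(\eta)$ cross.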
Furthermore, the adiabatic magnon spectra show minima only for zone center or zone boundary $\mathbf{q}$-vectors, at every strain considered~\cite{PhysRevMaterials.2.104409}, within the current model based on the Heisenberg Hamiltonian on a tetragonal lattice. Thus, the magnetic transitions are described by minimizing the total energy independently for each of the order parameters and taking the one that gives the lowest free energy as the only one non-zero for a given $(\eta,T)$. The resulting phase diagram is shown in Fig.~\ref{fig.Tc_eta}. The phase diagram obtained without the strain-independent $J_2$ is also shown to illustrate that $J_2$ is necessary to stabilize the C-type AFM region, as predicted by the DFT calculations to which the parameters are fitted. \begin{figure}[hbt!] \centering \includegraphics[width=0.49\textwidth]{mag_PD.pdf} \includegraphics[width=0.49\textwidth]{mag_PD_noJ2.pdf} \caption{Magnetic phase diagram, excluding the coupling to the FE polarization (left). To the right is the result when neglecting $J_2$. In this case there is no region with C-type antiferromagnetism.} \label{fig.Tc_eta} \end{figure} \subsection{Coupling}\label{sec.coup} The last ingredients needed before the complete multiferroic phase diagram can be established are the biquadratic coupling coefficients $\lambda_q$. These are obtained by computing the magnetic exchange interactions as functions of a polarization $P$, performing DFT calculations while freezing in atomic displacements according to a soft mode vector with amplitude $\mathbf{u} \propto \mathbf{P}$. The computed exchange interactions as a function of soft mode amplitude/polarization for various strains are shown in Fig.~\ref{fig.J_of_u}, with (a) showing results for $\mathbf{u} \parallel (110)$ and (b) for $\mathbf{u} \parallel (100)$. With biaxial tensile (001)-strain and polarization along the $(100)$-direction, all three nearest neighbor directions $J_x=J^\parallel$, $J_y=J^\perp$ and $J_z = J^\mathrm{op}$ are inequivalent, while with $\mathbf{u} \parallel (110)$ the two in-plane coupling parameters $J^\mathrm{ip}$ are equivalent. The biquadratic coupling between magnetism and ferroelectricity is obtained from a quadratic fit of the $J$'s as a function of $u$. As can be seen in Fig.~\ref{fig.J_of_u}, this coupling is strongly strain dependent. Furthermore, it is not possible to produce a good quadratic fit for the whole range of relevant displacements/polarizations (which for large strain and low temperature can be larger than $u=0.2~\mathrm{\AA}$). This is particularly clear for $J^\parallel$. A good fit over the whole range of temperature and strain considered would therefore require at least one higher order coupling term $\sim M^2 P^4$, with each coupling term being strain dependent. However, this makes it necessary to consider also other higher order terms to guarantee stable solutions, which significantly increases the complexity and the number of fitting parameters of the model. If one is interested in effects near the phase transitions, where the order parameters are small, it is sufficient to include only the biquadratic coupling term. From Fig.~\ref{fig.J_of_u} one can estimate that such a fit is good for polar displacements up to $\sim 0.06~\mathrm{\AA}$, and it is to the data up to this point that the curves in Fig.~\ref{fig.J_of_u} have been fitted. \begin{figure}[hbt!]
\centering \includegraphics[width=0.48\textwidth]{J_of_P_u110.pdf} \includegraphics[width=0.48\textwidth]{J_of_P_u100.pdf} \caption{Change of the nearest neighbor exchange interactions (relative to their values for $u=0$) as a function of FE soft mode amplitude $u$, for various strains $\eta$. The polarization is along the (110)-direction in (a) and along (100) in (b), while a (001)-biaxial tensile strain is applied. Curves show quadratic fits of the data up to $u=0.06~\mathrm{\AA}$. } \label{fig.J_of_u} \end{figure} According to Refs.~\onlinecite{PhysRevMaterials.2.104409} and \onlinecite{PhysRevB.84.104440}, under tensile epitaxial strain $\mathbf{P} \parallel (110)$ for all magnetic order parameters except C-type, where instead $\mathbf{P} \parallel (100)$. Therefore, the coupling coefficient $\lambda_C$ is evaluated for $\mathbf{P} \parallel (100)$, while all others are evaluated for $\mathbf{P} \parallel (110)$. Writing $J_x(P) = J_{x}(0) + j_x P^2$ and similarly for $y$, $z$, and inserting this into Eq.~\eqref{eq.Heis_E_of_ordpar}, leads to the terms proportional to $M_q^2 P^2$ needed to identify $\lambda_q$: \begin{align} E_\mathrm{coupling} = & \frac{1}{2} [ \underbrace{-2(j_x + j_y + j_z)}_{\lambda_{000}}M_{000}^2 + \underbrace{2(j_x + j_y + j_z)}_{\lambda_{111}}M_{111}^2 + \nonumber \\ & + \underbrace{2(j_x + j_y - j_z)}_{\lambda_{110}}M_{110}^2 + \underbrace{2(j_x - j_y + j_z)}_{\lambda_{101}}M_{101}^2 + \underbrace{2(-j_x + j_y + j_z)}_{\lambda_{011}}M_{011}^2 - \nonumber \\ & \underbrace{-2(j_x + j_y - j_z)}_{\lambda_{001}}M_{001}^2 \underbrace{- 2(j_x - j_y + j_z)}_{\lambda_{010}}M_{010}^2 \underbrace{- 2(-j_x + j_y + j_z)}_{\lambda_{100}}M_{100}^2 ] P^2. \end{align} By computing and fitting $J_x(\eta, P) = J_{x}(\eta,0) + j_x(\eta) P^2$, and similarly for $y$ and $z$, for various $\eta$, strain-dependent coupling parameters $\lambda_q(\eta)$ are obtained, as shown in Fig.~\ref{fig.lambda_of_eta}. Of the A- and C-type orders, only the orientations observed in the phase diagram are shown, i.e., A-type AFM order with $\mathbf{q}=(0,0,1)$ and C-type AFM order with $\mathbf{q}=(1,0,1)$. The coupling is to a polarization along the (110)-direction for all the magnetic orders except C, for which the polarization is along (100). The calculated $\lambda_q(\eta)$ are reasonably well described by a linear strain dependence, so the whole strain-dependent phase diagram could be described using linearly fitted coupling parameters. In this work, however, where the focus is on the region $2\% < \eta < 3\%$ and only the C-type AFM coupling is relevant, a linear interpolation between the data points at these two strains is used. While the relevant C-type coupling parameter is negative for all strains and increases slightly in magnitude with increasing strain, the G-type coupling changes sign from positive to negative. At low strains, the G-type coupling is positive, which is consistent with previous suggestions that in cubic G-type AFM Sr$_{1-x}$Ba$_{x}$MnO$_3$ electric polarization and magnetic order disfavour each other~\cite{PhysRevLett.107.137601,PhysRevLett.109.107601}. \begin{figure}[hbt!] \centering \includegraphics[width=0.45\textwidth]{lambda_of_strain.pdf} \caption{ Biquadratic magnetoelectric coupling parameters, $\lambda_q$, for different magnetic order parameters as functions of strain. The lines show linear fits to the data.
} \label{fig.lambda_of_eta} \end{figure} \subsection{Summary of parameters}\label{sec.partable} Table~\ref{tab.param} contains the numerical values of all parameters entering the free energy density in Eq.~\eqref{eq.LandauF}, used in this work, except the strain dependent coupling parameters contained in Fig.~\ref{fig.lambda_of_eta}. \begin{table}[] \caption{Values used for all parameters entering the expression for the free energy (u.c. = ``simple perovskite unit cell'').} \begin{tabular}{l|ccccccccc|cc} \hline \hline $i$ & $M_{000}$ & $M_{111}$ & $M_{110}$ & $M_{101}$ & $M_{011}$ & $M_{001}$ & $M_{010}$ & $M_{100}$ & & $P$ & \\ \hline $\alpha_i$ & $3$ & $3$ & $3$ & $3$ & $3$ & $3$ & $3$ & $3$ & $k_\mathrm{B}$ & $7.8\cdot 10^{-3}$ & \si{eV\angstrom^{-2}\kelvin^{-1}/{u.c.}} \\ $T_0^i$ & -345.6 & 262.6 & 115.2 & 115.2 & 115.2 & -87.5 & -87.5 & -87.5 & \si{\kelvin} & -575.7 & \si{\kelvin} \\ $c_i$ & -1.46 & 1.46 & 2.39 & -0.46 & -0.46 & -2.39 & 0.46 & 0.46 & \si{eV/{u.c.}} & -218.8 & \si{eV\angstrom^{-2}/{u.c.}} \\ $b_i$ & 8.94 & 6.79 & 2.98 & 2.98 & 2.98 & 2.26 & 2.26 & 2.26 & $10^{-2}$~\si{eV/{u.c.}} & 96.4 & \si{eV\angstrom^{-4}/{u.c.}} \\ \hline \hline \end{tabular} \label{tab.param} \end{table} \subsection{Susceptibilities and thermodynamics}\label{sec.susc_therm} We consider again a free energy such as that in Eq.~\eqref{eq.LandauF} but limit ourselves to one magnetic order parameter $M$. This is sufficient since the magnetic phase does not change within the region of the phase diagram of interest in this work. The equilibrium $M$ and $P$ fulfill \begin{equation}\label{eq.Mcond} \frac{\partial F}{\partial M} = M \left[ a_M + b_M M^2 + \lambda P^2 \right] = 0 \quad \rightarrow \quad M=0 \quad \mathrm{or} \quad M^2 = -\frac{1}{b_M} (a_M + \lambda P^2) \end{equation} and \begin{equation}\label{eq.Pcond} \frac{\partial F}{\partial P} = P \left[ a_P + b_P P^2 + \lambda M^2 \right] - E = 0. \end{equation} For $E=0$ the above conditions can easily be solved to produce the four different solutions in the top of Table~\ref{tab.E0Landau}. Taking derivatives of Eqs.~\eqref{eq.Mcond}-\eqref{eq.Pcond} with respect to $E$ and setting $E=0$, yields the zero field electric susceptibility \begin{equation} \chi_E = \frac{\partial P}{\partial E} \bigg|_{E=0} \end{equation} and magnetoelectric susceptibility \begin{equation} \chi_{ME} = \frac{\partial M}{\partial E} \bigg|_{E=0}, \end{equation} which are also listed in Table~\ref{tab.E0Landau}. From the equilibrium solutions for $M$ and $P$ it is also straightforward to evaluate the free energy $F$, from which the entropy \begin{equation} S = -\left( \frac{\partial F}{\partial T} \right)_E \end{equation} and specific heat \begin{equation} C = -T\left( \frac{\partial^2 F}{\partial T^2} \right)_E \end{equation} can be evaluated. It is useful to note that $\frac{\partial a_M}{\partial T} = \alpha_M$ and $\frac{\partial a_P}{\partial T} = \alpha_P$. \begin{table}[] \caption{Solutions to the zero field Landau theory in Eq.~\eqref{eq.LandauF} (with $E=0$), as well as corresponding susceptibilities, free energy, entropy and specific heat.
} \begin{tabular}{l|c|c|c|c} \hline\hline & \multicolumn{1}{l|}{$M_0 = 0$, $P_0 = 0$} & \multicolumn{1}{l|}{$M_0 = 0$, $P_0^2 = \frac{-a_P}{b_P}$} & \multicolumn{1}{l|}{$M_0^2 = \frac{-a_M}{b_M}$, $P_0 = 0$} & \multicolumn{1}{c}{$M_0^2 = \frac{\lambda a_P - a_M b_P}{b_M b_P - \lambda^2}$, $P_0^2 = \frac{\lambda a_M - b_M a_P}{b_M b_P - \lambda^2}$} \\ \hline $\chi_E$ & $a_P^{-1}$ & $-\frac{1}{2}(a_P)^{-1}$ & $(a_P -\lambda \frac{a_M}{b_M})^{-1}$ & $-\frac{1}{2}(a_P -\lambda \frac{a_M}{b_M})^{-1}$ \\ \hline $\chi_{ME}$ & 0 & 0 & 0 & $-\frac{\lambda P_0}{b_M M_0} \chi_E = -\frac{\lambda}{2}(\lambda^2 a_M a_P + a_M a_P b_M b_P - \lambda a_P^2 b_M - \lambda a_M^2 b_P)^{-1/2}$ \\ \hline $F$ & 0 & $\frac{-a_P^2}{4b_P}$ & $\frac{-a_M^2}{4b_M}$ & $\frac{2 \lambda a_M a_P - a_P^2 b_M - a_M^2 b_P}{4(b_M b_P - \lambda^2)}$ \\ \hline $S$ & 0 & $\frac{\alpha_P a_P}{2 b_P}$ & $\frac{\alpha_M a_M}{2 b_M}$ & -$\frac{( \lambda (\alpha_M a_P + a_M \alpha_P) - \alpha_P a_P b_M - \alpha_M a_M b_P)}{2(b_M b_P - \lambda^2)}$ \\ \hline $C$ & 0 & $\frac{\alpha_P^2}{2 b_P}T$ & $\frac{\alpha_M^2}{2 b_M}T$ & -$\frac{( 2\lambda \alpha_M \alpha_P - \alpha_P^2 b_M - \alpha_M^2 b_P)}{2(b_M b_P - \lambda^2)}T$ \\ \hline\hline \end{tabular} \label{tab.E0Landau} \end{table} For $E \neq 0$, Eq.~\eqref{eq.Pcond} can no longer be solved as a quadratic for $P^2$ and it is more convenient to obtain numerical solutions. After eliminating $M$, the free energy in the phase with $M \neq 0$ and $P \neq 0$ is \begin{equation} F = -\frac{a_M^2}{4 b_M} + \frac{1}{2} \left( a_P - \frac{\lambda a_M}{ b_M}\right) P^2 + \frac{1}{4} \left( b_P - \frac{\lambda^2}{b_M} \right) P^4 - EP \end{equation} and the entropy at a fixed $P$ is \begin{equation} S(T,E) = -\left( \frac{\partial F}{\partial T} \right)_E = \frac{\alpha_M^2}{2b_M}(T- T_\mathrm{c}^M) - \frac{1}{2} \left( \alpha_P -\lambda\frac{\alpha_M}{b_M} \right) P^2(T,E) . \end{equation} The isothermal entropy change resulting from varying the field from $E_1$ to $E_2$ is then the difference of the entropies evaluated at $P(T,E_2)$ and $P(T,E_1)$, \begin{equation}\label{eq.deltaS} \Delta S = -\frac{1}{2}\alpha_P \left[ P^2 (T,E_2) - P^2(T,E_1)\right] -\frac{1}{2}\alpha_M \left[ M^2 (T,E_2) - M^2(T,E_1)\right] , \end{equation} which provides an obvious decomposition of the total entropy change into an electric and a magnetic contribution. \begin{equation} \Delta S = -\frac{1}{2} \left( \alpha_P - \lambda \frac{\alpha_M}{b_M} \right) \left[ P^2 (T,E_2) - P^2(T,E_1)\right] \end{equation} is an alternative formulation of Eq.~\eqref{eq.deltaS}, valid when the field does not cause the magnetic phase to change. If one applied a magnetic field instead of an electric one, the equivalent results could be obtained with $P$ exchanged for $M$ and $E$ exchanged for $B$, assuming that $M$ is a ferromagnetic order parameter which couples to the magnetic field. From $\Delta S$, the adiabatic temperature change $\Delta T$ is estimated via $ \Delta T = -T \frac{\Delta S }{C}$, where $C$ is the specific heat. The temperature dependent specific heat is calculated from frozen phonon calculations for the cubic crystal structure, as shown in Fig.~\ref{fig.Cv_of_T}. These calculations largely agree with earlier such calculations which, however, neglected spin polarization~\cite{PhysRevB.75.214307}, whereas the result in Fig.~\ref{fig.Cv_of_T} was evaluated for G-type antiferromagnetism. \begin{figure}[hbt!]
\centering \includegraphics[width=0.55\textwidth]{Cv_of_T.pdf} \caption{Phonon specific heat calculated from frozen phonon calculations. } \label{fig.Cv_of_T} \end{figure}
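As a simple consistency check of Table~\ref{tab.E0Landau}, consider the purely ferroelectric phase with $M_0=0$ and $P_0^2=-a_P/b_P$. Differentiating Eq.~\eqref{eq.Pcond} with respect to $E$ and evaluating at $E=0$ gives
\begin{equation}
\left( a_P+3b_P P_0^2 \right)\chi_E = 1 \quad\Rightarrow\quad \chi_E = -\frac{1}{2}\,a_P^{-1},
\end{equation}
as listed in the table; the same procedure in the paraelectric phase ($P_0=0$) gives $\chi_E=a_P^{-1}$.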
\section{Introduction} We study a conjecture by Hans Zassenhaus, which says that the outer derivation algebra ${\rm Out}(\mathfrak{g})$ is {\em solvable} for all simple modular Lie algebras $\mathfrak{g}$ over an algebraically closed field $\mathbb{F}$ of characteristic $p>0$. Zassenhaus posed this conjecture in $1939$ in his book \cite{ZAS}. We have collected several results on this conjecture from the literature, and proved some results in \cite{BU67}. For simple modular Lie algebras over an algebraically closed field of characteristic $p>3$ the Zassenhaus conjecture is true: the outer derivation algebra ${\rm Out}(\mathfrak{g})$ is solvable of derived length at most three. In characteristic $p=2$ and $p=3$, however, there is a counterexample known in each case. For $p=3$ this is a simple constituent of the classical Lie algebra $\mathfrak{g}_2$, namely $\mathfrak{psl}_3(\mathbb{F})$. For $p=2$ it is a simple constituent of dimension $26$ of the classical Lie algebra $\mathfrak{f}_4$. \\[0.2cm] One motivation for us to study the Zassenhaus conjecture comes from commutative post-Lie algebra structures, or {\em CPA-structures}, on finite-dimensional Lie algebras over a field $\mathbb{F}$, see \cite{BU67}. Indeed, every perfect modular Lie algebra in characteristic $p>2$ having a solvable outer derivation algebra admits only the trivial CPA-structure. Here CPA-structures are a special case of post-Lie algebra structures on Lie algebras, which have been studied in the context of geometric structures on Lie groups, \'etale representations of algebraic groups, deformation theory, homology of partition posets, Koszul operads, Yang-Baxter equations, and many other topics. For references see \cite{BU41,BU44,BU51,BU52,BU57,VAL}. \\[0.2cm] In this article we provide an infinite family of new counterexamples to the Zassenhaus conjecture in characteristic $3$. We show that the Hamiltonian Lie algebras $H(2;(1,n))^{(2)}$, which are central simple modular Lie algebras in characteristic $3$ of dimension $3^{n+1}-2$, are counterexamples for all $n\ge 1$. For $n=1$ we have the isomorphism $H(2;(1,1))^{(2)}\cong \mathfrak{psl}_3(\mathbb{F})$, which recovers the known counterexample in characteristic $3$. We show that there are no other counterexamples among the Hamiltonian Lie algebras $H(2r;\underline{n})^{(2)}$ in characteristic $p\ge 3$. We also determine the structure of the outer derivation algebra of the Hamiltonian Lie algebras in characteristic $p=3$. Finally, we study the Zassenhaus conjecture for known simple Lie algebras of {\em new type} over an algebraically closed field of characteristic three, such as Brown's algebras $Br_8$ and $Br_{29}$, Kostrikin's series $K(\varepsilon,\delta,\rho)$ of dimension $10$, the Ermolaev algebras $R(\underline{n})$, the Frank algebras $Fr(n)$ and several series of new simple Lie algebras of Skryabin. We do not find new counterexamples there. \section{Preliminaries} Let $\mathfrak{g}$ be a finite dimensional Lie algebra over an arbitrary field $\mathbb{F}$. Denote by ${\rm Der}(\mathfrak{g})$ the derivation algebra of $\mathfrak{g}$ and by ${\rm Inn}(\mathfrak{g})$ the ideal of inner derivations of the Lie algebra ${\rm Der}(\mathfrak{g})$. The quotient algebra ${\rm Out}(\mathfrak{g})={\rm Der}(\mathfrak{g})/{\rm Inn}(\mathfrak{g})$ is called the algebra of {\em outer derivations} of $\mathfrak{g}$. Hans Zassenhaus posed in $1939$ in his book \cite{ZAS} on page $80$, between ``Satz $7$'' and ``Satz $8$'', the following conjecture.
\begin{con}[Zassenhaus] The outer derivation algebra ${\rm Out}(\mathfrak{g})$ of a simple Lie algebra $\mathfrak{g}$ in prime characteristic is solvable. \end{con} For the conjecture it is reasonable to assume that $\mathfrak{g}$ is defined over an algebraically closed field of characteristic $p>0$. For characteristic zero, the corresponding conjecture is true, because then ${\rm Out}(\mathfrak{g})\cong H^1(\mathfrak{g},\mathfrak{g})=0$ for a simple Lie algebra $\mathfrak{g}$ by the first Whitehead Lemma. Clearly this need not be true in prime characteristic, and indeed the outer derivation algebra of a simple modular Lie algebra need not be trivial in general. \begin{rem} The Zassenhaus conjecture for Lie algebras can be seen as an analogue of the {\em Schreier conjecture} for finite groups. The Schreier conjecture asserts that the outer automorphism group of every finite simple non-abelian group is solvable. It was proposed by Otto Schreier in $1926$ and is known to be true as a result of the classification of finite simple groups. Up to now no simpler proof is known for it. \end{rem} What is known about the Zassenhaus conjecture? There are many different results in the literature, in particular in the context of the classification of simple modular Lie algebras over an algebraically closed field of characteristic $p>3$. Let us summarize the main results, which we have collected in \cite{BU67}. A simple modular Lie algebra in the classification is either of classical type, Cartan type, or of Melikian type in characteristic $p=5$. The results are as follows. \begin{prop} Let $\mathfrak{g}$ be a classical simple Lie algebra over an algebraically closed field $\mathbb{F}$ of characteristic $p>3$. Then ${\rm Out}(\mathfrak{g})=0$ unless $\mathfrak{g}=\mathfrak{psl}_{n+1}(\mathbb{F})$ with $p\mid n+1$ in which case ${\rm Der}(\mathfrak{g})\cong \mathfrak{pgl}_{n+1}(\mathbb{F})$ and ${\rm Out}(\mathfrak{g})\cong \mathbb{F}$. \end{prop} \begin{prop} Let $\mathfrak{g}$ be a simple Lie algebra of Cartan type over an algebraically closed field of characteristic $p>3$. Then ${\rm Out}(\mathfrak{g})$ is solvable. More precisely, ${\rm Out}(\mathfrak{g})$ is solvable of derived length $d\le 1$ for type $W$ and type $K$, of derived length $d\le 2$ for type $S$ and of derived length $d\le 3$ for type $H$. \end{prop} \begin{prop} Let $\mathcal{M}=\mathcal{M} (n_1,n_2)$ be a Melikian algebra of dimension $5^{n_1+n_2+1}$ over an algebraically closed field of characteristic $5$. Then ${\rm Out}(\mathcal{M})$ is abelian. \end{prop} So the Zassenhaus conjecture has a positive answer for algebraically closed fields of characteristic $p>3$: \begin{thm} Let $\mathfrak{g}$ be a simple modular Lie algebra over an algebraically closed field of characteristic $p>3$. Then ${\rm Out}(\mathfrak{g})$ is solvable of derived length at most three. \end{thm} Moreover, if $\mathfrak{g}$ is a central simple Lie algebra over an arbitrary field $\mathbb{F}$ of characteristic $p>3$, then $\mathfrak{g} \otimes_{\mathbb{F}}\overline{\mathbb{F}}$ is simple over $\overline{\mathbb{F}}$. Hence the Zassenhaus conjecture also holds for central simple Lie algebras over an arbitrary field of characteristic $p>3$. \\[0.2cm] However, in characteristic $p=3$ there is one known {\em counterexample} to the Zassenhaus conjecture. The same is true for $p=2$. We will show in the next section that there exists a whole family of counterexamples for $p=3$ of dimension $3^{n+1}-2$ for all $n\ge 1$. 
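For instance, for $n=1$ this dimension equals
\[
3^{2}-2=7=\dim \mathfrak{psl}_3(\mathbb{F}),
\]
in accordance with the isomorphism $H(2;(1,1))^{(2)}\cong \mathfrak{psl}_3(\mathbb{F})$ mentioned in the introduction.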
\section{Simple modular Lie algebras in characteristic three} We want to study the Zassenhaus conjecture for simple modular Lie algebras of characteristic $p=3$. For the theory of modular Lie algebras, see for example \cite{SEL}. First we recall that there is a {\em counterexample}, see \cite{BU67}, Proposition $3.6$. \begin{prop}\label{3.1} Let $\mathbb{F}$ be a field of characteristic $p=3$. Then the derivation algebra of $\mathfrak{g}=\mathfrak{psl}_3(\mathbb{F})$ is isomorphic to the exceptional Lie algebra $\mathfrak{g}_2$, and the quotient by ${\rm Inn}(\mathfrak{g})\cong \mathfrak{g}$ is given by ${\rm Out}(\mathfrak{g})\cong \mathfrak{g}$. In particular the outer derivation algebra of $\mathfrak{g}$ is simple and non-solvable. \end{prop} The next question is whether or not there are more counterexamples in characteristic $p=3$. Here we first distinguish {\em the classical type, the Cartan type, and the new type}. \subsection{Classical type} For $p>3$ the list of classical simple modular Lie algebras is given by \[ A_n \;(p\nmid n+1),\; \mathfrak{psl}_n(\mathbb{F})\;(p\mid n),\; B_n,\; C_n,\; D_n,\; \mathfrak{g}_2,\;\mathfrak{f}_4,\;\mathfrak{e}_6,\;\mathfrak{e}_7,\;\mathfrak{e}_8. \] For $p=3$ these Lie algebras are still simple, except for $\mathfrak{g}_2$ and $\mathfrak{e}_6$. In fact, $\mathfrak{g}_2$ has a simple ideal $I\cong \mathfrak{psl}_3(\mathbb{F})$, generated by the short roots, with $\mathfrak{g}_2/I\cong I$. This leads to the counterexample mentioned above. The algebra $\mathfrak{e}_6$ has a $1$-dimensional center so that $\mathfrak{e}_6/\mathfrak{z}$ is a simple modular Lie algebra of dimension $77$ in characteristic $3$. Its outer derivation algebra is abelian, so that we do not obtain another counterexample. For the other simple classical Lie algebras we have the following results \cite{BU67}: \begin{prop} Let $\mathbb{F}$ be an algebraically closed field of characteristic $3$ and $\mathfrak{g}$ be a simple Lie algebra of classical type different from $\mathfrak{psl}_{3m}(\mathbb{F})$, $\mathfrak{g}_2$, or $\mathfrak{e}_6$. Then ${\rm Out}(\mathfrak{g})=0$. \end{prop} \begin{prop} Let $\mathbb{F}$ be an algebraically closed field of characteristic $3$. Then we have ${\rm Der} (\mathfrak{psl}_{3m}(\mathbb{F}))\cong \mathfrak{pgl}_{3m}(\mathbb{F})$ for all $m\ge 2$. Hence ${\rm Out}(\mathfrak{psl}_{3m}(\mathbb{F}))\cong \mathbb{F}$ is abelian for all $m\ge 2$. \end{prop} Hence we also obtain no new counterexamples here. \subsection{Cartan type} The list of simple modular Lie algebras of {\em Cartan type} for $p>3$ is given by the {\em graded} simple Lie algebras of Cartan type \[ W(m;\underline{n}),\; S(m;\underline{n})^{(1)},\; H(2r;\underline{n})^{(2)},\; K(2r+1;\underline{n})^{(1)}, \] and their filtered deformations. Here $m\in \mathbb{N}$, $\underline{n}:=(n_1,\ldots ,n_m)\in \mathbb{N}^m$ and $\abs{n}:=n_1+\cdots +n_m$. \\[0.2cm] These algebras are called {\em Witt algebras, special algebras, Hamiltonian algebras and contact algebras}. They are the finite-dimensional versions defined over a field $\mathbb{F}$ of characteristic $p>0$ of the infinite dimensional Lie algebras of characteristic zero occurring in E. Cartan's work of $1909$ on pseudogroups in differential geometry. For the precise definition of these algebras see H. Strade's book \cite{STR1}. All these algebras are still simple for characteristic $p=3$, where we need $m\ge 3$ for the special algebras.
The dimensions of these algebras are given by \begin{align*} \dim W(m;\underline{n}) & = m\cdot p^{\abs{n}}, \\ \dim S(m;\underline{n})^{(1)} & = (m-1)(p^{\abs{n}}-1),\\ \dim H(2r;\underline{n})^{(2)} & = p^{\abs{n}}-2, \\ \dim K(2r+1;\underline{n})^{(1)} & =\begin{cases} p^{\abs{n}}, \hspace*{0.7cm} \text{ if } 2r+1\not\equiv -3 \bmod p,\\ p^{\abs{n}}-1, \text{ if } 2r+1\equiv -3 \bmod p.\end{cases} \end{align*} The derivation algebras have been computed for an algebraically closed field $\mathbb{F}$ of characteristic $p\ge 3$, see Theorem $7.1.2$ in \cite{STR1}. In particular, the result for $p>3$ still holds for $p=3$, except for the Hamiltonian algebras. So it follows from the work of Celousov \cite{CEL} that the Zassenhaus conjecture is true for Witt algebras, special algebras and contact algebras for an algebraically closed field of characteristic $p\ge 3$. However, there are new counterexamples in the Hamiltonian case for $p=3$. The following table gives a survey. \vspace*{0.5cm} \begin{center} \begin{tabular}{c|cccc} $\mathfrak{g}$ & conditions & $\dim {\rm Der}(\mathfrak{g})$ & $\dim {\rm Out}(\mathfrak{g})$ & conjecture \\[2pt] \hline $W(m;\underline{n})$ & $p>2$ & $m ( p^{\abs{n}}-1) +\abs{n}$ & $\abs{n}-m$ & $\checkmark$ \\[4pt] $S(m;\underline{n})^{(1)}$ & $p>0,m\ge 3$ & $(m-1)(p^{\abs{n}}-1)+\abs{n}+1$ & $\abs{n}+1$ & $\checkmark$ \\[4pt] $H(2;(1,1))^{(2)}$ & $p=3$ & $14$ & $7$ & $-$ \\[4pt] $H(2;(1,n_2))^{(2)}$ & $p=3,n_2>1$ & $3^{n_2+1}+n_2+2$ & $n_2+4$ & $-$ \\[4pt] $H(2r;\underline{n})^{(2)}$ & $p>3$ or $p=3,r>1$, & $p^{\abs{n}}+\abs{n}$ & $\abs{n}+2$ & $\checkmark$ \\ & or $p=3,r=1,1<n_1\le n_2$ & & & \\[4pt] $K(2r+1;\underline{n})^{(1)}$ & $p>2, p\nmid 2r+4$ & $p^{\abs{n}}+\abs{n}-2r-1$ & $\abs{n}-2r-1$ & $\checkmark$ \\[4pt] $K(2r+1;\underline{n})^{(1)}$ & $p>2, p\mid 2r+4$ & $p^{\abs{n}}+\abs{n}-2r-1$ & $\abs{n}-2r$ & $\checkmark$ \\ \end{tabular} \end{center} \vspace*{0.5cm} Note that we also have \[ H(2;(1,n_2))^{(2)}\cong H(2;(n_2,1))^{(2)} \] for $p\ge 3$, see \cite{STR1}, $(3)$ on page $199$. \\[0.2cm] We first guessed these results for $p=3$ in low dimensions by computations with GAP. In fact, we computed the dimensions of the derived series of the outer derivation algebras for the Hamiltonian algebras $H(2r;\underline{n})^{(2)}$ in a few cases. The following table shows the results. The last computation was only possible on the CoCalc server of Anton Mellit, with $192$ GB RAM. \vspace*{0.5cm} \begin{center} \begin{tabular}{c|cccc} $\mathfrak{g}$ & $\dim(\mathfrak{g})$ & $\dim {\rm Der}(\mathfrak{g})$ & $\dim {\rm Out} (\mathfrak{g})^{(i)}$ & ${\rm Out}(\mathfrak{g})$ \\[4pt] \hline $H(2;(1,1))^{(2)}$ & $7$ & $14$ & $(7,7,\ldots )$ & simple \\[4pt] $H(2;(1,2))^{(2)}$ & $25$ & $31$ & $(6,5,5,\ldots )$ & non-solvable \\[4pt] $H(2;(1,3))^{(2)}$ & $79$ & $86$ & $(7,5,5,\ldots )$ & non-solvable \\[4pt] $H(2;(2,2))^{(2)}$ & $79$ & $85$ & $(6,3,1,0)$ & solvable \\[4pt] $H(4;(1,1,1,1))^{(2)}$ & $79$ & $85$ & $(6,4,0)$ & solvable \\[4pt] $H(2;(2,3))^{(2)}$ & $241$ & $248$ & $(7,3,1,0)$ & solvable \end{tabular} \end{center} \vspace*{0.5cm} In order to prove our results, let us introduce further notation. Let $\mathbb{F}$ be an algebraically closed field of characteristic $p>2$. Denote by $\mathcal{O}(m)$ the associative and commutative algebra with unit element over $\mathbb{F}$ defined by generators $x_i^{(r)}$ for $r\ge 0$ and $1\le i\le m$, and relations \[ x_i^{(0)}=1,\quad x_i^{(r)}x_i^{(s)}=\binom{r+s}{r}x_i^{(r+s)} \] for $r,s\ge 0$.
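For instance, for $p=3$ we have
\[
x_i^{(1)}x_i^{(2)}=\binom{3}{1}x_i^{(3)}=3\cdot x_i^{(3)}=0,
\]
since $3=0$ in $\mathbb{F}$. More generally, $\binom{r+s}{r}\equiv 0 \bmod p$ whenever the addition of $r$ and $s$ in base $p$ produces a carry, by Kummer's theorem; this is what makes the truncated span defined below closed under multiplication.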
Put $x_i:=x_i^{(1)}$ and $x^{(a)}:=x_1^{(a_1)}\cdots x_m^{(a_m)}$ for a tuple $a=(a_1,\ldots ,a_m)\in \mathbb{N}^m$. Then the {\em divided power algebra} of dimension $p^{\abs{n}}$ is defined by \[ \mathcal{O}(m;\underline{n}):={\rm span} \{x^{(a)}\mid 0\le a_i<p^{n_i}\}. \] The product is given by \[ x^{(a)}x^{(b)}:=\binom{a+b}{b}x^{(a+b)}, \] where $\binom{a}{b}=\prod_{i=1}^m\binom{a_i}{b_i}$ and $x^{(c)}=0$ for $c\not\in \mathcal{O}(m;\underline{n})$. For each $i$ denote by $\partial_i$ the derivation of the algebra $\mathcal{O}(m)$ given by \[ \partial_i(x_j^{(r)})=\delta_{i,j}x_j^{(r-1)}. \] The {\em generalized Jacobson-Witt algebra} is defined by \[ W(m;\underline{n}):=\sum_{i=1}^m \mathcal{O}(m;\underline{n}) \partial_i, \] together with the Lie bracket \[ [x^{(a)}\partial_i,x^{(b)}\partial_j]=\binom{a+b-\varepsilon_i}{a}x^{(a+b-\varepsilon_i)}\partial_j-\binom{a+b-\varepsilon_j}{b}x^{(a+b-\varepsilon_j)}\partial_i \] where $\varepsilon_i=(\delta_{i,1},\ldots ,\delta_{i,m})\in \mathbb{N}^m$. \\[0.2cm] Consider the linear operator $D_H\colon \mathcal{O}(2r;\underline{n})\rightarrow W(2r;\underline{n})$ defined by \[ D_H(x^{(a)})=\sum_{i=1}^{2r}\sigma(i)\partial_i(x^{(a)})\partial_{i'}, \] where \[ \sigma (i):=\begin{cases} 1, \hspace*{0.32cm} \text{ if } 1\le i\le r,\\ -1, \text{ if } r+1\le i\le 2r,\end{cases} \] and \[ i':=\begin{cases} i+r, \text{ if } 1\le i\le r,\\ i-r, \text{ if } r+1\le i\le 2r.\end{cases} \] The {\em Hamiltonian algebra} is defined by \[ H(2r;\underline{n})^{(2)}={\rm span} \{D_H(x^{(a)})\mid 0<a<\tau(\underline{n})\}, \] where $\tau(\underline{n})=(p^{n_1}-1,\ldots ,p^{n_m}-1)\in \mathbb{N}^m$, and the condition $0<a<\tau(\underline{n})$ means $0\le a_i\le p^{n_i}-1$ for all $i$ with $a\neq 0$ and $a\neq \tau(\underline{n})$. The Lie bracket is given by \[ [D_H(x^{(a)}),D_H(x^{(b)})]=D_H(D_H(x^{(a)})(x^{(b)})). \] Our main result is the following. \begin{thm}\label{3.4} For all $n\ge 1$ the simple modular Lie algebra $H(2;(1,n))^{(2)}$ of dimension $3^{n+1}-2$ in characteristic $3$ does not have a solvable outer derivation algebra. Hence we obtain an infinite family of counterexamples to the Zassenhaus conjecture. \end{thm} \begin{proof} We will use the basis of $\mathfrak{g}=H(2;(1,n))^{(2)}$ given above, for the special case of $p=3$, $m=2$, and $\underline{n}=(n_1,n_2)=(1,n)$. For $x^{(\alpha)}$ with $\alpha=(a,b)$ we will write $x_1^{a}x_2^{b}$. Then the explicit Lie brackets are given by \[ [D_H(x_1^ax_2^b), D_H(x_1^cx_2^d)]=f_{a,b,c,d}\cdot D_H(x_1^{a+c-1}x_2^{b+d-1}), \] where \[ f_{a,b,c,d}:= e_ae_d\cdot \binom{a+c-1}{a-1} \binom{b+d-1}{d-1} -e_be_c\cdot \binom{a+c-1}{c-1} \binom{b+d-1}{b-1}, \] with $e_k:= 1-\delta_{k,0}$.
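For example, for $(a,b)=(1,0)$ and $(c,d)=(1,1)$ we obtain
\[
f_{1,0,1,1}=e_1e_1\cdot \binom{1}{0}\binom{0}{0}-0=1,
\]
since $e_0=0$, so that $[D_H(x_1),D_H(x_1x_2)]=D_H(x_1)$. This agrees with the direct computation $D_H(x_1)=\partial_2$, $D_H(x_1x_2)=x_2\partial_2-x_1\partial_1$ and $[\partial_2,x_2\partial_2-x_1\partial_1]=\partial_2$.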
\\[0.2cm] Let us order the basis elements $D_H(x_1^ax_2^b)$ of $\mathfrak{g}$ with respect to the formal exponents as follows: \[ D_H(x_1)\prec D_H(x_1^2)\prec D_H(x_2)\prec D_H(x_1x_2)\prec D_H(x_1^2x_2) \prec \cdots , \] so that we can write a general inner derivation $D \in {\rm Inn}(\mathfrak{g})$ as \[ \alpha \cdot {\rm ad} (D_H(x_1))+\beta \cdot{\rm ad} (D_H(x_1^2)) +\gamma \cdot {\rm ad} (D_H(x_2)) + \delta \cdot {\rm ad} (D_H(x_1x_2)) +\varepsilon \cdot {\rm ad} (D_H(x_1^2x_2))+\cdots \] Using the Lie brackets, the matrix of $D$ with respect to this ordered basis is of the form \[ D= \left(\begin{array}{@{}ccc|ccc|ccc@{}} -\delta & -\gamma & & & & & & & \\ -\varepsilon & \delta & 0 & & & & & & \\ & 0 & \delta & -\gamma & & & & & \\ \hline & & \varepsilon & 0 & -\gamma & & & &\\ & & & \varepsilon & -\delta & 0 & & &\\ & & & & 0 & -\delta & & & \\ \hline \vdots & \vdots & \vdots & & & & \vdots & \vdots & \vdots \\ \hline 0 & & & & & & & & \\ 0 & & & & & & & & \\ 0 & 0 & 0 & & & & & & \\ \end{array}\right) \] For $n=1$ we have $\mathfrak{g}=H(2;(1,1))^{(2)}\cong \mathfrak{psl}_3(\mathbb{F})$, where we already know that ${\rm Out}(\mathfrak{g})\cong \mathfrak{g}$ is not solvable, see Proposition $\ref{3.1}$. So we may assume that $n>1$. Consider the linear maps $E,F,H\in {\rm End}(\mathfrak{g})$ defined by \begin{align*} E&\colon \mathfrak{g} \to \mathfrak{g}, \quad D_H(x_1^ax_2^b)\mapsto \delta_{a,2}\cdot D_H(x_2^{b+1});\\ F&\colon \mathfrak{g} \to \mathfrak{g}, \quad D_H(x_1^ax_2^b)\mapsto \delta_{a,0}\cdot D_H(x_1^2x_2^{b-1});\\ H&\colon \mathfrak{g} \to \mathfrak{g}, \quad D_H(x_1^ax_2^b)\mapsto (1-a)\cdot D_H(x_1^ax_2^b). \end{align*} We claim that $E,F,H\in {\rm Der}(\mathfrak{g})$ are derivations of $\mathfrak{g}$. This follows easily from a direct computation. Indeed, we have \begin{align*} E([D_H(x_1^ax_2^b),D_H(x_1^cx_2^d)]) & =f_{a,b,c,d}\cdot E(D_H(x_1^{a+c-1}x_2^{b+d-1}))\\ & =f_{a,b,c,d}\cdot \delta_{a+c-1,2}\cdot D_H(x_2^{b+d})\\ & = {{b+d}\choose{b}}\cdot \delta_{(a,c),(1,2)}\cdot D_H(x_2^{b+d})-{{b+d}\choose{b}}\cdot \delta_{(a,c),(2,1)}\cdot D_H(x_2^{b+d})\\ & = -\delta_{a,2} e_c\cdot {{b+d}\choose{b}} D_H(x_1^{c-1}x_2^{b+d})+ \delta_{c,2}e_a\cdot {{b+d}\choose{b}} D_H(x_1^{a-1}x_2^{b+d})\\ & = \delta_{a,2}\cdot f_{0,b+1,c,d}\cdot D_H(x_1^{c-1}x_2^{b+d}) + \delta_{c,2}\cdot f_{a,b,0,d+1}\cdot D_H(x_1^{a-1}x_2^{b+d}) \\[0.1cm] & = [\delta_{a,2}\cdot D_H(x_2^{b+1}),D_H(x_1^{c}x_2^{d})]+[D_H(x_1^{a}x_2^{b}),\delta_{c,2}\cdot D_H(x_2^{d+1})]\\[0.1cm] & = [E(D_H(x_1^{a}x_2^{b})),D_H(x_1^{c}x_2^{d})] + [D_H(x_1^{a}x_2^{b}),E(D_H(x_1^{c}x_2^{d}))].\\ \end{align*} Here we have used that $2=-1$ in $\mathbb{F}$ and Pascal's identity \[ \binom{b+d-1}{d-1}+\binom{b+d-1}{d}=\binom{b+d}{d}. \] A similar computation shows that also $F$ and $H$ are derivations. On the other hand, this follows anyway, because $F$ coincides with the restriction of the inner derivation ${\rm ad} (D_H(x_1^3))$ of the larger Lie algebra $H(2;(1,n))$, and $H$ coincides with the commutator $[E,F]$, and hence is a derivation. It is easy to see that we have \[ [E,H]=E=-2E,\; [F,H]=-F=2F,\; [E,F]=H. \] Thus $(E,F,H)$ forms an $\mathfrak{sl}_2(\mathbb{F})$-triple in ${\rm Der}(\mathfrak{g})$, i.e., the subalgebra $\mathfrak{s}$ of ${\rm Der}(\mathfrak{g})$ generated by $E,F,H$ is isomorphic to $\mathfrak{sl}_2(\mathbb{F})$.
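For example, on the basis elements $D_H(x_2^b)$ we have $E(D_H(x_2^b))=0$ and
\[
E(F(D_H(x_2^b)))=E(D_H(x_1^2x_2^{b-1}))=D_H(x_2^b),
\]
so that $[E,F]=EF-FE$ acts as the identity there, in agreement with the eigenvalue $1-a=1$ of $H$. Similarly, $[E,F]$ acts by $0$ on the basis elements with $a=1$ and by $-1$ on those with $a=2$, matching $H$ throughout.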
Now the matrix of $\lambda E+\mu F+\nu H$ with respect to the ordered basis of $\mathfrak{g}$ has the form \[ D= \left(\begin{array}{@{}ccc|ccc|ccc@{}} 0 & & & & & & & & \\ & -\nu & \mu & & & & & & \\ & \lambda & \nu & & & & & & \\ \hline & & & 0 & & & & &\\ & & & & -\nu & \mu & & &\\ & & & & \lambda & \nu & & & \\ \hline \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ & & & & & & & & \\ \end{array}\right) \] Comparing this with the form for the general inner derivation $D$ we conclude that the subalgebra $\mathfrak{s}$ satisfies $\mathfrak{s}\cap {\rm ad} (\mathfrak{g})=0$. Hence ${\rm Out}(\mathfrak{g})$ contains the subalgebra \[ (\mathfrak{s}+{\rm ad}(\mathfrak{g}))/{\rm ad}(\mathfrak{g})\cong \mathfrak{s}/\mathfrak{s}\cap {\rm ad}(\mathfrak{g})\cong \mathfrak{s}\cong \mathfrak{sl}_2(\mathbb{F}). \] Thus ${\rm Out}(\mathfrak{g})$ is not solvable. \end{proof} We can be more precise about the structure of the outer derivation algebra of $H(2;(1,n))^{(2)}$. Denote by $V(2)$ the natural representation of $\mathfrak{sl}_2(\mathbb{F})$. Then the Lie algebra $\mathfrak{sl}_2(\mathbb{F})\ltimes V(2)$ in characteristic $3$ has a basis $(e_1,\ldots ,e_5)$ with Lie brackets \begin{align*} [e_1,e_2] & = e_3, & [e_2,e_3] & = 2e_2, & [e_3,e_4] & = e_4,\\ [e_1,e_3] & = e_1, & [e_2,e_4] & = e_5, & [e_3,e_5] & = 2e_5, \\ [e_1,e_5] & = e_4. & \end{align*} \begin{thm}\label{3.5} Let $n>1$. Then the outer derivation algebra of $H(2;(1,n))^{(2)}$ in characteristic $3$ is isomorphic to $(\mathfrak{sl}_2(\mathbb{F})\ltimes V(2))\oplus \mathbb{F}^{n-1}$. \end{thm} \begin{proof} Let $\mathfrak{g}=H(2;(1,n))^{(2)}$. According to \cite{STR1}, Theorem $7.1.2$, $(3)$ part $(b)$ on page $358$ we have \[ {\rm Der}(\mathfrak{g}) \cong CH(2;(1,n))+ \sum_{i=1}^{n-1} \mathbb{F} \cdot \partial_2^{3^i}+ \mathbb{F}\cdot d, \] where $d$ is the derivation which we called $F$ in the proof of Theorem $\ref{3.4}$, and \[ CH(2;(1,n))=H(2;(1,n))\oplus \mathbb{F}\cdot (x_1\partial_1+x_2\partial_2). \] We have $\dim CH(2;(1,n))=3^{n+1}+2$, see \cite[page 273]{KS}, so that we obtain $\dim {\rm Der}(\mathfrak{g})=3^{n+1}+n+2$ and $\dim {\rm Out}(\mathfrak{g})=n+4$. Consider the linear maps given by \begin{align*} V\ &\colon\ \mathfrak{g}\to \mathfrak{g}, \quad D_{H}(x_1^ax_2^b) \mapsto \delta_{b,0} \cdot D_H(x_1^{a-1}x_2^{3^n-1}) \\ W \ &\colon \ \mathfrak{g}\to \mathfrak{g}, \quad D_{H}(x_1^ax_2^b) \mapsto \delta_{a+b,1}\cdot(-1)^a\cdot D_H(x_1^{a+1}x_2^{b+3^n-2}). \end{align*} They are derivations of $\mathfrak{g}$, because each of them is a restriction of inner derivations of the larger Lie algebra $H(2;(1,n))$ to $\mathfrak{g}$, namely of ${\rm ad}(D_H(x_2^{3^n}))$, respectively of ${\rm ad}(D_H(x_1^2x_2^{3^n-1}))$. By a computation we see that \[ [E,W]=V,\; [F,V]=W,\; [H,V]=V,\; [H,W]=2W, \] where $E,F,H$ are the derivations of $\mathfrak{g}$ given in the proof of Theorem $\ref{3.4}$. Hence the subalgebra $\mathfrak{t}$ of ${\rm Der}(\mathfrak{g})$ generated by $E,F,H,V,W$ is isomorphic to $\mathfrak{sl}_2(\mathbb{F})\ltimes V(2)$.
\\ The matrix of $\lambda E+\mu F+\nu H + \eta V+\xi W$ with respect to the ordered basis of $\mathfrak{g}$ is of the form \[ D= \left(\begin{array}{@{}ccc|ccc|ccc@{}} 0 & & & & & & & & \\ & -\nu & \mu & & & & & & \\ & \lambda & \nu & & & & & & \\ \hline & & & 0 & & & & &\\ & & & & -\nu & \mu & & &\\ & & & & \lambda & \nu & & & \\ \hline \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \hline -\xi & 0 & 0 & & & & \ddots & & \\ \eta & 0 & 0 & & & & & \ddots & \\ 0 & \eta & \xi & & & & & & \ddots \\ \end{array}\right) \] Comparing with the matrix $D$ of inner derivations we obtain $\mathfrak{t} \cap {\rm ad}(\mathfrak{g})=0$, so that ${\rm Out}(\mathfrak{g})$ has a subalgebra isomorphic to $\mathfrak{sl}_2(\mathbb{F})\ltimes V(2)$. We claim that the derivations $\partial_2^{3^i}$ belong to the center of ${\rm Out}(\mathfrak{g})$. Indeed, they commute pairwise, and they commute with $E,F,H$. Furthermore we have, using also \cite[Lemma 2.1.2(1), page 61]{STR1}, \begin{align*} [\partial_2^{3^i},V]&={\rm ad}(D_H(x_2^{3^n-3^i})),\\ [\partial_2^{3^i},W]&={\rm ad}(D_H(x_1^2x_2^{3^n-3^i-1})), \end{align*} for $i=1,\dots,n-1$. This implies that ${\rm Out}(\mathfrak{g})\cong \mathfrak{t} \oplus \mathbb{F}^{n-1}$, where $\mathfrak{t}\cong \mathfrak{sl}_2(\mathbb{F})\ltimes V(2)$. \end{proof} We will show now that the remaining cases for the Hamiltonian Lie algebras $H(2r;\underline{n})^{(2)}$ do not provide new counterexamples to the Zassenhaus conjecture for $p=3$. We have two cases, namely first $r>1$, and secondly $r=1$ and $1<n_1\le n_2$, where $\underline{n}=(n_1,n_2)\in \mathbb{N}^2$. Let $\mathfrak{h}_3(\mathbb{F})$ be the Heisenberg Lie algebra over $\mathbb{F}$ with basis $\{e_1,e_2,e_3\}$ and Lie bracket $[e_1,e_2]=e_3$. Recall that a Lie algebra over a field $\mathbb{F}$ is called {\em almost abelian} if it is nonabelian and has an abelian ideal of codimension $1$. Hence every almost abelian Lie algebra can be written as $\mathbb{F}^r\rtimes \mathbb{F}$, and is $2$-step solvable. \begin{thm}\label{3.6} Let $\mathfrak{g}$ be the Hamiltonian Lie algebra $H(2r;\underline{n})^{(2)}$ over an algebraically closed field $\mathbb{F}$ of characteristic $p=3$. Then, for $r>1$ the outer derivation algebra ${\rm Out}(\mathfrak{g})$ is $2$-step solvable, and for $r=1$, $1<n_1\le n_2$, it is $3$-step solvable. More precisely, we have \[ {\rm Out}(\mathfrak{g})\cong \begin{cases} (\mathfrak{h}_3(\mathbb{F})\rtimes \mathbb{F})\oplus \mathbb{F}^{\abs{n}-2}, \hspace*{0.13cm} \text{ if } r=1,\; 1<n_1\le n_2,\\ (\mathbb{F}^{2r+1}\rtimes \mathbb{F})\oplus \mathbb{F}^{\abs{n}-2r}, \text{ if } r>1, r\equiv 0 \bmod 3, \\ (\mathbb{F}^{2r+1}\rtimes \mathbb{F})\oplus \mathbb{F}^{\abs{n}-2r}, \text{ if } r>1, r\equiv 1 \bmod 3, \\ (\mathbb{F}^{2r}\rtimes \mathbb{F})\oplus \mathbb{F}^{\abs{n}-2r+1}, \text{ if } r>1, r\equiv 2 \bmod 3. \\ \end{cases} \] Here in the first case $\mathbb{F}$ acts on $\mathfrak{h}_3(\mathbb{F})$ by the derivation $D={\rm diag}(1,1,-1)$, in the second case $\mathbb{F}$ acts on $\mathbb{F}^{2r+1}$ by the derivation $D=\id$, in the third case $\mathbb{F}$ acts on $\mathbb{F}^{2r+1}$ by the derivation $D={\rm diag}(1,\ldots ,1,-1)$, and in the last case $\mathbb{F}$ acts on $\mathbb{F}^{2r}$ by the derivation $D=\id$. \end{thm} \begin{proof} Let us write $x^a$ for $x^{(a)}= x_1^{a_1}\cdots x_m^{a_m}$ and \[ \tau=(3^{n_1}-1,\dots,3^{n_m}-1)\in \mathbb{N}^m.
\] By~\cite[Theorem 7.1.2(3)(b), page 358]{STR1}, the structure of ${\rm Der} H(2r;\underline{n})^{(2)}$ is given by \[ {\rm Der} H(2r;\underline{n})^{(2)}\cong CH(2r;\underline{n})^{(2)} \oplus \sum_{i=1}^{2r}\sum_{0<j_i<n_i}\mathbb{F} \cdot \partial_i^{j_i}, \] where \[ CH(2r;\underline{n})^{(2)} = H(2r;\underline{n})\oplus \mathbb{F}\cdot \left(\sum_{i=1}^{2r}x_i\partial_i\right). \] So we obtain the following dimensions: \begin{align*} \dim {\rm Der} H(2r;\underline{n})^{(2)} & = \dim H(2r;\underline{n}) + 1 + |n|-2r \\ & = (3^{|n|}-2+2r+1) + 1 + |n| - 2r \\ & = 3^{|n|} + |n|, \end{align*} see also~\cite[page 273]{KS}. So we have \begin{align*} \dim {\rm Out} H(2r;\underline{n})^{(2)} & = \dim {\rm Der} H(2r;\underline{n})^{(2)}- \dim H(2r;\underline{n})^{(2)}\\ & =3^{|n|} + |n|-(3^{|n|}-2)\\ & = |n|+2. \end{align*} Consider the restrictions to $H(2r;\underline{n})^{(2)}$ of the derivations $D_H(x_i^{p^{n_i}})$ and $D_H(x^{\tau})$ of the larger Lie algebra $H(2r;\underline{n})$. They are given explicitly as the linear maps \begin{align*} A_i & \colon H(2r;\underline{n})^{(2)} \to H(2r;\underline{n})^{(2)},\quad D_H(x^a) \mapsto \delta_{a_i,0}\cdot \sigma(i)\cdot D_H(x^{a+(\tau_i-a_i)\varepsilon_i-\varepsilon_{i'}})\\ B & \colon H(2r;\underline{n})^{(2)} \to H(2r;\underline{n})^{(2)} ,\quad D_H(x^a) \mapsto \delta_{|a|,1}\cdot \sigma(k) \cdot D_H(x^{\tau-\varepsilon_k}) \end{align*} for $i=1,\dots, 2r$, and where $k\in\{1,\ldots ,2r \}$ is the only index such that $a_{k'}\neq 0$. Recall the definition of $k'$ before Theorem $\ref{3.4}$. It is clear that $A_i,B\in {\rm Der} H(2r;\underline{n})^{(2)}$. Moreover the derivations $C:= \sum_{i=1}^{2r}x_i\partial_i$ and $D_{i,j_i}:= \partial_i^{j_i}$ for $i=1,\dots ,2r$ and $0<j_i<n_i$ for each $i$ are explicitly given by \begin{align*} C & \colon H(2r;\underline{n})^{(2)} \to H(2r;\underline{n})^{(2)}, \quad D_H(x^a) \mapsto (|a|-2)\cdot D_H(x^a)\\ D_{i,j_i} & \colon H(2r;\underline{n})^{(2)} \to H(2r;\underline{n})^{(2)}, \quad D_H(x^a) \mapsto D_H(x^{a-p^{j_i}\varepsilon_i}). \end{align*} We claim that \[ \{A_1,\ldots , A_{2r},B,C,D_{1,1},\ldots, D_{1,n_1-1},\ldots ,D_{2r,1},\ldots ,D_{2r,n_{2r}-1}\} \] are representatives of a basis of ${\rm Out} H(2r;\underline{n})^{(2)}$. Its cardinality is given by $2r+2+\sum_{i=1}^{2r}n_i -2r=\abs{n}+2$. The arguments are the same as used in the proofs of Theorem \ref{3.4} and Theorem \ref{3.5}, i.e., one can easily check that the intersection of the linear span of these derivations and ${\rm Inn} H(2r;\underline{n})^{(2)}$ is zero. Indeed, this follows just from comparing the images of $D_H(x_i)$ for $i=1,\dots, 2r$, under a general inner derivation and $\sum_{i=1}^{2r}\alpha_i A_i + \beta B + \gamma C + \sum_{i=1}^{2r}\sum_{0<j_i<n_i} \delta_{i,j_i} D_{i,j_i}$. The projections onto ${\rm Out} H(2r;\underline{n})^{(2)}$ of $A_i$, $B$, $C$ and $D_{i,j_i}$ are then $|n|+2$ linearly independent elements, which therefore constitute a basis of ${\rm Out} H(2r;\underline{n})^{(2)}$. \\[0.2cm] It is straightforward to compute the Lie brackets between the representatives in ${\rm Der} H(2r;\underline{n})^{(2)}$ of the basis vectors of ${\rm Out} H(2r;\underline{n})^{(2)}$.
The nonzero brackets are given as follows, with $1\le i<i'\le 2r$, \begin{align*} [A_i,A_{i'}] & =\begin{cases} B & \text{ if } r=1 \\ {\rm ad} D_H(x_i^{\tau_i}x_{i'}^{\tau_{i'}}) & \text{ if } r>1 \end{cases} \\ [A_i,C] & = -A_i, \\ [A_i,D_{i,j_i}] & =-{\rm ad} D_H(x_i^{\tau_i-p^{j_i}+1}),\\ [B,C] & = (2r-1)B,\\ [B,D_{i,j_i}] & =-{\rm ad} D_H(x^{\tau-p^{j_i}\varepsilon_i}).\\ \end{align*} Note that $[B,C]=0$ for the case $r\equiv 2\bmod 3$. For $r>1$, the Lie brackets yield a direct sum of an almost abelian Lie algebra $\mathbb{F}^{2r+1}\rtimes \mathbb{F}$ (or $\mathbb{F}^{2r}\rtimes \mathbb{F}$ for $r\equiv 2\bmod 3$), and an abelian Lie algebra. Hence ${\rm Out}(\mathfrak{g})$ is $2$-step solvable in this case. For $r=1$ we have $[C,B]=-B$, $[C,A_i]=A_i$ for $i=1,2$, and $[A_1,A_2]=B$, so that \[ {\rm Out} H(2r;\underline{n})^{(2)}\cong {\rm span}(A_1,A_2,B,C) \oplus {\rm span}(D_{i,j_i}) \cong (\mathfrak{h}_3(\mathbb{F})\rtimes \mathbb{F})\oplus \mathbb{F}^{\abs{n}-2}. \] The ideal $\mathfrak{a}={\rm span}(A_1,A_2,B,C)$ satisfies $\mathfrak{a}^{(1)}={\rm span}(A_1,A_2,B)$, $\mathfrak{a}^{(2)}={\rm span}(B)$ and $\mathfrak{a}^{(3)}=0$. Thus ${\rm Out}(\mathfrak{g})$ is $3$-step solvable for $r=1$. \end{proof} \subsection{New type} There are several new simple modular Lie algebras over an algebraically closed field of characteristic $3$, which are not of classical or Cartan type. For example, the $1$-parameter family of $10$-dimensional Kostrikin algebras $L(\varepsilon)$, the Ermolaev algebras $R(\underline{n})$, the Frank algebras $Fr(n)$, and the Skryabin algebras $X(\underline{n})$ and $Y(\underline{n})$. Chan Nam Zung studied their properties in \cite{CNZ}, published in $1993$. He computed the outer derivation algebras of these algebras. It turns out that we do not obtain any new counterexample to the Zassenhaus conjecture. The following table gives a survey. \vspace*{0.5cm} \begin{center} \begin{tabular}{c|cccc} $\mathfrak{g}$ & conditions & $\dim (\mathfrak{g})$ & $\dim {\rm Out}(\mathfrak{g})$ & ${\rm Out}(\mathfrak{g})$ \\[2pt] \hline $L(\varepsilon)$ & $\varepsilon\in \mathbb{F}$ & $10$ & $0$ & abelian \\[4pt] $R(\underline{n})$ & $\underline{n}=(n_1,n_2)\in \mathbb{N}^2$ & $3^{\abs{n}+1}-1$ & $\abs{n}+1$ & abelian \\[4pt] $Fr(n)$ & $n\in \mathbb{N}$ & $2\cdot 3^{n+1}$ & $n-1$ & abelian \\[4pt] $X(\underline{n})$ & $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$ & $3^{\abs{n}+1}-4$ & $\abs{n}+1$ & solvable \\[4pt] $Y(\underline{n})$ & $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$ & $2\cdot 3^{\abs{n}+1}$ & $\abs{n}-3$ & abelian \\ \end{tabular} \end{center} \vspace*{0.5cm} However, there are three further infinite families of simple Skryabin algebras in characteristic $3$, denoted by $Z'(\underline{n})$, and $X_i(\underline{n},\omega)$, for $i=1,2$ of type $1$ and type $2$, see \cite{SKR}. Zung does not determine the outer derivation algebras of these families in \cite{CNZ}. He mentions that the determination for $Z'(\underline{n})$ is still an open problem. However, this was solved in $2001$ in \cite{KUM}. The outer derivation algebra is abelian. Unfortunately we could not find a result for the algebras $X_i(\underline{n},\omega)$. But we believe that the outer derivation algebra will be solvable, too. Let us explain the result of \cite{KUM} on the derivation algebra of $Z'(\underline{n})$.
In the construction of the Lie algebra $Z'(\underline{n})$, Skryabin introduces a Lie algebra $Z(\underline{n})$ of dimension $3^{\abs{n}+2}+1$ with \[ Z'(\underline{n})=[Z(\underline{n}),Z(\underline{n})]. \] Using this notation, the result of \cite{KUM} is as follows, see Corollary $1$ on page $3925$. \begin{prop} Let $\mathfrak{g}=Z'(\underline{n})$, with $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$. Then we have \[ {\rm Der}(\mathfrak{g})\cong \overline{\mathfrak{g}_{\overline{0}}}+Z(\underline{n}). \] \end{prop} Here $\mathfrak{g}_{\overline{0}}\cong W(3;\underline{n})$ and $\overline{\mathfrak{g}_{\overline{0}}}$ denotes the $p$-closure of ${\rm ad}(\mathfrak{g}_{\overline{0}})$ in ${\rm Der}(\mathfrak{g})$. This implies that ${\rm Out}(\mathfrak{g})$ is abelian, since \[ [{\rm Der}(\mathfrak{g}),{\rm Der}(\mathfrak{g})]\subseteq [Z(\underline{n}),Z(\underline{n})]=Z'(\underline{n})\cong {\rm ad} (\mathfrak{g}). \] Furthermore we have the $10$-dimensional simple Lie algebras $K(\varepsilon,\delta,\rho)$ in characteristic three of Kostrikin \cite{KOS}, which are deformations of the algebras $L(\varepsilon)$. Here it is known that all derivations are inner. All known simple Lie algebras of dimension $10$ for $p=3$ can be realized within the family $K(\varepsilon,\delta,\rho)$, but a classification up to isomorphism is still not known. \\[0.2cm] Finally we have the $8$-dimensional and the $29$-dimensional simple Lie algebras $Br_8$ and $Br_{29}$ of Brown \cite{BR3,BR2}. Both Lie algebras are central simple. A direct computation shows that the outer derivation algebra is abelian in each case. Surprisingly, $Br_8$ is not mentioned in later works on simple Lie algebras of characteristic three. Thus, for the convenience of the reader, let us give all Lie brackets of $Br_8$ explicitly, with respect to the basis \[ (x_1,\ldots ,x_8)=(K_{12},K_{21},K_{13},K_{31},K_{23},K_{32},H,K) \] introduced in \cite{BR3} on page $440$: \begin{align*} [x_1,x_{2}] & = x_{7}, & [x_2,x_{6}] & =2x_{4}, & [x_4,x_{5}] & = 2x_{2},\\ [x_1,x_{4}] & = 2x_{6}, & [x_2,x_{7}] & = 2x_{2}, & [x_4,x_{7}] & = x_{4}, \\ [x_1,x_{5}] & = x_{3}, & [x_2,x_{8}] & = 2x_{6}, & [x_5,x_{6}] & = x_{7}, \\ [x_1,x_7] & = x_1, & [x_3,x_{4}] & = 2x_{7}, & [x_5,x_{7}] & =x_{5},\\ [x_2,x_{3}] & = x_{5}, & [x_3,x_{6}] & = x_{1}, & [x_{5},x_{8}]& = x_{1},\\ [x_2,x_{5}] & = x_{8}, & [x_3,x_{7}] & = 2x_{3}, & [x_{6},x_{7}]& = 2x_{6}. \\ \end{align*} This algebra is central simple and non-restricted. Its outer derivation algebra is $2$-dimensional and abelian. Note that $Br_8$ is isomorphic to a deformed Hamiltonian algebra $H(2; (1,1), \omega)$, where $\omega =(1+x_1^{(2)}x_2^{(2)})(dx_1\wedge dx_2)$. For the family of simple deformed Hamiltonian algebras $H(2r;\underline{n},\omega)$ of dimension $p^{\abs{\underline{n}}}-1$ see \cite{STR1}, p. $340$--$341$. The following table gives a survey of the preceding discussion. \vspace*{0.5cm} \begin{center} \begin{tabular}{c|cccc} $\mathfrak{g}$ & conditions & $\dim (\mathfrak{g})$ & $\dim {\rm Out}(\mathfrak{g})$ & ${\rm Out}(\mathfrak{g})$ \\[2pt] \hline $Br_8$ & $-$ & $8$ & $2$ & abelian \\[4pt] $K(\varepsilon,\delta,\rho)$ & $\varepsilon,\delta,\rho\in \mathbb{F}$ & $10$ & $0$ & abelian \\[4pt] $Br_{29}$ & $-$ & $29$ & $0$ & abelian \\[4pt] $Z'(\underline{n})$ & $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$ & $3^{\abs{n}+2}-2$ & $\abs{n}$ & abelian \\[4pt] $X_1(\underline{n},\omega)$ & $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$ & $3^{\abs{n}+1}-3$ & ? & ?
The following table gives a survey of the preceding discussion. \vspace*{0.5cm} \begin{center} \begin{tabular}{c|cccc} $\mathfrak{g}$ & conditions & $\dim (\mathfrak{g})$ & $\dim {\rm Out}(\mathfrak{g})$ & ${\rm Out}(\mathfrak{g})$ \\[2pt] \hline $Br_8$ & $-$ & $8$ & $2$ & abelian \\[4pt] $K(\varepsilon,\delta,\rho)$ & $\varepsilon,\delta,\rho\in \mathbb{F}$ & $10$ & $0$ & abelian \\[4pt] $Br_{29}$ & $-$ & $29$ & $0$ & abelian \\[4pt] $Z'(\underline{n})$ & $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$ & $3^{\abs{n}+2}-2$ & $\abs{n}$ & abelian \\[4pt] $X_1(\underline{n},\omega)$ & $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$ & $3^{\abs{n}+1}-3$ & ? & ? \\[4pt] $X_2(\underline{n},\omega)$ & $\underline{n}=(n_1,n_2,n_3)\in \mathbb{N}^3$ & $3^{\abs{n}+1}-1$ & ? & ? \\ \end{tabular} \end{center} \vspace*{0.5cm} There are other simple Lie algebras for $p=3$ which we have not studied here, e.g., deformed Hamiltonian and special Lie algebras of Cartan type for $p=3$, or other families for which no explicit realization is known. \begin{rem} We also studied the Zassenhaus conjecture for simple Lie algebras over an algebraically closed field of characteristic $p=2$. Here it has been known since $1955$ that a simple constituent $J$ of dimension $26$ of the Lie algebra $\mathfrak{f}_4$ provides a counterexample, see \cite{SCT}, and \cite{BU67} for references. Note that $J$ is given as the simple ideal in $\mathfrak{f}_4$ generated by the short roots. We tried to find an infinite family of simple Lie algebras such that the algebra $J$ is the lowest-dimensional member. One possibility is the family of simple Lie algebras $\mathfrak{si}(\mathfrak{sle}(n))$ of dimension $2^{2n-1}-2^{n-1}-2$ for $n\ge 3$, see \cite{KOL}, Lemma $2.2.2$. This algebra is denoted by $\mathfrak{sh}(2n;\underline{m})$ in Purslow's thesis \cite{PUR}, Theorem $5.4.3$. We used Purslow's construction for $n=4$, see \cite{PUR}, pp.~$138$--$141$, to compute the outer derivation algebra of this $118$-dimensional algebra. It is a solvable Lie algebra of derived length $5$. So it is not a counterexample, but the derived length is higher than in all other known cases. For $n=5$ the algebra has dimension $494$, but so far we have not been able to compute its derivation algebra. \\[0.2cm] We also checked the table of known simple Lie algebras up to dimension $20$ by B.~Eick \cite{EIK}, but found no counterexample there. There are various families of simple Lie algebras of new type, and it seems to be very complicated to obtain an overview of the Zassenhaus conjecture here. So far, none of the families we have been able to study has yielded a new counterexample. \end{rem} \section*{Acknowledgments} Dietrich Burde is supported by the Austrian Science Foun\-da\-tion FWF, grant I 3248 and grant P 33811. Pilar P\'aez-Guill\'an is supported by the Austrian Science Foun\-da\-tion FWF, grant P 33811. We thank Bettina Eick and Tobias Moede for help with some computations, and Anton Mellit for providing us access to the CoCalc server for GAP computations.
\section{Introduction} \label{sec:intro} A \emph{polytope} $P$ is the convex hull of finitely many points in $\mathbb{R}^d$. A polytope $P$ is called a lattice (resp. rational) polytope if all vertices are contained in $\mathbb{Z}^d$ (resp. $\mathbb{Q}^d$). The set of lattice points in a rational polytope $P$ is an important subject in enumerative combinatorics (\cite{bec-rob, st-ec1}). In particular, the function $\mathbb{Z}_{>0}\ni t\longmapsto L_P(t):=\#(t P\cap\mathbb{Z}^d)$ is known to be a quasi-polynomial \cite{ehrhart}. In other words, there exist a positive integer $\rho$ and polynomials $f_1(x), f_2(x), \dots, f_\rho(x)\in\mathbb{Q}[x]$ such that \begin{equation} L_P(t)= \begin{cases} f_1(t), & \mbox{ if }t\equiv 1\mod\rho,\\ f_2(t), & \mbox{ if }t\equiv 2\mod\rho,\\ & \vdots\\ f_\rho(t), & \mbox{ if }t\equiv \rho\mod\rho, \end{cases} \end{equation} where $\rho$ is called the \emph{period}, and $f_1(x), \dots, f_\rho(x)$ are called the \emph{constituents}. ($f_k(x)$ is called the $k$-th constituent. We identify the $\rho$-th constituent $f_\rho(x)$ with the $0$-th one $f_0(x)$.) $L_P(t)$ is called the \emph{Ehrhart quasi-polynomial} of $P$. It is obvious that a multiple of a period of $L_P(t)$ is again a period of $L_P(t)$. Define the \emph{minimal period} as the smallest possible period; then any period is a multiple of the minimal one. It is known that the minimal period divides the $\operatorname{LCM}$ of the denominators of the coordinates of the vertices of $P$. For simplicity, we mainly consider the minimal period. (However, this is not essential for our purposes. See Proposition \ref{prop:indgcd} and Remark \ref{rem:indsym}.) Generally, the constituents of $L_P(t)$ are mutually distinct. However, in many examples, some of the constituents turn out to be identical and the number of distinct constituents becomes strictly less than the minimal period $\rho$. Typical examples are as follows. \begin{example} \label{ex:01} Let $P_1=\frac{1}{9}\cdot [0, 1]^3$ be the $3$-cube of size $\frac{1}{9}$, \[ P_2=(\frac{5}{9}, \frac{5}{9}, \frac{2}{3})^t+ \operatorname{Conv}\{\pm e_i\mid i=1, 2, 3\} \] the octahedron translated by a rational vector (where $e_1, e_2, e_3$ denotes the standard basis of $\mathbb{R}^3$), and $P_3=(\frac{1}{9}, \frac{2}{9}, \frac{1}{3})^t+[0, 1]^3$ the unit cube translated by a rational vector. The Ehrhart quasi-polynomials $L_{P_1}(t)$, $L_{P_2}(t)$, and $L_{P_3}(t)$ have the same minimal period $\rho=9$. The constituents of $L_{P_1}(t)$ are $f_k(t)=\left(\frac{t+9-k}{9}\right)^3$ ($k=0, 1, \dots, 8$). Hence, they are mutually distinct. On the other hand, the constituents of $L_{P_2}(t)$ and $L_{P_3}(t)$ are as follows. \[ \begin{split} L_{P_2}(t)&= \begin{cases} \frac{4}{3}t^3-\frac{4}{3}t, & (t\equiv 1, 8\mod 9),\\ \frac{4}{3}t^3+\frac{2}{3}t, & (t\equiv 2, 7\mod 9),\\ \frac{4}{3}t^3+t^2+\frac{2}{3}t, & (t\equiv 3, 6\mod 9),\\ \frac{4}{3}t^3-\frac{1}{3}t, & (t\equiv 4, 5\mod 9),\\ \frac{4}{3}t^3+2t^2+\frac{8}{3}t+1, & (t\equiv 9\mod 9), \end{cases} \\ & \\ L_{P_3}(t)&= \begin{cases} t^3 & (t\equiv 1, 2, 4, 5, 7, 8\mod 9),\\ t^3+t^2 & (t\equiv 3, 6\mod 9),\\ (t+1)^3 & (t\equiv 9\mod 9). \end{cases} \end{split} \] Observe that in $L_{P_2}(t)$ and $L_{P_3}(t)$ some of the constituents coincide. \end{example}
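The constituents in Example \ref{ex:01} are easily confirmed by brute force. The following small Python sketch (our own verification code, not part of the cited sources) counts the lattice points of $tP_3$ directly with exact rational arithmetic, using that $tP_3=tc+[0,t]^3$ is a box, and compares the result with the constituents above:
\begin{verbatim}
from fractions import Fraction as F
from math import floor, ceil

c = (F(1, 9), F(2, 9), F(1, 3))          # translation vector of P_3

def L_P3(t):
    # t*P_3 = t*c + [0, t]^3, so count integers coordinatewise
    n = 1
    for ci in c:
        n *= floor(t * ci + t) - ceil(t * ci) + 1
    return n

for t in range(1, 37):
    k = t % 9
    if k == 0:
        assert L_P3(t) == (t + 1) ** 3
    elif k in (3, 6):
        assert L_P3(t) == t ** 3 + t ** 2
    else:
        assert L_P3(t) == t ** 3
\end{verbatim}
An analogous test with the $\ell^1$-ball membership condition confirms the constituents of $L_{P_2}(t)$.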
The motivation of this paper is to study the relationship between such coincidences of constituents and the shape of the polytope $P$. In order to formalize this kind of ``coincidence of constituents'', we introduce the following notion. \begin{definition} Let $L(t)$ be a quasi-polynomial with period $\rho$ and constituents $f_1, \dots, f_{\rho-1}, f_\rho (=f_0)$. \begin{itemize} \item[(1)] We say that $L(t)$ is \emph{symmetric} if $f_k=f_{\rho-k}$ for $0\leq k\leq\rho$. \item[(2)] We say that $L(t)$ has the \emph{$\operatorname{GCD}$-property} if $f_k=f_\ell$ whenever $\operatorname{GCD}(\rho, k)=\operatorname{GCD}(\rho, \ell)$. \end{itemize} \end{definition} Clearly, if a quasi-polynomial satisfies the $\operatorname{GCD}$-property, then it is symmetric, since $\operatorname{GCD}(\rho, k)=\operatorname{GCD}(\rho, \rho-k)$. \begin{remark} Quasi-polynomials with the $\operatorname{GCD}$-property appear in the theory of hyperplane arrangements. See \S \ref{sec:hyp} for more information. \end{remark} In Example \ref{ex:01}, $L_{P_2}(t)$ is symmetric and $L_{P_3}(t)$ satisfies the $\operatorname{GCD}$-property. Surprisingly, these properties of Ehrhart quasi-polynomials are closely related to the facts that ``$P_2$ is centrally symmetric'' and ``$P_3$ is a zonotope'', respectively. In fact, the main purpose of this paper is to establish the correspondence between the two columns of the following table. \begin{equation} \begin{array}{c|c} \mbox{Shape of $P$}&\mbox{Property of $L_P(t)$}\\ \hline \mbox{General polytope}&\mbox{General quasi-polynomial}\\ \cup & \cup\\ \mbox{Centrally symmetric}&\mbox{Symmetric}\\ \cup & \cup\\ \mbox{Zonotope}&\mbox{$\operatorname{GCD}$-property} \end{array} \end{equation} The main results (Theorem \ref{charsym} and Theorem \ref{charzono}) prove that for \emph{almost integral polytopes} (rational translates of lattice polytopes), the properties of the left column imply those of the right. Furthermore, the left column can also be characterized by the properties of the right column. The paper is organized as follows. In \S \ref{sec:eqp}, after introducing basic notions, we recall a formula by Ardila--Beck--McWhirter which expresses the Ehrhart quasi-polynomial of an almost integral zonotope. Applying the formula, we prove that such an Ehrhart quasi-polynomial satisfies the $\operatorname{GCD}$-property. In \S \ref{sec:tlpe}, we introduce the \emph{translated lattice point enumerator}, \begin{equation} L_{(P, c)}(t):=\#((c+t P)\cap\mathbb{Z}^d), \end{equation} where $P$ is a lattice polytope in $\mathbb{R}^d$ and $c\in\mathbb{R}^d$. We will show that $L_{(P, c)}(t)$ is a polynomial in $t\in\mathbb{Z}_{>0}$ (Theorem \ref{tlpe}). We will also prove that the constituents of the Ehrhart quasi-polynomial of an almost integral polytope can be described in terms of $L_{(P, c)}(t)$ (Corollary \ref{cor:consti}), which easily yields that if $P$ is a centrally symmetric almost integral polytope, then $L_P(t)$ is symmetric (Corollary \ref{sym1}). In \S \ref{sec:symm}, we will prove the first main result: a lattice polytope $P$ is centrally symmetric if and only if the Ehrhart quasi-polynomial $L_{c+P}(t)$ is symmetric for any rational vector $c$ (Theorem \ref{charsym}). The ``only if'' part has been proved in \S \ref{sec:tlpe}. The remaining part is done by proving that if $P$ is not centrally symmetric, then there exists a rational vector $c$ such that $L_{(P, c)}(t)\neq L_{(P, -c)}(t)$. For this, we use Minkowski's result that a polytope is characterized by the normal vectors and volumes of its facets. In \S \ref{sec:zono}, we will prove the second main result: a lattice polytope $P$ is a zonotope if and only if the Ehrhart quasi-polynomial $L_{c+P}(t)$ satisfies the $\operatorname{GCD}$-property for any rational vector $c$ (Theorem \ref{charzono}). The ``only if'' part has been proved in \S \ref{sec:eqp}.
The remaining part is proved by an involved argument using McMullen's characterization of zonotopes in terms of the central symmetry of faces. What we actually prove is that if $P$ is not a zonotope, then there exists a rational vector $c$ with odd denominators such that $L_{(P, c)}(t)\neq L_{(P, 2c)}(t)$, which implies that $L_{c+P}(t)$ does not satisfy the $\operatorname{GCD}$-property. In \S \ref{discuss}, we will discuss related problems. First, we discuss minimal periods of Ehrhart quasi-polynomials of almost integral polytopes, and then ask about the relationship between Ehrhart quasi-polynomials of almost integral zonotopes and characteristic quasi-polynomials of hyperplane arrangements. We pose several related questions. \begin{proposition} \label{prop:indgcd} Let $Q$ be a quasi-polynomial with the minimal period $\rho_0$ and the $i$-th constituent $f_i$. Then $Q$ has the $\operatorname{GCD}$-property for $\rho_0$ if and only if $Q$ has the $\operatorname{GCD}$-property for $k\rho_0$ for some $k\in\mathbb{Z}_{>0}$. \end{proposition} \begin{proof} Suppose that $Q$ has the $\operatorname{GCD}$-property for the minimal period $\rho_0$. Note that $\operatorname{GCD}(\rho_0, i)=\operatorname{GCD}(\rho_0, \operatorname{GCD}(k\rho_0, i))$. Hence if $\operatorname{GCD}(k\rho_0, i)=\operatorname{GCD}(k\rho_0, j)$, then $\operatorname{GCD}(\rho_0, i)=\operatorname{GCD}(\rho_0, j)$. Therefore, $Q$ satisfies the $\operatorname{GCD}$-property for $k\rho_0$. Conversely, assume that $Q$ satisfies the $\operatorname{GCD}$-property for $k\rho_0$, and suppose $\operatorname{GCD}(i, \rho_0)=\operatorname{GCD}(j, \rho_0)=d$. We shall prove $f_i=f_d=f_j$; for this it suffices to treat the case where the smaller index divides $\rho_0$, since the general case then follows by applying it to the pairs $(d, i)$ and $(d, j)$. So let $1\le i < j \le \rho_0$ with $i\mid \rho_0$ and $\operatorname{GCD}(j,\rho_0)=i$. We claim that there exists an $m\in \mathbb{Z}$ such that $\operatorname{GCD}(i,k\rho_0)=\operatorname{GCD}(j+m\rho_0,k\rho_0)$. Since $\rho_0$ is a period of $Q$, the $\operatorname{GCD}$-property for $k\rho_0$ then gives $f_i=f_{j+m\rho_0}=f_j$. Define \[m=\frac{\operatorname{rad} (k)}{\operatorname{rad} (\operatorname{GCD}(\frac{j}{i}, k))}, \] where $\operatorname{rad} (a)$ denotes the product of all primes $q\mid a$. From $\operatorname{GCD}(\frac{j}{i}+m\frac{\rho_0}{i}, \frac{\rho_0}{i})= \operatorname{GCD}(\frac{j}{i}, \frac{\rho_0}{i})=1$ it follows that $\operatorname{GCD}(\frac{j}{i}+m\frac{\rho_0}{i}, k\frac{\rho_0}{i})=\operatorname{GCD}(\frac{j}{i}+m\frac{\rho_0}{i}, k)$. Now let $r$ be a prime with $r\mid k$. By the definition of $m$ we have $r\mid m$ if and only if $r$ does not divide $\frac{j}{i}$. In either case $r$ does not divide $\frac{j}{i}+m\frac{\rho_0}{i}$: if $r\mid \frac{j}{i}$, then $r\nmid m$ and, since $\operatorname{GCD}(\frac{j}{i},\frac{\rho_0}{i})=1$, also $r\nmid \frac{\rho_0}{i}$; if $r\nmid \frac{j}{i}$, then $r\mid m$, so $\frac{j}{i}+m\frac{\rho_0}{i}\equiv \frac{j}{i}\not\equiv 0 \bmod r$. Hence $\operatorname{GCD}(\frac{j}{i}+m\frac{\rho_0}{i}, k)=1$, and therefore also $\operatorname{GCD}(\frac{j}{i}+m\frac{\rho_0}{i}, k\frac{\rho_0}{i})=1$. Multiplying by $i$ yields $\operatorname{GCD}(j+m\rho_0, k\rho_0)=i=\operatorname{GCD}(i, k\rho_0)$. \end{proof} \begin{remark} \label{rem:indsym} Similarly, being symmetric is independent of the choice of the period. \end{remark} \section{Ehrhart quasi-polynomials for rational polytopes} \label{sec:eqp} \subsection{Notations} A polytope $P\subset\mathbb{R}^d$ is called \emph{centrally symmetric} if there exists $c\in\mathbb{R}^d$ such that $P=c+(-P)$. Then $\frac{c}{2}\in P$ is the center of $P$. A \emph{zonotope} $\mathcal{Z}(u_1, \dots, u_n)$ spanned by vectors $u_1,\ldots , u_n\in \mathbb{R}^d$ is the Minkowski sum of the line segments $[0, u_i]$. In other words, \[ \mathcal{Z}(u_1, \dots, u_n)= \{\lambda_1 u_1+\dots+\lambda_n u_n \mid 0\leq \lambda_i\leq 1, i=1, \dots, n\}. \] It is easily seen that zonotopes are centrally symmetric.
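Indeed, the central symmetry is visible on the generating point set: complementation $S\mapsto \{u_1,\dots,u_n\}\setminus S$ maps the subset sum $\sum_{i\in S}u_i$ to $c-\sum_{i\in S}u_i$ with $c=u_1+\dots+u_n$, and $\mathcal{Z}(u_1,\dots,u_n)$ is the convex hull of these subset sums. A minimal Python sketch of this check (the generators below are our own toy choice):
\begin{verbatim}
from itertools import chain, combinations

U = [(1, 0), (0, 1), (1, 2)]           # toy generators of a zonotope in R^2
c = tuple(map(sum, zip(*U)))           # c = u_1 + ... + u_n

def vsum(S):
    # sum of a tuple of vectors; the empty sum is the origin
    return tuple(map(sum, zip(*S))) if S else (0,) * len(U[0])

pts = {vsum(S) for S in
       chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))}

# Z(U) = conv(pts), and pts is invariant under the reflection x -> c - x:
assert pts == {tuple(ci - si for ci, si in zip(c, s)) for s in pts}
\end{verbatim}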
Let $P\subset\mathbb{R}^d$ be a polytope. We denote the minimal affine subspace containing $P$ by $\operatorname{aff}(P)$. We also denote by $\operatorname{aff}_0(P)$ the linear subspace of $\mathbb{R}^d$ which is parallel to $\operatorname{aff}(P)$ and contains the origin. The dimension of a polytope $P$ is defined as $\dim \operatorname{aff}_0(P)$ and is denoted by $\dim P$. Let $P$ be a lattice polytope and let $X\subset P$ be an $m$-dimensional face. Then $\operatorname{aff}(X)\cap\mathbb{Z}^d\simeq\mathbb{Z}^m$. The \emph{relative volume} of the face $X$ is the volume of $X$ normalized in such a way that the unit cube of the lattice $\operatorname{aff}(X)\cap \mathbb{Z}^d\simeq\mathbb{Z}^m$ has volume $1$. We denote the relative volume of $X$ by $\operatorname{relvol}(X)$. We also denote the $k$-dimensional Euclidean volume by $\operatorname{vol}_k$. Observe that if $P\subset \mathbb{R}^d$ is a $d$-polytope (that is, a polytope of dimension $d$), then $\operatorname{relvol}(P)=\operatorname{vol}_d(P)$. If $P$ is a rational polytope of dimension $m\le d$, then the leading coefficient of every constituent (the coefficient in degree $m$) of $L_P(t)$ is the relative volume $\operatorname{relvol}(P)$ of $P$. \begin{definition} A polytope $P\subset \mathbb{R}^d$ is called \textit{almost integral} if there exist a lattice polytope $P'\subset \mathbb{R}^d$ and a translation vector $c\in \mathbb{Q}^d$ such that $P=c+P'$. \end{definition} A period of the Ehrhart quasi-polynomial $L_P$ of an almost integral polytope $P=c+P'$ translated by $c=(c_1, \dots, c_d)\in \mathbb{Q}^d$ is $\operatorname{den}(c):=\operatorname{lcm}\{\operatorname{den}(c_i)\mid i=1,\ldots,d\}$, where $\operatorname{den}(c_i)$ denotes the denominator of the reduced fraction of $c_i$. It is expected that $\operatorname{den}(c)$ is the minimal period of $L_P$. See \S \ref{sec:min} for a related discussion. \subsection{Ehrhart quasi-polynomials for almost integral zonotopes} The next proposition by Ardila, Beck, and McWhirter describes the Ehrhart quasi-polynomial of almost integral zonotopes. \begin{proposition}\cite[Proposition 3.1]{ard} \label{aiz} Let $U\subset \mathbb{Z}^d$ be a finite set of integer vectors and $c\in \mathbb{Q}^d$ be a rational vector. Then the Ehrhart quasi-polynomial of the almost integral zonotope $c+\mathcal{Z} (U)$ equals \begin{equation} L_{c+\mathcal{Z} (U)}(t)=\sum_{\substack{W \subseteq U \\ W \textrm{ lin. indep.}}} \chi_W(t)\cdot \operatorname{relvol}(\mathcal{Z} (W)) \cdot t^{|W|}, \end{equation} where \[\chi_W(t)=\begin{cases} 1, & \text{if } (tc+\operatorname{aff} (W))\cap \mathbb{Z}^d \neq \varnothing \\ 0, & \text{otherwise} .\end{cases}\] \end{proposition} (Here $\operatorname{aff}(W)$ is the linear span of $W$; compare the proof of Theorem \ref{zono1} below.) Let $\rho_0=\operatorname{den}(c)$. Then $\rho_0$ is a period of $L_{c+\mathcal{Z}(U)}(t)$, and Proposition \ref{aiz} says that the $k$-th constituent of $L_{c+\mathcal{Z}(U)}(t)$ is \begin{equation} \label{k-const} f_k(t)= \sum_{\substack{W \subseteq U \\ W \textrm{ lin. indep.}}} \chi_W(k)\cdot \operatorname{relvol}(\mathcal{Z} (W)) \cdot t^{|W|}. \end{equation}
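Before stating the first result, let us illustrate Proposition \ref{aiz} in the smallest interesting case. The following Python sketch (our own toy computation; the choice $U=\{e_1,e_2\}$ and $c=(\frac{1}{2},\frac{1}{4})$ is ours) compares the formula with a brute-force lattice point count for the translated unit square $c+\mathcal{Z}(e_1,e_2)$:
\begin{verbatim}
from fractions import Fraction as F
from math import floor, ceil

c = (F(1, 2), F(1, 4))     # translation of the unit square Z(e1, e2)

def brute_force(t):
    # t(c + Z(e1,e2)) = tc + [0,t]^2: count integers coordinatewise
    n = 1
    for ci in c:
        n *= floor(t * ci + t) - ceil(t * ci) + 1
    return n

def abm(t):
    # chi_W(t) = 1 iff tc + span(W) meets Z^2; all relvol(Z(W)) = 1
    total = t * t                        # W = {e1, e2}: chi = 1 always
    if t * c[1] % 1 == 0:                # W = {e1}: need t*c_2 integral
        total += t
    if t * c[0] % 1 == 0:                # W = {e2}: need t*c_1 integral
        total += t
    if t * c[0] % 1 == 0 and t * c[1] % 1 == 0:
        total += 1                       # W = {}: need tc integral
    return total

assert all(brute_force(t) == abm(t) for t in range(1, 41))
\end{verbatim}
In this example $\chi_W(t)$ depends only on the divisibility of $t$ by $\operatorname{den}(c_2)=4$ resp. $\operatorname{den}(c_1)=2$, i.e. only on $\operatorname{GCD}(\rho_0, t)$ with $\rho_0=4$, in accordance with the theorem below.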
The first result of this paper is the following. \begin{theorem}\label{zono1} Let $P=c+\mathcal{Z}(U)\subset \mathbb{R}^d$ be an almost integral zonotope. Then $L_P$ satisfies the $\operatorname{GCD}$-property. \end{theorem} \begin{proof} In order to prove the $\operatorname{GCD}$-property for the Ehrhart quasi-polynomial of an almost integral zonotope, it is enough to show that the function $\chi_W(t)$ in Proposition \ref{aiz} satisfies the $\operatorname{GCD}$-property. Let $W=\{u_1,\ldots, u_k\}\subseteq U$ be a linearly independent subset. Denote by \[\langle W \rangle=\left(\sum_{i=1}^k \mathbb{R} u_i\right) \cap \mathbb{Z}^d\] the intersection of the linear subspace generated by $W$ with $\mathbb{Z}^d$. By extending a $\mathbb{Z}$-basis of $\langle W\rangle$ to one of $\mathbb{Z}^d$, we obtain $u_{k+1}, \dots, u_d\in\mathbb{Z}^d$ such that $\mathbb{Z}^d=\langle W\rangle\oplus \bigoplus_{i=k+1}^d\mathbb{Z} u_i$. Decompose $c=(c_1, \dots, c_d)\in\mathbb{Q}^d$ as $c=c'+a_{k+1}u_{k+1}+\dots+a_d u_d$, where $c'$ lies in the $\mathbb{Q}$-span of $W$ and $a_i\in\mathbb{Q}$. Then $(tc+\operatorname{aff} (W))\cap \mathbb{Z}^d \neq \varnothing$ if and only if $ta_{k+1}, \dots, ta_d\in\mathbb{Z}$, which in turn is equivalent to $t$ being divisible by $\operatorname{lcm}(\operatorname{den}(a_{k+1}), \dots, \operatorname{den}(a_d))$. Note that $\operatorname{lcm}(\operatorname{den}(a_{k+1}), \dots, \operatorname{den}(a_d))$ is a divisor of $\rho_0=\operatorname{den}(c)$. Thus $\chi_W(t)$ depends only on $\operatorname{GCD}(\rho_0, t)$. \end{proof} \section{Translated lattice point enumerator} \label{sec:tlpe} In order to verify symmetry or the $\operatorname{GCD}$-property for quasi-polynomials, we need to compare different constituents of a quasi-polynomial. Thus, we describe the constituents by introducing a new function. \begin{definition} Let $P\subset \mathbb{R}^d$ be a polytope and $c\in\mathbb{R}^d$. The function $L_{(P, c)}(t)=\#((c+tP)\cap \mathbb{Z}^d)$ for $t\in\mathbb{Z}_{>0}$ is called the \emph{translated lattice point enumerator}. \end{definition} \begin{theorem} \label{tlpe} \begin{itemize} \item[$(1)$] If $P\subset \mathbb{R}^d$ is a lattice polytope of dimension $d$ and $c\in \mathbb{R}^d$, then $L_{(P, c)}(t)\in\mathbb{Q}[t]$. Furthermore, the leading coefficient of $L_{(P, c)}(t)$ is $\operatorname{relvol}(P)$. \item[$(2)$] Let $P\subset \mathbb{R}^d$ be a lattice polytope and $c\in \mathbb{R}^d$. Then $L_{(P, c)}(t)\in\mathbb{Q}[t]$. Furthermore, if $L_{(P, c)}(t)\neq 0$, then it is a polynomial of degree $\dim P$ with leading coefficient $\operatorname{relvol}(P)$. \end{itemize} \end{theorem} \begin{proof} We first prove that $(1)$ implies $(2)$. Suppose that $\dim P=k<d$. If $(c+\operatorname{aff}(P))\cap\mathbb{Z}^d=\varnothing$, then clearly $L_{(P,c)}=0$. Now suppose that $(c+\operatorname{aff}(P))\cap\mathbb{Z}^d\neq\varnothing$ and let $c'\in (c+\operatorname{aff}(P))\cap\mathbb{Z}^d$. Then \[P+c=(P+c')+(c-c'),\] where $P+c'$ is a lattice polytope with respect to the lattice $(c+\operatorname{aff}(P))\cap\mathbb{Z}^d$ and $(c-c')\in \operatorname{aff}_0(P)$ is a translation vector. Since $c'+P$ is of full dimension in $\operatorname{aff}(c'+P)$, which is isomorphic to $\mathbb{R}^k$, we can apply $(1)$ to obtain $(2)$. Now we prove $(1)$. We define \[\begin{aligned} L(t)&=\bigl((tP+[0,c])\backslash (c+tP)\bigr)\cap \mathbb{Z}^d \quad \text{``lost points'',} \\ N(t)&=\bigl((tP+[0,c])\backslash tP\bigr)\cap \mathbb{Z}^d \quad \text{``new points'',} \\\end{aligned} \] where $[0,c]$ denotes the segment from the origin to $c$. Let $l_{c+P}(t)=\# L(t)$ denote the number of lost and $n_{c+P}(t)=\#N(t)$ the number of newly obtained lattice points when translating the dilate $tP$ by $c$. Observe that $(tP+[0,c])\cap\mathbb{Z}^d$ is the disjoint union of $(c+tP)\cap \mathbb{Z}^d$ and $L(t)$, and also the disjoint union of $tP\cap \mathbb{Z}^d$ and $N(t)$.
Counting the lattice points of $tP+[0,c]$ in these two ways, we obtain \begin{equation} \label{eq:lpc} L_{(P,c)}(t)+l_{c+P}(t)=L_P(t)+n_{c+P}(t). \end{equation} It is therefore sufficient to show that $l_{c+P}$ and $n_{c+P}$ are polynomials of degree at most $d-1$. This is done by induction on $d$. \\ For dimension $d=0$ there exists just one polytope $P=\{0\}$ with the one translation vector $c=0$; hence $L_{(P,c)}(t)=L_P(t)=1$. Next, let $P\subset \mathbb{R}^d$ be a lattice polytope of dimension $d$ and $c\in \mathbb{R}^d$. If $c=0$, then the translated lattice point enumerator equals the Ehrhart polynomial, $L_{(P,c)}(t)=L_P(t)$. If $c\neq0$, extend $c$ to a basis $(b_1,\ldots, b_{d-1},c)$ of $\mathbb{R}^d$ with the corresponding dual basis $(b_1^*,\ldots, b_{d-1}^*,c^*)$ of $(\mathbb{R}^d)^*$. We call a face $F$ of $P$ a \emph{lower face} if there exists a $v\in (\mathbb{R}^d)^*$ whose last coordinate (with respect to this dual basis) is negative, $v_d<0$, such that $F=\{x\in P\mid v\cdot x \text{ is maximal}\}$. Likewise, a face $F$ of $P$ is called an \emph{upper face} if there exists a $v\in (\mathbb{R}^d)^*$ with positive last coordinate, $v_d>0$, such that $F=\{x\in P\mid v\cdot x \text{ is maximal}\}$. For every facet $F$ that is either an upper or a lower face we have $\mathbb{Z}^d=\cup ((kc+\operatorname{aff}(F))\cap \mathbb{Z}^d)$, where the union is taken over all $k\in \mathbb{R}$ such that $(kc+\operatorname{aff}(F))\cap \mathbb{Z}^d\neq \varnothing$. This construction yields \[N(t)=\bigcup_{\hat{c}, \text{ upper faces } F}(\hat{c}+F)\cap\mathbb{Z}^d, \] where the union is taken over all upper faces $F$ (of the dilate $tP$) and translation vectors $\hat{c}=sc\in \mathbb{R}^d$ with $0<s\le 1$, such that $\bigl(\hat{c}+\operatorname{aff}(F)\bigr)\cap \mathbb{Z}^d\neq\varnothing$. By the inclusion--exclusion principle, \[n_{c+P}(t)=\sum_{\substack{\text{inclusion-exclusion} \\ \text{upper faces }F}}\sum_{\hat{c}}L_{(F,\hat{c})}(t),\] where $\hat{c}$ runs over the translation vectors described above. In fact, $\hat{c}+\operatorname{aff} (F)$ is isomorphic to $\mathbb{R}^{\dim (F)}$ and $\bigl(\hat{c}+\operatorname{aff} (F)\bigr)\cap \mathbb{Z}^d\cong\mathbb{Z}^{\dim (F)}$. Choose $c'\in\bigl(\hat{c}+\operatorname{aff} (F)\bigr)\cap \mathbb{Z}^d$; then $F+\hat{c}=(F+c')+(\hat{c}-c')$. Therefore, by the induction hypothesis, $L_{(F,\hat{c})}(t)=L_{(F+c',\hat{c}-c')}(t)$ is a polynomial of degree at most $d-1$. In particular, $n_{c+P}(t)$ is a polynomial with $\deg\leq d-1$. In the same way we observe \[L(t)=\bigcup_{\hat{c}, \text{ lower faces }F}(\hat{c}+F)\cap\mathbb{Z}^d,\] where the union is taken over all lower faces $F$ and translation vectors $\hat{c}=sc\in \mathbb{Q}^d$ with $0\le s< 1$, such that $\bigl(\hat{c}+\operatorname{aff}(F)\bigr)\cap \mathbb{Z}^d\neq\varnothing$; hence $l_{c+P}(t)$ is a polynomial of degree at most $d-1$ as well. Since the leading coefficient of $L_P(t)$ is $\operatorname{relvol}(P)$ and $l_{c+P}$, $n_{c+P}$ have degree at most $d-1$, the leading coefficient of $L_{(P, c)}(t)$ is $\operatorname{relvol}(P)$, too. \end{proof} \begin{example} Let $P=\operatorname{conv}\{(1,0)^t,(0,1)^t,(0,2)^t,(1,3)^t,(2,1)^t\}\subset \mathbb{R}^2$ be a lattice polytope and $c=(\frac{3}{4},\frac{3}{4})^t\in \mathbb{Q}^2$ a rational translation vector. Then the upper faces of $P$ with respect to $c$ are $[(1,3)^t,(2,1)^t],\{(1,3)^t\},\{(2,1)^t\}$, and the lower faces are $[(0,2)^t,(0,1)^t],[(0,1)^t,(1,0)^t]$ together with their corresponding vertices, as illustrated in Figure \ref{fig:exUL}.
The set of lost points is $L(1)=\bigl((P+[0,c])\backslash (c+P)\bigr)\cap \mathbb{Z}^2=\{(1,0)^t,(0,1)^t,(0,2)^t,(1,1)^t\}$ and the set of newly obtained points is $N(1)=\bigl((P+[0,c])\backslash P\bigr)\cap \mathbb{Z}^2=\{(2,2)^t,(2,3)^t\}$, as can be seen in Figure \ref{fig:exNL}. In this manner we obtain the following numbers of lattice points. \\ \begin{center} \begin{tabular}{|l|c|c|c|} \hline $t$ & 0 & 1 & 2 \\ \hline $L_{(P,c)}(t)$ & 0 & 5 & 17 \\ \hline $L_P(t)$ & 1 & 7 & 20 \\ \hline $l_{c+P}(t)$ & 1 & 4 & 7 \\ \hline $n_{c+P}(t)$ & 0 & 2 & 4 \\ \hline \end{tabular} \end{center} Interpolating, we obtain the polynomials \[\begin{aligned} L_{(P,c)}(t)& =\frac{7}{2}t^2+\frac{3}{2}t & L_P(t)&=\frac{7}{2}t^2+\frac{5}{2}t+1 \\ l_{c+P}(t)&=3t+1 & n_{c+P}(t)&=2t. \end{aligned}\] \end{example} \begin{figure}[htbp] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[scale=1.2] \filldraw[fill=gray!20!white, draw=black, very thin] (0,1)--(0,2)--(1,3)--(2,1)--(1,0)--cycle; \fill[black] (0,0) circle (0.05); \fill[black] (0,1) circle (0.05); \fill[black] (0,2) circle (0.05); \fill[black] (0,3) circle (0.05); \fill[black] (0,4) circle (0.05); \fill[black] (1,0) circle (0.05); \fill[black] (1,1) circle (0.05); \fill[black] (1,2) circle (0.05); \fill[black] (1,3) circle (0.05); \fill[black] (1,4) circle (0.05); \fill[black] (2,0) circle (0.05); \fill[black] (2,1) circle (0.05); \fill[black] (2,2) circle (0.05); \fill[black] (2,3) circle (0.05); \fill[black] (2,4) circle (0.05); \fill[black] (3,0) circle (0.05); \fill[black] (3,1) circle (0.05); \fill[black] (3,2) circle (0.05); \fill[black] (3,3) circle (0.05); \fill[black] (3,4) circle (0.05); \draw[->] (-0.5,0)--(3.25,0); \draw[->] (0,-0.5)--(0,4.25); \draw[ultra thick] (1,3)--(2,1)node[pos=0.3,right]{upper faces}; \draw[ultra thick] (1,0)--(0,1)--(0,2) node[midway, left]{lower faces}; \draw[->,red,thick] (0,0)--(0.75,0.75) node[right]{$c$}; \end{tikzpicture} \caption{Upper and lower facets of $P$.} \label{fig:exUL} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[scale=1.2] \fill[black] (0,0) circle (0.05); \fill[black] (0,1) circle (0.05); \fill[black] (0,2) circle (0.05); \fill[black] (0,3) circle (0.05); \fill[black] (0,4) circle (0.05); \fill[black] (1,0) circle (0.05); \fill[black] (1,1) circle (0.05); \fill[black] (1,2) circle (0.05); \fill[black] (1,3) circle (0.05); \fill[black] (1,4) circle (0.05); \fill[black] (2,0) circle (0.05); \fill[black] (2,1) circle (0.05); \fill[black] (2,2) circle (0.05); \fill[black] (2,3) circle (0.05); \fill[black] (2,4) circle (0.05); \fill[black] (3,0) circle (0.05); \fill[black] (3,1) circle (0.05); \fill[black] (3,2) circle (0.05); \fill[black] (3,3) circle (0.05); \fill[black] (3,4) circle (0.05); \draw[->] (-0.5,0)--(3.25,0); \draw[->] (0,-0.5)--(0,4.25); \draw[-, green, thin] (1.33,3.33)--(2.33,1.33); \draw[-, green, thin] (1.66,3.66)--(2.66,1.66); \draw[-, red, thin] (1,0)--(0,1)--(0,2); \draw[-, red, thin] (1.5,0.5)--(0.5,1.5)--(0.5,2.5); \filldraw[fill=green, draw=black] (2,2) circle (2pt) ; \filldraw[fill=green, draw=black] (2,3) circle (2pt) node[above right]{$N$}; \filldraw[fill=red, draw=black] (1,0) circle (2pt) ; \filldraw[fill=red, draw=black] (0,1) circle (2pt) ; \filldraw[fill=red, draw=black] (0,2) circle (2pt) ; \filldraw[fill=red, draw=black] (1,1) circle (2pt) node[above right]{$L$}; \end{tikzpicture} \caption{The sets $N$ and $L$ for the polytope $P$.} \label{fig:exNL} \end{subfigure} \caption{New and lost points} \end{figure}
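The table above can be reproduced mechanically. The following Python sketch (our own verification code) tests membership in the dilates $tP$ and $c+tP$ with exact rational arithmetic, using that a point lies in $tP$ if and only if it lies on the left of every edge of the counterclockwise vertex cycle:
\begin{verbatim}
from fractions import Fraction as F

V = [(1, 0), (2, 1), (1, 3), (0, 2), (0, 1)]  # vertices of P, counterclockwise
c = (F(3, 4), F(3, 4))

def in_dilate(p, t, shift=(0, 0)):
    # p lies in shift + tP iff p - shift is on the left of every edge of tP
    x, y = p[0] - shift[0], p[1] - shift[1]
    for (ax, ay), (bx, by) in zip(V, V[1:] + V[:1]):
        if (t*bx - t*ax) * (y - t*ay) - (t*by - t*ay) * (x - t*ax) < 0:
            return False
    return True

def count(t, shift=(0, 0)):
    # tP lies in [0, 3t]^2; the box below also covers the shift by c
    return sum(in_dilate((x, y), t, shift)
               for x in range(0, 3 * t + 2) for y in range(0, 3 * t + 2))

for t, LPc, LP in [(1, 5, 7), (2, 17, 20)]:
    assert count(t, c) == LPc and count(t) == LP
\end{verbatim}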
We can express the constituents of the Ehrhart quasi-polynomial by using the translated lattice point enumerator. \begin{corollary} \label{cor:consti} Let $P\subset \mathbb{R}^d$ be a lattice polytope and $c\in\mathbb{Q}^d$. Then the $k$-th constituent of $L_{c+P}(t)$ is $L_{(P, kc)}(t)$. \end{corollary} \begin{proof} Let $f_k$ be the $k$-th constituent of the Ehrhart quasi-polynomial $L_{c+P}$ and let $\rho=\operatorname{den}(c)$, so that $L_{c+P}$ has period $\rho$. Since a translation by a vector in $\mathbb{Z}^d$ does not affect the number of lattice points, we get, for $t\equiv k\mod\rho$, \[\begin{aligned} f_k(t)&=\#(t(P+c)\cap \mathbb{Z}^d)\\ &=\#((tP+tc)\cap \mathbb{Z}^d)\\ &=\#((tP+kc)\cap \mathbb{Z}^d)\\ &=L_{(P,kc)}(t). \end{aligned}\] \end{proof} We close this section by concluding that the Ehrhart quasi-polynomial of every almost integral centrally symmetric polytope is symmetric. \begin{corollary}\label{sym1} Let $P\subset \mathbb{R}^d$ be a centrally symmetric lattice polytope. Then for any $c\in\mathbb{Q}^d$, the Ehrhart quasi-polynomial $L_{c+P}(t)$ is symmetric. \end{corollary} \begin{proof} Let $f_0,\ldots ,f_{\rho-1}$ be the constituents of $L_{c+P}(t)$. Since $P$ is centrally symmetric, say $P=v+(-P)$ with necessarily $v\in\mathbb{Z}^d$, the translates $kc+tP$ and $-kc+tP$ contain the same number of lattice points (apply the bijection $z\mapsto tv-z$ of $\mathbb{Z}^d$). For $k\in \{1,\ldots,\rho-1\}$ we get \[\begin{aligned} f_k(t)&=L_{(P,kc)}(t)\\ &=\#((kc+tP)\cap\mathbb{Z}^d) \\ &=\#((-kc+tP)\cap \mathbb{Z}^d) \\ &=\#(((\rho-k)c+tP)\cap \mathbb{Z}^d)\\ &=L_{(P,(\rho-k)c)}(t)=f_{\rho-k}(t). \end{aligned}\] Therefore, $L_{c+P}(t)$ is symmetric. \end{proof} \section{Characterizing centrally symmetric polytopes} \label{sec:symm} Recall that a polytope $P$ is characterized up to translation by the normal vectors and the $(d-1)$-volumes of its facets (this fact was first proved by Minkowski \cite{min}; see also \cite{gov, gru, sch}). From this fact the following lemma follows. \begin{lemma}\label{facets} Let $P\subset \mathbb{R}^d$ be a $d$-polytope. Then $P$ is centrally symmetric if and only if for each facet $F$ there exists a parallel facet $F^{\operatorname{op}}$ such that $\operatorname{vol}_{d-1}(F)=\operatorname{vol}_{d-1}(F^{\operatorname{op}})$, where $\operatorname{vol}_{d-1}$ is the $(d-1)$-dimensional Euclidean volume. \end{lemma} \begin{proof} If $P$ is centrally symmetric, then clearly a facet $F$ and its opposite facet $F^{\operatorname{op}}$ have the same volume. Conversely, if $\operatorname{vol}_{d-1}(F)=\operatorname{vol}_{d-1}(F^{\operatorname{op}})$ holds for all facets, then $P$ and $-P$ have the same data of normal vectors and $(d-1)$-volumes. It follows from Minkowski's result that $P$ is centrally symmetric. \end{proof} The next result is the first main result of this article: centrally symmetric polytopes are characterized by the symmetry of Ehrhart quasi-polynomials. \begin{theorem}\label{charsym} Let $P\subset \mathbb{R}^d$ be a lattice polytope. Then the following are equivalent. \begin{itemize} \item[$(i)$] $P$ is centrally symmetric. \item[$(ii)$] $L_{c+P}(t)$ is symmetric for any $c\in\mathbb{Q}^d$. \end{itemize} \end{theorem} \begin{proof} That $(i)$ implies $(ii)$ was proved in Corollary \ref{sym1}. Now let $P\subset \mathbb{R}^d$ be a non-centrally symmetric polytope of dimension $m$. We will prove that there exists a translation vector $c\in \mathbb{Q}^d$ such that $L_{(P,c)}(t)\neq L_{(P,-c)}(t)$ for some $t\in \mathbb{N}$.
Since $P$ is not centrally symmetric, by Lemma \ref{facets} we find a facet $F$ that has either no parallel facet or a parallel facet $F^{\operatorname{op}}$ with a different $(m-1)$-dimensional Euclidean volume, $\operatorname{vol}_{m-1}(F)\neq\operatorname{vol}_{m-1}(F^{\operatorname{op}})$. In the first case we regard the parallel facet as having volume $0$. Since $F$ and $F^{\operatorname{op}}$ are parallel, the unit cubes in $\operatorname{aff}(F)\cap \mathbb{Z}^d$ and $\operatorname{aff}(F^{\operatorname{op}})\cap \mathbb{Z}^d$ have the same $(m-1)$-dimensional Euclidean volume. On the other hand, $F$ and $F^{\operatorname{op}}$ have different volumes, so their relative volumes are different. This means for the Ehrhart polynomials $L_F(t)=c_{m-1}t^{m-1}+\ldots +c_0$ and $L_{F^{\operatorname{op}}}(t)=c'_{m-1}t^{m-1}+\ldots +c'_0$ that $c_{m-1}\neq c'_{m-1}$. Without loss of generality, we may assume that $c_{m-1}>c'_{m-1}$. Let $c\in \operatorname{aff}_0(F)$ be a nonzero vector. By choosing $c$ generically, we may assume $(\operatorname{aff}(X)+c)\cap\mathbb{Z}^d=\varnothing$ for every proper face $X\neq F, F^{\operatorname{op}}$. Then $(tX+c)\cap\mathbb{Z}^d=\varnothing$ for any positive integer $t$. From the argument above, the leading coefficients of $L_{(F, c)}(t)$ and $L_{(F^{\operatorname{op}}, -c)}(t)$ are different. Hence, we have $L_{(F, c)}(t)\neq L_{(F^{\operatorname{op}}, -c)}(t)$, and $L_{(X, c)}(t)=0$ for all other proper faces $X$. Next let $c'\in\operatorname{aff}_0(P)$ be constructed by inclining $c$ slightly such that $F$ becomes an upper face (see Figure \ref{fig:sym}), $F^{\operatorname{op}}$ becomes a lower face, and every other face retains its status (in particular, $c'+\partial P$ does not contain lattice points). Then we have \begin{equation} \label{prime} \begin{split} L_{(P, c')}(t)&=L_{(P, c)}(t)-L_{(F^{\operatorname{op}}, c)}(t),\\ L_{(P, -c')}(t)&=L_{(P, -c)}(t)-L_{(F, -c)}(t). \end{split} \end{equation} Next let $c''\in \mathbb{Q}^d$ be obtained by inclining $c$ slightly inward (Figure \ref{fig:sym}), such that $F$ becomes a lower face and $F^{\operatorname{op}}$ becomes an upper face. Then we have \begin{equation} \label{second} \begin{split} L_{(P, c'')}(t)&=L_{(P, c)}(t)-L_{(F, c)}(t),\\ L_{(P, -c'')}(t)&=L_{(P, -c)}(t)-L_{(F^{\operatorname{op}}, -c)}(t). \end{split} \end{equation} Now suppose that both of the equations \begin{itemize} \item[(a)] $L_{(P, c')}(t)=L_{(P, -c')}(t)$ and \item[(b)] $L_{(P, c'')}(t)=L_{(P, -c'')}(t)$ \end{itemize} hold. Then (\ref{prime}) and (\ref{second}) yield \begin{equation} \begin{split} L_{(P, c)}(t)-L_{(P, -c)}(t) &= L_{(F^{\operatorname{op}}, c)}(t)-L_{(F, -c)}(t)\\ &= L_{(F, c)}(t)-L_{(F^{\operatorname{op}}, -c)}(t). \end{split} \end{equation} Recall that the leading coefficients of $L_{(F, \pm c)}(t)$ and $L_{(F^{\operatorname{op}}, \pm c)}(t)$ are $c_{m-1}$ and $c'_{m-1}$, respectively. Therefore, the leading coefficient of $L_{(F^{\operatorname{op}}, c)}(t)-L_{(F, -c)}(t)$ is negative, while that of $L_{(F, c)}(t)-L_{(F^{\operatorname{op}}, -c)}(t)$ is positive. This is a contradiction. \end{proof} In the proof of the above theorem we may suppose that (a) does not hold, i.e., $L_{(P, c')}(t)\neq L_{(P, -c')}(t)$. Then $c'$ can be perturbed slightly: there exists a small open neighborhood $U\subset\operatorname{aff}_0(P)$ of $c'$ such that any replacement of $c'$ by an element of $U\cap\mathbb{Q}^d$ works similarly. Thus we have the following.
\begin{corollary} \label{openness} Let $P\subset \mathbb{R}^d$ be a non-centrally symmetric lattice polytope. Then there exists an open set $U\subset \operatorname{aff}_0(P)$ such that $L_{(P,c)}\neq L_{(P,-c)}$ for every translation vector $c\in U$. \end{corollary} \begin{figure}[htbp] \centering \begin{tikzpicture}[scale=1.2] \filldraw[fill=gray!20!white, draw=black, very thin] (1,1)--(0,2)--(2,4)--(3,4)--(5,2)--(4,1)--cycle ; \fill[black] (0,0) circle (0.05); \fill[black] (0,1) circle (0.05); \fill[black] (0,2) circle (0.05); \fill[black] (0,3) circle (0.05); \fill[black] (0,4) circle (0.05); \fill[black] (0,5) circle (0.05); \fill[black] (1,0) circle (0.05); \fill[black] (1,1) circle (0.05); \fill[black] (1,2) circle (0.05); \fill[black] (1,3) circle (0.05); \fill[black] (1,4) circle (0.05); \fill[black] (1,5) circle (0.05); \fill[black] (2,0) circle (0.05); \fill[black] (2,1) circle (0.05); \fill[black] (2,2) circle (0.05); \fill[black] (2,3) circle (0.05); \fill[black] (2,4) circle (0.05); \fill[black] (2,5) circle (0.05); \fill[black] (3,0) circle (0.05); \fill[black] (3,1) circle (0.05); \fill[black] (3,2) circle (0.05); \fill[black] (3,3) circle (0.05); \fill[black] (3,4) circle (0.05); \fill[black] (3,5) circle (0.05); \fill[black] (4,0) circle (0.05); \fill[black] (4,1) circle (0.05); \fill[black] (4,2) circle (0.05); \fill[black] (4,3) circle (0.05); \fill[black] (4,4) circle (0.05); \fill[black] (4,5) circle (0.05); \fill[black] (5,0) circle (0.05); \fill[black] (5,1) circle (0.05); \fill[black] (5,2) circle (0.05); \fill[black] (5,3) circle (0.05); \fill[black] (5,4) circle (0.05); \fill[black] (5,5) circle (0.05); \draw (2.5,1) node[below]{$F$}; \draw (2.5,4) node[above]{$F^{\operatorname{op}}$}; \draw [->](4,1)--(4.5,1) node[right]{$c$}; \draw [->](4,1)--(4.5,0.8) node[below right]{$c'$}; \draw [->](4,1)--(4.5,1.2) node[above right]{$c''$}; \end{tikzpicture} \caption{Translation by $c,c'$ and $c''$.} \label{fig:sym} \end{figure} \section{Characterizing zonotopes} \label{sec:zono} This section deals with a characterization of zonotopes in a way similar to Theorem \ref{charsym}. More specifically, if the Ehrhart quasi-polynomial of every rational shift of a lattice polytope satisfies the $\operatorname{GCD}$-property, then the polytope is a zonotope. Notice that for non-centrally symmetric polytopes we have proved the statement already: $\operatorname{GCD} (1,\rho)=\operatorname{GCD}(\rho-1,\rho)$ for all periods $\rho\ge 1$, but by Theorem \ref{charsym} we find a $c\in \mathbb{Q}^d$ such that $L_{(P,c)}\neq L_{(P,-c)}$. In order to construct a translation vector for centrally symmetric polytopes that are not zonotopes, we consider almost constant functions. \begin{definition} A function $f\colon \mathbb{R} \to \mathbb{R}$ is called \textit{almost locally constant} if it is locally constant except for a discrete point set. A function $f\colon \mathbb{R} \to \mathbb{R}$ is called \textit{almost constant} if it is constant except for a discrete point set. \end{definition} Let $P\subset \mathbb{R}^d$ be a polytope and $c\in \mathbb{R}^d$. Consider the function $L^P_c\colon \mathbb{R} \to \mathbb{R}$ defined by $x\mapsto \#\left((xc+P)\cap \mathbb{Z}^d\right)$. It is an almost locally constant function. If furthermore $c\in\mathbb{Q}^d$, then $L^P_c(x)$ is periodic with period $\rho_0=\operatorname{den}(c)$. \begin{proposition} \label{nongcd} Let $P\subset\mathbb{R}^d$ be a lattice $d$-polytope and $c\in\mathbb{Q}^d$.
If $L^P_c(x)$ is not almost constant, then there exists $c'\in\mathbb{Q}^d$ such that $L_{c'+P}(t)$ does not satisfy the $\operatorname{GCD}$-property. \end{proposition} \begin{proof} Since $L^P_{kc}(x/k)=L^P_c(x)$ for $k\in\mathbb{Z}_{>0}$, we may assume that $c\in\mathbb{Z}^d$. Consider the function $\delta(x):=L^P_c(x)-L^P_c(2x)$. Then $\delta(x)$ is clearly an almost locally constant function which vanishes near $x=0$; that is, there exists $\varepsilon>0$ such that $L^P_c(x)-L^P_c(2x)$ is almost constantly $0$ on the interval $(0, \varepsilon)$. We shall prove that $\delta(x)$ is not almost constant. Suppose the contrary. Then $\delta(x)=0$ except on a discrete point set. By induction on $n$, we have $L^P_c(x)=L^P_c(2^nx)$ for almost all $x\in(0, \varepsilon)$. Since $L^P_c(x)$ is periodic, $L^P_c(x)$ is almost constant, which contradicts the assumption. Hence $\delta(x)$ is not almost constant, and there exists an interval $(a, b)\subset\mathbb{R}$ such that $\delta(x)\neq 0$ for all $x\in (a, b)$. Let $x\in (a, b)\cap \mathbb{Q}$ be a rational number with odd denominator. Then $L^P_c(x)\neq L^P_c(2x)$, which is equivalent to $L_{(P, xc)}(1)\neq L_{(P, 2xc)}(1)$. Consider the polytope $xc+P$. Since $xc\in\mathbb{Q}^d$ has odd denominators, the period $\rho_0=\operatorname{den}(xc)$ is an odd integer, so $\operatorname{GCD}(1, \rho_0)=\operatorname{GCD}(2, \rho_0)=1$. However, the argument above implies $L_{(P, xc)}(t)\neq L_{(P, 2xc)}(t)$, so the first and the second constituents of the Ehrhart quasi-polynomial $L_{xc+P}(t)$ are different. Hence $L_{xc+P}(t)$ does not have the $\operatorname{GCD}$-property. \end{proof} \begin{example} Let $P_1=\operatorname{conv}(\begin{pmatrix} 0 \\ 0 \end{pmatrix},\begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \end{pmatrix},\begin{pmatrix} 1 \\ 1 \end{pmatrix})\subset \mathbb{R}^2$ and $c_1=\begin{pmatrix} \frac{1}{2} \\ \frac{1}{4} \end{pmatrix}\in \mathbb{Q}^2$. Then \[L_{c_1}^{P_1}(x)=\begin{cases} 4 & \text{if } x\in 4\mathbb{Z}_{\ge 0} \\ 2 & \text{if } x\in 2+4\mathbb{Z}_{\ge 0} \\ 1 & \text{else} \end{cases}\] is almost constant, as illustrated in Figure \ref{fig:example1}. Thus, it is not possible to choose an $x\in \mathbb{R}$ such that $xc_1\in \mathbb{Q}^2$ has only coordinates with odd denominators and $L_{c_1}^{P_1}(x)\neq L_{c_1}^{P_1}(2x)$. In contrast to $P_1$, consider the $3$-dimensional cross-polytope $P_2=\operatorname{conv}(\pm e_i\mid i=1,2,3)\subset \mathbb{R}^3$ and $c_2=\frac{1}{3}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}\in \mathbb{Q}^3$. We observe that $L_{c_2}^{P_2}$ (see Figure \ref{fig:example2}) is not almost constant: \[L_{c_2}^{P_2}(x)=\begin{cases} 7 & \text{if } x\in 3\mathbb{Z}_{\ge 0} \\ 1 & \text{if } x\in (k,k+1] \text{ for } k\in 3\mathbb{Z}_{\ge 0} \text{ or } x\in [k-1,k) \text{ for } k\in 3\mathbb{Z}_{> 0}\\ 0 & \text{else} \end{cases}.\] Therefore, we can take for example $x=\frac{3}{5}$, for which $L_{xc_2+P_2}$ does not satisfy the $\operatorname{GCD}$-property.
Namely, the Ehrhart quasi-polynomial of the octahedron $P_2$ translated by $xc_2=(\frac{1}{5},\frac{1}{5},\frac{1}{5})^t$ equals \[L_{xc_2+P_2}(n)=\begin{cases} \frac{4}{3} n^3+2 n^2 + \frac{8}{3} n + 1 & \text{if } n \equiv 0 \mod 5 \\ \frac{4}{3} n^3- \frac{1}{3} n & \text{if } n \equiv 1 \mod 5 \\ \frac{4}{3} n^3- \frac{4}{3} n & \text{if } n \equiv 2 \mod 5 \\ \frac{4}{3} n^3- \frac{4}{3} n & \text{if } n \equiv 3 \mod 5 \\ \frac{4}{3} n^3- \frac{1}{3} n & \text{if } n \equiv 4 \mod 5 \\ \end{cases}, \] for which the first and second constituents are different, although $\operatorname{GCD} (1,5)=\operatorname{GCD} (2,5)$. \end{example} \begin{figure}[htbp] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[scale=1.2] \draw[->] (-0.5,0)--(4.25,0); \draw[->] (0,-0.5)--(0,4.25); \filldraw[black] (0,4) circle (2pt); \filldraw[black] (1,2) circle (2pt); \filldraw[black] (2,4) circle (2pt); \filldraw[black] (3,2) circle (2pt); \filldraw[black] (4,4) circle (2pt); \draw[line width=2.5pt] (0,1)--(4,1); \filldraw[fill=white, draw=black] (0,1) circle (2pt) ; \filldraw[fill=white, draw=black] (1,1) circle (2pt) ; \filldraw[fill=white, draw=black] (2,1) circle (2pt) ; \filldraw[fill=white, draw=black] (3,1) circle (2pt) ; \filldraw[fill=white, draw=black] (4,1) circle (2pt) ; \draw (0,1) node[left]{$1$}; \draw (0,2) node[left]{$2$}; \draw (0,3) node[left]{$3$}; \draw (0,4) node[left]{$4$}; \draw (1,0) node[below]{$2$}; \draw (2,0) node[below]{$4$}; \draw (3,0) node[below]{$6$}; \draw (4,0) node[below]{$8$}; \end{tikzpicture} \caption{$L^{P_1}_{c_1}$} \label{fig:example1} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[scale=1.2] \filldraw[black] (0,4) circle (2pt); \filldraw[black] (3,4) circle (2pt); \draw[->] (-0.5,0)--(4.25,0); \draw[->] (0,-0.5)--(0,4.25); \draw [line width=2.5pt] (0,0.57)--(1,0.57); \draw [line width=2.5pt] (1,0)--(2,0); \draw [line width=2.5pt] (2,0.57)--(4,0.57); \draw [line width=2.5pt] (4,0)--(4.2,0); \filldraw[fill=white, draw=black] (0,0.57) circle (2pt) ; \filldraw[fill=white, draw=black] (3,0.57) circle (2pt) ; \draw (0,0.57) node[left]{$1$}; \draw (0,1.14) node[left]{$2$}; \draw (0,1.71) node[left]{$3$}; \draw (0,2.28) node[left]{$4$}; \draw (0,2.85) node[left]{$5$}; \draw (0,3.42) node[left]{$6$}; \draw (0,4) node[left]{$7$}; \draw (1,0) node[below]{$1$}; \draw (2,0) node[below]{$2$}; \draw (3,0) node[below]{$3$}; \draw (4,0) node[below]{$4$}; \filldraw[black] (1,0.57) circle (2pt); \filldraw[black] (2,0.57) circle (2pt); \filldraw[black] (4,0.57) circle (2pt); \filldraw[fill=white, draw=black] (1,0) circle (2pt) ; \filldraw[fill=white, draw=black] (2,0) circle (2pt) ; \filldraw[fill=white, draw=black] (4,0) circle (2pt) ; \end{tikzpicture} \caption{$L^{P_2}_{c_2}$} \label{fig:example2} \end{subfigure} \caption{Examples of almost locally constant functions} \end{figure}
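This quasi-polynomial is also easy to confirm numerically. A minimal Python sketch (our own check) counts the points of $\mathbb{Z}^3$ in the dilates $n(xc_2+P_2)$, i.e. the integer points $z$ with $\lVert z-n\cdot(\frac{1}{5},\frac{1}{5},\frac{1}{5})\rVert_1\le n$, and compares them with the five constituents:
\begin{verbatim}
from fractions import Fraction as F

c = (F(1, 5),) * 3                        # the translation vector xc_2

def L(n):
    # z must lie in the box [-n, 2n]^3, since n*c lies in [0, n/5]^3
    return sum(1
               for x in range(-n, 2 * n + 1)
               for y in range(-n, 2 * n + 1)
               for z in range(-n, 2 * n + 1)
               if abs(x - n * c[0]) + abs(y - n * c[1])
                  + abs(z - n * c[2]) <= n)

f = {0: lambda n: F(4, 3) * n**3 + 2 * n**2 + F(8, 3) * n + 1,
     1: lambda n: F(4, 3) * n**3 - F(1, 3) * n,
     2: lambda n: F(4, 3) * n**3 - F(4, 3) * n,
     3: lambda n: F(4, 3) * n**3 - F(4, 3) * n,
     4: lambda n: F(4, 3) * n**3 - F(1, 3) * n}

assert all(L(n) == f[n % 5](n) for n in range(1, 11))
\end{verbatim}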
We will use the following characterization of a zonotope. \begin{proposition}\cite{mc} \label{gov} Let $P\subset \mathbb{R}^d$ be a polytope of dimension $m\le d$. Then $P$ is a zonotope if and only if all faces of dimension $j$ are centrally symmetric for some $2\le j \le m-2$. \end{proposition} The second main result of this paper is the following. \begin{theorem} \label{charzono} Let $P\subset\mathbb{R}^d$ be a lattice polytope. Then the following are equivalent. \begin{itemize} \item[$(i)$] $P$ is a zonotope. \item[$(ii)$] $L_{c+P}(t)$ satisfies the $\operatorname{GCD}$-property for any $c\in\mathbb{Q}^d$. \end{itemize} \end{theorem} \begin{proof} That $(i)$ implies $(ii)$ was proved in Theorem \ref{zono1}. Now let $P\subset \mathbb{R}^d$ be an $m$-polytope that is not a zonotope. From now on, we consider translation vectors $c\in\mathbb{Q}^d$ whose coordinates have only odd denominators (at least $3$). We will prove that there exists such a $c$ with $L_{(P, c)}\neq L_{(P, 2c)}$; as in the proof of Proposition \ref{nongcd}, this implies that the Ehrhart quasi-polynomial $L_{c+P}$ does not satisfy the $\operatorname{GCD}$-property. In order to prove this, we will show that there exists $c$ such that $L_c^P$ is not almost constant. We distinguish three types of polytopes that are not zonotopes: non-centrally symmetric polytopes; centrally symmetric polytopes with at least one facet that is not centrally symmetric; and centrally symmetric polytopes all of whose facets are centrally symmetric. To begin with, let $P$ be a non-centrally symmetric polytope. As mentioned above, this case is settled by Theorem \ref{charsym}. Secondly, let $P$ be a centrally symmetric polytope with at least one facet that is not centrally symmetric. Let $F\subset P$ be a non-symmetric facet with opposite facet $F^{\operatorname{op}}$. By Corollary \ref{openness}, there exists an open subset $U\subset\operatorname{aff}_0(F)$ such that for every $c'\in U\cap\mathbb{Q}^d$ we get $L_{(F, c')}(t)\neq L_{(F, -c')}(t)(=L_{(F^{\operatorname{op}}, c')}(t))$. Now choose $c'\in U\cap\mathbb{Q}^d$ with odd denominators and generic enough so that $(c'+\operatorname{aff}(X))\cap\mathbb{Z}^d=\varnothing$ for all faces $X\neq F, F^{\operatorname{op}}$ of $P$. Let $c''\in\mathbb{Z}^d$ be an integral vector such that $F$ is an upper face and $F^{\operatorname{op}}$ is a lower face, and consider $c=c'+c''$. Then, for a suitable positive integer $t$, the function $L^{tP}_{c}(x)$ is not almost constant: at $x=1$, the number of lattice points in the upper facet, $\#((c+tF)\cap\mathbb{Z}^d)$, and that in the lower facet, $\#((c+tF^{\operatorname{op}})\cap\mathbb{Z}^d)$, differ as polynomials in $t$, hence for a suitable $t$ the function $L^{tP}_{c}(x)$ takes different values on the intervals $x\in (1-\varepsilon, 1)$ and $x\in(1, 1+\varepsilon)$. Finally, consider a centrally symmetric polytope $P$ all of whose facets are centrally symmetric, but which is not a zonotope. By Proposition \ref{gov}, there exists a face $G$ of dimension $m-2$ that is not centrally symmetric. Let $F_0$ be a facet of $P$ such that $G$ is a facet of $F_0$, and let $F_1$ be the other facet of $P$ with $G_0:=G=F_0\cap F_1$. Let $G_1$ be the opposite facet of $G_0$ in $F_1$. Since $F_1$ is centrally symmetric, $G_1$ is a translate of $G^{\operatorname{op}}$, which itself is a translate of $-G$. Continuing like this, let $F_i$ be the facet such that $G_{i-1}=F_{i-1}\cap F_i$ and let $G_i$ be the opposite facet of $G_{i-1}$ in $F_i$. Observe that by this construction $G_i$ is a translate of $G$ (resp. $G^{\operatorname{op}}$) if $i$ is even (resp. odd). This process terminates at some $n\in \mathbb{N}$, which is even; otherwise $G$ would be a translate of $-G$, which is equivalent to $G$ being centrally symmetric. Furthermore, $G_\frac{n}{2}$ is opposite to $G$ and thus a translate of $-G$. Hence, $\frac{n}{2}$ is odd. Let $\pi:\mathbb{R}^d\to \mathbb{R}^2$ be the orthogonal projection to the $2$-dimensional orthogonal complement of $\operatorname{aff}_0(G)$ in $\operatorname{aff}_0(P)$. Then the image $\pi(P)$ is an $n$-gon with vertices $v_1,\ldots ,v_n$ and edges $u_1,\ldots , u_n$. We may suppose that $G_i=\pi^{-1}(v_i)\cap P$ and $F_i=\pi^{-1}(u_i)\cap P$.
We claim that $\operatorname{aff}_0(G)$ is transversal to $\operatorname{aff}_0(F)$ for all facets $F\neq F_i$, $i=1,\ldots ,n$. Since $F$ is of codimension $1$, we have $\operatorname{aff}_0(G)+\operatorname{aff}_0(F)=\operatorname{aff}_0(F)$ or $\operatorname{aff}_0(G)+\operatorname{aff}_0(F)=\operatorname{aff}_0(P)$. The latter case means that $\operatorname{aff}_0(G)$ and $\operatorname{aff}_0(F)$ are transversal. The former case is equivalent to $\operatorname{aff}_0(G)\subseteq \operatorname{aff}_0(F)$; then $\pi(F)\subset \pi(P)$ is a segment. Since $F$ does not separate $P$, $\pi(F)$ does not separate $\pi(P)$. Thus $\pi(F)=u_i$ for some $i\in \{1,\ldots, n\}$ and $F=F_i$, which proves the claim. By Corollary \ref{openness}, we are able to choose a rational vector $c\in \operatorname{aff}_0(G)\cap\mathbb{Q}^d$ such that $\#((c+G)\cap \mathbb{Z}^d)\neq \#((-c+G)\cap \mathbb{Z}^d)$ and $(c+(\partial P \backslash (F_1\cup F_2 \cup \ldots \cup F_n)))\cap \mathbb{Z}^d=\varnothing$. The latter is possible since $\operatorname{aff}_0(G)$ is transversal to $\operatorname{aff}_0(F)$ for all facets $F\neq F_i$, $i=1,\ldots ,n$. \begin{figure}[htbp] \centering \begin{tikzpicture}[scale=1.2] \draw (-0.7,0) node[below]{$v_0$} --(0.7,0) node[below]{$v_1$} --(1.8,0.8) node[below right]{$v_2$} --(2.3,2) node[right]{$v_3$} --(1.8,3.2) node[above right]{$v_4$} --(0.7,4) node[above]{$v_5$} --(-0.7,4) node[above]{$v_6$} --(-1.8,3.2) node[above left]{$v_7$} --(-2.3,2) node[left]{$v_8$} --(-1.8,0.8) node[below left]{$v_9$} --cycle; \draw [->](0.7,0)--(1.7,0) node[right]{$v$}; \end{tikzpicture} \caption{Projection ($n=10$)} \label{fig:example} \end{figure} Lastly, take a small vector $v\in \operatorname{aff}_0(F_1)\cap\mathbb{Q}^d=\operatorname{aff}_0(F_{\frac{n}{2}+1})\cap\mathbb{Q}^d$ such that $F_2,\ldots, F_{\frac{n}{2}}$ are upper faces; then $F_{\frac{n}{2}+2},\ldots, F_{n}$ are lower faces with respect to $v$. We now count the numbers of lattice points lost and newly obtained by translating $P$ by $c'=c+v$. The opposite facet of $F_i$ is $F_{i+\frac{n}{2}}$, where the indices are considered modulo $n$. In addition, $F_{i+\frac{n}{2}}$ is a translate of $-F_i$, which itself is a translate of $F_i$, since all facets are centrally symmetric. In order to simplify notation, let $(X)_{\mathbb{Z}}$ denote $\#(X\cap \mathbb{Z}^d)$ and $\operatorname{int}(X)_{\mathbb{Z}}$ denote $\#(\operatorname{relint}(X)\cap \mathbb{Z}^d)$ for a set $X\subset \mathbb{R}^d$. Since $\frac{n}{2}$ is odd, the number of newly obtained points is \[\begin{aligned} & \operatorname{int} (c+F_2)_{\mathbb{Z}}+\ldots +\operatorname{int} (c+F_{\frac{n}{2}})_{\mathbb{Z}}+(c+G_1)_{\mathbb{Z}}+\ldots + (c+G_{\frac{n}{2}})_{\mathbb{Z}}\\ &= \sum_{i=2}^{\frac{n}{2}} \operatorname{int} (c+F_i)_{\mathbb{Z}}+ \frac{n+2}{4}(c+G)_{\mathbb{Z}}+ \frac{n-2}{4}(-c+G)_{\mathbb{Z}},\end{aligned}\] whereas the number of lost points equals \[\begin{aligned} & \operatorname{int} (c+F_{\frac{n}{2}+2})_{\mathbb{Z}}+\ldots +\operatorname{int} (c+F_n)_{\mathbb{Z}}+(c+G_{\frac{n}{2}+1})_{\mathbb{Z}}+\ldots + (c+G_n)_{\mathbb{Z}}\\ &= \sum_{i=2}^{\frac{n}{2}} \operatorname{int} (c+F_{i+\frac{n}{2}})_{\mathbb{Z}}+ \frac{n-2}{4}(c+G)_{\mathbb{Z}}+ \frac{n+2}{4}(-c+G)_{\mathbb{Z}}.\end{aligned}\] Note that $\operatorname{int} (c+F_{i+\frac{n}{2}})_{\mathbb{Z}}=\operatorname{int} (c+F_{i})_{\mathbb{Z}}$, since $F_{i+\frac{n}{2}}$ is an integral translate of $-F_i$ and $F_i$ is centrally symmetric. From $\#((c+G)\cap \mathbb{Z}^d)\neq \#((-c+G)\cap \mathbb{Z}^d)$ it follows that these two numbers are different. Therefore, $\#\left((xc'+P)\cap \mathbb{Z}^d\right)\neq \#\left((x'c'+P)\cap \mathbb{Z}^d\right)$ for $x\in (1-\epsilon,1)$, $x'\in (1, 1+\epsilon)$ with $\epsilon>0$ small enough. Thus, $L_{c'}^P$ is not almost constant.
\end{proof} \section{Discussions and further problems} \label{discuss} \subsection{Minimal periods} \label{sec:min} In Ehrhart theory, the minimal period of an Ehrhart quasi-polynomial sometimes becomes strictly smaller than the $\operatorname{LCM}$ of the denominators of the vertices. This is the so-called \emph{period collapse} phenomenon. For almost integral polytopes, it is natural to ask the following. \begin{problem} \label{minpbm} Let $P$ be a lattice polytope in $\mathbb{R}^d$ and $c\in\mathbb{Q}^d$. Is the minimal period of $L_{c+P}(t)$ equal to $\operatorname{den}(c)$? \end{problem} As a partial result, we have the following for zonotopes. \begin{proposition} \label{zonmax} Let $P\subset\mathbb{R}^d$ be a lattice zonotope and $c\in\mathbb{Q}^d$. \begin{itemize} \item[(1)] The minimal period of $L_{c+P}(t)$ is $\operatorname{den}(c)$. \item[(2)] The inequality \begin{equation} \label{eq:ineq} \#((c+P)\cap\mathbb{Z}^d)\leq \#(P\cap\mathbb{Z}^d) \end{equation} holds. Furthermore, in (\ref{eq:ineq}), equality holds if and only if $c\in\mathbb{Z}^d$. \end{itemize} \end{proposition} \begin{proof} Consider the term $W=\varnothing$ in the Ardila--Beck--McWhirter formula (Proposition \ref{aiz}). Then $\chi_W(t)=1$ if and only if $tc\in\mathbb{Z}^d$, which is equivalent to $t$ being divisible by $\operatorname{den}(c)$. This yields (1). (2) is the special case $t=1$. \end{proof} One of the difficulties of Problem \ref{minpbm} is that Proposition \ref{zonmax} (2) does not hold for general polytopes. The next example shows that $c+P$ can contain more lattice points than the original lattice polytope $P$. It seems of interest in its own right. \begin{example} \label{counterex} Let $n>7$. Define the lattice polytope $P_n$ in $\mathbb{R}^3$ by \begin{equation} \operatorname{conv}\{ 0, ne_1, ne_2, n(e_1+e_2), e_3, ne_2+e_3, (1-n)e_3\}, \end{equation} where $e_1, e_2, e_3$ is the standard basis of $\mathbb{R}^3$. For $0<k<n$, let $c_k=\frac{k}{n}e_3$. Then a straightforward computation shows \begin{equation} |P_n\cap\mathbb{Z}^3|=\frac{2n^3+3n^2+19n+12}{6}, \end{equation} and \begin{equation} \alpha(n, k):=|(c_k+P_n)\cap\mathbb{Z}^3|-|P_n\cap\mathbb{Z}^3|= k(n+1)-k^2-2n-1, \end{equation} which becomes positive for some $k$, e.g., $k=3$. If $n$ is odd, $\alpha(n, k)$ attains its maximum at $k=\frac{n+1}{2}$, with \begin{equation} \label{eq:upperbd} \alpha(n, \tfrac{n+1}{2})=\frac{n^2-6n-3}{4}. \end{equation} \end{example} \begin{problem} For which lattice $d$-polytopes $P$ and $c\in\mathbb{Q}^d$ does the inequality \begin{equation} |P\cap\mathbb{Z}^d|<|(c+P)\cap\mathbb{Z}^d| \end{equation} hold? For a lattice polytope $P$, what is $\max\{|(c+P)\cap\mathbb{Z}^d| \mid c\in\mathbb{Q}^d\}$? \end{problem} \begin{problem} Do there exist constants $\kappa, \lambda>0$ such that the following holds for any $P$ and $c$? \begin{equation} |(c+P)\cap\mathbb{Z}^d|\leq |P\cap\mathbb{Z}^d|+\kappa |P\cap\mathbb{Z}^d|^\lambda \end{equation} (The formula (\ref{eq:upperbd}) in Example \ref{counterex} may suggest that $\lambda\geq\frac{d-1}{d}$.) \end{problem} \subsection{Rational polytopes with the $\operatorname{GCD}$-property and hyperplane arrangements} \label{sec:hyp} In our main results, Theorem \ref{charsym} and Theorem \ref{charzono}, we assumed that $P$ is an almost integral polytope. The authors do not know whether these assumptions are necessary. Thus we pose the following. \begin{problem} Let $P\subset\mathbb{R}^d$ be a rational polytope. \begin{itemize} \item[(1)] Suppose $L_{c+P}(t)$ is symmetric for all $c\in\mathbb{Q}^d$.
Is $P$ then a centrally symmetric (almost integral) polytope? \item[(2)] Suppose $L_{c+P}(t)$ satisfies the $\operatorname{GCD}$-property for all $c\in\mathbb{Q}^d$. Is $P$ then an (almost integral) zonotope? \end{itemize} \end{problem} Almost integral zonotopes are not the only examples of polytopes whose Ehrhart quasi-polynomials satisfy the $\operatorname{GCD}$-property. Indeed, some interesting rational polytopes (simplices) have the $\operatorname{GCD}$-property. For example, Suter \cite{sut} observed that the fundamental alcove (a certain rational simplex) of a root system has an Ehrhart quasi-polynomial with the $\operatorname{GCD}$-property. The following are the examples corresponding to the root systems of type $E_6, E_7, E_8, F_4$ and $G_2$. \begin{example} \label{alcove} Let $e_1, \dots, e_\ell$ be the standard basis of $\mathbb{R}^\ell$. \begin{itemize} \item[(1)] Let $P_{E_6}\subset\mathbb{R}^6$ be the $6$-dimensional rational simplex defined by \begin{equation} P_{E_6}=\operatorname{conv}\left\{0, e_1, e_2, \frac{1}{2}e_3, \frac{1}{2}e_4, \frac{1}{2}e_5, \frac{1}{3}e_6\right\}; \end{equation} then the Ehrhart quasi-polynomial $L_{P_{E_6}}(t)$ has the minimal period $\rho=6$ and satisfies the $\operatorname{GCD}$-property. \item[(2)] Let $P_{E_7}\subset\mathbb{R}^7$ be the $7$-dimensional rational simplex defined by \begin{equation} P_{E_7}=\operatorname{conv}\left\{0, e_1, \frac{1}{2}e_2, \frac{1}{2}e_3, \frac{1}{2}e_4, \frac{1}{3}e_5, \frac{1}{3}e_6, \frac{1}{4}e_7\right\}; \end{equation} then the Ehrhart quasi-polynomial $L_{P_{E_7}}(t)$ has the minimal period $\rho=12$ and satisfies the $\operatorname{GCD}$-property. \item[(3)] Let $P_{E_8}\subset\mathbb{R}^8$ be the $8$-dimensional rational simplex defined by \begin{equation} P_{E_8}=\operatorname{conv}\left\{0, \frac{1}{2}e_1, \frac{1}{2}e_2, \frac{1}{3}e_3, \frac{1}{3}e_4, \frac{1}{4}e_5, \frac{1}{4}e_6, \frac{1}{5}e_7, \frac{1}{6}e_8\right\}; \end{equation} then the Ehrhart quasi-polynomial $L_{P_{E_8}}(t)$ has the minimal period $\rho=60$ and satisfies the $\operatorname{GCD}$-property. \item[(4)] Let $P_{F_4}\subset\mathbb{R}^4$ be the $4$-dimensional rational simplex defined by \begin{equation} P_{F_4}=\operatorname{conv}\left\{0, \frac{1}{2}e_1, \frac{1}{2}e_2, \frac{1}{3}e_3, \frac{1}{4}e_4\right\}; \end{equation} then the Ehrhart quasi-polynomial $L_{P_{F_4}}(t)$ has the minimal period $\rho=12$ and satisfies the $\operatorname{GCD}$-property. \item[(5)] Let $P_{G_2}\subset\mathbb{R}^2$ be the $2$-dimensional rational simplex defined by \begin{equation} P_{G_2}=\operatorname{conv}\left\{0, \frac{1}{2}e_1, \frac{1}{3}e_2\right\}; \end{equation} then the Ehrhart quasi-polynomial $L_{P_{G_2}}(t)$ has the minimal period $\rho=6$ and satisfies the $\operatorname{GCD}$-property. \end{itemize} \end{example} As we saw in Example \ref{ex:01}, not all rational polytopes have the $\operatorname{GCD}$-property; in fact, it is rather rare. It therefore seems natural to pose the following. \begin{problem} Which rational polytopes (or even simplices) $P$ have an Ehrhart quasi-polynomial with the $\operatorname{GCD}$-property (or a symmetric Ehrhart quasi-polynomial)? \end{problem} The $\operatorname{GCD}$-property of quasi-polynomials appears in the theory of hyperplane arrangements; let us recall this briefly. Let $\alpha_i:\mathbb{Z}^\ell\longrightarrow\mathbb{Z}$ ($i=1, \dots, n$) be non-zero homomorphisms of abelian groups. One can associate an arrangement of hyperplanes defined by $\operatorname{Ker}(\alpha_i\otimes\mathbb{R}: \mathbb{R}^\ell\longrightarrow\mathbb{R})$, $i=1, \dots, n$.
As we saw in Example \ref{ex:01}, not all rational polytopes have $\operatorname{GCD}$-property. It is rather rare. It seems natural to pose the following. \begin{problem} Which rational polytope (even simplex) $P$ has an Ehrhart quasi-polynomial with $\operatorname{GCD}$-property (or a symmetric quasi-polynomial)? \end{problem} The $\operatorname{GCD}$-property of quasi-polynomials appears in the theory of hyperplane arrangements. Let us recall this briefly. Let $\alpha_i:\mathbb{Z}^\ell\longrightarrow\mathbb{Z}$ ($i=1, \dots, n$) be non-zero homomorphisms of abelian groups. One can associate an arrangement of hyperplanes defined by $\operatorname{Ker}(\alpha_i\otimes\mathbb{R}: \mathbb{R}^\ell\longrightarrow\mathbb{R})$, $i=1, \dots, n$. On the other hand, for any positive integer $q>0$, $\alpha_i$ induces a homomorphism $\alpha_i\otimes(\mathbb{Z}/q\mathbb{Z}):(\mathbb{Z}/q\mathbb{Z})^\ell\longrightarrow\mathbb{Z}/q\mathbb{Z}$. Kamiya, Takemura and Terao \cite{ktt-cent} proved that \begin{equation} \left| (\mathbb{Z}/q\mathbb{Z})^\ell\smallsetminus\bigcup_{i=1}^n \operatorname{Ker}(\alpha_i\otimes(\mathbb{Z}/q\mathbb{Z})) \right| \end{equation} is a quasi-polynomial in $q$ with $\operatorname{GCD}$-property, which is called the \emph{characteristic quasi-polynomial} of the arrangement. One of the most important properties is that the first constituent of the characteristic quasi-polynomial is equal to the characteristic polynomial of the hyperplane arrangement \cite{ot}. The characteristic quasi-polynomial is also important in the context of toric arrangements \cite{ers, lty, ty}. We also note that Suter's computations (Example \ref{alcove}) are deeply related to characteristic quasi-polynomials. Indeed, up to scalar multiplications, the Ehrhart quasi-polynomials in Example \ref{alcove} are equal to the characteristic quasi-polynomials of the corresponding reflection arrangements \cite{ath, yos-wor}. Since the relationship between zonotopes and hyperplane arrangements is an actively studied research topic (e.g., see \cite[Chap 7]{zie}), it would be worthwhile to investigate the following. \begin{problem} Are there direct relations between characteristic quasi-polynomials of hyperplane arrangements and Ehrhart quasi-polynomials of almost integral zonotopes? (Note that both are quasi-polynomials with $\operatorname{GCD}$-property.) \end{problem} \medskip \noindent {\bf Acknowledgements.} Christopher de Vries was supported by the German Academic Exchange Service. Masahiko Yoshinaga was partially supported by JSPS KAKENHI Grant Numbers JP19K21826, JP18H01115. The authors thank Akihiro Higashitani, Shigetaro Tamura, and Tan Nhat Tran for fruitful discussions during the preparation of this paper. \medskip
\section{Introduction} Online single object tracking is a fundamental task in computer vision and has many important applications, such as intelligent surveillance, autonomous driving, and human-machine interaction, to name a few. In recent years, as deep learning matures and large-scale tracking datasets~\cite{DVT-Review,LaSOT,GOT10K} are introduced, the single object tracking field has developed rapidly. Current state-of-the-art trackers~\cite{SiameseRPN,DSiam,SiamRPNplusplus,fan2017robust,li2017object,ATOM,DiMP} can be grouped into two categories: deep discriminative trackers~\cite{ATOM,DiMP,LTMU-CVPR2020} and SiameseRPN-based trackers~\cite{SiameseRPN,DSiam,Deeper-wider-SiamRPN,SiamRPNplusplus}. Deep discriminative trackers decompose tracking into two sub-problems: classification and state estimation. The first one is solved by an online-learning classifier, and the second one is achieved by maximizing the overlap between candidates and the ground truth. SiameseRPN-based trackers formulate tracking as a one-shot detection problem, locating objects that have a similar appearance to the initial template on the search region in each frame. Considering their balance between accuracy and speed, SiameseRPN-series trackers have attracted more attention than deep discriminative trackers. Adversarial attack originated from~\cite{Intriguing-properties-NN}, which showed that state-of-the-art deep learning models can be fooled by adding small perturbations to original images. Research on adversarial attack is beneficial for understanding deep neural networks and designing robust models. Popular adversarial attack methods can be roughly summarized into two categories: iterative-optimization-based and deep-network-based attacks. The former~\cite{FGSM,Deepfool,DAG} applies gradient ascent many times to maximize an adversarial objective function for deceiving deep networks and is usually time-consuming. In contrast, the latter~\cite{advGAN,UEA} uses a large amount of data to train an adversarial perturbation-generator. The latter method is faster than the former because only a single forward pass is needed for each attack after training. In recent years, adversarial attack has become a popular topic and has extended from image classification to more challenging tasks, such as object detection~\cite{DAG,UEA} and semantic segmentation~\cite{DAG}. However, an effective and efficient adversarial attack method for single object tracking remains lacking. In this study, we choose the state-of-the-art SiamRPN++~\cite{SiamRPNplusplus} tracker as our main research object and propose a novel cooling-shrinking attack method. This method learns an efficient perturbation generator to make the tracker fail by simultaneously cooling down hot regions where the target exists on the heatmaps and forcing the predicted bounding box to shrink during online tracking. Our main contributions can be summarized as follows. \vspace{-3mm} \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item \emph{A novel and efficient cooling-shrinking attack method is proposed to effectively fool the SiamRPN++ tracker.
Experiments on OTB100~\cite{OTB2015}, VOT2018~\cite{VOT2018report}, and LaSOT~\cite{LaSOT} show that our method can successfully deceive the state-of-the-art SiamRPN++ tracker.} \item \emph{Numerous experiments show that a discriminator is unnecessary in this task because the $L_2$ loss and the fooling loss already achieve our goal.} \item \emph{Our attacking method has good transferability. Experimental results demonstrate that state-of-the-art trackers (such as DaSiamRPN and DiMP) can also be deceived by our method, even though this method is not specially designed for them.} \end{itemize} \vspace{-4mm} \section{Related Works} \subsection{Single Object Tracking} Given the tracked target in the first frame, single object tracking (SOT) aims at capturing the location of the target in the subsequent frames. Different from object detection, which recognizes objects of predefined categories, the SOT task belongs to one-shot learning, requiring trackers to be capable of tracking any possible target. Efficient and robust trackers are difficult to design because of challenges such as occlusion, similar distractors, deformation, and motion blur during tracking. Recently, with the prosperity of deep learning and the introduction of large-scale object tracking datasets~\cite{LaSOT,GOT10K}, the study of SOT has undergone rapid development. Currently, state-of-the-art trackers can be divided into two categories. One is based on SiamRPN (including SiamRPN~\cite{SiameseRPN}, DaSiamRPN~\cite{DSiam}, SiamRPN+~\cite{Deeper-wider-SiamRPN}, SiamRPN++~\cite{SiamRPNplusplus}, and SiamMask~\cite{SiamMask}), and the other is based on deep discriminative models (including ATOM~\cite{ATOM} and DiMP~\cite{DiMP}). SiamRPN~\cite{SiameseRPN} formulates SOT as a one-shot detection problem and is the first attempt to introduce RPN~\cite{FasterRCNN} in the tracking field. With the help of RPN, SiamRPN removes heavy multi-scale correlation operations, running at a high speed and producing accurate results. DaSiamRPN~\cite{DSiam} relieves SiamRPN's weakness of being susceptible to distractors by introducing challenging samples into the training set. However, the negative effect of image padding makes SiamRPN and DaSiamRPN only use a shallow, padding-free variant of AlexNet as their backbone, which does not fully exploit the capability of modern deep neural networks~\cite{inception,ResNet}. To overcome this problem, some studies have proposed the addition of a cropping-inside residual unit and a spatial-aware sampling strategy in SiamRPN+~\cite{Deeper-wider-SiamRPN} and SiamRPN++~\cite{SiamRPNplusplus}. These works relieve the center bias problem caused by image padding, making the SiameseRPN framework benefit from modern backbones and significantly improving the Siamese tracker's performance. SiamMask~\cite{SiamMask} proposes a unified framework for visual object tracking and semi-supervised video object segmentation, further increasing the accuracy of predicted bounding boxes. ATOM~\cite{ATOM} proposes a tracking framework composed of dedicated target estimation and classification components. The target estimation module is a variant of IoU-Net~\cite{IOU-Net} that can produce an accurate bounding box of the target, given the initial appearance information. DiMP~\cite{DiMP} inherits ATOM's framework (making it end-to-end trainable) and proposes a more discriminative model predictor.
DiMP achieves state-of-the-art performance on most tracking benchmarks, thereby serving as a strong baseline in the tracking community. \subsection{Adversarial Attack} The adversarial attack in~\cite{Intriguing-properties-NN} indicates that CNNs are highly vulnerable to attack and state-of-the-art classifiers can be easily fooled by adding visually imperceptible noises to original images. Since then, several works have focused on adversarial attacks. Early works~\cite{FGSM,Deepfool,DAG,advGAN,UEA} add perturbations in the digital world, directly changing the pixel values which are fed into the networks. Later works focus on creating physical adversarial objects, such as eyeglasses~\cite{advface}, posters~\cite{advtexture}, and animals~\cite{EOT}, in the real world, further broadening the influence of adversarial attacks. Digital adversarial attack methods can be roughly divided into two categories: iterative-optimization-based and deep-network-based algorithms. The former ones (including FGSM~\cite{FGSM}, Deepfool~\cite{Deepfool}, and DAG~\cite{DAG}) optimize an adversarial objective function to fool deep networks and are usually time-consuming because many iterations are required. In contrast, deep-network-based methods (including advGAN~\cite{advGAN} and UEA~\cite{UEA}) use a large amount of data to train a generator for adding perturbations. The latter type is generally faster than the former type because the operation for transforming an image to an adversarial one does not need to be repeated. Specifically, FGSM~\cite{FGSM} hypothesizes that neural networks behave in very linear ways and proposes a ``Fast Gradient Sign Method'' to attack them. Deepfool~\cite{Deepfool} generates adversarial examples by pushing data points near the classification boundary across it with minimal perturbations. AdvGAN~\cite{advGAN} is the first work to generate adversarial examples with GAN, which can run efficiently during the inference phase. Recent research on adversarial attack has extended from image classification to more challenging tasks, such as object detection. Two impressive works are the iterative-optimization-based DAG~\cite{DAG} and the deep-model-based UEA~\cite{UEA}. DAG~\cite{DAG} sets the difference of the classification score between adversarial and ground-truth classes as its objective function, and then optimizes it using gradient ascent. Although it achieves a high fooling rate, it is time-consuming because several iterations are needed. UEA~\cite{UEA} chooses GAN as a core component to generate perturbations and trains the generator with a carefully designed adversarial loss. UEA achieves comparable performance but is much faster than DAG, because only one-time forward propagation is needed during the inference phase after training. Adversarial objects in the physical world are more difficult to generate than digital adversarial examples because the literature~\cite{adv-autonomous,alleviate-adv} has revealed that adversarial examples generated by standard methods are not robust to common phenomena in the physical world, such as varying viewpoints and camera noise. To overcome this problem, the expectation over transformation (EOT)~\cite{EOT} method requires not only the original single example but also its augmented examples to be confusing. Combined with the 3D-printing technique, EOT can synthesize robust physical adversarial objects.
Inheriting similar ideas, the method in~\cite{advface} generates adversarial eyeglasses that can fool state-of-the-art face recognition systems, and the method in~\cite{advtexture} creates an inconspicuous poster that can deceive the simple regression-based tracker GOTURN~\cite{GOTURN}. \vspace{-3mm} \section{Cooling-Shrinking Attack} In this work, we propose an adversarial perturbation-generator for deceiving the SiamRPN++ tracker. The goal of our method is to make the target invisible to trackers, thereby leading to tracking drift. To accomplish this goal, we train the generator with a carefully designed and novel cooling-shrinking loss. Considering that SiamRPN-based trackers~\cite{SiameseRPN,DSiam,SiamRPNplusplus,Deeper-wider-SiamRPN,SiamMask} locate the target in a local search region based on the template given in the initial frame, we design two versions of perturbation-generators to attack the search regions and the template, respectively. \vspace{-3mm} \subsection{Overview of SiamRPN++} So far, SiamRPN++~\cite{SiamRPNplusplus} is the most powerful SiameseRPN-based tracker, achieving state-of-the-art performance on almost all tracking datasets. The network architecture of SiamRPN++ is shown in Figure~\ref{fig-siamRPN}. Given the template $\mathcal{T}$ in the initial frame, SiamRPN++ detects the target in the search region $\mathcal{SR}$ from the current frame. To be specific, the template is an image patch cropped in the first frame, providing the target's appearance information for the tracker. The tracked target generally does not move too much between two adjacent frames. Most modern trackers only locate the target in a small search region centered at the target's position in the previous frame, rather than in the whole image. The size of the search region in the current frame is proportional to the size of the target in the previous frame. In each frame, the template $\mathcal{T}$ and search region $\mathcal{SR}$ are first passed through a shared backbone network such as ResNet50~\cite{ResNet}, and their features are processed by some non-shared neck layers and fused by depthwise correlation. Based on these features, the RPN head layers predict the classification maps $\mathcal{M_C}$ and regression maps $\mathcal{M_R}$. Specifically, SiamRPN++ produces four regression factors, two of which are related to the center offset while the other two are responsible for the scale changes. Finally, the tracker considers the position with the highest classification score as the optimal target location, and then uses the corresponding regression factors to obtain an accurate bounding box as the result for the current frame. Thus, if the final classification maps $\mathcal{M_C}$ or regression maps $\mathcal{M_R}$ are interfered with, the tracker may lose the ability to locate the target or produce inaccurate results, leading to tracking failure. \begin{figure}[!h] \begin{center} \includegraphics[width=1.0\linewidth,height=0.55\linewidth]{SiamRPN++_large.pdf} \end{center} \vspace{-3mm} \caption{Network architecture of SiamRPN++~\cite{SiamRPNplusplus}. Better viewed in color with zoom-in.} \vspace{-5mm} \label{fig-siamRPN} \end{figure}
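To make this decision step concrete, the following is a minimal PyTorch-style sketch of how the output maps are turned into a box. The tensor shapes and the exact anchor parametrization are illustrative assumptions rather than the actual SiamRPN++ implementation. \begin{verbatim}
import torch

def select_box(M_C, M_R, anchors):
    """Schematic SiamRPN-style box selection.

    M_C     : (2, K) classification scores over K anchor positions
              (channel 1 is assumed to be the target class).
    M_R     : (4, K) regression factors (dx, dy, dw, dh).
    anchors : (K, 4) anchor boxes as (cx, cy, w, h).
    """
    score = torch.softmax(M_C, dim=0)[1]     # target probability per anchor
    k = torch.argmax(score)                  # position with highest response
    dx, dy, dw, dh = M_R[:, k]
    cx = anchors[k, 0] + dx * anchors[k, 2]  # shift the center
    cy = anchors[k, 1] + dy * anchors[k, 3]
    w = anchors[k, 2] * torch.exp(dw)        # rescale width and height
    h = anchors[k, 3] * torch.exp(dh)
    return cx, cy, w, h
\end{verbatim} Under this view, cooling down the target's entries in $\mathcal{M_C}$ makes the argmax wander to background positions, while driving the width and height factors in $\mathcal{M_R}$ negative makes the decoded box shrink; this motivates the two loss terms introduced below.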
\subsection{Overall Pipeline} Since the pipelines of attacking the template and attacking the search regions are quite similar, we only discuss attacking search regions for simplicity. The overall pipeline for training the generator to attack search regions is shown in Figure~\ref{fig-search}. During the training process, we first feed $N$ pre-cropped unperturbed search regions into the perturbation-generator, adding imperceptible noises to them. Together with a clean template, these perturbed search regions are fed into the SiamRPN++ tracker's Siamese network, which produces adversarial classification and regression maps of the corresponding search regions. The SiamRPN++ tracker considers the regions with the highest response on the classification maps (heatmaps) as the target. Thus, the regions where the tracked target exists are expected to have low response values on the adversarial heatmaps. To indicate these regions, we also feed the original unperturbed search regions into the Siamese network, producing clean heatmaps. Then, an adversarial cooling-shrinking loss and an $L_2$ loss are applied together to train our perturbation-generator. The detailed training algorithm of the generator for attacking search regions is shown in Algorithm~\ref{alg::attackSR}. During the online-tracking phase, to deceive the tracker, we only need to pass a clean search region into the generator, obtaining a new adversarial one in each frame. \begin{algorithm}[h] \caption{Framework of training the perturbation-generator to attack search regions} \label{alg::attackSR} \begin{algorithmic}[1] \Require $R^c$: clean search regions; $T^c$: clean template; $g_0$: randomly initialized generator \Ensure trained generator $g^{*}$ \State Initialize generator $g_0$. Initialize Siamese model $S$ and freeze its parameters; \Repeat \State Get a clean template $T^c$ and a batch of clean search regions $R^c$; \State Feed $T^c$ into $S$; \State Generate adversarial noises $P=g(R^c)$ for $R^c$; \State Get adversarial search regions $R^a=R^c+P$; \State Get adversarial heatmaps and regression maps $M_H^a,M_R^a=S(R^a,T^c)$ using $R^a$; \State Get clean heatmaps $M_H^c=S(R^c,T^c)$; \State Compute cooling loss $L_C$ and shrinking loss $L_S$ based on $M_H^a$, $M_R^a$, $M_H^c$; \State Compute $L_2$ loss $L_2=\frac{1}{N}||{R^a}-{R^c}||_2$; \State Compute total loss $L={\alpha_1}L_C+{\alpha_2}L_S+{\alpha_3}L_2$; \State Compute the gradient of $L$ with respect to generator $g$'s parameters and update with the Adam optimizer. \Until{model converges} \end{algorithmic} \end{algorithm} \begin{figure}[!h] \begin{center} \includegraphics[width=1.0\linewidth,height=0.6\linewidth]{Attack_SR_large.pdf} \end{center} \vspace{-5mm} \caption{Network architecture of the perturbation-generator for search regions. Better viewed in color with zoom-in.} \label{fig-search} \end{figure}
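For readers who prefer code, the following is a minimal, schematic PyTorch rendering of one update of Algorithm~\ref{alg::attackSR}; the loss helper anticipates Algorithm~\ref{alg::CS-loss} of the next subsection. The module names (\texttt{generator}, \texttt{siamese}), the assumed channel layout, and the mean-squared form of the $L_2$ term are illustrative assumptions, not our released implementation. \begin{verbatim}
import torch

def cooling_shrinking_loss(M_H_c, M_H_a, M_R_a, thresh=0.5,
                           m_c=-5.0, m_w=-5.0, m_h=-5.0):
    # Reshape maps to 2D matrices of shape (N,2), (N,2), (N,4).
    H_c = M_H_c.reshape(-1, 2)
    H_a = M_H_a.reshape(-1, 2)
    R_a = M_R_a.reshape(-1, 4)
    # Locate the target on the *clean* heatmaps
    # (channel 1 is assumed to be the target class).
    P_pos = torch.softmax(H_c, dim=1)[:, 1]
    A = (P_pos >= thresh).float()          # binary attention map
    f_pos = H_a[:, 1] * A                  # adversarial target scores
    f_neg = H_a[:, 0] * A                  # adversarial background scores
    R_w, R_h = R_a[:, 2] * A, R_a[:, 3] * A
    # Margins keep the losses from decreasing without bound.
    L_C = torch.clamp(f_pos - f_neg, min=m_c).mean()
    L_S = torch.clamp(R_w, min=m_w).mean() + torch.clamp(R_h, min=m_h).mean()
    return L_C, L_S

def train_step(generator, siamese, T_c, R_c, optimizer,
               a1=0.1, a2=1.0, a3=500.0):
    # The Siamese network is frozen; only the generator is updated.
    R_a = R_c + generator(R_c)             # adversarial search regions
    with torch.no_grad():
        M_H_c, _ = siamese(R_c, T_c)       # clean heatmaps
    M_H_a, M_R_a = siamese(R_a, T_c)       # adversarial maps
    L_C, L_S = cooling_shrinking_loss(M_H_c, M_H_a, M_R_a)
    L_2 = torch.mean((R_a - R_c) ** 2)     # restrict the noise energy
    loss = a1 * L_C + a2 * L_S + a3 * L_2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim} During online tracking only \texttt{generator(R\_c)} is evaluated, once per frame, which is why the attack adds little overhead.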
\vspace{-3mm} \subsection{Cooling-Shrinking Loss} We propose a novel cooling-shrinking loss, composed of the cooling loss $L_C$ for interfering with the heatmaps $M_H$ and the shrinking loss $L_S$ for interfering with the regression maps $M_R$. To determine the location of the target, we also introduce the clean heatmaps $M_H^c$ into the computation. Specifically, the cooling loss $L_C$ is designed to cool down hot regions where the target may exist on the heatmaps, causing the tracker to lose the target. The shrinking loss $L_S$ is designed to force the predicted bounding box to shrink, leading to error accumulation and tracking failure. To compute these two losses conveniently, we reshape the clean heatmaps $M_H^c$, adversarial heatmaps $M_H^a$, and adversarial regression maps $M_R^a$ to 2D matrices $\widetilde{M_H^c}$, $\widetilde{M_H^a}$, $\widetilde{M_R^a}$ with shapes (N,2), (N,2), (N,4), respectively. Then, $\widetilde{M_H^c}$ is activated with the softmax function, generating the probability of the target $P_+$ and the probability of the background $P_-$. Based on the probability $P_+$ and a predefined threshold $\mathcal{T}$, binary attention maps $\mathcal{A}$ are computed, indicating the locations of interest. After that, we define the cooling loss based on the difference between the confidence scores of the positive class $f_+$ and the negative class $f_-$ on regions where $\mathcal{A}>0$. We also set a margin $m_c$ in this loss to avoid an unconstrained decrease, which would make it difficult to control the energy of the noise. Similarly, we set two scale factors $R_w,R_h$ as the core of the shrinking loss and also set margins $m_w$ and $m_h$ as we do in the cooling loss. The detailed mathematical formulation of the cooling-shrinking loss is shown in Algorithm~\ref{alg::CS-loss}. Figure~\ref{fig-heatmap} shows the effect of the cooling-shrinking loss. The second row represents the heatmaps produced by the clean template, and the third row represents the heatmaps produced by the adversarial template. The adversarial heatmaps have low values in places where the target exists, making it difficult for the tracker to locate the tracked target. Figure~\ref{fig-otb-imgs} shows a comparison between the original results and adversarial results. After adding adversarial perturbations, the tracker becomes less scale-sensitive (Figure~\ref{fig-otb-imgs}(a)), less discriminative (Figure~\ref{fig-otb-imgs}(b)), and less target-aware (Figure~\ref{fig-otb-imgs}(c)). To be specific, in Figure~\ref{fig-otb-imgs}(a), the tracker produces shrinking boxes when the target actually grows larger, causing inaccurate results. In Figure~\ref{fig-otb-imgs}(b), the tracker recognizes other distractors as the target. In addition, in Figure~\ref{fig-otb-imgs}(c), the tracker loses the target quickly and can only re-capture it when it accidentally returns to the previous location. \vspace{-3mm} \begin{algorithm}[h] \caption{Cooling-Shrinking Loss} \label{alg::CS-loss} \begin{algorithmic}[1] \Require $M_H^c$: clean heatmaps; $M_H^a$: adversarial heatmaps; $M_R^a$: adversarial regression maps; $\mathcal{T}$: threshold for probability; $m_c$: margin for classification; $m_w$: margin for width regression factor; $m_h$: margin for height regression factor; \Ensure cooling loss $L_C$; shrinking loss $L_S$; \State Reshape $M_H^c$, $M_H^a$, $M_R^a$ to 2D matrices: $\widetilde{M_H^c}$, $\widetilde{M_H^a}$, $\widetilde{M_R^a}$; \State $P_+,P_- = \mathrm{softmax}(\widetilde{M_H^c})$; \State $\mathcal{A} = \left\{ {\begin{array}{*{20}{c}} 1&{{P_+} \ge \mathcal{T}}\\ 0&{{P_+} < \mathcal{T}} \end{array}} \right.$; \State $f_+,f_-=\widetilde{M_H^a}*\mathcal{A}$; \State $R_x,R_y,R_w,R_h=\widetilde{M_R^a}*\mathcal{A}$; \State $L_C=\frac{1}{N}\sum\max(f_+-f_-,m_c)$; \State $L_S=\frac{1}{N}\sum\max(R_w,m_w)+\frac{1}{N}\sum\max(R_h,m_h)$; \end{algorithmic} \end{algorithm} \vspace{-5mm} \begin{figure}[!h] \begin{center} \includegraphics[width=1.0\linewidth,height=0.60\linewidth]{heatmap.pdf} \end{center} \vspace{-3mm} \caption{Search regions and their corresponding heatmaps. The first row shows search regions. The second row represents clean heatmaps generated by the clean template. The third row represents adversarial heatmaps generated by the adversarial template.
} \label{fig-heatmap} \vspace{-3mm} \end{figure} \begin{figure*}[!t] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\linewidth,height=0.35\linewidth]{otb-fig1-new.pdf} \ & \includegraphics[width=0.32\linewidth,height=0.35\linewidth]{otb-fig2-new.pdf} \ & \includegraphics[width=0.32\linewidth,height=0.35\linewidth]{otb-fig3-new.pdf}\\ (a) \footnotesize{CarScale} & (b) \footnotesize{Bolt} & (c) \footnotesize{Doll}\\ \end{tabular} \end{center} \vspace{-3mm} \caption{Illustration of the effectiveness of generated perturbations. The green, blue and red boxes represent the groundtruth, original results and adversarial results, respectively. The blue and red lines represent the variation of the IoU of the original results and adversarial results over time, respectively. Better viewed in color with zoom-in.} \label{fig-otb-imgs} \vspace{-3mm} \end{figure*} \vspace{-3mm} \subsection{Implementation Details} \vspace{-2mm} {\flushleft \textbf{Network Architectures}:} The perturbation-generator adopts the U-Net~\cite{u-net} architecture, which achieves superior performance in many pixel-level tasks. U-Net first downsamples feature maps multiple times and then upsamples them accordingly, which requires the input size to be a power of $2$; otherwise the output size may mismatch the input size. In our experiment settings, the input resolution also depends on whether we attack the template or the search regions. Specifically, all SiamRPN-based trackers, including SiamRPN~\cite{SiameseRPN}, DaSiamRPN~\cite{DSiam}, SiamRPN++~\cite{SiamRPNplusplus}, and SiamMask~\cite{SiamMask}, adopt the same template size of $127\times127$. However, when working in long-term scenarios, they may use different search region sizes such as $255\times255$ and $831\times831$, because of switching between local and global states. Too large a size may bring a heavy computational burden, and too small a size may cause the loss of detailed information. Considering all these factors, we set the input resolution of the generator as $128\times128$ for attacking the template and $512\times512$ for attacking the search regions. The gap between different resolutions is bridged by padding-cropping or bilinear interpolation. For example, when attacking the template, the original template with a spatial size of $127\times127$ is first padded to $128\times128$ with zeros and passed through the generator to obtain the adversarial template. Then, the adversarial template is cropped back to $127\times127$ and sent into the Siamese network. Similarly, when attacking search regions, clean search regions with a spatial size of $255\times255$ are first interpolated to $512\times512$ and fed into the generator to get adversarial search regions. Then, the adversarial search regions are interpolated back to $255\times255$ and passed to the Siamese network. \vspace{-3mm} {\flushleft \textbf{Training Dataset}:} We use GOT-10K~\cite{GOT10K} as our training set, to cover more types of tracked targets. To be specific, the GOT-10K dataset includes more than 10,000 sequences and more than 500 object classes, showing high tracking diversity. We expect that models learned on this dataset could have better generalization power, rather than only working in a few limited situations. We only use the train split of GOT-10K and uniformly sample frames with an interval of $10$ frames, and then crop search regions from these chosen frames.
The template is cropped in the initial frame, and each search region is cropped based on the groundtruth of the previous frame to simulate the situation in online tracking. In each training iteration, a template and $N$ search regions from the same video sequence are sent to our attacking model. In our experiments, $N$ is not larger than $15$ due to the limited GPU memory. \vspace{-2mm} {\flushleft \textbf{Training Loss Function}:} The generator is trained with a linear combination of the cooling loss, shrinking loss, and $L_2$ loss. The weights of these three losses can be tuned according to different preferences. For example, we can increase the weight of the $L_2$ loss or decrease that of the adversarial losses to make the attack less perceptible. In our experiment setting, we choose the weights of the cooling loss, shrinking loss, and $L_2$ loss as $0.1$, $1$, and $500$, respectively. The three margins $m_c$, $m_w$, $m_h$ for preventing the unconstrained decrease of the adversarial losses are all set to $-5$. \vspace{-3mm} \section{Experiments} \vspace{-2mm} In this work, we implement our algorithm with the PyTorch~\cite{Pytorch} deep learning framework. The hardware platform is a PC with an Intel i9 CPU (64GB memory) and an RTX-2080Ti GPU (11GB memory). We evaluate the proposed adversarial attack method on three datasets: OTB100~\cite{OTB2015}, VOT2018~\cite{VOT2018report}, and LaSOT~\cite{LaSOT}. To be specific, OTB100~\cite{OTB2015} contains 100 sequences, providing a fair benchmark for single object tracking. VOT2018~\cite{VOT2018report} is another challenging tracking benchmark, which simultaneously measures the tracker's accuracy and robustness. This benchmark includes 60 videos and ranks the trackers' performance with the expected average overlap (EAO) rule. LaSOT~\cite{LaSOT} is a recent large-scale tracking dataset, which covers 1400 videos with much longer durations. We denote SiamRPN++ as SiamRPNpp for concise descriptions in the experiment section. Numerous experimental results demonstrate that our method can fool the state-of-the-art SiamRPNpp tracker with nearly imperceptible perturbations on the search regions or template. We also test our perturbation-generator on another three top-performance trackers: DaSiamRPN~\cite{DSiam}, DaSiamRPN-UpdateNet~\cite{UpdateNet}, and DiMP~\cite{DiMP}. An obvious performance drop can also be observed, which shows that our method has good transferability. Our attacking algorithm is also extremely fast. It only takes our model less than {\bf 9} ms to transform a clean search region into an adversarial one, and less than {\bf 3} ms to transform a clean template into an adversarial one. \vspace{-2mm} \subsection{Adversarial Attack on SiamRPNpp} {\flushleft\textbf{Attacking Search Regions Only}:} When attacking search regions, we leave the original template unchanged and only replace the clean search regions with the adversarial ones in each frame. The detailed experimental results are shown in Table~\ref{tab-attack-SR}, and an obvious performance drop can be observed on all three datasets. \begin{table}[!htbp] \footnotesize \centering \caption{Effect of attacking search regions. The third column represents SiamRPNpp's original results. The fourth column represents results produced by attacking search regions.
The last column represents the performance drop.} \begin{tabular}{|c|c|c|c|c|} \hline Dataset&Metric&Original&Attack SR&Drop\\ \hline \multirow{2}*{OTB100}&Success($\uparrow$)&0.696&0.349&0.347\\ \cline{2-5} &Precision($\uparrow$)&0.914&0.491&0.423\\ \hline \multirow{3}*{VOT2018}&Accuracy$(\uparrow)$&0.600&0.486&0.114\\ \cline{2-5} &Robustness$(\downarrow)$&0.234&2.074&1.840\\ \cline{2-5} &EAO$(\uparrow)$&0.414&0.073&0.341\\ \hline \multirow{2}*{LaSOT}&Norm Precision$(\uparrow)$&0.569&0.219&0.350\\ \cline{2-5} &Success$(\uparrow)$&0.496&0.180&0.316\\ \hline \end{tabular} \label{tab-attack-SR} \vspace{-3mm} \end{table} \vspace{-2mm} {\flushleft\textbf{Attacking the Template Only}:} When attacking the template, we only perturb the template once in the initial frame, replacing the original template with the adversarial one, and then leaving the rest of the tracking process undisturbed. The detailed experimental results are shown in Table~\ref{tab-attack-template}. The performance drop in this scenario is not as large as that of attacking search regions, because it is hard to deceive the tracker by adding minimal noises only to the initial template. \vspace{-2mm} \begin{table}[!htbp] \centering \caption{Effect of attacking the template. The third column represents SiamRPNpp's original results. The fourth column represents results produced by attacking the template. The last column represents the performance drop.} \footnotesize \begin{tabular}{|c|c|c|c|c|} \hline Dataset&Metric&Original&Attack T&Drop\\ \hline \multirow{2}*{OTB100}&Success$(\uparrow)$&0.696&0.527&0.169\\ \cline{2-5} &Precision$(\uparrow)$&0.914&0.713&0.201\\ \hline \multirow{3}*{VOT2018}&Accuracy$(\uparrow)$&0.600&0.541&0.059\\ \cline{2-5} &Robustness$(\downarrow)$&0.234&1.147&0.913\\ \cline{2-5} &EAO$(\uparrow)$&0.414&0.123&0.291\\ \hline \multirow{2}*{LaSOT}&Norm Precision$(\uparrow)$&0.569&0.448&0.121\\ \cline{2-5} &Success$(\uparrow)$&0.496&0.393&0.103\\ \hline \end{tabular} \label{tab-attack-template} \vspace{-2mm} \end{table} \vspace{-3mm} {\flushleft\textbf{Attacking Both Search Regions and the Template}:} We also design a strategy that simultaneously attacks both the search regions and the template. In this setting, we use the same generator designed for search regions to interfere with the template and search regions together. The detailed results are shown in Table~\ref{tab-attack-template-SR}. It can be seen that this strategy brings a slightly higher performance drop than only attacking search regions. \begin{table}[!htbp] \centering \caption{Effect of attacking both search regions and the template. The third column represents SiamRPNpp's original results. The fourth column represents results produced by attacking both search regions and the template.
The last column represents the performance drop.} \footnotesize \begin{tabular}{|c|c|c|c|c|} \hline Dataset&Metric&Original&Attack Both&Drop\\ \hline \multirow{2}*{OTB100}&Success$(\uparrow)$&0.696&0.324&0.372\\ \cline{2-5} &Precision$(\uparrow)$&0.914&0.471&0.443\\ \hline \multirow{3}*{VOT2018}&Accuracy$(\uparrow)$&0.600&0.467&0.133\\ \cline{2-5} &Robustness$(\downarrow)$&0.234&2.013&1.779\\ \cline{2-5} &EAO$(\uparrow)$&0.414&0.073&0.341\\ \hline \multirow{2}*{LaSOT}&Norm Precision$(\uparrow)$&0.569&0.201&0.368\\ \cline{2-5} &Success$(\uparrow)$&0.496&0.168&0.328\\ \hline \end{tabular} \label{tab-attack-template-SR} \end{table} \vspace{-3mm} We also compare the performance of SiamRPNpp~\cite{SiamRPNplusplus} and its adversarial variants: SiamRPNpp+AT (attacking template), SiamRPNpp+AS (attacking search regions), and SiamRPNpp+ATS (attacking template and search regions) with other state-of-the-art trackers, such as MDNet~\cite{MDNet}, ECO~\cite{ECO}, SPLT~\cite{SPLT}, VITAL~\cite{VITAL}, StructSiam~\cite{StructSiam}, and SiamFC~\cite{SiameseFC}. The results are shown in Figure~\ref{fig-siamrpnpp_lasot} and Figure~\ref{fig-attribute}. Our adversarial attack algorithm drops the performance of SiamRPNpp significantly, making it obviously inferior to other top-performance trackers. \begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.46\linewidth]{SOTA-Success.pdf} \ & \includegraphics[width=0.46\linewidth]{SOTA-norm-precision.pdf}\\ \end{tabular} \end{center} \vspace{-3mm} \caption{Quantitative comparison of state-of-the-art trackers on the LaSOT dataset.} \label{fig-siamrpnpp_lasot} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.9\linewidth,height=0.8\linewidth]{attribute.pdf} \end{center} \vspace{-3mm} \caption{Quantitative analysis of different attributes on the VOT2018 dataset.} \label{fig-attribute} \end{figure} \vspace{-3mm} \subsection{Ablation Study} \vspace{-2mm} {\flushleft \textbf{Influence of Shrinking Loss}:} We discuss the influence of the shrinking loss on the adversarial success rate in different situations. As explained before, the cooling loss is used to attack the classification branch, making the target invisible to the tracker. In addition, the shrinking loss is designed to disable the tracker's ability to estimate scale, thereby forcing the tracker to predict inaccurate bounding boxes. To explore the effect of the shrinking loss, we design three groups of comparison experiments on OTB100~\cite{OTB2015} and LaSOT~\cite{LaSOT}: G-Template vs. G-Template-Regress, G-Search vs. G-Search-Regress, and G-Template-Search vs. G-Template-Search-Regress. The detailed results are shown in Figure~\ref{fig-ablation_otb} and Figure~\ref{fig-ablation_lasot}. The shrinking loss does play a significant part when attacking search regions only and when attacking both search regions and the template, bringing an obvious extra performance drop. However, the shrinking loss plays a negative part when attacking the template only, because it may render the optimization of the misclassification objective suboptimal. To be specific, it is much more difficult to deceive the tracker by perturbing the template only once than by perturbing the search regions in all frames. Thus, the generator cannot easily balance the cooling loss and the $L_2$ loss when attacking only the template. Adding an extra shrinking loss may lead to serious difficulty in training, causing worse attacking performance.
In summary, the shrinking loss is helpful for attacking search regions but somewhat harmful for attacking the template. \begin{figure}[!h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\linewidth,height=0.45\linewidth]{OTB-Success.pdf} \ & \includegraphics[width=0.48\linewidth,height=0.45\linewidth]{OTB-Precision.pdf}\\ \end{tabular} \end{center} \vspace{-3mm} \caption{Quantitative comparisons between w/ and w/o shrinking loss on the OTB100 dataset. Results with the suffix ``Regress'' are the ones with the shrinking loss.} \label{fig-ablation_otb} \end{figure} \vspace{-2mm} \begin{figure}[!h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\linewidth,height=0.45\linewidth]{LaSOT-Success.pdf} \ & \includegraphics[width=0.48\linewidth,height=0.45\linewidth]{LaSOT-norm-precision.pdf}\\ \end{tabular} \end{center} \vspace{-3mm} \caption{Quantitative comparisons between w/ and w/o shrinking loss on the LaSOT dataset. Results with the suffix ``Regress'' are the ones with the shrinking loss.} \label{fig-ablation_lasot} \vspace{-5mm} \end{figure} \vspace{-2mm} {\flushleft \textbf{Influence of a Discriminator}:} We also discuss the influence of a discriminator. Most previous neural-network-based adversarial attack methods~\cite{advGAN,UEA} adopt a GAN structure, using a discriminator to encourage the adversarial output of the generator to be similar to the original input. However, we argue that the discriminator is not necessary. The reason why the $L_2$ loss and discriminator are applied is that we expect the perturbed image and the original image to look similar. In other words, we hope that the perturbation is imperceptible. The $L_2$ loss can directly restrict the energy of the noise and can be easily optimized. However, for the GAN structure, the evolution of the generator and the discriminator has to be synchronized, which is hard to guarantee, especially when the generator has many other tasks to learn. Thus, considering the instability of the GAN architecture, we discard the discriminator and train the perturbation-generator only with the cooling-shrinking loss and $L_2$ loss. The visualization of clean and adversarial templates from the VOT2018 dataset is shown in Figure~\ref{fig-vot-template}. Even without the help of a discriminator, the perturbation generated by our method is quite imperceptible. \begin{figure}[!h] \begin{center} \includegraphics[width=1.0\linewidth,height=0.6\linewidth]{VOT18-noise.pdf} \end{center} \vspace{-3mm} \caption{Visualization of clean and perturbed templates of the VOT2018 dataset. Better viewed in color with zoom-in.} \label{fig-vot-template} \vspace{-5mm} \end{figure} \subsection{Further Discussions} \vspace{-2mm} {\flushleft\textbf{Speed}:} Our method also has extremely high efficiency. When attacking search regions, our method only needs less than \textbf{9 ms} to process a frame, running at more than \textbf{100 FPS}. When attacking the template, our method needs less than \textbf{3 ms} to process a whole video sequence. The speed of our algorithm is much higher than the frame rate of common video streams and than that of most real-time trackers, indicating that the attack is also imperceptible in terms of time consumption. \vspace{-3mm} {\flushleft\textbf{Noise Pattern}:} The adversarial search regions, the clean ones, and their difference maps are shown in Figure~\ref{fig-noise-pattern}. It can be seen that the perturbation mainly focuses on the tracked target, leaving other regions almost unperturbed.
\begin{figure}[H] \begin{center} \includegraphics[width=0.7\linewidth]{diff_map.pdf} \end{center} \vspace{-3mm} \caption{Adversarial search regions, clean ones, and their difference maps. To observe the pattern of the difference maps clearly, the differences have been magnified 10 times.} \label{fig-noise-pattern} \end{figure} \vspace{-5mm} {\flushleft\textbf{Comparison with Other Noises}:} As shown in Figure~\ref{fig-noise-compare} and Table~\ref{tab-noise-compare}, compared with impulse noise and Gaussian noise, our adversarial perturbation is far more imperceptible but causes a much larger performance drop. \vspace{-3mm} \begin{figure}[H] \centering \includegraphics[width=.8\linewidth]{noise_compare.pdf} \vspace{-2.5mm} \caption{Search regions with different kinds of noises. MAE represents mean absolute error.} \label{fig-noise-compare} \end{figure} \vspace{-5mm} \begin{table}[!htbp] \centering \caption{Comparison with other kinds of noises.} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset&Metric&Original&Ours&Impulse 0.1&Impulse 0.2&Gauss 0.1&Gauss 0.2\\ \hline \multirow{2}*{OTB100}&Success$(\uparrow)$&0.696&0.349&0.486&0.389&0.553&0.389\\ \cline{2-8} &Precision$(\uparrow)$&0.914&0.491&0.656&0.535&0.727&0.542\\ \hline \multirow{1}*{VOT2018}&EAO$(\uparrow)$&0.414&0.073&0.117&0.084&0.170&0.108\\ \hline \end{tabular} } \label{tab-noise-compare} \end{table} \vspace{-5mm} {\flushleft\textbf{Transferability}:} All aforementioned experiments are based on the SiamRPNpp~\cite{SiamRPNplusplus} tracker. To test our method's transferability, we also apply our trained perturbation-generator to another three state-of-the-art trackers: DaSiamRPN~\cite{DSiam}, DaSiamRPN-UpdateNet~\cite{UpdateNet}, and DiMP~\cite{DiMP}. Although all these trackers have templates and search regions, they are quite different from SiamRPNpp. To be specific, compared with SiamRPNpp, DaSiamRPN adopts a simpler backbone. DaSiamRPN-UpdateNet proposes a learnable way to update the template, further improving DaSiamRPN's performance. DiMP uses an online discriminative model to roughly determine the location of the target on the search region, and then applies a state estimation module to predict precise bounding boxes. We use these three trackers as the baseline algorithms and then add perturbations to their search regions in each frame. We conduct experiments on the LaSOT~\cite{LaSOT} dataset. The detailed results of attacking DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP are shown in Table~\ref{tab-transfer}. Although our attacking algorithm is designed for and trained with SiamRPNpp, it can also effectively deceive other state-of-the-art trackers, causing an obvious performance drop, which demonstrates the transferability of our attacking method. \vspace{-1mm} \begin{table}[!htbp] \footnotesize \centering \caption{Adversarial effect on other state-of-the-art trackers.
From top to bottom, the three trackers are DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.} \begin{tabular}{|c|c|c|c|} \hline Tracker& &Success$(\uparrow)$&Norm Precision$(\uparrow)$\\ \hline \multirow{3}*{DaSiamRPN}&Original&0.458&0.544\\ \cline{2-4} &Adversarial&0.400&0.479\\ \cline{2-4} &Drop&0.058&0.065\\ \hline \multirow{3}*{UpdateNet}&Original&0.465&0.549\\ \cline{2-4} &Adversarial&0.399&0.478\\ \cline{2-4} &Drop&0.066&0.071\\ \hline \multirow{3}*{DiMP50}&Original&0.559&0.642\\ \cline{2-4} &Adversarial&0.492&0.567\\ \cline{2-4} &Drop&0.067&0.075\\ \hline \end{tabular} \label{tab-transfer} \end{table} \vspace{-5mm} \section{Conclusion} In this study, we present an effective and efficient adversarial attacking algorithm for deceiving single object trackers. A novel cooling-shrinking loss is proposed to train the perturbation-generator. The generator trained with this adversarial loss and the $L_2$ loss can deceive SiamRPN++ at a high success rate with imperceptible noises. We show that a discriminator is not necessary in the adversarial attack on the tracker because the combination of the adversarial loss and the $L_2$ loss already achieves our goal. Besides SiamRPN++, our attacking method has impressive transferability and effectively attacks other recent trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP. Our algorithm is also quite efficient and can transform clean templates/search regions into adversarial ones in a short time interval. \vspace{-2mm} \small{{\flushleft \textbf{Acknowledgement.}} The paper is supported in part by the National Key R$\&$D Program of China under Grant No. 2018AAA0102001 and National Natural Science Foundation of China under grant No. 61725202, U1903215, 61829102, 91538201, 61751212 and the Fundamental Research Funds for the Central Universities under Grant No. DUT19GJ201.} {\small \bibliographystyle{ieee_fullname}
\section{Introduction} The chiral phase transition is one of the most fundamental features of QCD. Lattice field theory has been applied successfully to the study of this interesting phenomenon and the associated symmetries. While traditional lattice techniques measure the chiral observables in a straightforward manner, examining the low-lying part of the eigenvalue spectrum of the Dirac operator can provide unique insights into various aspects of the symmetry breaking or restoration that accompany the phase transition. For instance, the chiral condensate, the chiral order parameter for the transition, can be expressed in terms of the eigenmode density via the following relation: \begin{equation} -\langle\bar{\psi}_q\psi_q\rangle=\int\mathrm{d}\lambda\,\rho(\lambda) \frac{2m_q}{m_q^2+\lambda^2}\,,\qquad q=l,s, \label{eqn:pbp} \end{equation} where $\rho(\lambda)$ is the spectral density of the Dirac operator and $m_q$ is the quark mass. When the chiral and infinite-volume limits are taken, one obtains the well-known Banks-Casher relation~\cite{Banks:1979yr}, \begin{equation} \lim_{m_l\to0}\lim_{V\to\infty} -\langle\bar{\psi}_l\psi_l\rangle =\pi\lim_{\lambda\to0}\lim_{m_l\to0}\lim_{V\to\infty}\rho(\lambda). \label{eqn:bcr} \end{equation} In lattice calculations one may also examine the subtracted chiral condensate, defined as \begin{equation} \Delta_{l,s}= \langle\bar\psi_l\psi_l\rangle - \frac{m_l}{m_s}\langle\bar\psi_s\psi_s\rangle \; . \label{eqn:sub} \end{equation} The subtraction removes the ultraviolet-divergent piece of the chiral condensate, which is linear in the quark mass. The Dirac eigenvalue spectrum can be utilized to study the anomalous $U(1)_A$ symmetry as well. A similar order parameter $\Delta_{\pi-\delta}$, which is the difference between the pseudoscalar and scalar susceptibilities, can also be related to the eigenvalue spectrum, \begin{equation} \Delta_{\pi-\delta} \equiv \frac{\chi_{\pi} - \chi_{\delta}}{T^2} = \frac{1}{T^2}\int\mathrm{d}\lambda\ \rho(\lambda) \frac{4m_l^2}{\left(m_l^2+\lambda^2\right)^2}\; . \label{eqn:u1a} \end{equation} The expression above suggests that if there is a finite region above zero where the eigenvalue density vanishes (a gap), the $U(1)_A$ symmetry might be effectively restored. With chiral symmetry under good control, domain wall fermions (DWF)~\cite{Kaplan:1992bt, Furman:1994ky} are an optimal tool for the exploration of the phase transition region. The residual chiral symmetry breaking (present in the DWF formalism for finite fifth-dimensional extent $L_s$) is reflected in an additive correction to the bare quark mass ($m_\mathrm{res}$), which can be further suppressed by the adoption of the dislocation suppression determinant ratio (DSDR) action~\cite{Vranas:1999rz,Fukaya:2006vs,Renfrew:2009wu}. Despite some unphysical, massive degrees of freedom from the extra fifth dimension, the low modes of the DWF Dirac operator should resemble those of an ordinary four-dimensional discretization of QCD. Moreover, within the DWF formalism, the $U(1)_A$ symmetry is only broken by the axial anomaly, rather than spoiled by lattice artifacts as with staggered fermions.
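Although Equations~\ref{eqn:pbp} and \ref{eqn:u1a} are written as integrals over the spectral density, in practice they reduce to truncated mode sums over the finitely many measured low-lying eigenvalues. The following is a schematic NumPy version of such sums; the array names and the per-configuration, per-volume normalization of $\rho$ are illustrative assumptions. \begin{verbatim}
import numpy as np

def truncated_mode_sums(lams, m_l, T, V4, n_cfg):
    """Truncated spectral sums from measured low eigenvalues.

    lams : eigenvalues lambda pooled over all configurations
           (with the mass term already removed).
    m_l  : bare light quark mass, m_f + m_res.
    T    : temperature in lattice units (1/N_t).
    V4   : four-dimensional lattice volume.
    """
    lams = np.asarray(lams, dtype=float)
    norm = 1.0 / (n_cfg * V4)
    # Equation (1), cut off at the largest measured eigenvalue:
    pbp = norm * np.sum(2.0 * m_l / (m_l**2 + lams**2))
    # Equation (4):
    delta = norm * np.sum(4.0 * m_l**2 / (m_l**2 + lams**2)**2) / T**2
    return pbp, delta
\end{verbatim} Because the sums are cut off at the $N_\mathrm{eig}$-th eigenvalue, they capture only the infrared part of Equations~\ref{eqn:pbp} and \ref{eqn:u1a}; in particular, the ultraviolet, mass-linear piece of the condensate is not reproduced.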
\section{Implementation Details} We have collected eight ensembles near the phase transition region with $2+1$ flavors of fermions. All the simulations have a $16^3\times8$ space-time volume and a fifth dimension of $L_s=32$ or $48$, and they all lie on a line of constant physics with $m_\pi\approx 200 \mathrm{MeV}$ and an almost physical kaon mass~\cite{Cheng:2011lat}. Table~\ref{tab:sum} gives the basic parameters of these finite-temperature ensembles. The $N_{\rm cfg}$ column lists the number of configurations for which the eigenvalues are calculated. Figure~\ref{fig:chi} shows the disconnected susceptibilities at various temperatures and indicates a critical temperature around 160 MeV. \begin{table}[h]\footnotesize \centering \begin{tabular}{ccc|ccc|ccc|c|c} \hline $T\,(\mathrm{MeV})$&$\beta$&$L_s$&$m_{\mathrm{res}}a$ &$m_la$&$m_sa$& $\rule{0pt}{3ex} \rule[-1.2ex]{0pt}{0pt} \frac{\left<\bar{\psi}\psi\right>_l}{T^3}$& $\frac{\Delta\bar{\psi}\psi}{T^3}$&$\frac{\chi_{l,{\rm disc}}}{T^2}$ &$N_\mathrm{cfg}$& $Z_{{\rm tw} \to m_f}^{(\pi)}$\\ \hline 140&1.633&48&0.00612&-0.00136&0.0519&6.26(12)& 7.74(12)&36(3)& -& -\\ 150&1.671&48&0.00296& 0.00173&0.0500&6.32(29)& 6.10(29)&41(2)&340&1.980(7)\\ 150&1.671&32&0.00648&-0.00189&0.0464&8.39(10)& 7.06(10)&44(3)&340&1.905(6)\\ 160&1.707&32&0.00377&0.000551&0.0449&5.25(17)& 4.83(17)&43(4)&408&1.725(8)\\ 170&1.740&32&0.00209&0.00175 &0.0427&4.03(18)& 2.78(18)&35(5)&239&1.631(11)\\ 180&1.771&32&0.00132&0.00232 &0.0403&3.16(15)& 1.56(15)&25(4)&246&1.476(4)\\ 190&1.801&32&0.00076&0.00258 &0.0379&2.44(9) & 0.71(9) &11(4)&374&1.439(3)\\ 200&1.829&32&0.00046&0.00265 &0.0357&2.19(8) & 0.47(8) &10(3)&710&1.365(3)\\ \hline 0 &1.750&32&0.00188&0.00300 &0.0370& -& -& -&252&1.5685(5)\\ \hline \end{tabular} \caption{Summary of ensembles and the renormalization factors for the eigenvalue density.} \label{tab:sum} \end{table} \begin{figure}[htb] \begin{center} \begin{minipage}[t]{0.5\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/Disc_Susc.tex}} \caption{Disconnected susceptibilities.} \label{fig:chi} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/ratio_b175.tex}} \caption{Renormalization factors for the $\beta=1.750,\; 16^3\times16$ ensemble.} \label{fig:npr} \end{minipage} \end{center} \end{figure} For comparison, results from a zero-temperature ensemble with a volume of $16^3\times16$ are presented as well. In order to keep $m_\pi=200$ MeV as the temperature and $\beta$ decrease and $m_{\rm res}$ grows, we must either increase $L_s$ above 32 or use a negative input quark mass. As the second and third lines of Table~\ref{tab:sum} show, this first use of a negative DWF input quark mass was successful, resulting in no exceptional configurations and giving a consistent result for $\chi_{l,{\rm disc}}$. We used the Kalkreuter-Simma \cite{Kalkreuter:1995mm} method to calculate the lowest $N_\mathrm{eig}=100$ eigenvalues of the Hermitian version of the Dirac operator $D_H\equiv R^5\gamma^5D_\mathrm{DWF}$, where $R^5$ is the reflection operator in the fifth dimension. Note that the eigenvalues measured here include the mass term and are denoted by $\Lambda$, to be distinguished from those in Equations~\ref{eqn:pbp}, \ref{eqn:bcr} and \ref{eqn:u1a}. \section{Eigenvalue spectrum and its renormalization} Renormalization must be applied to the eigenvalue spectrum of the DWF operator $D_H$ before any sensible comparison can be made with either the input DWF quark mass $m_f$ or the eigenvalue densities from other four-dimensional fermion formalisms ({\it e.g.} Wilson fermions).
The method we have adopted is a generalization to DWF of that proposed by Giusti and Luscher~\cite{Giusti:2008vb}, which introduces into the Lagrangian a twisted mass term, \begin{equation} {\cal L}_\mathrm{tm}(x) = \sum_{j=1}^k \overline{q}^j(x) \left(\gamma^\nu D_\nu + m + i\mu\gamma^5\tau^3\right)q^j(x) . \label{eqn:tml} \end{equation} Then Green's functions such as the six-point correlator of the charged pseudoscalar density operator $P^\pm_{ll'} = \overline{q}\,^l(x)\gamma^5\tau^\pm q^{l'}(x)$ can be expressed in terms of the eigenvalue density, \begin{eqnarray} \sigma_3(\mu)&=& -\sum_{x_i}\left\langle P^+_{1,2}(x_1) P^-_{2,3}(x_2) P^+_{3,4}(x_3) P^-_{4,5}(x_4)P^+_{5,6}(x_5) P^-_{6,1}(x_6) \right\rangle \nonumber\\ &=& \left\langle{\rm Tr}\left\{ \frac{1}{\left((\gamma^5 D)^2 + \mu^2\right)^3} \right\}\right\rangle \nonumber\\ &=& \int_{-\infty}^\infty \mathrm{d}\Lambda\, \rho(\Lambda) (\Lambda^2 + \mu^2)^{-3}, \label{eqn:sigma_3} \end{eqnarray} in the notation of Giusti and Luscher~\cite{Giusti:2008vb}. The charged pseudoscalar density is well defined in the continuum limit in a variety of regularization schemes which are related by multiplicative renormalization factors. Equation \ref{eqn:sigma_3} implies that the same rules will also work for the eigenvalue density, {\it e.g.}: \begin{equation} P_{ll'}^{\prime i} = \frac{1}{Z_{m \to m'}} P_{ll'}^i\quad \Longrightarrow\quad \rho^\prime(\Lambda^\prime)= Z_{m\to m^\prime}^{-1}\rho\left( \frac{\Lambda^\prime}{Z_{m\to m^\prime}}\right) \label{eqn:mm} \end{equation} where $Z_{m\to m^\prime}$ is the renormalization factor for the mass term treated symmetrically with $P^\pm_{ll'}$. With such inspiration, we can invent a five-dimensional analogue of the twisted-mass term $P_{ll'}^{{\rm DWF},i}(x) = \sum_{s=0}^{L_s-1} \overline{\Psi}_l(x,s)\gamma^5\tau^i\Psi_{l'}(x,L_s-1-s)$ and relate it to the usual pseudoscalar density, \begin{equation} \overline{\psi}(x) \gamma^5 \psi(x) \approx \frac{1}{Z_{{\rm tw} \to m_f}} \sum_{s=0}^{L_s-1}\overline{\Psi}(x,s)\gamma^5\Psi(x,L_s-1-s), \label{eqn:ps_equiv} \end{equation} where $\psi(x)$ is the four-dimensional operator while $\Psi(x, s)$ is the five-dimensional field. \footnote{The explicit sum over the fifth ($s$) dimension is suppressed later on for simplicity when no confusion can arise.} This renormalization factor $Z_{{\rm tw} \to m_f}$ connects the five-dimensional eigenvalue density to a more conventional density, normalized in a fashion consistent with the usual bare quark mass $m_f$: \begin{equation} \rho^{m_f}(\Lambda^{m_f})= Z_{{\rm tw} \to m_f}^{-1}\rho(\Lambda^{(5d)}),\qquad \Lambda^{m_f}=Z_{{\rm tw} \to m_f}\Lambda^{(5d)}\,. \label{eqn:4d5d} \end{equation} Because of the equivalence of the two operators at long distance expressed by Equation~\ref{eqn:ps_equiv}, the renormalization factor can be obtained from the ratio of the two Green's functions: \begin{equation} Z_{{\rm tw} \to m_f}^{(\pi)} = \frac{\left\langle \sum_{\vec x} \overline{\Psi}(\vec x,t)R_5\gamma^5\tau^i\Psi(\vec x,t )O_\pi^i(0)\right\rangle} {\left\langle \sum_{\vec x} \overline{\psi}(\vec x,t)\gamma^5\tau^i\psi(\vec x,t)O_\pi^i(0)\right\rangle}, \label{eqn:zpi} \end{equation} where the $\tau_i$ are the Pauli matrices in flavor space. Results from a Coulomb gauge fixed wall source are presented in Table~\ref{tab:sum}.
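Concretely, Equation~\ref{eqn:zpi} is evaluated as a plateau average of the ratio of the two wall-source correlators over the time separation $t$. A schematic NumPy version follows; the array names, the plateau window, and the naive error estimate are illustrative assumptions (the actual analysis would use, e.g., jackknife resampling). \begin{verbatim}
import numpy as np

def z_pi_from_ratio(C5, C4, t_min, t_max):
    # C5[t]: wall-source correlator of the five-dimensional
    #        pseudoscalar density (numerator of the Z ratio).
    # C4[t]: the same correlator for the four-dimensional density.
    # The ratio should be independent of t up to noise, so we
    # average it over a plateau window [t_min, t_max].
    R = np.asarray(C5, dtype=float) / np.asarray(C4, dtype=float)
    window = R[t_min:t_max + 1]
    return window.mean(), window.std(ddof=1) / np.sqrt(len(window))
\end{verbatim}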
An alternative approach is to examine the off-shell, three-point Green's functions evaluated in Landau gauge. This is very similar to the Rome-Southampton non-perturbative renormalization (NPR) technique (RI/MOM)~\cite{Martinelli:1994ty}. The renormalization factor is extracted from the ratio of amputated vertices for the five- and four-dimensional operators. \begin{equation} Z_{{\rm tw} \to m_f}^{\rm(MOM)}(p_1,p_2) = \frac{ {\rm Tr}\left\langle \sum_{x_1,x_2} e^{i(p_2x_2-p_1x_1)} \psi_l(x_2)\overline{\Psi}_l(0)R_5\gamma^5\Psi_{l'}(0) \overline{\psi}_{l'}(x_1)\right\rangle} { {\rm Tr}\left\langle \sum_{x_1,x_2} e^{i(p_2x_2-p_1x_1)} \psi_l(x_2)\overline{\psi}_l(0)\gamma^5\psi_{l'}(0) \overline{\psi}_{l'}(x_1)\right\rangle}. \label{eqn:zmom} \end{equation} To fully utilize the whole lattice, we use a series of fixed-momentum volume sources to calculate the propagators, which are defined as \begin{equation} \eta(x\,;\,p)=e^{ip\cdot x}\,\mathbb{I}_{4\times4}\otimes\mathbb{I}_{3\times3} \; . \label{eqn:src} \end{equation} We perform our calculation using both non-exceptional kinematics, where $p_1^2=p_2^2=(p_1-p_2)^2$, and exceptional kinematics, where $p_1=p_2$. The results for the zero-temperature ensemble are presented in Table~\ref{tab:sum} and Figure~\ref{fig:npr}. Both Equations~\ref{eqn:zpi} and \ref{eqn:zmom} should give consistent results, independent of the temporal separation $t$ and of $p_1$ and $p_2$, respectively. Unfortunately, the NPR calculation is not feasible for the finite-temperature ensembles due to large fluctuations. Therefore we only present the NPR results for the $16^3\times16$ ensemble in Figure~\ref{fig:npr}. Figure~\ref{fig:npr} shows a discrepancy between the two kinematics which increases with the physical momenta. This contradicts our expectation but can be plausibly explained by appreciable finite lattice spacing errors $(ap)^2$ at large momentum. Because the quantity $Z^{(\pi)}_{{\rm tw}\to m_f}$ involves the smallest momenta, we use it to renormalize the spectrum. Further renormalization from the bare $m_f$ scheme to the conventional, continuum $\overline{\rm MS}$ scheme can then be easily performed, since this final step has already been studied in detail~\cite{Aoki:2010dy}. Technical details and updated results will be available in our upcoming paper~\cite{Cheng:2011}. Figure~\ref{fig:0mev} displays the effects of the renormalization, which can be naively regarded as a rescaling of the axes. The orange vertical line denotes the smallest value of the hundredth eigenvalue over all configurations; below this value the spectrum should be complete. The other vertical lines indicate the bare masses of the light and strange quarks. The horizontal lines are the chiral condensates divided by $\pi$, which should agree with the eigenvalue density at $\lambda=0$ as predicted by the Banks-Casher relation, Equation~\ref{eqn:bcr}. There are two significant features associated with the renormalization. First, the light quark mass now matches the likely zero-mode peak. Second, the Banks-Casher relation is better satisfied, although it is still violated at the $30\%$ level. We attribute the discrepancy to finite-volume and finite-mass effects. Thus, no definitive conclusion can be drawn before studies on a larger lattice and the chiral extrapolation are performed.
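In practice, applying Equation~\ref{eqn:4d5d} to the measured spectrum amounts to histogramming the rescaled eigenvalues: binning $\Lambda^{m_f}=Z_{{\rm tw}\to m_f}\Lambda^{(5d)}$ automatically supplies the Jacobian factor $Z_{{\rm tw}\to m_f}^{-1}$. A schematic NumPy version follows; the per-configuration, per-volume normalization convention is an illustrative assumption. \begin{verbatim}
import numpy as np

def renormalized_density(eigs_5d, Z, n_cfg, V4, bins=60):
    # eigs_5d: low-lying eigenvalues Lambda^(5d) of D_H, pooled over
    #          all configurations (mass term included).
    # Z      : Z_{tw -> m_f} for this ensemble (Table 1).
    # Histogramming the rescaled eigenvalues implements
    # rho'(L') = Z^{-1} rho(L'/Z) without an explicit Jacobian.
    lam = Z * np.abs(np.asarray(eigs_5d, dtype=float))
    counts, edges = np.histogram(lam, bins=bins)
    width = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    rho = counts / (n_cfg * V4 * width)   # density per unit volume
    return centers, rho
\end{verbatim} The resulting density near $\Lambda=0$ can then be compared directly with $-\langle\bar{\psi}_l\psi_l\rangle/\pi$, as in Figure~\ref{fig:0mev}.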
\begin{figure}[h] \begin{center} \begin{minipage}[t]{0.5\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/000MeV_ml003.tex}} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/000MeV_ml003_norm.tex}} \end{minipage} \end{center} \caption{Dirac eigenvalue spectrum for the $\beta=1.750,\; 16^3\times16$, zero-temperature ensemble. The density in the left-hand panel has not been renormalized, while that on the right has been converted to the normalization of the usual input DWF mass $m_f$. The left-most vertical line locates the total bare quark mass, $m_f+m_{\rm res}$, which matches well the small peak seen in the renormalized, right-hand spectrum.} \label{fig:0mev} \end{figure} Figure~\ref{fig:eig} shows the renormalized eigenvalue spectra at various temperatures near the phase transition region. Although not in perfect agreement with the Banks-Casher relation, at the lower temperatures of 150 and 160 MeV both the chiral condensates and the eigenvalue densities differ from zero, signaling spontaneous chiral symmetry breaking. Above 170 MeV, both quantities start to vanish, as expected for temperatures above the transition. However, it remains uncertain whether the slope of the eigenvalue density vanishes below 190 and 200 MeV; at those two temperatures a possible gap begins to emerge, indicating an effective restoration of the $U(1)_A$ symmetry. A small peak at the lower end of the spectrum at these temperatures suggests that the major contribution to $U(1)_A$ symmetry breaking may come from zero modes, which are expected to disappear as the volume increases. We therefore await a calculation on a larger lattice to confirm this conclusion. \begin{figure}[htb] \centering \begin{minipage}[t]{0.45\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/150MeV_1_norm.tex}} \end{minipage}% \begin{minipage}[t]{0.45\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/160MeV_norm.tex}} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/170MeV_norm.tex}} \end{minipage}% \begin{minipage}[t]{0.45\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/180MeV_norm.tex}} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/190MeV_norm.tex}} \end{minipage}% \begin{minipage}[t]{0.45\linewidth} \centering \resizebox{\linewidth}{!}{\input{./figs/200MeV_1_norm.tex}} \end{minipage} \caption{Dirac eigenvalue spectra for the $T=150$--$200$ MeV ensembles. The temperature is lowest in the upper left and highest in the lower right. The chiral-symmetry-breaking density of near-zero eigenvalues disappears rapidly with increasing temperature, and for the two highest-temperature cases there appears to be a gap with very few eigenvalues just above zero. The magnified insets in these two cases show some near-zero eigenvalues and a suggestive zero-mode peak located at $\Lambda=m_f+m_{\rm res}$.} \label{fig:eig} \end{figure} \section{Conclusions} With the chirally symmetric DWF framework, we are able to explore the chiral and $U(1)_A$ symmetries near the phase transition region. The successfully renormalized eigenvalue spectrum of the Dirac operator, together with the correlator measurements~\cite{Hegde:2011lat}, suggests an effective restoration of the $U(1)_A$ symmetry at temperatures above $T_c$. We look forward to a similar simulation at larger volume to confirm these findings.
\section*{Acknowledgments} I very much appreciate the help and advice from members of HotQCD and my colleagues at Columbia University. This work was supported in part by U.S. DOE grant DE-FG02-92ER40699. The simulations were carried out on the BG/P machine at LLNL, the DOE- and RIKEN-funded QCDOC machines, and the NYBlue machine at BNL.